Server died trying to recover

  • I had two drives in a mirror configuration and I'm trying to get the data off.
    I did a fresh install of OMV 3 on a new computer; the two disks are shown in the Physical Disks tab, but they do not show up in the Filesystems tab.
    Do I just recreate a RAID mirror with the two drives? Will this destroy the data on them? Thanks

    • Official Post

    Do I just recreate a RAID mirror with the two drives? Will this destroy the data on them?

    Yes, that will destroy the data on them. See: Degraded or missing raid array questions
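
    Before doing anything destructive, it is worth checking whether any raid metadata is still readable. mdadm --examine is read-only, so a check like this can't hurt the data (a sketch; the device names are an assumption, substitute your actual member disks):

    Code
    # read-only: print any md superblock found on the suspected member disks
    mdadm --examine /dev/sda /dev/sdc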

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks, I had a hardware and main disk failure. I reinstalled OMV 3 on a new computer and drive, and added the two data drives that were in a RAID mirror configuration. The disks are showing up in the web GUI, but there is no entry in the RAID tab or the Filesystems tab.


    Code
    root@openmed:~# cat /proc/mdstat
    Personalities :
    unused devices: <none>
    Code
    root@openmed:~# mdadm --detail --scan --verbose
    root@openmed:~#
    Code
    root@openmed:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Disk identifier: EF96B751-DE69-41D6-8C42-5F549B9659FD
    Disk /dev/sdb: 149 GiB, 160000000000 bytes, 312500000 sectors
    Disk identifier: 0xcef83c13
    Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Disk identifier: 28977303-C2A8-494B-842F-77F5A8749AA5
    root@openmed:~#
    Code
    root@openmed:~# blkid
    /dev/sdb1: UUID="653298a5-ccbb-43ea-b83f-758051e80a17" TYPE="ext4" PARTUUID="cef83c13-01"
    /dev/sdb5: UUID="c31a7cba-2b9e-4a86-8cf0-c6f2b44fa654" TYPE="swap" PARTUUID="cef83c13-05"
    /dev/sda1: PARTUUID="fcd39769-4b4b-43c8-b954-b1ef45634b5a"
    /dev/sdc1: PARTUUID="473d90b7-9f33-47a9-be69-6042e007601d"
    root@openmed:~#
    • Official Post

    If the 500 GB drives are the ones that were in the array: did you not wipe them before creating the array with OMV, or did you not create the array with OMV in the first place? That detail is important for me to give you the right commands to re-assemble it.
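
    For what it's worth, your blkid output shows no TYPE for sda1 or sdc1, so nothing there carries a recognizable filesystem or raid signature. A harmless way to double-check is a wipefs dry run (a sketch; -n only lists signatures, it never erases anything):

    Code
    # dry run: list any remaining signatures without touching the disks
    wipefs -n /dev/sda /dev/sda1 /dev/sdc /dev/sdc1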


  • Yes, the two 500 GB drives are the ones in the array. I have used TestDisk to get data off them, so the data is still there.


    I created the array in the OMV web GUI on the original install. I have only connected the drives to the new install of OMV.


    Thanks

    • Official Post

    This should re-assemble the array, and hopefully the file system is still there.
    mdadm --assemble --verbose --force /dev/md127 /dev/sd[ac]
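
    If it assembles, something like the following verifies the array and mounts it read-only so nothing gets written while you copy the data off (a sketch, assuming the array comes up as /dev/md127):

    Code
    cat /proc/mdstat                      # the array should now be listed here
    mdadm --detail /dev/md127             # check that both members are active
    mkdir -p /mnt/recovery
    mount -o ro /dev/md127 /mnt/recovery  # read-only mount while copying data off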


    • Official Post

    Thanks, I will give that a try. Will this damage the data in any way if it doesn't work?
    I might pull all the data off with TestDisk first.

    It could, but that's doubtful since it is a mirror, which writes the same data to both disks. If you are worried, go the TestDisk route first, or, if you have the space, you could dd an image of one of the drives to a saved location and run TestDisk on that.


  • Code
    root@openmed:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[ac]
    mdadm: looking for devices for /dev/md127
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted

    I received this message.

    • Official Post

    Not usually good. You could try starting it in degraded mode with: mdadm --assemble --verbose --force /dev/md127 /dev/sdc
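
    Since your fdisk output shows a partition on each of those disks, any surviving superblock might live on the partitions rather than the whole devices. mdadm --examine is read-only, so checking them is harmless (a sketch):

    Code
    # read-only: look for md superblocks on the partitions as well
    mdadm --examine /dev/sda1 /dev/sdc1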


  • Code
    root@openmed:~# mdadm --assemble --verbose --force /dev/md127 /dev/sdc
    mdadm: looking for devices for /dev/md127
    mdadm: Cannot assemble mbr metadata on /dev/sdc
    mdadm: /dev/sdc has no superblock - assembly aborted

    Got the same.

    • Official Post

    I think you are out of luck if mdadm can't find a superblock on either drive.


    • Official Post

    dd if=/dev/sdc of=/path/to/backup/location/sdc.dd bs=1M status=progress
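
    Once the image exists, TestDisk can be run against it through a read-only loop device, so the original disk is never touched again (a sketch; the image path is the one from the command above):

    Code
    # attach the image read-only; losetup prints the device it picked, e.g. /dev/loop0
    losetup --find --show --read-only /path/to/backup/location/sdc.dd
    testdisk /dev/loop0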

