RAID5 Missing After Restart

  • I had 4 x 6.0 TB drives in a RAID5 array (with data), and I added a 5th 6.0 TB drive using the Grow option in Raid Management. A couple of days after the drive finished initializing into the array, I restarted OMV and the complete RAID5 array is missing.


    All drives are listed in the Physical Disks section, Raid Management is empty, and File Systems shows the RAID5 array as N/A. All hard drives are connected via SATA cables to the motherboard, not by USB or a separate RAID card.

    I have looked over several other posts and haven't found anything in particular that has helped resolve the issue. I might need a decent amount of help, as I don't know anything about the workings of Linux or OMV.


    Running OMV Release: 2.1.17 (Stone Burner)


    =+=+=+=+=+=+
    root@OMV:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : inactive sdc[1] sdd[2]
    11720783024 blocks super 1.2


    unused devices: <none>


    =+=+=+=+=+=+
    root@OMV:~# blkid
    /dev/sdb1: LABEL="Miso1" UUID="40c96134-e696-4673-9b94-f790cc9bb666" TYPE="ext4"
    /dev/sde1: LABEL="Archive" UUID="184aa9e4-a2e8-40bf-9fcd-2e0d6c761593" TYPE="ext4"
    /dev/sdb5: UUID="ad33ba0f-7959-419a-a0be-0fe0ca61fc15" TYPE="swap"
    /dev/sdc: UUID="aafa1dcb-68a8-2561-99fb-c5d05f161971" UUID_SUB="fb66e142-3cb5-f9e7-c181-d73ae5c2a020" LABEL="OMV:Riso" TYPE="linux_raid_member"
    /dev/sdd: UUID="aafa1dcb-68a8-2561-99fb-c5d05f161971" UUID_SUB="e6be8e26-cfbe-6785-491b-3ad2c07f8387" LABEL="OMV:Riso" TYPE="linux_raid_member"


    =+=+=+=+=+=+
    root@OMV:~# fdisk -l
    Disk /dev/sda: 6001.2 GB, 6001175126016 bytes
    256 heads, 63 sectors/track, 726751 cylinders, total 11721045168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Device Boot Start End Blocks Id System
    /dev/sda1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.


    Disk /dev/sdb: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000a3c0a


    Device Boot Start End Blocks Id System
    /dev/sdb1 * 2048 963907583 481952768 83 Linux
    /dev/sdb2 963909630 976771071 6430721 5 Extended
    /dev/sdb5 963909632 976771071 6430720 82 Linux swap / Solaris


    Disk /dev/sdc: 6001.2 GB, 6001175126016 bytes
    255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sdd: 6001.2 GB, 6001175126016 bytes
    255 heads, 63 sectors/track, 729601 cylinders, total 11721045168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Disk /dev/sde: 4000.8 GB, 4000787030016 bytes
    256 heads, 63 sectors/track, 484501 cylinders, total 7814037168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Device Boot Start End Blocks Id System
    /dev/sde1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.


    Disk /dev/sdf: 6001.2 GB, 6001175126016 bytes
    256 heads, 63 sectors/track, 726751 cylinders, total 11721045168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Device Boot Start End Blocks Id System
    /dev/sdf1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.


    Disk /dev/sdg: 6001.2 GB, 6001175126016 bytes
    256 heads, 63 sectors/track, 726751 cylinders, total 11721045168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk identifier: 0x00000000


    Device Boot Start End Blocks Id System
    /dev/sdg1 1 4294967295 2147483647+ ee GPT
    Partition 1 does not start on physical sector boundary.


    =+=+=+=+=+=+
    root@OMV:~# mdadm --assemble --scan


    =+=+=+=+=+=+
    root@OMV:~# mdadm --detail --scan
    ARRAY /dev/md0 metadata=1.2 name=OMV:Riso UUID=aafa1dcb:68a82561:99fbc5d0:5f161971


    =+=+=+=+=+=+
    root@OMV:~# cat /etc/fstab
    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc defaults 0 0
    # / was on /dev/sda1 during installation
    UUID=40c96134-e696-4673-9b94-f790cc9bb666 / ext4 errors=remount-ro 0 1
    # swap was on /dev/sda5 during installation
    UUID=ad33ba0f-7959-419a-a0be-0fe0ca61fc15 none swap sw 0 0
    /dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
    /dev/scd0 /media/floppy0 auto rw,user,noauto 0 0
    tmpfs /tmp tmpfs defaults 0 0
    # >>> [openmediavault]
    UUID=184aa9e4-a2e8-40bf-9fcd-2e0d6c761593 /media/184aa9e4-a2e8-40bf-9fcd-2e0d6c761593 ext4 defaults,nofail,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
    UUID=2977dbd6-ec84-4a9b-ad47-dd0cd4919a87 /media/2977dbd6-ec84-4a9b-ad47-dd0cd4919a87 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
    # <<< [openmediavault]


    =+=+=+=+=+=+
    root@OMV:~# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0


    =+=+=+=+=+=+
    root@OMV:~# mdadm --assemble /dev/md0 /dev/sd[acdfg] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted
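
    Before any destructive step, it can help to record what mdadm still knows about each member and what the boot-time configuration expects. A read-only sketch, assuming the Debian/OMV default path /etc/mdadm/mdadm.conf and the device names from the output above:

    cat /etc/mdadm/mdadm.conf          # does its ARRAY line match the --detail --scan output?
    mdadm --examine /dev/sdc /dev/sdd  # the two disks blkid flags as linux_raid_member: note the
                                       # "Raid Devices", "Device Role" and "Events" fields on each

    If those superblocks already report 5 raid devices, the grow was committed before the reboot, and the other three disks should also be carrying superblocks somewhere.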

    • Official Post

    It will be dangerous but I don't see many other options. Try:


    mdadm --stop /dev/md0
    dd if=/dev/zero of=/dev/sda bs=512 count=10000
    mdadm --assemble /dev/md0 /dev/sd[acdfg] --verbose --force
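
    Not part of the original reply, just an observation: the dd zeroes the first 5 MB of /dev/sda, which removes the partition-table signature behind the "Cannot assemble mbr metadata" message, but it also overwrites the region where a 1.2 md superblock would normally sit, and it cannot restore a superblock that is already missing. A less destructive way to see what is actually on the disk before wiping anything would be something like:

    wipefs /dev/sda           # list the signatures on the disk without erasing anything (util-linux)
    mdadm --examine /dev/sda  # would print superblock details if any md metadata were present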


  • =+=+=+=+=+=+
    root@OMV:~# mdadm --stop /dev/md0
    mdadm: stopped /dev/md0


    =+=+=+=+=+=+
    root@OMV:~# dd if=/dev/zero of=/dev/sda bs=512 count=10000
    10000+0 records in
    10000+0 records out
    5120000 bytes (5.1 MB) copied, 0.197132 s, 26.0 MB/s


    =+=+=+=+=+=+
    root@OMV:~# mdadm --assemble /dev/md0 /dev/sd[acdfg] --verbose --force
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted


    Still not available
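
    Reading the two failures together: the message means /dev/sda simply has no md superblock, the earlier blkid output only flagged sdc and sdd as linux_raid_member, and fdisk shows GPT signatures on sda, sdf and sdg. A read-only survey (a sketch, using the device names from the earlier output) would show how many superblocks actually survive and whether a forced assemble of the remaining members is even possible:

    mdadm --examine /dev/sda /dev/sdc /dev/sdd /dev/sdf /dev/sdg  # which disks still carry md metadata?
    parted -s /dev/sdf print                                      # what does the GPT on the other 6 TB disks contain?
    parted -s /dev/sdg print

    A five-disk RAID5 can start degraded with four members but not with two, so if only sdc and sdd still have superblocks, assembly alone cannot bring the array back.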

  • I'm in the same position. I'm really disappointed by the lack of support available on this forum. Do you know of any other forums where people have the requisite skills to assist?

    • Official Post

    I'm in the same position. I'm really disappointed by the lack of support available on this forum.


    That is disappointing to read... If you look through this forum, I and others have helped probably hundreds of times with failed arrays. We volunteer our time and can often be busy. Sometimes arrays can be fixed; sometimes not. You will have a hard time finding this much support on any other forum. I also disagree that there are a "lot" of posts about mdadm issues. I use it myself on multiple OMV boxes with NO issues. You don't see posts about mdadm working correctly, so it seems like all you see is problems. The percentage of people having issues is very low.


    That said, this is not an OMV issue. All OMV does is set up the array, and OMV is Debian. mdadm is very sensitive and does seem to do much better on systems that are not shut down often. It also has problems with some drives. The bigger the drives get, the harder recovery seems to be. I will say again that RAID is not backup. I personally have a second server for backup. People will say that this is expensive, but how much is your data worth?


    I'm sorry the commands I gave you did not work. I'm not sure what else to try. No command I can give you will fix a failing drive. Any post on the internet about mdadm on Linux could possibly help, since all Linux distributions using mdadm create the same type of array.


  • I understand that most of the people posting in the forums are the ones having issues, and that the majority of users are fine and aren't here. I am not upset with OMV. I originally planned on just trying it out, and never got around to doing anything more with it because it was set up and working. I found some OMV features easy to use, a lot I didn't use, some difficult to set up, and others stumped me completely. When using the OMV GUI, I felt that a lot of things were not explained very well and that the GUI is designed for system admins rather than novices like me. Initially I did a lot of clicking around just to see what things did and to get going.


    Occasionally, I would have update issues, and I found the answers I needed in the forum by searching. I just couldn't find a solution for this RAID array issue.


    Personally, I don't understand Linux at all. With Windows, on the other hand, I can read the error codes or symptoms and understand what is going on most of the time.


    You're right that RAID is not a backup, and luckily my data was not critical for me, but it did take me a year to amass all of it, and I found myself needing storage faster than I could possibly have realized. I originally had my system set up as JBOD, and I tried RAID5 to protect me from a hard drive failure, but by doing so I introduced another possible failure point that I didn't think about.


    I had 4 x 6 TB drives in a RAID5. I used the Grow option to add the 5th 6 TB drive. After installing and initializing the 5th drive, the RAID array didn't show any difference, so I thought it needed a restart to complete the initialization and restarted the OMV system. And poof, it is all gone.
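
    A general mdadm/ext4 note, not something from the thread itself: growing a RAID5 by one disk starts a reshape that can run for many hours on 6 TB drives, and even after it finishes the extra capacity only shows up once the filesystem on top is resized, so seeing no difference right after adding the disk is expected. A rough sketch of how that is usually checked from the shell, assuming the array is /dev/md0 with ext4 directly on it:

    cat /proc/mdstat           # a "reshape = xx.x%" line means the grow is still in progress
    mdadm --detail /dev/md0    # Array Size and Raid Devices update once the reshape completes
    resize2fs /dev/md0         # only after the reshape finishes: grow the ext4 filesystem into the new space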

  • I'm in the same position. I'm really disappointed by the lack of support available on this forum. Do you know of any other forums where people have the requisite skills to assist?



    I have to say... this is complete BS. Since moving to larger capacities, drives have become less reliable IMO. This has nothing to do with anyone on this forum. If you do not back up your data, whose fault is it really? Look in the mirror, guys.

    • Official Post

    Hello.
    Please remember about UREs when you use RAID5! Read and remember http://www.zdnet.com/article/has-raid5-stopped-working/. This should be stressed in every RAID5 topic.


    The paranoia in that article is not what is causing this issue of arrays not starting at boot. And I say paranoia because I've added four drives (2 TB each) to my array (8 x 2 TB), causing full resyncs, and rebuilt it three times. According to the article, I should have had multiple catastrophic failures by now. Maybe I have had one bad block, but all it would do is cause a glitch in a video that you couldn't see, or a spreadsheet not to open. You should still have backups when using RAID anyway.
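
    The article's warning is just arithmetic on the drives' quoted unrecoverable-read-error rate, so it is easy to sanity-check. A back-of-envelope sketch (my own numbers, assuming the common consumer spec of 1 URE per 1e14 bits read and a rebuild that has to read the four surviving 6 TB disks of a five-disk array):

    # expected UREs while reading 4 x 6 TB = 24 TB during a rebuild,
    # at a quoted rate of 1 error per 1e14 bits
    awk 'BEGIN { bits = 4 * 6e12 * 8; printf "expected UREs: %.2f\n", bits * 1e-14 }'

    Real-world error rates are usually much better than the spec sheet, which is why arrays like the 8 x 2 TB one above survive repeated rebuilds, but that spec-sheet math is where the article's warning comes from.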

