RAID5 Missing, File System Missing

  • Hello, all.


I added a new drive (6 x 1TB drives total now), expanded my array, and then resized the file system. I lost network connectivity during the resize and had to reboot. Now it appears I've lost my array. Below is the output requested in ryecoaaron's sticky thread:


    Code
    root@omv:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdd[0] sdb[5] sda[3] sdf[2] sde[1]
    4883157560 blocks super 1.2
    
    
    unused devices: <none>
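
    The array is assembled but inactive. What each member's superblock records can be double-checked with mdadm --examine; a sketch, run against the five disks that still show as RAID members:

    Code
    # dump each member's md superblock; the "Events" count and, mid-reshape,
    # the "Reshape pos'n" line show how far each disk got
    for d in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf; do
        mdadm --examine "$d"
    done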


    Code
    root@omv:~# blkid
    /dev/sdd: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="e072462a-09bb-7769-b0cd-88f58f9bb5ca" LABEL="omv:R5Array" TYPE="linux_raid_member"
    /dev/sdb: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="b0bd798e-80b4-db90-4e35-5d832793fe30" LABEL="omv:R5Array" TYPE="linux_raid_member"
    /dev/sde: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="ea552547-e7ab-f516-5b71-5b6a5234da89" LABEL="omv:R5Array" TYPE="linux_raid_member"
    /dev/sdg1: UUID="1d42ce8e-7dca-4183-8723-926eafc7c182" TYPE="ext4"
    /dev/sdg5: UUID="7bbc0b57-05ef-4b1b-8d19-2e1a6c71d8b5" TYPE="swap"
    /dev/sdf: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="bbf79cbf-98e4-2acb-37f9-2301d9fe9c1d" LABEL="omv:R5Array" TYPE="linux_raid_member"
    /dev/sda: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="f092b520-1794-41f6-5ca7-33251544ddd9" LABEL="omv:R5Array" TYPE="linux_raid_member"




    Code
    root@omv:~# mdadm --detail --scan --verbose
    mdadm: cannot open /dev/md/R5Array: No such file or directory
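
    The /dev/md/R5Array name in that error most likely comes from the ARRAY definition in mdadm's config file (on Debian-based OMV, typically /etc/mdadm/mdadm.conf); a quick check of what mdadm expects:

    Code
    # the ARRAY line defines the /dev/md/R5Array name mdadm is looking for
    grep ARRAY /etc/mdadm/mdadm.conf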


    I've rebooted a few times but do not see the array in the GUI. See the attached screenshots of Physical Disks, RAID Management, and File Systems for more detail.





    Any help is GREATLY appreciated.

  • Well, a little more digging and I came up with this:


    Code
    root@omv:/# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127


    Code
    root@omv:/# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
    mdadm: looking for devices for /dev/md0
    mdadm: no RAID superblock on /dev/sdc
    mdadm: /dev/sdc has no superblock - assembly aborted
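
    Presumably the assembly was then retried without the superblock-less disk; a sketch of that step, assuming sda, sdb, sdd, sde, and sdf are the five surviving members:

    Code
    # force-assemble from the five disks that still carry md superblocks;
    # a RAID5 can run degraded with one member missing
    mdadm --assemble --force --verbose /dev/md0 /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf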



    My array showed up and I tried to mount the file system. The system has now locked up and cat /proc/mdstat doesn't return anything.


    I'm going to leave it overnight and check it again in the morning.

  • I restarted the box and went through the process in post #2. I now see the array in the GUI, and the file system shows as unmounted.


    If I click mount, how long should it take? I tried it again and it locks up: "cat /proc/mdstat" doesn't show anything, and clicking on RAID Management or File Systems in the GUI shows a loading indicator and nothing else.
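
    For what it's worth, a hang like this usually means something is stuck in uninterruptible I/O sleep; something like this would show it:

    Code
    # list processes stuck in uninterruptible sleep (state D), which points
    # at hung disk I/O rather than a merely slow mount
    ps -eo pid,stat,wchan:32,cmd | awk '$2 ~ /^D/'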


    What would be the best way to proceed from here?

    • Official Post

    That is not a good sign that cat /proc/mdstat doesn't return anything. You went through the right steps to fix it. Sounds like the system isn't stable or a drive is failing. Anything in dmesg?
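
    For example, a rough first pass:

    Code
    # rough filter for md, ATA, and error messages in the kernel log
    dmesg | grep -iE 'md|ata|error' | tail -n 50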


    I would ask yourself two questions:
    1 - Do you really need raid? Raid isn't backup and there are other ways to pool drives.
    2 - Can you back up the array? Raid isn't backup.


  • That is not a good sign that cat /proc/mdstat doesn't return anything. You went through the right steps to fix it. Sounds like the system isn't stable or a drive is failing. Anything in dmesg?


    I would ask yourself two questions:
    1 - Do you really need raid? Raid isn't backup and there are other ways to pool drives.
    2 - Can you back up the array? Raid isn't backup.

    Thanks for the reply!


    1) Going forward, I don't care what the pooling method is, but for now I need to recover the array/data if possible.
    2) I'm aware that RAID doesn't provide backups, but I was in the process of consolidating everything so I could take a backup. At this point, I have none, unfortunately.


    I've made some progress in troubleshooting, I think.


    I put back in the 2 drives that were in the system after I expanded and reshaped the array. The following information is with these 2 drives in:


  • I bought a new 1TB drive and put it in along with the original drives and the previously "new" drive. This info is from that setup:



    I think my array is still intact. I'm a bit confused about the naming, though: is it /dev/md0, /dev/md127, or /dev/md/R5Array?
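
    For what it's worth, those names usually all point at the same array: /dev/md/R5Array is a udev-created symlink to the real kernel device node, and the kernel numbers an array md127 (counting down from 127) when no ARRAY line in mdadm.conf claims it under another name; /dev/md0 only exists if the array was assembled under that name. A quick way to see how they relate:

    Code
    # /dev/md/<name> entries are symlinks to the real /dev/mdN node
    ls -l /dev/md/
    # --detail prints the array name recorded in the superblock
    mdadm --detail /dev/md127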


    At this point, I'm getting desperate. I'm willing to work with someone remotely and pay for their time.

  • This is probably my last update for the night, as I think I've made progress and the array might be reshaping right now.


    I removed the 6th disk I added and ran the following:



    Following that, I did this:

    Code
    mkdir -p /mnt/md127
    
    
    mount /dev/md127 /mnt/md127
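
    A more cautious variant, for anyone trying this later, would be a read-only test mount, so nothing is written to an array that may still be mid-reshape:

    Code
    # read-only test mount; confirms the filesystem is readable without risking writes
    mount -o ro /dev/md127 /mnt/md127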


    At this point my console just has a blinking cursor as if it was doing something.


    I logged in to another session and ran cat /proc/mdstat but got just the blinking cursor again.


    Before I tried to mount the array, I saw it in the GUI when I clicked on RAID Management, and it showed "clean, degraded".


    After I mounted it, I get "Loading..." when I click on RAID Management and File System.


    I'm hoping the array is reshaping and will just take some time but I'm not sure how I can confirm that other than just waiting it out.
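
    One way to check on it, assuming the md sysfs interface still responds, is:

    Code
    # sync_action reports idle/resync/recover/reshape;
    # sync_completed reports progress as "done / total" sectors
    cat /sys/block/md127/md/sync_action
    cat /sys/block/md127/md/sync_completed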


    Attached is the latest dmesg.

  • Ok, maybe it wasn't my last post - sorry. Just thought of something...


    Is it possible that the reshaping never finished [Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)]? I'm thinking that if I power down, plug in the 6th drive, boot the machine, stop md127, and then run mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde, that might do the trick?
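
    Before rebooting, the reshape position recorded in each member's superblock can be compared; a sketch, assuming --examine still responds:

    Code
    # matching "Reshape pos'n" and "Events" values across members suggest
    # the reshape was interrupted cleanly
    for d in /dev/sd[a-e]; do mdadm --examine "$d" | grep -iE 'reshape|events'; done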

  • Well, I did just that. Once I installed the new drive and added it to the array, it started to resync. Once the resync finished, it started to recover. After about 18 hours total, my array is now online and I have access to all my data!


    First order of business, transfer everything to backblaze and a 6TB drive!


    After that, I'll start my research into other alternatives.


    Hope this helps someone in the future!

    • Official Post

    First order of business, transfer everything to backblaze and a 6TB drive!

    In a word, "smart".


    I've noticed that home and small business RAID users usually fall into one of two camps:
    1. Those who have never had a problem, or had minor problems they easily recovered from (using, perhaps, an online hot spare). They love RAID and promote it.
    2. Those who had a major problem and lost their entire array. These folks, almost without fail, "had an array" in the past.


    (There's a 3rd group who actually backup their arrays and are ready for a full array failure, but among home and small business users, they seem to be exceedingly rare.)


    With the sizes of drives available these days (up to 8TB), I don't understand why NAS users feel the need to pool disks. Does administering a NAS somehow become easier with a common mount point? Even if it does, things become more complicated and inconvenient when the inevitable physical disk problem crops up in the pool.
    ________________________________


    In a JBOD config - it's easy enough to divide up data folders, in a logical manner, over different physical drives. When a NAS puts shares out to the network, the source physical drive is irrelevant.


    Further, it's easy enough to Rsync network shares / folders, or even entire drives, to a local destination or a remote server, without risking the quirks of running a RAID1 broken mirror. Rsync provides true backup, versus the false sense of security that users believe they're getting with RAID.
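
    A minimal sketch of such a job, with hypothetical OMV-style source and backup paths:

    Code
    # mirror a data share to a backup disk (paths are examples);
    # -a preserves permissions/times, --delete makes the destination an exact copy
    rsync -aH --delete /srv/dev-disk-by-label-data/shares/ /srv/dev-disk-by-label-backup/shares/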
