Lost RAID 5 configuration and one disk not showing under File Systems to recreate it

  • Hello, I am definitely a beginner at OMV and Linux, and I am trying to learn as much as possible, especially now, to keep some sanity and take my mind a little off the strange days ahead of us... Best wishes to all.


    I did my best during the last week searching threads, forums and discussions to see if I could figure out how to solve the issue myself, but I think I have hit the wall of my knowledge.


    I am running OMV on a Raspberry Pi 4, testing it to learn something new but useful.

    I created a USB3 RAID5 array (I know it is not super advisable, but there is no money to do something better right now).

    The RAID ran just fine with no issues for several months, but I had a problem when turning off the Pi before a long absence.

    The array seems corrupted; it disappeared. The problem is likely connected to how I shut down the drives and the Raspberry Pi: I think I powered the drives off while the Pi was not completely down.

    (yeap... everyone does stupid stuff eventually)

    Upon powering up, the RAID configuration was gone, but the "shared folder" was still there.


    From what I read, it may be possible to re-create the RAID array, and it will eventually find the partitions and the logic to rebuild. With some luck, I may still get back the data I saved on it.
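    If I understood those threads correctly, the attempt would look something along these lines (this is just my rough understanding pieced together from other posts, so it may well be wrong):

    Code
    mdadm --examine /dev/sd[bcde]1       # check what RAID metadata is still on each partition
    mdadm --assemble --scan --verbose    # try to re-assemble the old array from that metadata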

    After some back and forth I figured out how to remove the "referenced" flag on the shares and delete the original share.


    My issue now is that one of the drives, /dev/sdb1 (the first one in the array), shows as a healthy disk in the "Disks" list but not under "File Systems".

    It is available to be "re-added" using "Create", but that would reformat the disk.


    The output of blkid seems to indicate it "lost" its original LABEL (SEA2T2) and is now showing as still part of a RAID that does not exist anymore.

    Here is what I have from blkid:


    /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="5203-DB74" TYPE="vfat" PARTUUID="6c586e13-01"

    /dev/mmcblk0p2: LABEL="rootfs" UUID="2ab3f8e1-7dc6-43f5-b0db-dd5759d51d4e" TYPE="ext4" PARTUUID="6c586e13-02"

    /dev/sda1: LABEL="4TSeaNAS2" UUID="ac0b6fb5-d21b-437b-8d6d-dcb977bf8093" TYPE="ext4" PARTUUID="97158e46-8353-4ccb-85da-0f86878b30b3"

    /dev/sdb1: UUID="c5524020-59ed-6513-a993-15c5c5324bc1" UUID_SUB="e6dbc4a2-7c1d-a04f-88b5-21b777c840ab" LABEL="raspberrypi:0" TYPE="linux_raid_member" PARTUUID="56050e59-4a98-493a-859f-a5c76131c106"

    /dev/sdc1: LABEL="SEA2T1" UUID="f75f98a8-f566-4a8b-8074-33c397b9f4e8" TYPE="ext4" PARTUUID="bc663f22-c19b-4691-b9fc-d78a9182538d"

    /dev/sdd1: LABEL="WD2T1" UUID="75feefd8-686a-4442-bf41-515331ea264c" TYPE="ext4" PARTUUID="157407ad-db42-4c47-b2d1-a63e31f329a9"

    /dev/sde1: LABEL="WD2T2" UUID="7c767abf-a261-4459-b7ad-d0e72c8fda90" TYPE="ext4" PARTUUID="d0851103-007b-4f07-a8ca-a9ab2a5092ee"


    Disks sdb1 to sde1 (4 disks) were part of the original RAID5 array.


    I am wondering if someone can help me fix the incorrect LABEL and TYPE above, hopefully without reformatting the drive.

    If the drive shows up again under "File Systems", I think I have a chance to rebuild the RAID.


    If the only way is to reformat the drive, is there any chance to re-create the RAID5 using the drive that lost its LABEL plus the other 3 that now appear standalone, and have the RAID5 array rebuild?
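    For reference, these read-only checks should show what is actually written on that partition (as far as I understand, neither of them changes anything on disk):

    Code
    wipefs /dev/sdb1            # without -a this only lists the signatures found on the partition
    mdadm --examine /dev/sdb1   # prints the leftover RAID superblock details, if any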


    Any suggestions will be greatly appreciated!

    Note: this is my first post ever... not a young fellow anymore. Normally I was able to carve the solution out of others' posts. :)

    Cheers

    Brimo60

  • Thanks Geaves for being willing to help, here you go...

    (if RAID is not backup, what do we call RAID1? A parachute? - smile!!)

    Code
    root@raspberrypi:/dev# cat /proc/mdstat
    Personalities :
    md0 : inactive sdb1[4](S)
          1953381447 blocks super 1.2
    Code
    root@raspberrypi:/home/pi# blkid
    /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="5203-DB74" TYPE="vfat" PARTUUID="6c586e13-01"
    /dev/mmcblk0p2: LABEL="rootfs" UUID="2ab3f8e1-7dc6-43f5-b0db-dd5759d51d4e" TYPE="ext4" PARTUUID="6c586e13-02"
    /dev/sda1: LABEL="4TSeaNAS2" UUID="ac0b6fb5-d21b-437b-8d6d-dcb977bf8093" TYPE="ext4" PARTUUID="97158e46-8353-4ccb-85da-0f86878b30b3"
    /dev/sdb1: UUID="c5524020-59ed-6513-a993-15c5c5324bc1" UUID_SUB="e6dbc4a2-7c1d-a04f-88b5-21b777c840ab" LABEL="raspberrypi:0" TYPE="linux_raid_member" PARTUUID="56050e59-4a98-493a-859f-a5c76131c106"
    /dev/sdc1: LABEL="SEA2T1" UUID="f75f98a8-f566-4a8b-8074-33c397b9f4e8" TYPE="ext4" PARTUUID="bc663f22-c19b-4691-b9fc-d78a9182538d"
    /dev/sdd1: LABEL="WD2T1" UUID="75feefd8-686a-4442-bf41-515331ea264c" TYPE="ext4" PARTUUID="157407ad-db42-4c47-b2d1-a63e31f329a9"
    /dev/sde1: LABEL="WD2T2" UUID="7c767abf-a261-4459-b7ad-d0e72c8fda90" TYPE="ext4" PARTUUID="d0851103-007b-4f07-a8ca-a9ab2a5092ee"
    /dev/mmcblk0: PTUUID="6c586e13" PTTYPE="dos"
    Code
    root@raspberrypi:/home/pi# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=1 metadata=1.2 name=raspberrypi:0 UUID=c5524020:59ed6513:a99315c5:c5324bc1
       devices=/dev/sdb1

    It was a RAID5 with 4 external HDs (2 TB each). Two HDs are Seagate and two are WD Elements.

    Those HDs are connected to the Raspberry Pi 4 through a single USB3 port, via an Atolla USB3 powered hub.


    The second USB3 port of the Pi is connected to a Seagate Plus 4TB external HD. This drive is/was not part of the RAID5.


    All was fine for about 3 months.

    Until I decided to turn off the Pi and the HDs before staying away for a couple of weeks.

    I don't recall the exact sequence, but my PC was already packed and I did not take it out to SSH into the Pi and shut it down. I believe I turned the Pi off using the power switch and then turned each drive off (the hub has individual power buttons). It could have been the other way around, but I am 90% sure I forced the Pi 4 down (like what would happen in a power outage).


    Once back from the trip I powered everything up again. There was no problem with the standalone 4TB Seagate drive, but the RAID5 was damaged.

    I had an image of the Pi and put that back on the microSD card, but that did not fix the issue.


    I was able to "delete" the RAID that was listed as referenced, but as mentioned, sdb1 still seems to hold part of the RAID configuration.

    • Official Post

    It's unrecoverable:


    1. RAID 5 allows for one drive failure

    2. mdstat shows the RAID md0 as inactive (not a problem in itself) but with only one drive, sdb

    3. blkid references only one drive, sdb, as TYPE="linux_raid_member"; sd[cde] do not have that signature

    4. mdadm.conf has no reference to a RAID array


    I can only surmise that during the shutdown there was some corruption.


    If you want confirmation of what the above should look like, then I'm happy to post that information.
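    In the meantime, the checks behind points 2 to 4 can be re-run at any time; roughly (standard paths on a Debian/OMV install):

    Code
    cat /proc/mdstat                  # point 2: state of any md arrays
    blkid                             # point 3: which partitions still carry a linux_raid_member signature
    mdadm --examine /dev/sd[bcde]1    # per-partition RAID superblock details, if any
    cat /etc/mdadm/mdadm.conf         # point 4: array definitions known to mdadm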

  • I see. Not necessary.

    The next question is: in order to start over, should I re-add sdb1 under "File Systems" and reformat it?

    As it is now, it shows under "Disks" but not under "File Systems".


    This was a "test" for me. I had never used RAID5 before, and the first experience evidently was not positive.

    I will get back to "safety" and consider having 2 independent arrays, but RAID1 instead.

    Any advice to the contrary, based on your experience?

    Best

  • Last but not least.

    For the next time I need to shut down, what would be the right process?

    Can I simply shut down the Pi via the OMV web interface, or is it necessary to SSH in and shut the unit down from the command line?

    I assume the drives should be the last thing to turn off, after the Pi has been powered off.

    And of course, to protect against any power outage, add an APC UPS...

    Thanks for helping

    Stay safe and healthy

    • Official Post

    To answer your questions, you have to ask yourself why you need a RAID setup and what you understand about mdadm software RAID.


    A better way to do this is to use rsync, which will copy/update the data from one drive to another. You have 4x2TB and 1x4TB; that's a lot of drive space connected to an SBC via a hub.
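    As a rough example of that kind of one-way copy (the mount paths below are only illustrative, use whatever your data drives are actually mounted as under /srv):

    Code
    # mirror one data drive onto a second one (add --dry-run first to preview the changes)
    rsync -av --delete /srv/dev-disk-by-label-SEA2T1/ /srv/dev-disk-by-label-WD2T1/

    OMV can also schedule this kind of copy as an rsync job, so it does not rely on remembering to run it by hand.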


    There are UPSs available for a Pi; these are powered by rechargeable lithium batteries, but would they be enough to keep the drives up? Probably not. The way to shut down safely is to shut down via the WebUI, wait for the Pi to shut down so that the drives are in effect disconnected, and then power them off.
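    From the shell that sequence would be something like:

    Code
    shutdown -h now    # shut the Pi down cleanly (same effect as the WebUI shutdown button)
    # wait until the Pi has fully halted, then switch the drives off at the hub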

  • Can I simply shut down the Pi via the OMV web interface, or is it necessary to SSH in and shut the unit down from the command line?

    Both are possible. Sometimes I shut down my NAS via the OMV WebUI if I am already logged in, sometimes with a shutdown command from the shell: shutdown -h now. That shouldn't really make a difference.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • One more request: when I tried to re-add the drive sdb1 to File Systems (the one that still shows as part of the RAID) by reformatting it, I got the following error:

    What would be the best way to release the disk? It seems it is still considered part of the array that is now gone.
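    From what I have read so far, the leftover superblock has to be cleared before OMV will offer the disk for formatting; would something along these lines be the right way to do it (only once I accept that the old data is gone)?

    Code
    mdadm --stop /dev/md0               # stop the leftover inactive array
    mdadm --zero-superblock /dev/sdb1   # remove the RAID superblock from the partition
    wipefs -a /dev/sdb1                 # clear any remaining signatures so the disk can be reused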
