Hello, I am definitely a beginner at OMV and Linux, and I'm trying to learn as much as possible, especially now, to keep some sanity and my head a little bit away from the strange days that are ahead of us... Best wishes to all.
I did my best during the last week searching threads, forums and discussions to see if I could figure out and solve the issue I am having myself, but I think I have hit the wall of my knowledge.
I am running OMV on a Raspberry Pi 4, testing to learn something new but useful.
I created a USB3 RAID5 array (I know it is not super advisable, but no money to do something better right now).
The RAID ran just fine with no issues for several months, but I had a problem when turning off the Pi for a long absence.
The array seems corrupted - it disappeared. I think the problem is likely connected to how I shut down the drives and the Raspberry Pi: I believe I powered the drives off while the Pi was not completely down.
(yeah... everyone does stupid stuff eventually)
Upon powering up, the RAID configuration had disappeared, but the "shared folder" was still there.
From what I read, it may be possible to re-create the RAID array, and it will eventually find the partitions and the logic to rebuild. With some luck, I may still get back the data I saved on it.
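Based on other threads I found, something like this might be a safe first step to check and attempt a non-destructive reassembly (these are mdadm commands I picked up from forum posts; I'm not sure they apply exactly to my setup):

```shell
# Show whether the kernel still sees any md arrays at all
cat /proc/mdstat

# Ask mdadm to reassemble arrays from the metadata already on the members
# (as I understand it, --assemble only reads the existing superblocks,
# it does not rewrite them)
sudo mdadm --assemble --scan --verbose
```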
After some back and forth I figured out how to remove the "referenced" flag on the shares and delete the original share.
My issue now is that one of the drives, /dev/sdb1 (the first one in the array), shows as a healthy disk in the "Disks" list but not under "File Systems".
It is available to be re-added using "Create", but that would reformat the disk.
The output of blkid seems to indicate it "lost" its original LABEL (SEA2T2) and still shows as part of a RAID that no longer exists.
Here is what I have from blkid:
/dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="5203-DB74" TYPE="vfat" PARTUUID="6c586e13-01"
/dev/mmcblk0p2: LABEL="rootfs" UUID="2ab3f8e1-7dc6-43f5-b0db-dd5759d51d4e" TYPE="ext4" PARTUUID="6c586e13-02"
/dev/sda1: LABEL="4TSeaNAS2" UUID="ac0b6fb5-d21b-437b-8d6d-dcb977bf8093" TYPE="ext4" PARTUUID="97158e46-8353-4ccb-85da-0f86878b30b3"
/dev/sdb1: UUID="c5524020-59ed-6513-a993-15c5c5324bc1" UUID_SUB="e6dbc4a2-7c1d-a04f-88b5-21b777c840ab" LABEL="raspberrypi:0" TYPE="linux_raid_member" PARTUUID="56050e59-4a98-493a-859f-a5c76131c106"
/dev/sdc1: LABEL="SEA2T1" UUID="f75f98a8-f566-4a8b-8074-33c397b9f4e8" TYPE="ext4" PARTUUID="bc663f22-c19b-4691-b9fc-d78a9182538d"
/dev/sdd1: LABEL="WD2T1" UUID="75feefd8-686a-4442-bf41-515331ea264c" TYPE="ext4" PARTUUID="157407ad-db42-4c47-b2d1-a63e31f329a9"
/dev/sde1: LABEL="WD2T2" UUID="7c767abf-a261-4459-b7ad-d0e72c8fda90" TYPE="ext4" PARTUUID="d0851103-007b-4f07-a8ca-a9ab2a5092ee"
Disks sdb1 to sde1 (4 disks) were part of the original RAID5 array.
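If it helps, I could also dump the leftover RAID metadata from each former member (as far as I can tell from the man page, --examine is read-only, so this should be safe to run):

```shell
# Print the md superblock (if any) found on each former member partition
for p in /dev/sd[b-e]1; do
    echo "== $p =="
    sudo mdadm --examine "$p"
done
```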
I'm wondering if someone can help me fix the incorrect LABEL and TYPE above - hopefully without reformatting the drive.
If the drive shows up again under "File Systems", I think I have a chance to rebuild the RAID.
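From reading the man pages, I believe wipefs only lists the signatures it finds unless you pass -a, so something like this should be a safe way to see what is actually written on /dev/sdb1 (I have not dared to run anything destructive yet):

```shell
# List (do NOT erase) all filesystem/RAID signatures on the partition;
# wipefs only erases when called with the -a option
sudo wipefs /dev/sdb1
```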
In case the only way is to reformat the drive, is there any chance to re-create the RAID5 array using the drive that lost its LABEL together with the other 3 that are now standalone, and have the RAID5 array rebuild?
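I have seen posts mentioning re-creating the array over the same members, with something like the command below, but I am afraid of data loss, so I would rather have someone confirm before I try it (the device names and order are just my guess from blkid):

```shell
# Seen in other threads: re-create the array over the same members in the
# original order, skipping the initial resync. Supposedly VERY risky if
# the order, metadata version or chunk size differs from the original
# array, so I have NOT run this.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    --assume-clean /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
```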
Any suggestions will be greatly appreciated!
Note: this is my first post ever... not a young fellow anymore. Normally I was able to carve the solution out from others' postings.