RAID Unable to mount after update and reboot [SOLVED]

  • [Reposted with updated info]

    When I went to access my SMB share on my computer, I noticed it was stuck in read-only mode for some reason. So I decided to update the system via the web interface and reboot to see if that would fix it. Now when I boot the system, the RAID array doesn't even mount.


    I have had issues with power outages over the last few months as well, which probably hasn't been helping.


    I have 4 hard drives (sda, sdb, sdc, and sdd) which are in the RAID 0 array.

    I'm currently running version 5.6.26-1.



    I keep getting the error below when trying to mount the RAID array in the File Systems tab.


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-uuid/6286e1c0-da5a-481e-98e5-5efc57f0463c' 2>&1' with exit code '32': mount: /srv/dev-disk-by-label-CorePoolData: /dev/md0 already mounted or mount point busy.
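    A hedged sketch of how to see which side is busy before retrying the mount (findmnt is a standard util-linux tool; the device and mount point below are the ones from the error message):

```shell
# "already mounted or mount point busy" means either the device is mounted
# somewhere else or another process is holding it. findmnt answers both
# questions (device and mount point taken from the error message above):
findmnt --source /dev/md0 || echo "device not mounted"
findmnt --mountpoint /srv/dev-disk-by-label-CorePoolData || echo "mount point free"
# fuser -vm /dev/md0   # would additionally list the processes holding it
```

    As the lsof output further down in the thread shows, in this case it was a still-running fsck holding the device.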


    Here are the results of a few other commands as suggested on the RAID questions template.


    "cat /proc/mdstat"

    Code
    Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid0 sdd[3] sda[0] sdb[1] sdc[2]
          10743784960 blocks super 1.2 512k chunks
          
    unused devices: <none>


    "blkid"

    Code
    /dev/sde1: UUID="0925-8731" TYPE="vfat" PARTUUID="8df33f5f-045b-40e0-9069-417541e8b206"
    /dev/sde2: UUID="e82406a6-2d1e-40a6-8407-39e6244667a3" TYPE="ext4" PARTUUID="270da3fa-168e-4b5a-8660-ad5ee2087e92"
    /dev/sde3: UUID="c7521f20-4245-4779-897a-a168f27daf98" TYPE="swap" PARTUUID="c33c815f-d140-411f-8be2-ebaadf671173"
    /dev/sdc: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="d81adc61-5679-43cb-44d4-897697ae6797" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdb: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="72eca10b-a959-b3d9-2cf3-31ba469ea87e" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdd: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="c21c472f-43f2-546f-a21f-e7f300d1d143" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/md0: LABEL="CorePoolData" UUID="6286e1c0-da5a-481e-98e5-5efc57f0463c" TYPE="ext4"
    /dev/sda: UUID="87674581-aa4a-abfe-9312-d183a8ed4906" UUID_SUB="02249442-0c9b-53e9-0680-f81e34c7a115" LABEL="cb-server.cb-server:CorePool" TYPE="linux_raid_member"
    /dev/sdf1: LABEL="LINUX MINT" UUID="CCF5-21DA" TYPE="vfat" PARTUUID="6b2409a3-01"


    "fdisk -l | grep 'Disk '"




    "cat /etc/mdadm/mdadm.conf"



    "mdadm --detail --scan --verbose"


    Code
    ARRAY /dev/md0 level=raid0 num-devices=4 metadata=1.2 name=cb-server.cb-server:CorePool UUID=87674581:aa4aabfe:9312d183:a8ed4906
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd



    "lsof /dev/md0"


    Code
    COMMAND   PID USER   FD   TYPE DEVICE      SIZE/OFF NODE NAME
    fsck.ext4 471 root    3u   BLK    9,0 0x50088500000  274 /dev/md0
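    This lsof output is the real answer to the mount error above: a boot-time fsck.ext4 (PID 471) still has /dev/md0 open, which is exactly what produces "already mounted or mount point busy". The usual advice is to let the check finish rather than kill it (interrupting fsck can make things worse); a minimal sketch of waiting it out:

```shell
# A long fsck on a ~10 TB array can take a while; poll until it exits
# rather than killing it (interrupting fsck risks further damage):
while pgrep -x fsck.ext4 > /dev/null; do
    sleep 30
done
echo "fsck.ext4 no longer running"
```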


    "fdisk -l"


  • I'm far from an mdadm expert regarding recovery, and might not be able to help much there, but I do have to comment on your RAID config in general on the following two points.


    1. RAID 0 is a bad choice for anything you can't afford to lose. There is no data redundancy: lose one drive and you lose everything. RAID 1, 10, or 5 would be better choices for safety. Given my point below, a RAID 5 using the WD drives would have been my choice.


    2. If I am reading your config correctly, you created that RAID 0 from 3 WD Red 4 TB drives and 1 Seagate Barracuda Green 2 TB drive. Once again, a bad choice, for two reasons. (1) The Reds are NAS drives and the Seagate is a desktop drive; NAS drives usually have some extra RAID-optimized firmware modifications that desktop drives don't have. (2) The drives are not the same size. You should always use drives of the same size, and preferably the same make, model, and firmware version, in any kind of RAID build.

  • Yes, I know my configuration isn't the best, but it's what I could afford.


    I've got a backup of everything from last year, but I've added a lot of data since then.


    I just want to mount it so I can at least get the data copied off, then wipe the system and start again.

  • The difference in drive sizes means you either have to partition the 4 TB drives down to 2 TB, giving you 8 TB usable, or, if you actually tried to use the full drives (assuming mdadm even let you), the RAID 0 stripe can't continue across all four drives once the 2 TB Seagate is full; fill past 8 TB and you have a broken stripe and lose the data. Basically, for the stripe to work, every drive or every partition has to be the same size. You could split the 4 TB drives into 2 x 2 TB partitions and use that space as extra drives, but don't use RAID 0 on anything, and don't include both partitions from one drive in the same RAID set. Lose that one drive and you lose two partitions, which as far as the RAID is concerned is two drives, and there goes your data again.


    The 3 WDs in a RAID 5 would also give you 8 TB usable, but with some data redundancy: you would be able to lose a drive without data loss, giving you time to replace the drive and initiate a rebuild. The Seagate can be used as a standalone drive for non-critical stuff.
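    The 8 TB figure follows from how RAID 5 allocates space: one drive's worth of capacity goes to parity, so usable space is (N - 1) x drive size. A trivial check:

```shell
# RAID 5 usable capacity: one drive's worth of space is consumed by parity.
N=3          # three WD Red 4 TB drives, as suggested above
DRIVE_TB=4
echo "RAID 5 usable: $(( (N - 1) * DRIVE_TB )) TB"   # -> RAID 5 usable: 8 TB
```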


    As I mentioned, I am far from an mdadm expert, and usually rely on UFS Explorer (Universal File System Explorer) when I need to do any data recovery, which I think you may need based on your build. UFS Explorer is not expensive if you need to go down that road to try some recovery, but I think you should resign yourself to the possibility of having to restore from backup and lose anything that got lost in the drive-size differences. If you absolutely have to have everything as one large volume, there are some OMV5/OMV-Extras plugins that can do it, such as Union Filesystems and mergerfs. I don't think they have all been ported to OMV6 yet, and I've never used them, so I can't really comment any further, aside from the fact that these don't actually do any RAID and simply make everything appear as one big volume.


    Personally, on my home server I run 6 Seagate 4 TB IronWolf drives in a RAID 5 and 2 x 3 TB IronWolf drives in a RAID 1, all created with mdadm in OMV, using an XFS file system, and have never had a problem. (I will confess, though, that I have been building and working with hardware-based RAID for 30 years for use in video editing and SAN storage systems, so RAID levels I understand.)

  • Update on things.


    Tried to shut down the system, but it got stuck on "Reached target Unmount All Filesystems" and did not shut down.


    Had to force a shutdown, then booted to a live Linux Mint USB.


    ran "sudo mdadm --assemble --scan --verbose"


    which made the RAID array show up as a disk, which was great, but I could not mount it. I kept getting the error "error mounting /dev/md127 at /home/CorePoolData: can't read superblock on /dev/md127".


    Then found another page which showed how to find backup superblocks (their example used an LVM path; for this array the device would be /dev/md127):

    dumpe2fs /dev/vgname/lvname | grep superblock



    I then tried to mount the drive using a backup superblock with:

    mount -o -sb=32768 /dev/md127 /home


    which gave the error "mount: /home: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program or other error"
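    Two things about that mount command likely caused this error (hedged, since we only see the transcript): the option string has a stray dash (`-o -sb=...` should be `-o sb=...`), and mount's `sb=` option expects the superblock location in units of 1 KiB, while dumpe2fs reports it in filesystem blocks (typically 4 KiB, but verify against the "Block size:" line in the dumpe2fs output). A sketch of the conversion:

```shell
# dumpe2fs lists backup superblocks in filesystem blocks; mount's sb=
# option wants 1 KiB units. Convert block 32768 assuming a 4096-byte
# filesystem block size (an assumption -- verify with dumpe2fs):
FS_BLOCK=4096
DUMPE2FS_SB=32768
SB_KIB=$(( DUMPE2FS_SB * FS_BLOCK / 1024 ))
echo "sb=${SB_KIB}"                                   # -> sb=131072
echo "mount -o sb=${SB_KIB},ro /dev/md127 /mnt"
```

    e2fsck, by contrast, takes the dumpe2fs number directly, e.g. `e2fsck -b 32768 /dev/md127`.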


    Now trying to figure out how to fix this and mount it to recover some data.

  • GOOD NEWS UPDATE!


    After trying many things with no success, I finally found a command that worked!


    After trying many different superblocks, the command below worked!


    sudo mount -t ext4 /dev/md127 /mnt


    It seems mounting it while explicitly specifying the file system type worked!
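    One hedged refinement while copying data off a possibly damaged array: mount it read-only so nothing else gets written. The sketch below only builds the command (same device and mount point as above) rather than running it:

```shell
# Read-only variant of the mount that worked, to avoid any further writes
# while copying data off (device and mount point from the post above):
DEV=/dev/md127
MNT=/mnt
echo "sudo mount -t ext4 -o ro $DEV $MNT"
```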


    I can finally sleep easy tonight after backing up everything!

  • seab33

    Added the Label resolved
  • seab33

    Changed the title of the thread from “RAID Unable to mount after update and reboot” to “RAID Unable to mount after update and reboot [SOLVED]”.
    • Official Post

    If anyone comes across this thread and sees the resolved heading, note that it's resolved for the OP and should not be referenced as a general solution!

    RAID is not a backup! Would you go skydiving without a parachute?


    OMV 6x amd64 running on an HP N54L Microserver

  • You should still get current copies of everything off that array, wipe it, and make a properly structured RAID array, then copy the data back on, to avoid future problems.


    Just because you managed to get the array mounted does not mean the incorrect structure won't cause problems in the future.
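    For whenever that rebuild happens, here is a hedged sketch of the underlying mdadm steps (device names are the ones from this thread and can change between boots, so verify with lsblk first; OMV users would normally do this from the web UI instead). Everything is wrapped in a dry-run guard because these commands are destructive:

```shell
# DESTRUCTIVE sketch -- only flip DRY_RUN to false once the data is safely
# copied off and the device names have been double-checked with lsblk.
DRY_RUN=true
run() { if $DRY_RUN; then echo "would run: $*"; else "$@"; fi; }

run mdadm --stop /dev/md127                       # stop the old array
run mdadm --zero-superblock /dev/sda /dev/sdb /dev/sdc /dev/sdd
run mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sda /dev/sdb /dev/sdc                    # RAID 5 on the three 4 TB Reds
run mkfs.ext4 -L CorePoolData /dev/md0
```

    Afterwards, `mdadm --detail --scan >> /etc/mdadm/mdadm.conf` followed by `update-initramfs -u` is the usual Debian way to persist the array so it assembles as /dev/md0 at boot instead of /dev/md127.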
