Cannot mount RAID 5 after adding a disk

  • Hello All,

I'm using 6.0.29-1 (Shaitan) on an x86 Intel CPU. I don't really have much Linux experience, so please be patient with my stupid questions.

I added a new disk to my RAID 5 array and wanted to resize the existing file system, but nothing happened even after multiple tries; clicking Resize in the web GUI didn't bring up anything.

So I thought I would unmount it first and then try again, but now it doesn't show up anymore. I can't mount it again for some reason; the web GUI doesn't see my RAID 5 array.


In the RAID menu it shows up and seems to be fine, doing its thing, rebuilding.


I tried to mount it manually from the CLI, which worked; I could browse it with Midnight Commander afterwards and all my stuff seems to be there. Even after mounting it from the CLI, the RAID array still didn't show up under File Systems. So I unmounted it from the CLI, which again worked.
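For the record, the manual mount was nothing fancy, roughly the following (a sketch; /mnt/raid is just an example of a temporary mount point, not anything OMV creates):

Code
mkdir -p /mnt/raid          # temporary mount point, name picked arbitrarily
mount /dev/md0 /mnt/raid    # mount the ext4 file system that lives on the array
umount /mnt/raid            # unmount it again afterwards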

    I ran blkid:

    blkid

/dev/sda: UUID="818c89e6-2533-ab78-cdb1-33a6b682d6dd" UUID_SUB="0eeea586-8733-833b-2f9d-c46fc473004f" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/sdd1: UUID="7f3a049c-8fb5-40b4-a91d-dc3e96e64759" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="55f8f9d2-01"
    /dev/sdd5: UUID="b43a25c2-e22a-4f41-a29a-96adfd7a7696" TYPE="swap" PARTUUID="55f8f9d2-05"
    /dev/sdc1: LABEL="Bence" UUID="cdac8845-b0d5-4793-912c-1376987c5738" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b7e77052-69e6-407c-99bd-c2966bb044a1"
    /dev/sdb: UUID="818c89e6-2533-ab78-cdb1-33a6b682d6dd" UUID_SUB="84087ba7-0240-b07d-f693-9af12007a736" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/sdf: UUID="818c89e6-2533-ab78-cdb1-33a6b682d6dd" UUID_SUB="a56305f5-e9a6-e166-6db8-1ea854f8406b" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/sde: UUID="818c89e6-2533-ab78-cdb1-33a6b682d6dd" UUID_SUB="c2816463-6bd0-b1f4-8801-aa8648075c98" LABEL="openmediavault:0" TYPE="linux_raid_member"
    /dev/md0: UUID="107fb4d3-904a-4171-b18d-60c23be38edc" BLOCK_SIZE="4096" TYPE="ext4"


So to my understanding the system sees my /dev/md0 array, but for some reason it can't be mounted from the web GUI. I really don't want to wipe it; although there is nothing crucial on it, it would kind of suck to lose around 8 TB of media.


I also noticed the system got very sluggish after I added the new drive to the RAID array. I often get gateway timeouts while navigating the web GUI, and my CPU graph looks very different from what it used to, with a lot of I/O wait. I assume it's due to the RAID rebuilding; my motherboard and CPU are not particularly built for this.



Can you help me make my RAID visible in the web GUI File Systems menu again? My main goal is to make it accessible from my Windows PC via SMB again.

• Official Post

Can you help me make my RAID visible in the web GUI File Systems menu again

Your post is somewhat confusing. Post the output of the following, and please use the code button </> to enter the output of each; it makes it easier to read:


    cat /proc/mdstat

    fdisk -l | grep "Disk "

    mdadm --detail /dev/md0


How did you add the new drive to the array?


You won't be able to resize the file system until the array has finished adding the new drive. Even then, the capacity shown in RAID management will be incorrect; that has to be set from the CLI, unless it's been changed in V6. Then, and only then, you should be able to resize the file system using the GUI.
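For reference, the CLI step being referred to is mdadm's grow mode, with the file system resize done afterwards, something along these lines (a sketch only, assuming /dev/md0 and four member drives as in this thread; in your case the GUI has already started the reshape, so check man mdadm before running anything):

Code
# tell mdadm the array now spans four member devices (this is what triggers the reshape)
mdadm --grow /dev/md0 --raid-devices=4
# once the reshape has finished, grow the ext4 file system to fill the array
# (this step can also be done with the Resize button in the web GUI)
resize2fs /dev/md0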

What I can't understand is how or why the file system should unmount; AFAIK the current file system should remain accessible. The 'slowness' is due to the array rebuilding, which is normal.

  • Hi!


First of all, on the topic of why it should unmount: honestly, I don't really know. Coming from a Windows background, simply re-attaching drives and the like seems to fix a lot of problems and usually doesn't break anything. As I didn't get any error or prompt in the web GUI about why nothing was happening, it looked like something I could try, since re-attaching seemed easy enough at first glance. That was probably a rookie mistake and not how Linux should be approached. I didn't really have issues like this on OMV3 (I upgraded recently), which seemed a bit more noob-safe, IMO.


    The outputs:

I understand the resize part now; I was just impatient. I found some topics about growing a RAID array which mentioned that you have to resize after adding the drive. I probably didn't dive deep enough to discover that you first have to wait for it to reshape.

• Official Post

OK, the output looks fine: there are 4 drives in the array, and the size the array shows for /dev/md0 is for the initial three drives, I assume.


However, there is one issue -> finish=12114.7min -> if I read that right, it's going to take over 8 days to rebuild that array 8| Something's wrong; it shouldn't take that long.

Yes, it's kind of weird to me too. These are WD Red and Seagate IronWolf NAS drives, obviously not optimized for speed, but still... I assume it may have to do with the motherboard; maybe I have to change some SATA settings. Initially creating the array took around 40 hours, which is still slow, but not 8 days :D


Any ideas why I can't mount it via the web GUI? Or is that only due to the rebuilding and expected? From what I found online I should still be able to use it, but maybe I just misunderstood.


Also, I can mount it from the CLI, but then I can't add it to an SMB share as far as I know. To do that I would need to add a line to the Samba config (not sure of the name off the top of my head), and the file on OMV specifically tells me not to edit it when I open it.

I just made a discovery. When I checked the file systems with df -aTh, I didn't see my RAID array (/dev/md0).

However, to check the BIOS I had to restart the NAS. Then I checked again with df -aTh and lo and behold:

It's there again. My assumption is that somehow umount didn't really remove it from some table, but did remove it from others. Hence the GUI can't see it as mounted, but doesn't let me mount it either.
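A quick way to see which of those tables still mention the array (plain grep, nothing OMV-specific):

Code
# lists every line referencing md0, prefixed with the file it came from
grep md0 /etc/fstab /etc/mtab /proc/mounts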


    Any tips?

• Official Post

    Any tips

Sorry, no. Something you have done while adding the new drive to expand the array and attempting to mount/unmount/reboot has thrown a gremlin in the works, as there is no reason for the array to unmount.

What you are doing has been done before on the forum, successfully and without issue; the only downside has been that the new size for the RAID array has to be set via the CLI, whereas expanding the file system size works in the GUI.


BTW, of your WD drives one is a WD Purple and the other is a WD Red; the WD Red is an SMR drive, and these drives are usually not advised for use in a RAID array. There was a recent thread here with another user having 2 SMR drives; TBH I'm actually amazed his data survived the stop/starts.

Sorry, no. Something you have done while adding the new drive to expand the array and attempting to mount/unmount/reboot has thrown a gremlin in the works, as there is no reason for the array to unmount.

    What you are doing has been done before on the forum, successfully and without issue; the only downside has been that the new size for the RAID array has to be set via the CLI, whereas expanding the file system size works in the GUI.


    BTW, of your WD drives one is a WD Purple and the other is a WD Red; the WD Red is an SMR drive, and these drives are usually not advised for use in a RAID array. There was a recent thread here with another user having 2 SMR drives; TBH I'm actually amazed his data survived the stop/starts.

Well, that sucks... Last question: do you think simply re-installing OMV would help? From what I can see, the drives and the RAID array itself are okay.

I assume there is some leftover fragment of the previous mount stuck somewhere, and it might be easier to do a fresh install than to try to troubleshoot this.

• Official Post

Do you think simply re-installing OMV would help

    Possibly, but it's a long shot

From what I can see, the drives and the RAID array itself are okay.

Yes and no. The two Seagate drives are fine; the WD Purple I'm not sure about, other than the fact that they are produced for use with security cameras. The WD Red is the bad apple: when buying drives, never use drives with FRX in the model reference, such as yours (Disk model: WDC WD30EFRX-68N). These are SMR drives, and they can be vvveeerrryyy slow during large writes, which is technically what a rebuild is doing.


If you start again you'll probably have the same issues as you have now with the rebuild time. BTW, which one was the new drive?

I think I've figured it out. I found an older post with a similar issue where the unmounted drive kept reappearing. The solution was deleting the entry manually from /etc/fstab.
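For reference, the kind of entry to look for in /etc/fstab looks roughly like this (a sketch; the UUID is the one /dev/md0 reports in the blkid output above, but the mount-point path and the exact options OMV writes may differ):

Code
# stale mount entry for the array, removed by hand
UUID=107fb4d3-904a-4171-b18d-60c23be38edc  /srv/dev-disk-by-uuid-107fb4d3-904a-4171-b18d-60c23be38edc  ext4  defaults,nofail  0 2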


I did that, restarted the NAS, and now the drive appears as mountable under File Systems in the web GUI. The confirmation is taking a long time, though, so maybe it will fail. I don't know, we will see, but this seems to be an improvement.


The new drives were the Seagate ones. I was very low on space, so I bought 2 drives, copied everything to my WD Red and some external drives, created a RAID array from the 2 Seagates and the WD Purple, and then wanted to add the WD Red.


I didn't have an option to choose; in my country it's not common to list the exact model for hard drives. In 99% of cases it just says WD Red x TB and you get what you get.


I didn't really notice any significant performance problem with either WD drive before the RAID, but you are correct on everything. It sucks that it will take a week to rebuild, but it is what it is. If one of the WDs dies, well, that's what RAID 5 is for :)


At this point I have really accepted that I will likely lose the data on the RAID, which sucks, but as I said, nothing on it is crucial. If I don't, that's lucky, and this was a learning experience.

  • Quick update:

    I froze the reshape process with

    Code
    echo frozen >  /sys/block/md0/md/sync_action

After I did this, the file system mount confirmation, which had seemed to be stuck for 10 minutes at that point, finished in 3 seconds. Then I could create the shared folder and the SMB share.

    Then I restarted with

    Code
    echo check >  /sys/block/md0/md/sync_action


Now my array is visible in Windows and all my stuff is there; nothing seems to be lost. The rebuild is still painfully slow, but it is what it is.
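For anyone following along, the rebuild/reshape progress can be watched from the CLI with the usual mdstat check:

Code
cat /proc/mdstat               # shows the rebuild percentage and the finish= estimate
watch -n 60 cat /proc/mdstat   # refresh the view every 60 seconds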

... The WD Red is the bad apple: when buying drives, never use drives with FRX in the model reference, such as yours (Disk model: WDC WD30EFRX-68N). These are SMR drives, and they can be vvveeerrryyy slow during large writes, which is technically what a rebuild is doing.

    I guess you mixed it up ...

    WDx0EFAX are SMR drives (the bad ones to avoid)

    WDx0EFRX are CMR drives (the better ones)


    Cheers,

    Thomas

Assuming you're using spinning disks (rather than SSDs):
    One of the things that can slow down a re-sync is the drive using NCQ (Native Command Queuing), which basically re-orders the write requests into an order that the drive thinks is best.


I'm doing a fresh install, and a RAID 6 array with some quite large drives was reporting ~7 days for its initial sync (10450 mins).

Once I disabled NCQ (by setting the queue depth to 1), the time dropped to ~18 hours.
    The command I used was


    Code
    echo 1 > /sys/block/sda/device/queue_depth

Note this is done on a per-drive basis, so you may have to repeat it for sdb etc.

The queue depth was initially 32 for me. Setting it lower does cause more wear on the disk, as the head is bouncing all over the place rapidly rather than gently swinging across the disk, so I'd advise that you read up on its implications and don't leave NCQ disabled in day-to-day operation.
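To save repeating the command per drive, a small sketch of looping over the array members and restoring the default afterwards (assuming the members are sda, sdb, sde and sdf, as in the blkid output earlier in the thread, and that the original depth was 32):

Code
# disable NCQ on each array member while the rebuild runs
for d in sda sdb sde sdf; do
    echo 1 > /sys/block/$d/device/queue_depth
done

# restore the default queue depth once the rebuild has finished
for d in sda sdb sde sdf; do
    echo 32 > /sys/block/$d/device/queue_depth
done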
