[SOLVED] Grow RAID5 array failure

  • Hi to all,

can someone explain to me what's happening while I'm trying to grow my RAID5 array?


    My procedure is (I think) standard:

1. Add the disk to the NAS (identical to the others already part of the array)
    2. Wipe it (quickly)
3. Grow the RAID array, selecting the new disk in the list
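
(For reference, steps 2 and 3 correspond roughly to the commands below. This is a sketch only: /dev/sdd and /dev/md0 are assumed device names, so verify with lsblk before running anything.)

```shell
# Sketch of steps 2-3 from the CLI (assumed names: /dev/sdd = new
# disk, /dev/md0 = existing RAID5 array -- verify with lsblk first).

# Step 2, quick wipe: clear any old partition-table/RAID signatures.
wipefs --all /dev/sdd

# Step 3, grow: add the disk, then reshape to the new member count
# (adjust --raid-devices to the total after adding the disk).
mdadm --manage /dev/md0 --add /dev/sdd
mdadm --grow /dev/md0 --raid-devices=6
```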

But... I get the error below in the last step:

    Code
    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mdadm --manage '/dev/md0' --add /dev/md126 2>&1' with exit code '1': mdadm: /dev/md126 not large enough to join array in /usr/share/php/openmediavault/system/process.inc:195
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(350): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->grow(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('grow', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'grow', Array, Array, 1)
    #5 {main}


    The new disk (highlighted) is shown in the "Disks" section:


...and is also shown in the S.M.A.R.T. table:


    ...then I select it to grow the RAID array:


...but I get the failure:

  • mbinax

    Changed the title of the thread from “Grow RAID5 array fails” to “Grow RAID5 array failure”.
You have created an array from a single drive; look at what you have posted. You cannot add a striped array to a RAID5.

    Sorry... what?

To grow, I selected the "RAID5" array (this is exactly the name of my array), adding the only hard disk available: the new one.

I suppose you're right, but I don't know:

1. Why does this happen?
2. How do I "merge" them into the RAID5 array?


PS: this is how the RAID table appears now (and I can't delete the striped array; I think that's because the secure wipe is still running, but I'm not sure)

PS: this is how the RAID table appears now (and I can't delete the striped array; I think that's because the secure wipe is still running, but I'm not sure)

If it is, you cannot do anything until the wipe has finished, and you may have to do it again :/ But you are adding a drive (/dev/sdd), not a striped array, to that RAID5.
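
A couple of read-only checks can confirm what the grow dialog is actually offering (assuming the drive is /dev/sdd, as stated above):

```shell
# Read-only checks -- neither command modifies the disk.
lsblk -o NAME,TYPE,SIZE /dev/sdd   # a bare drive shows type "disk" with no md child
mdadm --examine /dev/sdd           # prints any RAID superblock still on the disk
```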

    Raid is not a backup! Would you go skydiving without a parachute?

If it is, you cannot do anything until the wipe has finished, and you may have to do it again :/ But you are adding a drive (/dev/sdd), not a striped array, to that RAID5.

Deleted the striped array.

Now I'm wiping the disk again...

Once it finishes, I should be able to add it and grow the RAID5 array, right?


    W.i.p.

  • Yes, you have to wait for the wipe process to finish

I let OMV work... but I can't keep the progress window active for too long.

An error appeared (I don't know when; I was asleep tonight :D) which closed as soon as I touched the keyboard, so I was unable to copy it...

I can't tell whether it was caused by the wipe procedure or by being logged out of the GUI.


Anyway, I would say that after 24h the 6TB wipe should be finished... yet the disk does not appear in the list for growing the array!!


    What could I do?

  • Don't use the secure wipe - it's literally a waste of time. Use the quick wipe.


    Can you confirm that /dev/md126 entry is really, truly gone?


    When you go to grow your RAID5 array, it should show you the new disk /dev/sdd. If you see /dev/mdxxx (where xxx could be 126 or some other odd number except 0), then something is not right.

  • Don't use the secure wipe - it's literally a waste of time. Use the quick wipe.


    Can you confirm that /dev/md126 entry is really, truly gone?


    When you go to grow your RAID5 array, it should show you the new disk /dev/sdd. If you see /dev/mdxxx (where xxx could be 126 or some other odd number except 0), then something is not right.

After being (theoretically) wiped, the disk is now /dev/sdd

    This is now what happens trying to grow:

Post the output of cat /proc/mdstat and blkid. "Device or resource busy" suggests that /dev/sdd is in use by something


    This is the cat output:

    Code
    root@DIYNAS-OMV:~# cat /proc/mdstat
    Personalities : [raid0] [raid6] [raid5] [raid4] [linear] [multipath] [raid1] [raid10]
md127 : inactive sdd[0](S)
      1148504 blocks super external:ddf

md0 : active raid5 sdb[2] sda[3] sdf[0] sde[1] sdc[4]
      23441567744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 3/44 pages [12KB], 65536KB chunk
    unused devices: <none>
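
As an aside, stray entries like md127 can be picked out of that output mechanically; a small helper, assuming the usual /proc/mdstat line format:

```shell
# Print the names of inactive md arrays from mdstat-formatted input
# on stdin (read-only; typically fed /proc/mdstat).
inactive_arrays() {
  awk '$2 == ":" && $3 == "inactive" { print $1 }'
}

# Example: inactive_arrays < /proc/mdstat
```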


    What should I do, exactly?
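
The "md127 : inactive ... super external:ddf" line says the disk still carries DDF container metadata from the deleted striped array, which mdadm keeps re-assembling. A plausible cleanup, assuming md127 holds only /dev/sdd (zero-superblock destroys the RAID metadata on that disk, so double-check the device name first):

```shell
mdadm --stop /dev/md127            # tear down the stray container
mdadm --zero-superblock /dev/sdd   # erase the leftover RAID metadata from the disk
cat /proc/mdstat                   # md127 should no longer appear
```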

What's the output of blkid?


    This is:

    Code
    root@DIYNAS-OMV:~# blkid
    /dev/nvme0n1p1: UUID="975fe369-9535-4459-80f8-f86795bdcb26" TYPE="ext4" PARTUUID="5b2a273a-01"
    /dev/nvme0n1p5: UUID="21dbc86c-cfc1-4b4a-a9ed-566388d70285" TYPE="swap" PARTUUID="5b2a273a-05"
    /dev/sdc: UUID="add77410-4af9-84b4-4a1b-f4c9a53296ae" UUID_SUB="1e112d20-45bd-32d0-9ebf-69bea2a73f9b" LABEL="DIYNAS-OMV:RAID5" TYPE="linux_raid_member"
    /dev/sde: UUID="add77410-4af9-84b4-4a1b-f4c9a53296ae" UUID_SUB="540cd822-1281-aead-7f6f-ca4f0842dc80" LABEL="DIYNAS-OMV:RAID5" TYPE="linux_raid_member"
    /dev/md0: LABEL="RAID5" UUID="b143f942-1f22-42e0-b92d-4e4c557ff36c" TYPE="ext4"
    /dev/sdf: UUID="add77410-4af9-84b4-4a1b-f4c9a53296ae" UUID_SUB="9e5d7091-039a-77b5-77b8-c3450cc7cac8" LABEL="DIYNAS-OMV:RAID5" TYPE="linux_raid_member"
    /dev/sda: UUID="add77410-4af9-84b4-4a1b-f4c9a53296ae" UUID_SUB="f6d205c8-bc1a-5cd1-6a8a-7458a4b0c817" LABEL="DIYNAS-OMV:RAID5" TYPE="linux_raid_member"
    /dev/sdb: UUID="add77410-4af9-84b4-4a1b-f4c9a53296ae" UUID_SUB="93903863-bbf5-8435-c86f-c1dc338c5bb7" LABEL="DIYNAS-OMV:RAID5" TYPE="linux_raid_member"
    /dev/nvme0n1: PTUUID="5b2a273a" PTTYPE="dos"
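
Notably, /dev/sdd is missing from that blkid output, presumably because the kernel still holds it inside the inactive md127 container. A read-only way to see what signatures remain on the disk:

```shell
# -n (no-act) reports signatures without erasing anything.
wipefs -n /dev/sdd
```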
So if you look at cat /proc/mdstat, what was /dev/md126 is now /dev/md127; that would suggest a reboot :) and that the raid was not deleted correctly.


So try:

    mdadm --stop /dev/md127


    mdadm --delete /dev/md127


    cat /proc/mdstat

    Raid is not a backup! Would you go skydiving without a parachute?


Sure about "mdadm --delete /dev/md127"?

This is the output...

    Code
    root@DIYNAS-OMV:~# mdadm --stop /dev/md127
    mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: stopped /dev/md127
    root@DIYNAS-OMV:~# mdadm --delete /dev/md127
    mdadm: unrecognized option '--delete'
    Usage: mdadm --help
    for help
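
As the error shows, mdadm has no --delete option. --remove (like --fail) operates on member disks of a running array, so it cannot dissolve the container either; the usual way to get rid of leftover metadata is to stop the array and then zero the superblock on the member disk (a sketch, assuming /dev/sdd is the only member):

```shell
mdadm --stop /dev/md127            # stop the assembled container (already done above)
mdadm --zero-superblock /dev/sdd   # wipe the RAID metadata so it can't reassemble
```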
  • Mmmhhhh.....


    Code
    root@DIYNAS-OMV:~# mdadm --remove /dev/md127
    mdadm: error opening /dev/md127: No such file or directory


    Anyway, this is the "cat" output:

    Code
    root@DIYNAS-OMV:~# cat /proc/mdstat
    Personalities : [raid0] [raid6] [raid5] [raid4] [linear] [multipath] [raid1] [raid10]
md0 : active raid5 sdb[2] sda[3] sdf[0] sde[1] sdc[4]
      23441567744 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 3/44 pages [12KB], 65536KB chunk
    unused devices: <none>


geaves: can I grow immediately, or do I have to re-wipe (quick or secure) first?

Ok, the array may have been removed after being stopped; you can check that with cat /proc/mdstat. If md127 does not show, try adding /dev/sdd using grow again
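
Two follow-up points if the grow goes through this time (a sketch: resize2fs applies because blkid showed ext4 on /dev/md0 earlier, and OMV's GUI may run the resize itself):

```shell
cat /proc/mdstat     # the reshape progress appears here once the grow starts
resize2fs /dev/md0   # after the reshape completes, extend the ext4 filesystem
```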

    Raid is not a backup! Would you go skydiving without a parachute?
