Array Failed to Grow in Size After Replacing Failing 2TB Disks with 4TB Disks

  • I replaced one failed and one failing 2TB drive with two new 4TB drives, one at a time, and then tried to grow the array in OMV, but it had no effect.


    cat /proc/mdstat produces


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdb[6] sdc[1] sdd[3] sde[4] sda[5]
    7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]


    After a reboot, the array went missing in OMV. I booted the OMV 0.4 installation media in recovery mode and tried to put everything back together from the shell:
    sudo mdadm --assemble --verbose --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    I then found that two drives were busy and had to stop them. A retry then successfully assembled the array with 4 of 5 drives.
    I added the 5th drive in OMV and the array rebuilt, but the size stayed the same.
    I replaced another drive in the same fashion, because OMV reported it was about to fail. Again the array size stayed the same.
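    Roughly what that boiled down to on the command line, in case it helps anyone reading later (device names are from my setup, I did the re-add through the OMV GUI, and the exact drive letters are placeholders, so the equivalent mdadm commands would only look something like this):
    # Free the "busy" drives by stopping the array that had claimed them
    sudo mdadm --stop /dev/md127
    # Retry the forced assemble; it came up with 4 of 5 members
    sudo mdadm --assemble --verbose --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Re-add the missing member so the array can rebuild (replace sdX with the drive that was left out)
    sudo mdadm --manage /dev/md127 --add /dev/sdX
    # Watch the rebuild
    cat /proc/mdstat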


    I now have four 4TB drives and one 2TB drive in the array, which is marked "clean."


    OMV is running on a separate 60GB SSD.


    The array only shows a file system of 7.28TB. Is there a way to grow the array and the file system to take advantage of the extra disk space?


    fotafm

  • With mdadm RAID, the array will only use the amount of space available on the smallest drive. So, until you replace that fifth drive, you can't really use the extra space in a recommended way.
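
    You can check this yourself: mdadm reports a per-member "Used Dev Size" that is capped at the capacity of the smallest member. Something along these lines (device names are taken from your mdstat output, so adjust them if yours differ):
    # Array size and the per-member "Used Dev Size" mdadm is actually using
    sudo mdadm --detail /dev/md127
    # Raw capacity of each member disk, to spot the odd one out
    sudo lsblk -d -o NAME,SIZE /dev/sd[a-e]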


  • OK, sounds reasonable.


    You guys are the experts and I only pick at Linux when I break it...


    To clarify: remove the only 2TB disk in the array, replace it with a 4TB drive, and rebuild the array.


    I can then grow the array and take advantage of five 4TB disks (20TB with some loss for parity information storage).


    Please confirm... :/

  • Yes. But I thought I did confirm that by saying mdadm will only use the space of the smallest disk. If you have five 4TB disks, the smallest is 4TB instead of the 2TB you have now.

    Quote: "five 4TB disks (20TB with some loss for parity information storage)."

    For a RAID5 array, just subtract one disk for parity. So five 4TB disks will give you 16TB of usable storage.
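
    When you do swap that last 2TB disk, the command-line equivalent of the replace-and-rebuild step is roughly the following (here /dev/sdX stands in for whatever device node the 2TB disk has on your system; this is a sketch, not something to paste blindly):
    # Mark the 2TB member as failed and pull it out of the array
    sudo mdadm --manage /dev/md127 --fail /dev/sdX
    sudo mdadm --manage /dev/md127 --remove /dev/sdX
    # After physically swapping in the 4TB disk, add it back and let the rebuild finish
    sudo mdadm --manage /dev/md127 --add /dev/sdX
    # Watch the rebuild progress before doing anything else
    cat /proc/mdstat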


  • All is well. I replaced the 2TB drive with a 4TB drive, for a total of five 4TB drives. The Grow button in OMV's GUI did not work, but the mdadm command line did.


    sudo mdadm --grow /dev/md127 --size=max
    I was then able to grow the file system using the GUI within OMV.
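
    For anyone finding this later, the whole sequence as plain commands would look roughly like this (the resize2fs step assumes the file system on the array is ext4, which is an assumption on my part; I actually did that part through the OMV GUI):
    # Grow the array to use all available space on its members
    sudo mdadm --grow /dev/md127 --size=max
    # Then grow the file system on top of it (ext4 assumed; XFS would need xfs_growfs on the mount point instead)
    sudo resize2fs /dev/md127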


    Thanks for the help and advice :-) :thumbsup:
