Array Failed to Grow in Size After Replacing Failing 2TB Disks with 4TB Disks

  • I replaced one failed and one failing 2TB drive with two new 4TB drives, one at a time, and grew the array in OMV, but to no effect.


    cat /proc/mdstat produces:


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid5 sdb[6] sdc[1] sdd[3] sde[4] sda[5]
    7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]


    After reboot, the array went missing in OMV. I used the OMV 0.4 installation media in recovery mode to try putting everything back together using bash commands:
    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force
    I then found two drives were busy and had to stop the array before retrying. The retry successfully reassembled the array with 4 of 5 drives.
    I added the 5th drive in OMV and rebuilt the array, but the size remained the same.
    I replaced another drive in the same fashion, as it was about to fail (according to OMV). Again, the array size remained the same.
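    For anyone retracing these steps, the stop-and-retry sequence was roughly the following (a sketch, reusing the device names from my system above):

    sudo mdadm --stop /dev/md127
    sudo mdadm --assemble --verbose --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde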


    I now have four 4TB drives and one 2TB drive in the array, which is marked "clean."


    OMV is running on a separate 60GB SSD.


    The array only recognizes a file system of 7.28TB. Is there a way to grow the array and the file system to take advantage of the extra disk space?
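
    For reference, the array and per-member usage can be checked with mdadm's detail output (read-only, nothing is changed):

    sudo mdadm --detail /dev/md127

    The "Array Size" and "Used Dev Size" lines should show that only about 2TB of each 4TB member is actually in use.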


    fotafm

    • Official post

    With mdadm RAID, the array will only use the amount of space available on the smallest drive. So, until you replace that fifth drive, you can't really use that extra space in a recommended way.
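
    A quick way to confirm which member is the limiting one is a plain block-device listing, e.g.:

    lsblk -o NAME,SIZE,TYPE

    The RAID member with the smallest SIZE caps how much of every other member mdadm will use.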

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • OK, sounds reasonable.


    You guys are the experts and I only pick at Linux when I break it...


    To clarify: remove the only 2TB disk in the array, replace it with a 4TB drive, and rebuild the array.


    I can then grow the array and take advantage of five 4TB disks (20TB, with some loss for parity information storage).
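
    In command-line terms, I assume the swap would look something like this (a sketch; /dev/sdX is a hypothetical stand-in for whatever letter the 2TB member actually has):

    sudo mdadm --manage /dev/md127 --fail /dev/sdX --remove /dev/sdX
    (shut down, physically swap in the new 4TB drive, boot, then)
    sudo mdadm --manage /dev/md127 --add /dev/sdX
    watch cat /proc/mdstat

    The last command just follows the rebuild progress until the array is clean again.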


    Please confirm... :/

    • Official post

    Yes. But I thought I did confirm that by saying mdadm will use the space of the smallest disk. If you have five 4TB disks, a 4TB disk is the smallest, instead of the 2TB one you have now.

    five 4TB disks (20TB, with some loss for parity information storage).

    For a RAID5 array, just subtract one disk for parity. So five 4TB disks will give you 16TB of storage.


  • All is well. I replaced the 2TB drive with a 4TB drive, for a total of five 4TB drives. The Grow button in OMV's GUI did not work, but the mdadm command line did.


    sudo mdadm --grow /dev/md127 --size=max

    I was then able to grow the file system using the GUI within OMV.
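
    For anyone who wants to stay on the command line for that last step as well, growing the file system is roughly equivalent to the following (assuming an ext4 file system on the array, which is my assumption, not something shown above):

    sudo resize2fs /dev/md127

    resize2fs grows a mounted ext4 file system online to fill the enlarged device.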


    Thanks for the help and advice! :)
