Array failed to Grow in Size After Replacing Failing 2TB Disks with 4TB Disks

    •

      I replaced one failed and one failing 2TB drive, one at a time, with two new 4TB drives, and grew the OMV array, but the size did not change.

      cat /proc/mdstat produces

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid5 sdb[6] sdc[1] sdd[3] sde[4] sda[5]
      7813537792 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] [UUUUU]

      After a reboot, the array went missing in OMV. I booted the OMV 0.4 installation media in recovery mode and tried to put everything back together from the shell:
      sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force
      I then found two drives were busy and had to stop them first. A retry successfully assembled the array with 4 of the 5 drives.
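      The stop-and-retry sequence above can be sketched as follows. This is a hypothetical outline, not the poster's exact commands; the device names are taken from the post and must be adapted to your system, and everything must run as root.

      ```shell
      # Stop the busy array so its member disks are released:
      mdadm --stop /dev/md127

      # Reassemble, forcing in members whose event counts have drifted:
      mdadm --assemble --force --verbose /dev/md127 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # Check the result:
      cat /proc/mdstat
      ```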
      I added the 5th drive in OMV and rebuilt the array but the size remained the same.
      I replaced another drive in the same fashion as it was about to fail (according to OMV). Again the array size remained the same.

      I now have four 4TB drives and one 2TB drive in the array, marked "clean."

      OMV is running on a separate 60GB SSD.

      The array only shows a file system of 7.28TB. Is there a way to grow the array and file system to take advantage of the extra disk space?
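      For reference, the usual grow procedure once every member disk is large enough looks roughly like this. It is a sketch under assumptions, not a confirmed fix for this setup: it assumes an ext4 filesystem sitting directly on /dev/md127, and must run as root.

      ```shell
      # Tell mdadm to use all available space on each member disk:
      mdadm --grow /dev/md127 --size=max

      # Wait for the resync to finish (watch /proc/mdstat),
      # then grow the filesystem to fill the array (ext4 assumed):
      resize2fs /dev/md127
      ```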

      fotafm
    •

      With mdadm RAID, each member contributes only as much space as the smallest drive in the array. So, until you replace that fifth drive, you can't really use that extra space in a recommended way.
      omv 5.1.2 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.7
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    •

      OK, sounds reasonable.

      You guys are the experts and I only pick at Linux when I break it...

      To clarify: remove the only remaining 2TB disk in the array, replace it with a 4TB drive, and rebuild the array.

      I can then grow the array and take advantage of five 4TB disks (20TB, minus some loss for parity storage).

      Please confirm... :/
    •

      Yes. But, I thought I did confirm by saying mdadm would use the space of the smallest disk. If you have five 4TB disks, 4TB is the smallest instead of 2TB like you have now.

      fotafm wrote:

      5-4TB disks (20TB with some loss for parity information storage).
      For a RAID5 array, just subtract one disk for parity. So five 4TB disks will give you 16TB of storage.
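      The arithmetic behind that number can be spelled out as follows (purely illustrative; the variable names are made up for the example):

      ```shell
      # RAID5 usable capacity = (number of disks - 1) x smallest disk size
      disks=5
      smallest_tb=4
      usable_tb=$(( (disks - 1) * smallest_tb ))
      echo "${usable_tb}TB usable"   # prints: 16TB usable
      ```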