RAID 5 grown by extra disk, can't resize volume

    • OMV 1.0


    • On Sunday I added an extra WD Red 6 TB to my RAID 5 of 3 × WD Red 6 TB. It has been rebuilding until now. Everything seems fine, but when I resize the volume nothing happens; the capacity stays at 10.82 TiB.

      Under RAID management everything seems fine; it reports a capacity of 16.36 TiB.

      Here is the output of cat /proc/mdstat:

      Source Code

      Personalities : [raid6] [raid5] [raid4]
      md2 : active raid5 sdc5[0] sda[4] sdb5[2] sdd5[3]
            17567362944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

    • ext4? If so, there is an odd feature (or bug) of ext4. It uses a feature flag for 64-bit block addressing (the `64bit` feature, which allows filesystems over 16 TiB). The flag is set automatically depending on the size of the filesystem at creation time; if it was under 16 TiB, it is not set. This means you can't resize that filesystem past 16 TiB. The latest OMV enables the 64-bit flag on all 64-bit systems no matter what the size of the filesystem is. The only solution we have found is creating a new filesystem.
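      As a quick check, the flag can be inspected with e2fsprogs; a minimal sketch, assuming the array device is /dev/md2:

      ```shell
      # List the feature flags of an existing ext4 filesystem; look for
      # "64bit" in the output (device name is an example):
      tune2fs -l /dev/md2 | grep 'Filesystem features'

      # When creating a replacement filesystem by hand, the flag can be
      # forced regardless of the current size:
      mkfs.ext4 -O 64bit /dev/md2
      ```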
      omv 4.1.9 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.10
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Yes, it is ext4. This is bad news for me. Although everything very important is backed up, I still have some less important data which I do not have the storage for.

      A question about your remark that the new OMV sets the 64-bit flag: does this mean that if I build a new filesystem on a 3 × 6 TB RAID 5 under OMV (so 10.82 TiB of space), I can later grow it with the 4th disk and resize it to 16.36 TiB? If so, I can use the 4th disk as temporary storage.
    • Yes, if you create a new filesystem using OMV 1.8, it will be 64-bit and you will be able to resize it over 16 TiB.
    • This is a risky procedure:
      You can shrink that RAID back to 3 HDDs. Then mark one of them as failed and remove it; you will have a degraded array. Construct a new degraded RAID 5 with the 2 drives you then have free, and create a new filesystem with the 64-bit flag on this new array. Start moving data across; when you finish, kill the old array and add its drives to the new degraded array.
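      A rough sketch of those steps with mdadm; every device name below is a placeholder and the array-size figure must be computed for your own disks, so treat this as an outline rather than a recipe:

      ```shell
      # 1. Shrink the array back to 3 devices (the array size must first be
      #    reduced below the 3-disk capacity; the figure is a placeholder):
      mdadm /dev/md2 --grow --array-size=11400000
      mdadm /dev/md2 --grow --raid-devices=3 --backup-file=/root/md2.backup

      # 2. Fail and remove one member to free a disk:
      mdadm /dev/md2 --fail /dev/sdd5
      mdadm /dev/md2 --remove /dev/sdd5

      # 3. Build the new degraded RAID 5 from the two free disks; the keyword
      #    'missing' stands in for the member that will be added later:
      mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sdd5 /dev/sde missing

      # 4. New filesystem with the 64-bit flag, then copy the data across:
      mkfs.ext4 -O 64bit /dev/md3

      # 5. When the copy is done, stop the old array and add its disks:
      mdadm --stop /dev/md2
      mdadm /dev/md3 --add /dev/sda
      ```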

      As usual, always back up your important data.
      New wiki
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
    • I have encountered a problem. An hour after my RAID array was grown, I got a message that my array was degraded. One of the older devices was thrown out of the array. Strangely (to me), it is still fully recognized in OMV and the BIOS. I tried a Linux live distro, and it is not in the RAID there either. No errors in SMART; it passes the SMART short test. At the moment I am running a full SMART test.

      Output of cat /proc/mdstat:

      Source Code

      md2 : active raid5 sdb5[3] sda[4] sdd5[2]
            17567362944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]

      Any ideas what more I can do?
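      Assuming the ejected member is /dev/sdc5 (adjust to your layout), a few things worth checking from the CLI before resorting to a rebuild:

      ```shell
      # Why was it kicked out? The kernel log usually says:
      dmesg | grep -i -e md2 -e sdc

      # Inspect the RAID superblock on the dropped member:
      mdadm --examine /dev/sdc5

      # If the superblock is intact, --re-add can resync it quickly;
      # if that is refused, a plain --add starts a full rebuild:
      mdadm /dev/md2 --re-add /dev/sdc5
      mdadm /dev/md2 --add /dev/sdc5
      ```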


    • Strangely, the disk passed the extended SMART test too. Can anyone explain to me how a disk can be recognized in the BIOS/OS but cannot be added to the RAID array? Is there anything I can do or check in the CLI? I do not think this can be caused by the ext4 over-16 TiB issue?

      I think I am going to run the extended WD diagnostics tool overnight.

      Edited: I read the test result wrong; it was the short test that passed. The extended information says:
      Self-test execution status: ( 241) Self-test routine in progress...
      10% of test remaining.

      So the long test is still busy. Isn't this strange? Normally it says 255 minutes for the test, but it has been running for around 12 hours now.
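      The progress of a running self-test can be queried directly with smartctl; a sketch, assuming the disk under test is /dev/sdc:

      ```shell
      # "Self-test execution status" and the percentage remaining:
      smartctl -c /dev/sdc

      # Log of completed short/extended tests, with pass/fail and the LBA
      # of the first error, if any:
      smartctl -l selftest /dev/sdc
      ```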


    • The array has finished rebuilding with 4 disks.

      subzero79 wrote:

      This is a risky procedure:
      You can shrink that RAID back to 3 HDDs. Then mark one of them as failed and remove it; you will have a degraded array. Construct a new degraded RAID 5 with the 2 drives you then have free, and create a new filesystem with the 64-bit flag on this new array. Start moving data across; when you finish, kill the old array and add its drives to the new degraded array.

      As usual, always back up your important data.


      I am going to try this procedure, but I am wondering about two things:
      1. How can I shrink the array back to 3 disks? It does not seem to be an option in the GUI.
      2. How can I create a degraded RAID 5 array with only 2 disks? It seems to me this needs to be done from the CLI, or else I would need a 3rd disk?

      Your help is much appreciated!
    • Cool, thanks!

      With a little help from Google, my machine is now reshaping from 4 to 3 devices.

      First I shrank the array to slightly bigger than my filesystem. The filesystem is slightly smaller than the maximum capacity of the array because it used to house an OS and data partition in RAID 1; it is a former Synology array.

      Source Code

      mdadm /dev/md2 --grow --array-size=11400000
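      Before shrinking the array below its current size, it is worth double-checking the filesystem's real footprint so the new array size stays above it; a sketch with e2fsprogs, assuming the filesystem sits directly on /dev/md2:

      ```shell
      # Filesystem size = block count x block size (both reported here):
      dumpe2fs -h /dev/md2 | grep -E 'Block count|Block size'

      # Estimated minimum size the filesystem could be shrunk to, in blocks:
      resize2fs -P /dev/md2
      ```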


      After that I reduced the number of RAID devices:

      mdadm /dev/md2 --grow --raid-devices=3 --backup-file /opt/mdadm.md2.backup


      Now it is busy reshaping the array. I don't know why, but it looks like it is going slower than the rebuild did.

      Personalities : [raid6] [raid5] [raid4]
      md2 : active raid5 sde[5] sda[4] sdd5[2] sdb5[3]
      11400000 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [>....................] reshape = 0.5% (32957444/5855787648) finish=2629.5min speed=36906K/sec

      unused devices: <none>
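      A reshape rewrites every stripe, so it is usually slower than a plain rebuild. If the speed shown in /proc/mdstat is being throttled, the md speed limits can be raised temporarily; a sketch (the value is an example, run as root):

      ```shell
      # Current per-device limits in KiB/s:
      cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

      # Raise the floor so the reshape gets more bandwidth even under load:
      echo 100000 > /proc/sys/dev/raid/speed_limit_min

      # Follow the progress:
      watch -n 5 cat /proc/mdstat
      ```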

    • Backup?

      Here is a page that describes the process.
      starcoder.com/wordpress/2011/0…nux-software-raid-volume/

      It is always good to practice in a VM before doing it for real.

      If you don't have a backup, try extundelete or photorec to recover data files.
    • I decided to rebuild the complete server from scratch, so OMV 1.9 and the array. I deleted all the partitions on the RAID disks and the OS disk in Parted Magic (UBCD). The array was building last night; I called it RAID5.

      Now it is ready, but again I ran into problems. In RAID management I see 4 disks, the state is clean, and the total size is 16.37 TiB.

      So I went to filesystems, create, and could only choose /dev/mapper/vg1000-lv with a size of 10.91 TiB. I added that one as ext4 and tried resizing, but again after clicking yes nothing happened. I tried XFS, same story, only 10.91 TiB.

      Any help is much appreciated.

      Edit: It is OK now. I booted into Parted Magic again, put a GPT label on the array, booted back into OMV, and could create an ext4 volume of 16.37 TiB.
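      The same cleanup can be done from the command line instead of Parted Magic; a sketch with wipefs and parted, assuming the leftover Synology LVM signatures on /dev/md2 (the vg1000-lv device) were what hid the full capacity. Destructive; double-check the device name:

      ```shell
      # Remove stale filesystem/LVM/RAID signatures from the array device:
      wipefs -a /dev/md2

      # Write a fresh GPT label:
      parted -s /dev/md2 mklabel gpt
      ```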
