RAID 5 grown by extra disk, can't resize volume

  • On Sunday I added an extra WD Red 6 TB to my RAID 5 of 3 x WD Red 6 TB. It was rebuilding until now. Everything seems fine, but when I resize the volume nothing happens. The capacity stays at 10.82 TiB.


    Under RAID management everything seems to be fine; it shows a capacity of 16.36 TiB.


    Here is the output of cat /proc/mdstat:


    Code
    Personalities : [raid6] [raid5] [raid4]
    md2 : active raid5 sdc5[0] sda[4] sdb5[2] sdd5[3]
          17567362944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
    • Official Post

    ext4? If so, there is an odd feature (or bug) of ext4. It uses a 64-bit flag that allows filesystems over 16 TiB. The flag is set automatically depending on the size of the filesystem when it is created: if it was under 16 TiB, the flag is not set, which means you can't resize that filesystem past 16 TiB later. The latest OMV sets the 64-bit flag on all 64-bit systems no matter what the size of the filesystem is. The only solution we have found is creating a new filesystem.
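
    If you want to verify this from the CLI, something like the following will show whether the flag is set (a minimal sketch, assuming the filesystem sits directly on /dev/md2 as in your mdstat output; adjust the device name if it lives on an LVM volume instead):

    Code
    # List the ext4 feature flags; look for "64bit" in the output
    tune2fs -l /dev/md2 | grep -i 'features'
    # When creating a fresh filesystem, the flag can also be forced explicitly
    mkfs.ext4 -O 64bit /dev/md2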


  • Yes, it is ext4. This is bad news for me. Although everything very important is in backups, I still have data of minor importance that I don't have the backup storage for.


    A question about your remark that the new OMV sets the 64-bit flag: does this mean that if I build a new filesystem on a 3 x 6 TB RAID 5 under OMV (so 10.82 TiB of space), I can grow it later with the 4th disk and resize it to 16.36 TiB? If so, I can use the 4th disk as temporary storage.

    • Official Post

    Yes, if you create a new filesystem using OMV 1.8, it will be 64-bit and you will be able to resize over 16 TiB.


    • Official Post

    This is a risky procedure:
    You can shrink that RAID back to 3 HDDs. Then mark one of them as failed and remove it; you will have a degraded array. Construct a new degraded RAID 5 with the two drives you now have free, and create a new filesystem with the 64-bit flag on this new array. Start moving the data across; when you are finished, kill the old array and add its two drives to the new degraded array (a rough mdadm sketch follows below).


    As usual, always back up your important data.
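
    A rough sketch of what that procedure can look like with mdadm (the device and array names below are placeholders, not taken from this thread; double-check every name against your own setup before running anything):

    Code
    # Fail and remove one member of the old, already-shrunk 3-disk array
    mdadm /dev/md2 --fail /dev/sdd5
    mdadm /dev/md2 --remove /dev/sdd5

    # Build a new degraded RAID 5 from the two free drives;
    # the keyword "missing" reserves the third slot for later
    mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/sdd5 /dev/sde5 missing

    # Create the new ext4 filesystem with the 64-bit flag
    mkfs.ext4 -O 64bit /dev/md3

    # After copying the data across: stop the old array and move its drives over,
    # then grow the new array back to 4 devices
    mdadm --stop /dev/md2
    mdadm /dev/md3 --add /dev/sda5 /dev/sdb5
    mdadm --grow /dev/md3 --raid-devices=4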

  • I've run into a problem. An hour after my RAID array was grown, I got a message that my array was degraded. One of the older devices was thrown out of the array. Strangely (to me), it is fully recognized in OMV and the BIOS. I tried a Linux live distro and it is not part of the RAID there either. No errors in SMART, and it passes the short SMART test. At the moment I'm running a full SMART test.


    Output of cat /proc/mdstat:


    Code
    md2 : active raid5 sdb5[3] sda[4] sdd5[2]
          17567362944 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]


    Any ideas about what else I can do?

  • Strangely, the disk passed the extended SMART test too. Can anyone explain to me how a disk can be recognized by the BIOS/OS but cannot be added to the RAID array? Is there anything I can do or check in the CLI (a few checks are sketched at the end of this post)? I don't think this can be caused by the ext4 over-16-TiB issue?


    I think I'm going to run the extended WD diagnostic tool overnight.


    Edit: I read the test result wrong; it was the short test that passed. In the extended information it says:

    Quote

    Self-test execution status: ( 241) Self-test routine in progress...
    10% of test remaining.


    So the long test is still busy. Isn't this strange? Normally it says 255 minutes for the test, but it has been running for around 12 hours now.
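
    For the CLI question, a few checks that usually help narrow this down (a sketch only; /dev/sdc and /dev/sdc5 are placeholder names for the dropped disk and its RAID member partition):

    Code
    # Full SMART report for the dropped disk
    smartctl -a /dev/sdc

    # Kernel messages around the time the disk was kicked out
    dmesg | grep -i -E 'sdc|md2'

    # What the md superblock on the member partition still says, and the array's view
    mdadm --examine /dev/sdc5
    mdadm --detail /dev/md2

    # If the disk itself looks healthy, it can usually be re-added
    mdadm /dev/md2 --re-add /dev/sdc5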

  • The array has been rebuilt with 4 disks.

    Quote
    This is a risky procedure:
    You can shrink that RAID back to 3 HDDs. Then mark one of them as failed and remove it; you will have a degraded array. Construct a new degraded RAID 5 with the two drives you now have free, and create a new filesystem with the 64-bit flag on this new array. Start moving the data across; when you are finished, kill the old array and add its two drives to the new degraded array.


    As usual, always back up your important data.


    I am going to try this procedure, but I'm wondering about two things:
    1. How can I shrink the array back to 3 disks? It doesn't seem to be an option in the GUI.
    2. How can I create a degraded RAID 5 array with only 2 disks? It seems to me this needs to be done from the CLI, otherwise I would need a 3rd disk?


    Your help is much appreciated!

  • Cool thanks!


    With a little help from Google, my machine is now reshaping from 4 to 3 devices.


    First I shrank my array to slightly bigger than my filesystem; the filesystem is slightly smaller than the maximum capacity of the array because the array used to house an OS and data partition in RAID 1. It is a former Synology array.


    Code
    mdadm /dev/md2 --grow --array-size=11400000


    After that I reduced the number of RAID devices:


    Code

    mdadm /dev/md2 --grow --raid-devices=3 --backup-file /opt/mdadm.md2.backup


    Now it is busy reshaping my array. I don't know why, but it looks like it is going slower than the rebuild did (see the speed-limit note at the end of this post).


    Code

    Personalities : [raid6] [raid5] [raid4]
    md2 : active raid5 sde[5] sda[4] sdd5[2] sdb5[3]
    11400000 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
    [>....................] reshape = 0.5% (32957444/5855787648) finish=2629.5min speed=36906K/sec


    unused devices: <none>
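
    If the reshape seems slow, the md speed limits are worth a look; they throttle how much bandwidth the reshape may use (a sketch only, the value below is just an example):

    Code
    # Current limits in KiB/s per device
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

    # Temporarily raise the minimum so the reshape gets more bandwidth
    echo 50000 > /proc/sys/dev/raid/speed_limit_min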

  • Hmmm, some concerns here. Although I can see the folder structure, I cannot reach my data. I think I destroyed my array. Before shrinking to the right size I made a typo, so my array actually shrank to 1.087 TB instead of 10.87 TB for about 10 seconds.

  • Yes, I have made backups of the important stuff. I'll let it rebuild and see what happens when it is ready. Do you think my error, shrinking the RAID array to 10% of the filesystem size even for such a short time, could have caused this?

  • I decided to rebuild the complete server from scratch, so OMV 1.9 and the array. In Parted Magic (UBCD) I deleted all the partitions from the RAID disks and the OS disk. The array was building last night; I called it RAID5.


    Now it is ready, but again I'm running into problems. In RAID management I see 4 disks, the state is clean, with a total size of 16.37 TiB.


    So I went to File Systems, Create, and could only choose /dev/mapper/vg1000-lv with a size of 10.91 TiB. I added that one as ext4 and tried resizing, but again nothing happens after pressing yes. Tried XFS, same story, only 10.91 TiB.


    Any help is much appreciated


    Edit: It is OK now. I booted into Parted Magic again, put a GPT label on the array, booted back into OMV, and I could create an ext4 volume of 16.37 TiB.
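
    For reference, the /dev/mapper/vg1000-lv entry is the leftover logical volume from the old Synology LVM setup; clearing the stale signatures from the array has much the same effect as writing a fresh GPT label in Parted Magic. A minimal sketch, assuming the newly created array shows up as /dev/md0 (check the real name with cat /proc/mdstat first):

    Code
    # If LVM still shows the old volume group as active, deactivate it first
    vgchange -an vg1000

    # Remove old LVM/filesystem signatures left over from the previous setup
    wipefs -a /dev/md0

    # Or, equivalent to the Parted Magic route: write a fresh GPT label
    parted /dev/md0 mklabel gpt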
