RAID 5

  • I wanted to add a new 4TB HDD to my RAID 5 array of 4 other 4TB HDDs.


    But when I press "vergrößern" (expand) I get the following error, and the new HDDs are only added as spares:


    Error #0: exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; mdadm --grow --size=max --raid-devices=5 '/dev/md127' 2>&1' with exit code '1': mdadm: cannot change component size at the same time as other changes. Change size first, then check data is intact before making other changes.' in /usr/share/php/openmediavault/system/process.inc:174Stack trace:#0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(312): OMV\System\Process->execute()#1 [internal function]: OMVRpcServiceRaidMgmt->grow(Array, Array)#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)#3 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('grow', Array, Array)#4 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('RaidMgmt', 'grow', Array, Array, 1)#5 {main}
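    For reference, the mdadm message in the log above means the two changes cannot be combined in one call. A rough sketch of what the manual split might look like (array name and device count taken from the error above; this is an assumption, not a verified fix, and reshaping a live array carries data risk):

    ```shell
    # mdadm refuses to combine --size and --raid-devices in a single --grow.
    # Step 1: reshape to include the new disk (adjust /dev/md127 and
    #         --raid-devices for your array; the backup file protects the
    #         critical section during the reshape).
    mdadm --grow /dev/md127 --raid-devices=5 --backup-file=/root/raid5_grow.backup

    # Step 2: only after the reshape finishes, adjust the component size.
    mdadm --grow /dev/md127 --size=max
    ```
    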

  • Hey,


    Looks like the actual error is here:
    mdadm: cannot change component size at the same time as other changes. Change size first, then check data is intact before making other changes.


    Are you doing anything else simultaneously?

  • I get the exact same issue. I'm testing OMV in a VM so I can figure things out before building my NAS. Whether it's OMV 2 or 3, both give this error when trying to grow a RAID array. The drive is instead added as a spare, and I have to go to the command line to fix it and expand properly. That means 'mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/raid5_grow.backup' (without a backup file the grow command simply refuses to run).


    It's really annoying to see this basic, core feature of a NAS just not working at this late date. I find reports of the issue back in 2012. 4 years later it's still throwing the same error. Whatever method OMV tries to grow a RAID array, it's not supported by mdadm and just doesn't work.

    • Official post

    It's really annoying to see this basic, core feature of a NAS just not working at this late date. I find reports of the issue back in 2012. 4 years later it's still throwing the same error. Whatever method OMV tries to grow a RAID array, it's not supported by mdadm and just doesn't work.

    I could have sworn I tried it before and it worked. Let me do more testing.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    On OMV 2.x, it worked perfectly twice on my VM. Maybe your drives had an existing raid superblock/signature or filesystem?


    On OMV 3.x, it generates an error because of the --size flag but doesn't add it as a spare. I filed a bug report. This should be easy to fix.


  • Nope, this is a freshly-built VM with multiple freshly created vdisks. VirtualBox latest version (5.1.10) on a Win 7 host. Updated all packages, added OMV-Extras, installed the backports kernel (4.7.0-0), configured RAID 5 with 4 disks, watched the array build, created a FS on it, created a shared folder. Went to grow the RAID 5, added my 5th device, got the error, and mdadm reports the disk has been added as a spare.


    I'm more than certain I stood up an OMV 2 VM to test this exact functionality and got the same behaviour. I'll try again now. I'm pretty sure I'm not doing anything exotic or unexpected - just clicking "grow" on the RAID array, choosing the extra device, saving, and it fails. If I go to the CLI and use mdadm, I can grow the array manually: the spare disk is repurposed as an active disk and the array gets reshaped (but I need to provide a backup file, otherwise the grow process dies because it can't save the critical section).
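    A sketch of how that CLI check and fix might go (array and device names here follow the earlier posts, not any particular system; requires root):

    ```shell
    # Confirm whether the new disk went in as a spare:
    cat /proc/mdstat            # a "(S)" after a device name marks a spare
    mdadm --detail /dev/md0     # check "Spare Devices" and the device roles

    # Repurpose the spare into the array by growing the device count;
    # the backup file saves the critical section during the reshape.
    mdadm --grow /dev/md0 --raid-devices=5 --backup-file=/root/raid5_grow.backup

    # Watch the reshape progress:
    watch cat /proc/mdstat
    ```
    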


    If it helps I could maybe figure out how to capture a video of my process from bare VM install to the error or provide any logs you want. That's the kind of issue I'd expect is important to fix.

  • Well damn. I just stood up a bare OMV 2 VM, and without doing anything else created a RAID 5 array, shut down, added a vdisk, booted up, grow, and it works. Perhaps updating the packages broke something when I did it originally. Will test that scenario and a bare non-updated OMV 3 and provide results.

    • Official post

    OMV 3 is broken with the latest version. No need to test. Like I said, I already filed a bug report.


  • Understood. For what it's worth I tested OMV 2 thoroughly and regardless of conditions (bare install, updated, updated with community+pre-release, updated with all and OMV-extras backport kernel 3.16.0, with FS, with fileshare), grow RAID worked without issue. My testing must have been screwed up before.


    I'm still thinking I'll go with OMV 3: I don't want to deal with an upgrade in the future, I prefer a more recent kernel, and if this bug isn't fixed by the time I need to grow my array, I can drop to the CLI and handle it there. I'll monitor the bug in the bugtracker (0001648) to see when it's fixed. Thanks for the quick report.

    • Official post

    I would go with OMV 3. I'm sure Volker will fix this in the next few days. If you need it before then for some reason, the web interface still adds the drive to the array, and the error message tells you exactly what command you need to run once you remove the --size=max parameter.
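    One more step worth noting after any successful grow: the filesystem on the array doesn't pick up the new space by itself. A hedged example, assuming ext4 and the /dev/md127 array from the original error (use the tool that matches your filesystem):

    ```shell
    # After the mdadm reshape completes, extend the filesystem so it
    # actually uses the new capacity (ext4 example; online resize works
    # while the filesystem is mounted):
    resize2fs /dev/md127
    ```
    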

