Error building RAID

  • Hi


    I'm playing around with a new server. Two of the disks that came with it were in a hardware mirror, built on the server's Dell RAID card under a Windows install. I've since flashed the RAID card to IT mode and added a 2.5" disk for the OS to play with. I was also gifted two more caddies that came with drives, though they are SAS rather than SATA like the originals. I can see all four disks in OMV; I selected wipe on them all, and in the RAID section I saw it still listed the mirror, which I have also deleted. I then tried to create a RAID with the two disks that were originally mirrored, but as a stripe this time, and I get the following errors. Can anyone advise?


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-mkraid /dev/md0 -l stripe -n 2 -N Storage /dev/sdc /dev/sdd 2>&1' with exit code '1': mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: super1.x cannot open /dev/sdc: Device or resource busy
    mdadm: chunk size defaults to 512K
    mdadm: size set to 35183040835584K
    mdadm: Fail create md0 when using /sys/module/md_mod/parameters/new_array
    mdadm: Defaulting to version ddf metadata
    mdadm: failed to open /dev/sdc after earlier success - aborting in /usr/share/php/openmediavault/system/process.inc:182
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/raidmgmt.inc(300): OMV\System\Process->execute()
    #1 [internal function]: Engined\Rpc\RaidMgmt->create(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('create', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('RaidMgmt', 'create', Array, Array, 1)
    #5 {main}

    OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
    HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)

  • I selected wipe on them all

    That's the right thing to do

    in the RAID section I saw it still listed the mirror, which I have also deleted

    Did you wipe these drives before you removed the RAID listed in RAID Management?

    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-mkraid /dev/md0 -l stripe -n 2 -N Storage /dev/sdc /dev/sdd 2>&1' with exit code '1': mdadm: ARRAY line /dev/md/ddf0 has no identity information.
    mdadm: super1.x cannot open /dev/sdc: Device or resource busy

    That gives a clue ("device or resource busy"). You need to run wipefs -n /dev/sdc, and the same on sdd; that will report the drives' signatures.
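    A minimal sketch of that inspection step. The device names sdc and sdd are taken from the error output above and may differ on another system; -n puts wipefs in no-act mode, so these commands only read.

```shell
# Report any filesystem/RAID signatures without modifying the drives (-n = no-act)
wipefs -n /dev/sdc
wipefs -n /dev/sdd

# If signatures are reported, they can later be erased (destructive) with:
#   wipefs -a /dev/sdc
```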

  • That's the right thing to do

    Did you wipe these drives before you removed the RAID listed in RAID Management?

    That gives a clue ("device or resource busy"). You need to run wipefs -n /dev/sdc, and the same on sdd; that will report the drives' signatures.

    Yes, I wiped them in the Disks section of OMV before deleting the RAID listing in the OMV GUI.


    I've run the command you supplied, but there was no output, and I still can't create the RAID. The drives shouldn't be doing anything to be busy; this is a fresh install with no file systems or disks mounted.


  • Yes, I wiped them in the Disks section of OMV before deleting the RAID listing in the OMV GUI.

    That could explain it: delete the RAID first, then wipe. Have you tried wiping the drives again?

    I've run the command you supplied, but there was no output

    Odd. That would suggest there is nothing on the drives, i.e. they have been wiped. If that's the case, it doesn't explain this: mdadm: super1.x cannot open /dev/sdc: Device or resource busy


    This line, though, might suggest there is something on there: mdadm: Defaulting to version ddf metadata


    If you ignore the RAID setup, can you create a file system on one of those drives and then get output from wipefs -n?


    BTW, I assume from the output that you're using OMV4?
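    For reference, a few read-only checks will usually show what is holding a disk "busy". This is a general mdadm/kernel checklist rather than something from the thread, and sdc is assumed from the error output above:

```shell
# Is a stale md array (possibly a DDF container) still assembled?
cat /proc/mdstat

# What does the kernel think is sitting on top of the disk?
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT /dev/sdc
ls /sys/block/sdc/holders/

# Inspect any RAID superblock mdadm can find on the disk
mdadm --examine /dev/sdc

# A leftover array can then be stopped and its superblock cleared (destructive):
#   mdadm --stop /dev/md127
#   mdadm --zero-superblock /dev/sdc
```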

  • I'll try to make a file system on each disk now. The RAID is deleted from the GUI; I did this after wiping them the first time around, so there is nothing in the RAID section now.


    And no, not OMV4; this is a fresh install of OMV5.


  • Tried to make a file system on sdc and sdd, but I get the following error. Not sure what's going on with these drives.


    Error #0:
    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mkfs -V -t ext4 -b 4096 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 -O 64bit -L '3' '/dev/sdd1' 2>&1' with exit code '1': mke2fs 1.45.5 (07-Jan-2020)
    /dev/sdd1 is apparently in use by the system; will not make a filesystem here! in /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc:672
    Stack trace:
    #0 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): Engined\Rpc\OMVRpcServiceFileSystemMgmt->Engined\Rpc\{closure}('/tmp/bgstatusQ3...', '/tmp/bgoutputoH...')
    #1 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(688): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure), NULL, Object(Closure))
    #2 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->create(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('create', Array, Array)
    #5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'create', Array, Array, 1)
    #6 {main}


  • Tried to make a file system on sdc and sdd, but I get the following error. Not sure what's going on with these drives.

    Then there is something on those drives, probably from their use in hardware RAID. I did a search on DDF metadata; one site suggests installing dmraid (apt install mdadm dmraid), then running dmraid -r.
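    As a sketch of that suggestion (dmraid reads vendor RAID metadata formats such as DDF; the erase option shown in the comment is destructive):

```shell
# Install the tools if they are missing
apt install mdadm dmraid

# Report any vendor RAID (including DDF) metadata found on attached disks
dmraid -r

# If a RAID set is reported on a disk, its on-disk metadata can be erased with:
#   dmraid -r -E /dev/sdc
```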

  • Then there is something on those drives, probably from their use in hardware RAID. I did a search on DDF metadata; one site suggests installing dmraid (apt install mdadm dmraid), then running dmraid -r.

    Will give it a try and see. I knew I should have destroyed the RAID before I flashed the RAID card to IT mode.


  • Then there is something on those drives, probably from their use in hardware RAID. I did a search on DDF metadata; one site suggests installing dmraid (apt install mdadm dmraid), then running dmraid -r.

    Tried this, and the final command just comes back with "no raid disks".


  • Tried this, and the final command just comes back with "no raid disks".

    TBH I'm at a loss. Me, I would now either try the drives in another machine (though I do have a USB docking station for this sort of thing) or run autonuke in DBAN, which can take hours.


    So there's no residual RAID signature: the above comes back blank, as does wipefs, which would suggest the drives are clean. Yet you cannot create a RAID or a file system, as each attempt returns the drive as busy.


    What's the RAID card, and the make and model of the server?

  • TBH I'm at a loss. Me, I would now either try the drives in another machine (though I do have a USB docking station for this sort of thing) or run autonuke in DBAN, which can take hours.


    So there's no residual RAID signature: the above comes back blank, as does wipefs, which would suggest the drives are clean. Yet you cannot create a RAID or a file system, as each attempt returns the drive as busy.


    What's the RAID card, and the make and model of the server?

    Yeah, I haven't got a dock for 3.5" disks. If I have to, I can jury-rig my gaming PC so I can plug in one drive at a time directly, or, if you can suggest something, I can leave them in the server and use another OS. The OMV install on here is just to test the server; my production server is on other, older hardware at the moment.


    The RAID card is what came with the Dell T430 server I got, which was being disposed of. It's a PERC H330, but I've since flashed it away from the Dell firmware, which is what was on the card when the RAID on these two disks was made, running a Windows Server OS. I flashed it using this guide (FreeDOS method): https://forums.servethehome.co…ps-hba-it-firmware.25498/

    I wanted to use the card as a SATA card instead of a proprietary HW RAID card, but I flashed it before destroying the RAID, which I guess is why I'm in this situation. Also, I doubt it has anything to do with it, but the drives were originally installed with their OS in BIOS mode; when I did this install of OMV, I switched the BIOS to UEFI to keep with the times.


  • I saw your post in My Nas Build :)

    I switched the BIOS to UEFI to keep with the times.

    That shouldn't be the issue. However, these cards do have their own menu, accessible with something like Ctrl+S; perhaps going into that may shed some light.

  • I saw your post in My Nas Build :)

    That shouldn't be the issue. However, these cards do have their own menu, accessible with something like Ctrl+S; perhaps going into that may shed some light.

    Yeah, didn't think it would be the issue. I've been into the card's controller interface via the Dell system menu, but I'm not sure, now it's flashed, whether there's another way into the card menu. In the end it's not the end of the world, as the drives wouldn't be for production use; I just wanted to make use of them while I test the server.


  • The reason I suggested accessing the menu is to see whether the drives are displayed and whether they can be wiped in there. I've only ever dealt with Adaptecs, but I did find this, which you probably already have.

    Yeah, sure, I've checked and didn't notice anything, but I'll have another look later on.


  • Yeah, sure, I've checked and didn't notice anything, but I'll have another look later on.

    I guessed you might have, but if the drives do not display, another option is to re-enable the RAID option and see whether the drives are seen and what action you may be able to take within the card menu.

  • The drives display but don't give any options. As far as I'm aware, it just passes the drives through to the OS as if they are direct-attached now, so the RAID side of the card has completely gone; I would have to flash it back to standard, if that's even possible.

    OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
    HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)

  • so the RAID side of the card has completely gone

    I would agree. The only three options I can then think of are dd, DBAN, or another distro: you can set dd to write zeros to the drive; boot DBAN and run autonuke; or use a live Linux distro, have a look at the drive with GParted (though they should be blank), and dd from there.
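    The dd option can be sketched as below. DDF/hardware-RAID metadata is normally stored near the end of the disk, so zeroing both the first and the last megabyte is usually enough. The demonstration runs against a scratch file; /tmp/fakedisk.img stands in for the real /dev/sdX, and the pattern is destructive once pointed at a device.

```shell
# Practise on a 10 MiB scratch file; substitute the real device (e.g. /dev/sdc)
# only once you are certain of the target.
img=/tmp/fakedisk.img
truncate -s 10M "$img"

# Zero the first 1 MiB (partition table and most filesystem signatures)
dd if=/dev/zero of="$img" bs=1M count=1 conv=notrunc 2>/dev/null

# DDF metadata usually lives at the END of the disk, so zero the last 1 MiB too
size=$(stat -c %s "$img")
dd if=/dev/zero of="$img" bs=1M count=1 seek=$(( size / 1048576 - 1 )) conv=notrunc 2>/dev/null

ls -l "$img"
```

    conv=notrunc keeps dd from truncating the target, so the file (or disk) size is unchanged; only the chosen regions are overwritten.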

  • I would agree. The only three options I can then think of are dd, DBAN, or another distro: you can set dd to write zeros to the drive; boot DBAN and run autonuke; or use a live Linux distro, have a look at the drive with GParted (though they should be blank), and dd from there.

    Just had no free time, but I'm going to try GParted and see if a few wipes in there do anything.

