HDD not mounting on RAID after reboot

  • Hello,

    After reboot, one of the HDDs on my NAS (RAID5) is not mounting anymore.

    Trying to add the disk to the RAID via RAID -> Recover, I get the following error:

    "Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; mdadm --manage '/dev/md127' --add /dev/sda 2>&1' with exit code '1': mdadm: add new device failed for /dev/sda as 4: Invalid argument


    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; mdadm --manage '/dev/md127' --add /dev/sda 2>&1' with exit code '1': mdadm: add new device failed for /dev/sda as 4: Invalid argument in /usr/share/php/openmediavault/system/process.inc:247

    Stack trace:

    #0 /usr/share/openmediavault/engined/rpc/mdmgmt.inc(420): OMV\System\Process->execute()

    #1 [internal function]: Engined\Rpc\MdMgmt->add()

    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array()

    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()

    #4 /usr/sbin/omv-engined(544): OMV\Rpc\Rpc::call()

    #5 {main}"


    Does anybody know what the problem could be?


    HP MicroServer Gen 8.

    On iLO during startup, there is no error message (as far as I know).

  • HP MicroServer gen 8

    I'm not totally familiar with these (I have a Gen 7), but I have applied the updated/hacked BIOS.

    Trying to add the disk to the RAID via RAID -> Recover, I get the following error

    It will error simply because mdadm doesn't know the drive has been removed (if it has). This suggests that mdadm still has that drive in its configuration, and therefore the recover fails.
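    A minimal sketch of what that check could look like on the command line (assuming, as in the error above, that the array is /dev/md127 and the drive is /dev/sda; don't run the remove step unless --detail really shows the drive as faulty):

```shell
# Inspect what mdadm currently knows about the drive and the array.
mdadm --examine /dev/sda      # on-disk RAID superblock of the member, if any
mdadm --detail /dev/md127     # slot states: active / faulty / removed

# If the drive is still listed in the array as faulty, drop it first;
# only then can a GUI 'Recover' (or a manual --add) succeed.
mdadm --manage /dev/md127 --fail /dev/sda --remove /dev/sda
```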


    Once the machine is booted, you need to follow this post. Please put each output into a code box (the </> symbol on the thread post); this makes the output easier to read.


    Added to that, it could just as well be a hardware issue :/

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 7x amd64 running on an HP N54L Microserver

  • Hello,

    Sorry for the delay.

    Thanks for your help.

    Code: cat /proc/mdstat
    root@serveurmaison:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] 
    md127 : active raid5 sdc[2] sdb[1] sdd[0]
          5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
          bitmap: 15/15 pages [60KB], 65536KB chunk
    
    unused devices: <none>
    Code: blkid
    root@serveurmaison:~# blkid
    /dev/sdd: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="e67f0313-548c-4414-d8c8-fe79867ea78a" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    /dev/md127: LABEL="Raid5" UUID="b6dd9418-30ce-499c-90ca-7fc678a3fe24" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sdb: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="85ed296c-7ef9-37bd-16ed-d61c11f676b8" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    /dev/sde5: UUID="0747ae2c-7242-41d2-b70f-d20bacbe9126" TYPE="swap" PARTUUID="0a762683-05"
    /dev/sde1: UUID="20dd934e-4034-45ce-aacf-c6ea0d908b28" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0a762683-01"
    /dev/sdc: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="f123a240-b61a-ae20-ffeb-6140b93c49b7" LABEL="openmediavault:OMV" TYPE="linux_raid_member"


    Code: mdadm --detail --scan --verbose
    root@serveurmaison:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/OMV level=raid5 num-devices=4 metadata=1.2 name=openmediavault:OMV UUID=b1bd0cdb:83d9766c:8fd3c89b:dd62753a
       devices=/dev/sdb,/dev/sdc,/dev/sdd

    4 HDDs of 2 TB for the Raid. After running SMART, it seems that 2 HDDs are in a degraded state. See attachment.


    1 SSD of 128 GB (not sure of the capacity) for OMV.


    I had the problem after updating and rebooting.

  • You have a problem!! /dev/sd[a c] are showing a warning in SMART that might suggest bad sectors.


    The drive missing from the array, according to the output, is /dev/sda. What is odd, and makes no sense, is that the array is shown as active rather than active/degraded, which it should be with a drive missing. The mdadm conf file is also missing the definitions line.
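    The giveaway is in the mdstat output above: [4/3] [UUU_] means four slots configured but only three active, with the "_" marking the absent member. A small self-contained sketch, using a copy of the exact status line posted above, shows how to read those fields:

```shell
# The status line copied from the /proc/mdstat output in this thread.
mdstat_line='5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]'

# [configured/active] device counts -- here 4 configured but only 3 active.
counts=$(echo "$mdstat_line" | grep -o '\[[0-9]*/[0-9]*\]')

# Per-slot map: U = up, _ = missing; the last slot is the absent drive.
map=$(echo "$mdstat_line" | grep -o '\[[U_]*\]')

echo "counts=$counts map=$map"   # → counts=[4/3] map=[UUU_]
```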


    Post the output of mdadm --detail /dev/md127

  • Hi,


    I confirm that sda was normally part of the raid, but it is not anymore.


    Since last reboot, the Raid is not mounting (see screen shot).

  • Since last reboot, the Raid is not mounting (see screen shot).

    This 'might' be due to the issue with /dev/sda and /dev/sdc, and the filesystem 'may' need cleaning.


    Is it better to enable RAID in HW from the server, or, as currently, SW raid via OMV?

    TBH it's down to personal choice; the problem with HW raid is that it would present a 'single' drive to OMV, and you would have no way of monitoring the status of the array from within OMV's GUI.


    I think HDD is dead

    It needs replacing ASAP; as /dev/sdc is also showing a warning in SMART, you're running the risk of losing the array and therefore your data.
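    For anyone following along, the SMART attributes to watch for bad sectors are IDs 5, 197 and 198; any non-zero raw value is a warning sign. A sketch with made-up sample rows (on the real box the input would come from smartctl -A /dev/sdc):

```shell
# Illustrative (made-up) SMART attribute rows in smartctl -A layout.
smart_sample='  5 Reallocated_Sector_Ct   0x0033   095   095   010    Pre-fail  Always       -       152
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       24'

# Flag any bad-sector attribute whose raw value (last column) is non-zero.
warn=$(echo "$smart_sample" | awk '$2 ~ /Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable/ && $NF+0 > 0 {print $2, "raw =", $NF}')

echo "$warn"
```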

  • Hi geaves,


    I will continue with SW raid as I trust OMV as a very good solution with a very good community. 😀


    I've just received the new disk and replaced the one which seemed to be in the worst condition (sda). The array is now recovering. 8)


    I will order a second disk next month and replace the other one with SMART warnings.


    I've been using OMV for... a long time, and each time I've had a problem, I've found a solution via the community.


    I'm using it in a VM on a Freebox Ultra (the triple-play box from the Internet provider Free), where it's working well (but with no RAID), and in the configuration described here with an HP Microserver Gen8.


    Thanks for the help ! :)

  • Hi, it's me (again),


    My RAID has finished recovering, but it is still not mounting.



    Code: cat /proc/mdstat
    root@serveurmaison:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [raid0] [raid1] [raid10] 
    md127 : active raid5 sda[2] sdd[0] sdb[4] sdc[1]
          5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>


    Code: blkid
    root@serveurmaison:~# blkid
    /dev/sdd: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="e67f0313-548c-4414-d8c8-fe79867ea78a" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    /dev/md127: LABEL="Raid5" UUID="b6dd9418-30ce-499c-90ca-7fc678a3fe24" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sdb: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="a3cd44ed-b24b-ca41-25bf-6ef359d37588" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    /dev/sde5: UUID="0747ae2c-7242-41d2-b70f-d20bacbe9126" TYPE="swap" PARTUUID="0a762683-05"
    /dev/sde1: UUID="20dd934e-4034-45ce-aacf-c6ea0d908b28" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0a762683-01"
    /dev/sdc: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="85ed296c-7ef9-37bd-16ed-d61c11f676b8" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    /dev/sda: UUID="b1bd0cdb-83d9-766c-8fd3-c89bdd62753a" UUID_SUB="f123a240-b61a-ae20-ffeb-6140b93c49b7" LABEL="openmediavault:OMV" TYPE="linux_raid_member"
    Code: mdadm --detail --scan --verbose
    root@serveurmaison:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/OMV level=raid5 num-devices=4 metadata=1.2 name=openmediavault:OMV UUID=b1bd0cdb:83d9766c:8fd3c89b:dd62753a
       devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd

    In addition, looking at the console, I have this message:


    My understanding is that I should not have swapped the disks just like that, but instead:

    1/ Remove the disk from the raid,

    2/ Reboot,

    3/ Then add the new disk?


    Is it too late to correct this ?

  • My understanding is that I should not have swapped the disks just like that

    :/ From your post #9 I assumed you knew how to replace a drive within an array...


    Mdadm had removed a drive from the array (your #5 mdadm --detail), so the procedure would be:


    1) Shutdown

    2) Remove the failed drive and insert new drive

    3) Reboot

    4) Storage -> Disks -> select the new drive and select wipe from the menu; short/quick is sufficient for new drives

    5) Raid Management/MD plugin -> select Recover from the menu; the new drive should present itself, select it and click OK


    The array would then recover
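    For reference, steps 4 and 5 map roughly onto the commands below (a sketch only: /dev/sda is a hypothetical name here, and the new drive may come up under a different letter, so double-check with blkid before wiping anything):

```shell
# After the shutdown / swap / reboot of steps 1-3:
wipefs -a /dev/sda                        # quick wipe, like the GUI short wipe
mdadm --manage /dev/md127 --add /dev/sda  # what the GUI 'Recover' performs
cat /proc/mdstat                          # re-run to watch the rebuild progress
```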


    The issue from the screenshot is related to the file system, did you reboot after the array finished recovering? This could also be related to the other bad drive.

    The mdadm conf file is now showing an entry under the definitions line, albeit an incorrect one; in #3 it was blank.


    Before we run fsck across the array, post the output of mdadm --detail for each drive, e.g. mdadm --detail /dev/sda
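    (If it comes to that: fsck must only ever be run with the filesystem unmounted, and the blkid output above shows the array carries ext4, so the check would look something like this sketch, ideally with a backup first.)

```shell
# Sketch -- only run against an unmounted filesystem.
umount /dev/md127 2>/dev/null    # no-op if it never mounted in the first place
fsck.ext4 -f /dev/md127          # -f forces a full check; add -y to auto-repair
```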

  • Hi geaves,


    Should I put the "old" disk back in place and then remove it properly? Because I've done exactly what you said (maybe with the second reboot missing, I can't remember).

    Thanks again for your help :)


    EDIT :

    Using this page : https://phoenixnap.com/kb/fsck-command-linux I've corrected a lot of errors. Seems to be ok now.


    BUT (always a BUT), now I find the initial ramdisk very slow to load (about 1 to 2 min). I hadn't noticed this before, but I'm not sure it's new...

  • I've corrected a lot of errors. Seems to be ok now.

    Can you run mdadm --detail on each drive and post the output?

    Post the output of cat /etc/mdadm/mdadm.conf; my assumption is that this will still need fixing.
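    For reference, the usual manual way to rebuild that definitions line looks like the sketch below; note that on OMV the file is generated by the system, so wait for advice before hand-editing it:

```shell
# Print the ARRAY definition mdadm derives from the running array.
mdadm --detail --scan
# That line belongs under the 'definitions of existing MD arrays' section
# of /etc/mdadm/mdadm.conf; after editing, refresh the initramfs so the
# array is assembled correctly at boot:
update-initramfs -u
```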

    BUT (always a BUT), now I find the initial ramdisk very slow to load (about 1 to 2 min). I hadn't noticed this before, but I'm not sure it's new

    Pass, that's out of my ballpark :) I don't shut down my Microserver, and it's very rare that I connect a monitor to check the console.

  • Hi geaves,



    Pass, that's out of my ballpark :) I don't shut down my Microserver, and it's very rare that I connect a monitor to check the console.


    Same here. It was more for my knowledge ;)

  • :/ Don't you have to create/update the conf file first and then run update-initramfs -u?

    I'd agree if you were doing this all manually, but omv-salt is smart enough to get the order correct. For example:

