RAID5 missing after power failure... but all physical disks are present...!

  • Hello,


    I have been using OMV for six years now, and it is a very stable NAS OS!


    But yesterday we had a power failure, and now the array no longer appears in the RAID Management panel. So strange!


    All the disks are present, and of course the file system is missing too (see my screenshots for more details).



    I don't want to make a mistake and lose my data.


    So, do you have an idea how to fix this problem?


    Thank you very much

  • Thanks.
    Here is the info:


    Code
    root@OMV-NAS:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : inactive sdb[0] sdi[8] sdh[6] sdg[5] sdf[4] sde[3] sdd[1]
    13673684584 blocks super 1.2
    unused devices: <none>




    Code
    root@OMV-NAS:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid5 num-devices=8 metadata=1.2 name=OMV-NAS:RAID5 UUID=118b93c2:b4c2e708:751d95ca:7a668fc9
    devices=/dev/sdb,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi
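    A side note on that scan output: the UUID is the stable identifier of the array, and it can be used to pin the array in /etc/mdadm/mdadm.conf so it assembles by UUID rather than by device names (which can change between boots). A minimal sketch of pulling it out of the line above (the parsing approach here is just one way to do it, not an OMV-specific tool):

    ```shell
    # Extract the array UUID from the `mdadm --detail --scan` line above.
    # The grep pattern assumes the UUID is lowercase hex separated by colons.
    scan_line='ARRAY /dev/md0 level=raid5 num-devices=8 metadata=1.2 name=OMV-NAS:RAID5 UUID=118b93c2:b4c2e708:751d95ca:7a668fc9'
    uuid=$(printf '%s\n' "$scan_line" | grep -o 'UUID=[0-9a-f:]*')
    echo "$uuid"   # UUID=118b93c2:b4c2e708:751d95ca:7a668fc9
    ```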


    I hope it'll help.


    Thank you.


    Regards.

  • Thank you very very much.


    Now I can see the RAID 5.
    Just one last thing: the RAID 5 is degraded.
    So I want to add the missing disk, sdc, but I can't see it in the "Add hot spares / recover RAID device" window...!


    But I can see it here, in the "Physical disks" window.


    How can I force sdc back into the RAID...? If you have an idea...


    Thank you.
    Regards.



    PS: and I'm a little afraid, because the "fdisk -l" command returns this strange result: "Disk /dev/sdi doesn't contain a valid partition table"


  • So I want to add the missing disk, sdc, but I can't see it in the "Add hot spares / recover RAID device" window...!

    You aren't ready to start using the web interface yet. So, don't do anything there.


    What is the output of: cat /proc/mdstat

    and I'm a little afraid, because the "fdisk -l" command returns this strange result: "Disk /dev/sdi doesn't contain a valid partition table"

    That is fine. The arrays that OMV creates don't use partitions and don't need a partition table.

    omv 5.5.17-3 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.4.2
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

    Here is the result of the command:


    Regards.


    Code
    root@OMV-NAS:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md0 : active (auto-read-only) raid5 sdb[0] sdi[8] sdh[6] sdg[5] sdf[4] sde[3] sdd[1]
    13673680384 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/7] [UU_UUUUU]
    unused devices: <none>


    And I see it with the "fdisk -l" command:
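    For what it's worth, the health of the array can be read straight off the mdstat line above: [8/7] means 8 members expected but only 7 active, and the '_' in [UU_UUUUU] marks the slot of the missing member (slot 2, which matches the absent sdc, since the listing shows sdb[0] and sdd[1] but no device [2]). A quick sketch of extracting the counts (just a grep pattern over the line, not an mdadm feature):

    ```shell
    # Pull the expected/active member counts out of the mdstat status line.
    mdstat_line='13673680384 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/7] [UU_UUUUU]'
    counts=$(printf '%s\n' "$mdstat_line" | grep -o '\[[0-9]*/[0-9]*\]')
    echo "$counts"   # [8/7] -> degraded: one member short
    ```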


  • Thank you.


    I just did this, but the RAID 5 is still degraded, and I still can't see sdc in the "Add hot spares / recover RAID device" window.


    Maybe I should force-add the disk, or format the drive...?


    Regards.


    PS: or maybe add the "c" at the end of this command...

    Code
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfed]
  • I can't see the sdc in the "Add hot spares / recover RAID device" window.

    Again, you really don't want to use the web interface yet.


    Hello, if someone has an idea to fix this, it would be great.

    Patience... Reinstalling OMV wouldn't help with this issue.


    or maybe add the "c" at the end of this command...

    Yes, add the 'c':


    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bihgfedc]
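    A side note on the syntax: /dev/sd[bihgfedc] is a shell glob, not mdadm syntax; the shell expands it to the matching device nodes (in sorted order) before mdadm ever runs, which is why adding the 'c' pulls /dev/sdc into the list. A safe way to preview the expansion, using dummy files instead of real devices:

    ```shell
    # Demonstrate the bracket-glob expansion with throwaway files.
    demo=$(mktemp -d) && cd "$demo"
    touch sdb sdc sdd sde sdf sdg sdh sdi
    echo sd[bihgfedc]   # sdb sdc sdd sde sdf sdg sdh sdi
    ```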


  • Hello,


    Thanks for the help.


    There is a problem with the first command, so I haven't run the second command yet.
    Maybe a restart of OMV...


    Code
    root@OMV-NAS:~# mdadm --stop /dev/md0
    mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?


    PS: Restarting OMV didn't change anything.



    OK, with "umount -l /dev/md0" the filesystem is unmounted, but I still get "mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?"


    So, the status now:

    Ah, I've read a lot of things, but do you have an idea how to move on?


    Regards.

  • And if I try the "assemble" command:


  • So the problem was /dev/sdc.


    After reading a lot of information, I ran:



    Code
    # mdadm --zero-superblock /dev/sdc
    # mdadm --manage /dev/md0 --add /dev/sdc



    So now the status is rebuilding.
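    While it rebuilds, the progress can be followed with "watch cat /proc/mdstat" or "mdadm --detail /dev/md0". A small sketch of pulling the percentage out of a recovery line (the sample line below is made up for illustration; the real numbers will differ):

    ```shell
    # Extract the rebuild percentage from a hypothetical mdstat recovery line.
    recovery_line='[=>...................]  recovery =  7.5% (146538368/1953382912) finish=180.2min speed=167120K/sec'
    pct=$(printf '%s\n' "$recovery_line" | grep -o '[0-9.]*%')
    echo "$pct"   # 7.5%
    ```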


    links
    https://ubuntuforums.org/showthread.php?t=884556
    https://askubuntu.com/question…moved-hard-drive-in-raid5

  • And now the RAID 5 is OK.

