degraded RAID but disk still there and works, raid doesn't rebuild

  • Hello. I’m posting because I’ve had a problem since yesterday.

    A disk in my array is missing from the "RAID Management" tab, the RAID is degraded, and the volume is no longer mounted, but the disk is plugged in, visible in the list of drives, and its SMART tests are green.

    So I removed the offending disk so that it would appear in the list of disks available for RAID recovery; it was accepted, and I mounted the volume again. For now the RAID has not rebuilt, but I have access to my files under Windows again.

    I don’t know what to do. I already did the same manipulation yesterday, but tonight, after restarting the server, the same problem appeared again.

    I just rebooted the server to see whether it would take the changes into account and start rebuilding the RAID, but it’s as if I had done nothing. That is to say, as explained above: no more access to the RAID under Windows, a missing disk, and a degraded RAID even though the disk is present and functional.

    While waiting for an answer, I will try to run a deep SMART test.
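    For reference, the "deep" test from the command line would be smartctl's long self-test (smartmontools). This is a minimal sketch: /dev/sdf and the run() dry-run wrapper are illustrative only; substitute the suspect drive and set DRY_RUN=0 to actually execute.

```shell
# Dry-run wrapper so nothing runs by accident in this sketch.
DRY_RUN=1
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

DISK=/dev/sdf                 # hypothetical: substitute the suspect drive
run smartctl -t long "$DISK"  # start the long (deep) self-test
# ...the test runs on the drive for several hours, then:
run smartctl -a "$DISK"       # read the self-test log and SMART attributes
```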

    Thank you for reading.

    • Official Post

    I'm sorry, I'm having trouble making sense of what you've written, but some information would be helpful. SSH into OMV and run the following two commands, and post the output of each in a code box (the </> symbol on the thread bar); it makes it easier to read.

    cat /proc/mdstat
    mdadm --detail /dev/md0


    Raid is not a backup! Would you go skydiving without a parachute?

    OMV 6.x amd64 running on an HP N54L Microserver

  • Code
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md0 : active raid5 sdf[8](F) sda[0] sdg[6] sde[4] sdb[1] sdd[3] sdc[2] sdh[7]
          13673676800 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/7] [UUUUU_UU]
          bitmap: 9/15 pages [36KB], 65536KB chunk
    unused devices: <none>
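    For anyone reading along: the (F) after sdf[8] marks the member mdadm has flagged as faulty, and [8/7] [UUUUU_UU] means 8 slots with only 7 up. A small sketch that picks the failed member out of the line pasted above:

```shell
# Sample mdstat line from this thread; "(F)" marks a member flagged as faulty.
line='md0 : active raid5 sdf[8](F) sda[0] sdg[6] sde[4] sdb[1] sdd[3] sdc[2] sdh[7]'
for dev in $line; do
  case $dev in
    *"(F)"*) echo "failed member: ${dev%%\[*}" ;;  # strip the [n](F) suffix
  esac
done
```

Running it prints `failed member: sdf`.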
    • Official Post

    it displays active/degraded

    OK, same thing, but the array is in a degraded state; the drive /dev/sdf has been failed by mdadm, hence the output from mdadm --detail.

    That drive needs replacing; if you need instructions on how to do that, let me know.
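    The usual mdadm replacement steps, sketched with a dry-run guard. The device names /dev/md0 and /dev/sdf come from the mdstat output above; always double-check yours first, and note the new drive may come up under a different letter after the swap.

```shell
DRY_RUN=1   # set to 0 on the real server once the device names are verified
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

run mdadm --manage /dev/md0 --remove /dev/sdf   # drop the faulty member
# ...power down, swap the physical drive, boot again, then:
run mdadm --manage /dev/md0 --add /dev/sdf      # add the new drive; rebuild starts
run cat /proc/mdstat                            # watch the resync progress
```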

    But a piece of advice: 8 drives in a RAID 5 is not a good idea. Once you go over 4, RAID 6 is a better option.
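    To put numbers on that advice, a back-of-envelope comparison, assuming 8 drives of roughly 2 TB each (close to this array's ~13.7 TB total):

```shell
drives=8; size_tb=2
# RAID 5 spends one drive's worth of capacity on parity and survives one failure;
# RAID 6 spends two drives' worth and survives two failures.
echo "raid5 usable: $(( (drives - 1) * size_tb )) TB, tolerates 1 failure"
echo "raid6 usable: $(( (drives - 2) * size_tb )) TB, tolerates 2 failures"
```

With 8 members, a second drive dying mid-rebuild destroys a RAID 5 outright, which is why RAID 6's extra parity is usually worth the lost capacity at this size.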

  • OK, I will change the disk. I'm starting a backup so I can restart with a ZFS RAID solution, but the storage space I need will be tight.

    It’s still weird that this disk is already dead; it must be barely 1 or 2 years old.

  • The hard drive is still under warranty; I started the RMA procedure this morning. In addition, I replaced the disk with a new one and everything is working normally. Thanks for helping.

  • fredlepp

    Added the Label OMV 5.x
  • fredlepp

    Added the Label resolved
