Degraded RAID but disk still there and works, RAID doesn't rebuild

  • Hello. I’m coming to you because I’ve had a problem since yesterday.

    A disk in my array is missing from the "RAID Management" tab, the RAID is degraded and the volume is no longer mounted, but the disk is plugged in, visible in the list of drives, and its SMART tests are green.


    So I removed the offending disk so that it would appear in the list of available disks for RAID recovery; it was accepted, and I remounted the volume. The RAID has not rebuilt yet, but I do have access to my files under Windows again.


    I don’t know what to do. I already did the same manipulation yesterday, but tonight, after restarting the server, the same problem appeared.


    I just rebooted the server to see if it would take the changes into account and start rebuilding the RAID, but it is as if I had done nothing; that is to say, as explained above: no more access to the RAID in Windows, a missing disk, and a degraded RAID even though the disk is present and functional.

    While waiting for an answer, I will try to run an extended SMART test (a sketch of the command is shown below).

    Thank you for reading.
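
    For reference, a minimal sketch of how an extended (long) SMART self-test is usually started and checked with smartctl; the device name /dev/sdf is an assumption, so substitute the actual suspect disk.

    Code
    # Start the extended self-test (runs in the background on the drive itself)
    smartctl -t long /dev/sdf
    # Later, check the self-test log and the full SMART report
    smartctl -l selftest /dev/sdf
    smartctl -a /dev/sdf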

    • Official Post

    I'm sorry, I'm trying to make sense of what you've written, but some information would be helpful. SSH into OMV, run the following two commands, and post the output of each in a code box (the </> symbol on the thread bar); it makes it easier to read.


    cat /proc/mdstat

    blkid

  • Code
    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md0 : active raid5 sdf[8](F) sda[0] sdg[6] sde[4] sdb[1] sdd[3] sdc[2] sdh[7]
    13673676800 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/7] [UUUUU_UU]
    bitmap: 9/15 pages [36KB], 65536KB chunk
    unused devices: <none>
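
    For readers following along, a minimal sketch of how the failed member can be confirmed in more detail; /dev/md0 is taken from the mdstat output above, where the (F) flag marks /dev/sdf as faulty.

    Code
    # Show the array state (expect "clean, degraded") and sdf listed as faulty/removed
    mdadm --detail /dev/md0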
    • Official Post

    It displays active/degraded.

    OK, same thing: the array is in a degraded state and the drive /dev/sdf has been failed and removed by mdadm, hence the output from mdadm --detail.


    That drive needs replacing; if you need instructions on how to do that, let me know (a general sketch of the usual steps follows below).


    But as a piece of advice, 8 drives in a RAID 5 is not a good idea; once you go over 4 drives, RAID 6 is a better option.
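
    For reference, a minimal sketch of the usual mdadm drive-replacement steps. The array name /dev/md0 comes from the output above; /dev/sdf and the assumption that the replacement comes up under the same device name are illustrative only, so double-check device names (lsblk, blkid) before running anything.

    Code
    # 1. Mark the drive as failed (it may already show (F)) and remove it from the array
    mdadm /dev/md0 --fail /dev/sdf
    mdadm /dev/md0 --remove /dev/sdf
    # 2. Power down, physically swap the drive, boot, and confirm the new device name
    # 3. Add the replacement; mdadm starts the rebuild automatically
    mdadm /dev/md0 --add /dev/sdf
    # 4. Watch the rebuild progress
    cat /proc/mdstat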

  • OK, I will change the disk. I'm starting a backup so I can start over with a ZFS RAID solution, but the space I need will be tight.


    It’s still weird that this disk is already dead; it must be barely 1 or 2 years old.
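
    Since ZFS was mentioned, a minimal sketch of a RAID-6-like pool that could be created after the backup; the pool name "tank" and the whole-disk names are assumptions (in practice /dev/disk/by-id paths are preferable), and raidz2 gives up two disks' worth of capacity in exchange for two-disk redundancy.

    Code
    # raidz2 tolerates two simultaneous drive failures, similar to RAID 6
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh
    zpool status tank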

  • fredlepp

    Added the label OMV 5.x.
  • fredlepp

    Added the label solved.
