Can't remove disks from RAID

  • Hello, I have set up a 2-disk mirrored RAID and it works perfectly. However, if one of the disks fails, or I need to take one of the disks out, the RAID disappears and I lose all access to both disks.


    I was wondering if anyone else has this kind of problem, or am I doing something wrong?

    • Official Post

    or am I doing something wrong

    You are assuming that mdadm (software RAID) knows what you have done.

    if one of the disks fails

    Then the array will continue as clean/degraded.

    or I need to take one of the disks out, the RAID disappears and I lose all access to both disks.

    Why would you remove a disk unless it was showing SMART issues? As for the array disappearing, this is expected behaviour.


    If replacing a drive within an array, use the GUI to remove the old drive, then replace it.
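
    For anyone curious what that replacement looks like from a shell, here is a minimal sketch. The device names are only examples: /dev/md0 for the array, /dev/sdb for the failed member and /dev/sdc for the new drive, so check the real names on your own system first.

        # mark the old member as failed and remove it from the array
        mdadm /dev/md0 --fail /dev/sdb
        mdadm /dev/md0 --remove /dev/sdb
        # add the replacement drive; the mirror then rebuilds onto it
        mdadm /dev/md0 --add /dev/sdc
        # watch the rebuild progress
        cat /proc/mdstat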

  • Why would you remove a disk unless it was showing SMART issues? As for the array disappearing, this is expected behaviour.


    If replacing a drive within an array, use the GUI to remove the old drive, then replace it.

    I was removing disks from the RAID so I could test what would happen if a disk failed. When I removed a disk and plugged it back in, I lost all access to the data on the disk that was still there. Is this really expected?


    • Official Post

    Is this really expected?

    I assume from that question that this is your first time using or testing a RAID array.


    OMV uses mdadm (software RAID). As with any software, the end user has to input information for it to function, but that part is taken care of by the GUI. If a user disrupts or changes the behaviour of the software, there is an excellent chance it will cease to function as expected.
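
    If you want to see what mdadm has actually been told about your arrays, you can inspect it from a shell. A small sketch, assuming the Debian-default configuration path that OMV builds on (adjust if yours differs):

        # arrays mdadm can currently see, with their members
        mdadm --detail --scan
        # the persistent configuration consulted at boot
        cat /etc/mdadm/mdadm.conf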


    If a drive fails within an array, mdadm will automatically mark the drive as failed, remove it from the array, and leave the array in a clean/degraded state with access to the data within the array.
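
    You can confirm that state from the command line. A quick sketch, assuming the array is /dev/md0 (check the actual name in RAID Management or in /proc/mdstat):

        # overview of all arrays and their member drives
        cat /proc/mdstat
        # detailed state; a failed mirror typically shows "clean, degraded"
        mdadm --detail /dev/md0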


    Mdadm is not hot-swap. If a user pulls a drive from an array by shutting down the server and then reboots, the check that runs during the boot process will detect the missing drive and mark the array as inactive. This means no array will be displayed in RAID Management, and therefore there is no access to the data, even though the remaining drive still shows under Storage -> Disks. While the array comes back as inactive, it can be made active again via the command line, and it will then display as clean/degraded.
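
    The usual command-line recovery for that inactive state looks roughly like this. It is only a sketch: /dev/md0 and /dev/sd[ab] are example names, so check the real ones with mdadm --detail --scan and blkid before running anything, and only use --force on an array you understand.

        # the array shows up here as inactive
        cat /proc/mdstat
        # stop the half-assembled array, then reassemble it from its members
        mdadm --stop /dev/md0
        mdadm --assemble --force --verbose /dev/md0 /dev/sd[ab]
        # it should now appear again, typically as clean/degraded or resyncing
        cat /proc/mdstat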


    Have a look through the RAID section on the forum; most issues are from inactive arrays, usually after a power outage.


  • Yes, it's my first contact with RAID. In this case of having to reassemble the RAID, more specifically, I use 2 mirrored disks, so it would be simpler to use rsync to avoid having to reassemble the RAID, no? I'll look more on the forum about these issues! Thank you very much in advance!

    • Official Post

    I use 2 mirrored disks, so it would be simpler to use rsync to avoid having to reassemble the RAID, no?

    Yes, it would. RAID is about availability, so if a drive fails you still have access to the data. With respect to rsync, the getting started guide has a section on rsync and how to use it.
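
    As a rough illustration of that approach, a one-way mirror of one data drive onto the other could look like the line below; the mount points are made up, so substitute the real ones shown under Storage -> File Systems. Normally you would set something like this up as a scheduled job rather than run it by hand, which is what the guide walks through.

        # copy everything from disk1 to disk2, deleting files on disk2
        # that no longer exist on disk1 (the trailing slashes matter)
        rsync -av --delete /srv/disk1/ /srv/disk2/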
