Procedure to replace a drive in RAID1

  • Hi All, I have an OMV install set up with 2 disks in a RAID1. It works great and I love OMV. I have spent quite a bit of time testing a failed-drive scenario, to no avail. What is the proper procedure for doing so? I'm a QNAP guy who is used to hot-swapping a drive. Can someone point me in the right direction? Thanks

  • I have used this procedure (a rough CLI sketch of the equivalent mdadm commands follows the steps):


    Log in to the OMV web GUI.

    Go to "Storage" -> "Software RAID".

    Select your RAID with a simple single left-click on it.

    Click on the button called "Remove", then select the disk you want to replace, then save.

    Shut down the NAS.

    Replace the disk.

    Power on the machine.

    Log in to the OMV web GUI.

    Go to "STORAGE" -> "Disks".

    Select the new disk with a single left-click.

    Click on the button called "Wipe".

    Go to "STORAGE" -> "Software RAID".

    Select your RAID (now in degraded state).

    Left-click on the "Recover" button.

    Select the new disk then save.

    Wait for the reconstruction.
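
    For reference, here is a rough CLI equivalent of these GUI steps. It is a sketch only; the array name /dev/md0 and the disk name /dev/sdc are assumptions, so adjust them to your setup:

    # Mark the outgoing disk as failed, then remove it from the mirror
    mdadm --manage /dev/md0 --fail /dev/sdc
    mdadm --manage /dev/md0 --remove /dev/sdc

    # Shut down, swap the physical disk, power back on, then wipe the new disk
    wipefs --all /dev/sdc

    # Add the new disk; the rebuild starts automatically
    mdadm --manage /dev/md0 --add /dev/sdc

    # Watch the reconstruction progress
    cat /proc/mdstat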

  • geaves, it's rebuilding again at the moment; it won't be completed for another 6 hours. Hopefully this is OK for you.


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : active raid1 sdc[1] sdb[0]

    1953382464 blocks super 1.2 [2/2] [UU]

    [=>...................] resync = 6.5% (127750528/1953382464) finish=234.4min speed=129762K/sec

    bitmap: 15/15 pages [60KB], 65536KB chunk


    unused devices: <none>
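
    I'm keeping an eye on the resync from the CLI as well (assuming the array stays at /dev/md0):

    # Refresh the mdstat output every 60 seconds until the resync reaches 100%
    watch -n 60 cat /proc/mdstat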

    • Official post

    it's rebuilding again at the moment; it won't be completed for another 6 hours. Hopefully this is OK for you.

    That's fine. The only reason I asked for that output (and I would have requested more) was to determine whether the failing drive was still functioning within the array or whether mdadm had removed it, as that wasn't clear in your first post.


    Try not to access the server until the rebuild has finished; it puts less stress on the working drive.
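
    If you want to check from the CLI whether mdadm still considers a member drive part of the array, something like this works (the device names /dev/md0 and /dev/sdc are assumptions):

    # Per-member state: "active sync", "faulty" or "removed" shows what mdadm thinks of each disk
    mdadm --detail /dev/md0

    # SMART health of the suspect drive itself
    smartctl -a /dev/sdc | grep -i 'overall-health'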

  • OK, the RAID is rebuilt with 2 healthy drives. Thormir84, I followed your instructions and everything worked just fine. However, after removing a drive from the RAID1 and wiping the new one, I go to "Storage -> Software RAID" and it's empty, i.e. there is no RAID anymore. This is one of the issues I've had with OMV.


    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : inactive sdb[0](S)

    1953382488 blocks super 1.2


    md127 : inactive sdc[2](S)

    1953382488 blocks super 1.2


    unused devices: <none>


    While it was rebuilding it seemed to show md0; now it shows md127.

    • Official post

    I have spent quite a bit of time testing a failed-drive scenario, to no avail.

    This is not possible: using mdadm you cannot 'test' a failed-drive scenario by pulling a disk. If a drive fails and the failure is detected by mdadm, the array will appear as clean/degraded in the GUI. If you physically 'pull' a drive from an array, the array will go inactive and become invisible; unlike QNAP, Linux mdadm is not hot-swap.
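
    If all you want to do is exercise the failed-drive handling without pulling a disk, you can mark a member as failed from the CLI instead (a sketch with assumed device names); the array then drops to clean/degraded rather than going inactive:

    # Simulate a drive failure: the array stays active but degraded ([U_] in /proc/mdstat)
    mdadm --manage /dev/md0 --fail /dev/sdc
    cat /proc/mdstat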


    So, how have you gone from your #4 to your #6? It makes no sense: in #4, what was actually resyncing? Once complete, the array would have been in an active, usable state.


    In #6 you now have 2 arrays, both of which are inactive. Thormir84's instructions are correct, but they assume the array is in an active, clean state before commencing with those steps.


    #6 suggests that you have somehow created a second array using one drive, hence the output from cat /proc/mdstat.
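
    Examining the RAID superblock on each disk would show which array each one thinks it belongs to (a sketch, using the device names from your output):

    # Compare the Array UUID, array name and event counters reported by each member
    mdadm --examine /dev/sdb
    mdadm --examine /dev/sdc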


    So is your setup just a test scenario, i.e. you have no filesystem and no data on the drive(s)?

  • Yes, this is just a test scenario, no data. I am trying to move away from QNAP and am testing NAS OSes. I really like OMV but I cannot get past this scenario. As soon as the mirror was synced, I rebooted to ensure all was well. The RAID1 was there and appeared OK, so I proceeded with Thormir84's instructions. From "Storage -> Software RAID" I removed a disk from the array, then shut down and physically replaced the disk. I restarted, "wiped" the new disk, then went to "Storage -> Software RAID" and nothing is present.

    • Official post

    From "Storage->Software Raid" I removed a disk in the array

    At that point the array would have been displayed as clean/degraded in RAID management.


    then shut down and physically replaced the disk

    This is where it has gone wrong: I believe you physically removed the wrong drive. I'll explain why further down.


    I restarted, "wiped" the new disk, then went to "Storage -> Software RAID" and nothing is present

    The fact that nothing is present in RAID management points to the array being inactive, which is what happens if you physically 'pull' a drive from an array; this is the expected outcome. mdadm is not plug and play: it knows nothing of the hardware nor of the filesystem on top of it.


    Your #6, where you have run cat /proc/mdstat, now shows 2 arrays, both of which are inactive. md0 makes sense given your #4, but I have no idea where the other one has come from; I could surmise, but that would be speculation.
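
    For what it's worth, an inactive array like this can often be brought back by stopping the leftover fragments and force-assembling from the member disks. This is a sketch only, using the device names from your output, and is only something to try on a test setup with no data at risk:

    # Stop both inactive fragments, then force-assemble the mirror from its member disks
    mdadm --stop /dev/md0
    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc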


    As far as OMV's software RAID goes, it uses mdadm, but it is software: it requires user input to function and to be maintained. Most commands can be completed using the GUI by clicking an icon on the menu bar, but sometimes input from the CLI is required.

  • I will try again and make every attempt to pull the drive that was actually "removed" from the array.


    Thanks so much for helping.


    I've tried OMV, TrueNAS, EasyNAS and Rockstor. TrueNAS and EasyNAS were complete failures. I like OMV more than Rockstor: it boots faster and the GUI is laid out nicely. Rockstor also requires 3 drives in a RAID1 vs 2 in OMV.
