RAID1 name change after a simulated disk failure

  • Hi everybody,
    I'm trying to test OMV in a Virtual Machine with a RAID1 configuration and simulate a drive failure to check the results.


    1- I have three virtual hard disks:

    Code
    /dev/sda (1GB)
    /dev/sdb (1GB)
    /dev/sdc (8GB) <-- OpenMediaVault is installed here.
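
    (The layout can be double-checked from the shell with something like the following; lsblk may not exist on very old installs, fdisk -l works there too.)

    Code
    # List the disks the VM sees
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    fdisk -l | grep Disk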


    2- I've created a RAID1 (mirror) array with the hard disks /dev/sda and /dev/sdb.
    The name of the RAID is OMV-NAS:RAID1.
    Part of the output of "cat /proc/mdstat" is:

    Code
    md0 : active raid1 sdb[1] sda[0]
          1048564 blocks super 1.2 [2/2] [UU]
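
    For reference, the web UI does something equivalent to the following on the shell; the --name value is only my guess based on the OMV-NAS:RAID1 label above:

    Code
    # Create a two-disk RAID1 (mirror); metadata 1.2 matches the mdstat output above
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --name=RAID1 /dev/sda /dev/sdb

    # Check that the mirror is active / resyncing
    cat /proc/mdstat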


    3- I've formatted the array with ext4 and mounted the file system. Then I shared some folders over NFS and wrote some data without any problem.
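
    (The bare-shell equivalent of what the web UI did here would be roughly the following; the mount point and export line are only examples.)

    Code
    # Format the mirror and mount it (paths are examples)
    mkfs.ext4 -L data /dev/md0
    mkdir -p /media/data
    mount /dev/md0 /media/data

    # Minimal NFS export; OMV normally writes this for you
    echo "/media/data 192.168.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra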


    4- To check the RAID I simulated a drive failure in software:

    Code
    mdadm --manage --set-faulty /dev/md0 /dev/sdb


    So /dev/sdb is now the faulty disk.


    The system log reported the problem, and the state of the RAID changed from ACTIVE to CLEAN, DEGRADED.
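
    The degraded state can also be seen from the shell:

    Code
    # (F) marks the failed member; [2/1] [U_] means one of the two mirrors is gone
    cat /proc/mdstat

    # Full state, including "clean, degraded" and the faulty device
    mdadm --detail /dev/md0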


    5- I shut down the system and added a new hard disk, /dev/sdd (2GB), to replace the faulty one.


    6- I noticed that the RAID now appears under two NEW device names:

    Code
    /dev/md127   clean,degraded   /dev/sdb
    /dev/md126   clean,degraded   /dev/sdc
    And the ORIGINAL /dev/md0 has disappeared --> I don't know if this situation is correct.
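
    If it helps, the two arrays can be inspected from the shell with:

    Code
    # Which arrays the kernel assembled and from which members
    mdadm --detail --scan
    mdadm --detail /dev/md126
    mdadm --detail /dev/md127

    # On-disk metadata; members of the same mirror share one Array UUID
    mdadm --examine /dev/sda /dev/sdb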



    In addition, the device name shown for the mounted filesystem has changed to /dev/md126.
    However, if I run the mount command it reports that the mounted device is /dev/md127.
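
    From the shell this is what I can check to see which device is really mounted (not sure it explains the mismatch):

    Code
    # What the kernel says is mounted where
    cat /proc/mounts | grep md
    df -h | grep md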


    Is this a normal situation, or am I missing something?


    And if it is a normal situation, how can I recover the array? Do I have to click the RECOVER button on the /dev/md126 device, which is associated with hard drive /dev/sdc?
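
    From the command line I guess the recovery would just be adding the new disk to the degraded array, something like this (the device names are only my assumption):

    Code
    # Add the replacement disk to the degraded mirror
    mdadm --manage /dev/md126 --add /dev/sdd

    # Watch the rebuild
    watch cat /proc/mdstat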


    Thank you,


    PS: Sorry, but my English is not good enough.

    OMV 0.3.3 (Omnius) 64
    HP ProLiant N40L + 8GB RAM + 2TB WD - RAID 1 + 80GB HD OS

    • Official post

    Did you remove /dev/sdb after putting the new drive in? I would think you would simulate by unplugging the drive.
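
    Normally the failed member is also removed from the array before the replacement goes in, roughly like this (and in a VM you would then detach the virtual disk):

    Code
    # Remove the faulty member from the mirror
    mdadm --manage /dev/md0 --remove /dev/sdb

    # Optional: wipe its RAID superblock so it cannot be auto-assembled again
    mdadm --zero-superblock /dev/sdb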

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs



    • Official post

    Because the drive still works and you rebooted, the system must think the drive is functioning again. And because both disks carry the same RAID info, mdadm must think they are two separate RAIDs, each with a failed drive??
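
    If that is what happened, one way out (just a sketch, not tested on your exact setup) would be to stop the stale array that only contains the "failed" disk and wipe its superblock so a single array remains:

    Code
    # Assuming /dev/md127 is the stale array built from /dev/sdb
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sdb

    # Then add the replacement (or the wiped disk) back to the surviving mirror
    mdadm --manage /dev/md126 --add /dev/sdd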

