RAID0 disappeared after latest OMV update

  • Hello,


    I have problems after the latest OMV update. I have 2 x RAID0 arrays that disappeared from the WebUI - Storage - MD.


    I have confirmed that they disappear once I update OMV: after a fresh install the 2 x RAID0 arrays can be seen there under WebUI - Storage - MD, but once I install the latest updates for OMV they are gone.


    Code
    root@FAQNAS:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md126 : inactive sdf[0] sdg[1]
          5860531120 blocks super 1.2
    
    md127 : inactive sdd[0] sde[1]
          5860531120 blocks super 1.2
    
    unused devices: <none>
    Code
    root@FAQNAS:~# blkid
    /dev/sdf: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="48478ffa-71e1-0fff-cb60-67718f6c3e8e" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
    /dev/sdd: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="57b64adf-0548-6657-1dcc-b3ee65b156c0" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
    /dev/sdb1: LABEL="Datos" UUID="de0a1396-8d80-434b-8391-71fc4fb62bb4" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="e6f19f49-d581-4b72-a32c-2020cb02f6c3"
    /dev/sdg: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="98bf4334-b6ab-2564-27bf-56f2ebb7bc56" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
    /dev/sde: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="c2a76c84-f934-47c3-9fb2-ce183056dfcc" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
    /dev/sdc1: LABEL="FHD" UUID="23478bde-7f63-438c-becf-0894ed000803" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="21c0f58d-e080-4331-b7f8-9c0789f6c190"
    /dev/sda5: UUID="2bd1b59f-2779-4a70-9d2d-c395e2a83cb1" TYPE="swap" PARTUUID="241c8ebf-05"
    /dev/sda1: UUID="d3592b93-6cba-42de-9f28-747963e33069" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="241c8ebf-01"
    Code
    root@FAQNAS:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0HD UUID=967af789:e7034e2a:8a41cd8c:efb1bcba
       devices=/dev/sdd,/dev/sde
    INACTIVE-ARRAY /dev/md126 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0Series UUID=9e12811f:80b3c8cf:b69d4c88:f852332c
       devices=/dev/sdf,/dev/sdg

    - 2 x 3 TB HDDs per array.



    I hope somebody can help me to recover them.

    Thanks in advance!

  • Done, but unfortunately the problem persists.


  • Code
    root@FAQNAS:~# cat /proc/mdstat
    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md126 : inactive sdg[1] sdf[0]
          5860531120 blocks super 1.2
    
    md127 : inactive sdd[1] sde[0]
          5860531120 blocks super 1.2
    
    unused devices: <none>
    Code
    root@FAQNAS:~# blkid
    /dev/sdf: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="48478ffa-71e1-0fff-cb60-67718f6c3e8e" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
    /dev/sdd: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="c2a76c84-f934-47c3-9fb2-ce183056dfcc" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
    /dev/sdb1: LABEL="Datos" UUID="de0a1396-8d80-434b-8391-71fc4fb62bb4" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="e6f19f49-d581-4b72-a32c-2020cb02f6c3"
    /dev/sdg: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="98bf4334-b6ab-2564-27bf-56f2ebb7bc56" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
    /dev/sde: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="57b64adf-0548-6657-1dcc-b3ee65b156c0" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
    /dev/sdc1: LABEL="FHD" UUID="23478bde-7f63-438c-becf-0894ed000803" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="21c0f58d-e080-4331-b7f8-9c0789f6c190"
    /dev/sda5: UUID="2bd1b59f-2779-4a70-9d2d-c395e2a83cb1" TYPE="swap" PARTUUID="241c8ebf-05"
    /dev/sda1: UUID="d3592b93-6cba-42de-9f28-747963e33069" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="241c8ebf-01"




    Code
    root@FAQNAS:~# mdadm --detail --scan --verbose
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: Unknown keyword INACTIVE-ARRAY
    INACTIVE-ARRAY /dev/md127 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0HD UUID=967af789:e7034e2a:8a41cd8c:efb1bcba
       devices=/dev/sdd,/dev/sde
    INACTIVE-ARRAY /dev/md126 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0Series UUID=9e12811f:80b3c8cf:b69d4c88:f852332c
       devices=/dev/sdf,/dev/sdg
  • Array definitions were not present in /etc/mdadm/mdadm.conf before; they are now.


    You are living dangerously with RAID0; I hope you have backups.


    Check that the devices used in the two arrays are in step, with matching Update times and Events counts:


    mdadm -E /dev/sd[de] | egrep "Update|Events"

    mdadm -E /dev/sd[fg] | egrep "Update|Events"


    If they are, then try to re-assemble the arrays.


    Understand you proceed at your own risk, I take no responsibility for any data or other loss incurred by following these instructions.



    1. Stop the array using:  mdadm --stop /dev/md127 


    2. Attempt to re-assemble the array using this exact command: mdadm --assemble --force /dev/md127 /dev/sdd /dev/sde


    3. Assuming no error messages from step 2, check the status of the array using cat /proc/mdstat as it re-assembles.


    Repeat for /dev/md126 (a combined sketch of the full sequence for both arrays follows below).
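
    For reference, a minimal sketch of the whole sequence for both arrays is below. The device letters are taken from your blkid output above and are an assumption; they can change between boots, so confirm the members of each array first.

    Code
    # Check that the members of each array agree (Update time and Events count should match)
    mdadm -E /dev/sd[de] | egrep "Update|Events"
    mdadm -E /dev/sd[fg] | egrep "Update|Events"

    # First array
    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sdd /dev/sde
    cat /proc/mdstat

    # Second array
    mdadm --stop /dev/md126
    mdadm --assemble --force /dev/md126 /dev/sdf /dev/sdg
    cat /proc/mdstat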

  • Hello,


    First of all, thank you for your help.

    Yes, I understand the risk, and fortunately the sensitive data is on other backups.


    I got errors on step 2.


    Code
    root@FAQNAS:~# mdadm -E /dev/sd[de] | egrep "Update|Events"
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: Unknown keyword INACTIVE-ARRAY
        Update Time : Tue Jul  9 01:56:46 2013
             Events : 0
        Update Time : Tue Jul  9 01:56:46 2013
             Events : 0
    Code
    root@FAQNAS:~# mdadm --stop /dev/md127
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: stopped /dev/md127
    Code
    root@FAQNAS:~# mdadm --assemble --force /dev/md127 /dev/sdd /dev/sde
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: Unknown keyword INACTIVE-ARRAY
    mdadm: failed to RUN_ARRAY /dev/md127: Invalid argument


    I will reinstall OMV on another USB stick to show you that the arrays are displayed properly before updating OMV. It could be helpful.

  • Faquir I got things out of sequence. If necessary, the omv-salt command should follow trying to re-assemble your arrays. So for now, comment out the two lines that begin with "INACTIVE-ARRAY" in the file /etc/mdadm/mdadm.conf. Then follow #5 above.
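
    If you prefer to do that non-interactively, here is a minimal sketch (editing the file by hand with nano or vi works just as well; the sed pattern assumes the two lines start exactly with "INACTIVE-ARRAY"):

    Code
    # Comment out the INACTIVE-ARRAY lines rather than deleting them
    sed -i 's/^INACTIVE-ARRAY/#INACTIVE-ARRAY/' /etc/mdadm/mdadm.conf

    # Confirm the result
    grep -n "INACTIVE-ARRAY" /etc/mdadm/mdadm.conf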

  • - Installed OMV (7.4.17) on another USB stick.

    - Installed MD Plugin.

    - Connected a pair of HDDs that should be in RAID0. I can see them.

    - Installed updates; a reboot was requested.


    - OMV is now at version 7.7.0-1. The MD plugin still shows that the RAID0 is OK.

    - Checking for new updates, more appear. A bunch of them are related to firmware and linux-images, so OK, let's install.


    - Once installed and rebooted, the array is gone.


    I don't know the reason, but I can locate the problem here. Could it be related to the kernel? It changed from "Linux 6.1.0-28-amd64" to "Linux 6.12.9+bpo-amd64".


    Let's switch USB sticks in order to follow Krisbee's proposal.


    Sorry for this spam. :S

  • Faquir I got things out of sequence. If necessary, the omv-salt command should follow trying to re-assemble your arrays. So for now, comment out the two lines that begin with "INACTIVE-ARRAY" in the file /etc/mdadm/mdadm.conf. Then follow #5 above.

    Commented out these two lines in mdadm.conf


    Then:

    Code
    root@FAQNAS:~# mdadm -E /dev/sd[dg] | egrep "Update|Events"
        Update Time : Tue Jul  9 01:56:46 2013
             Events : 0
        Update Time : Tue Jul  9 01:56:46 2013
             Events : 0
    root@FAQNAS:~# mdadm -E /dev/sd[ef] | egrep "Update|Events"
        Update Time : Fri Jul 12 01:21:45 2013
             Events : 0
        Update Time : Fri Jul 12 01:21:45 2013
             Events : 0
    Code
    root@FAQNAS:~# mdadm --stop /dev/md127
    mdadm: stopped /dev/md127
    Code
    root@FAQNAS:~# mdadm --assemble --force /dev/md127 /dev/sdd /dev/sdg
    mdadm: failed to RUN_ARRAY /dev/md127: Invalid argument
    root@FAQNAS:~#
    root@FAQNAS:~#
    root@FAQNAS:~# mdadm --assemble --force /dev/md127 /dev/sd[dg]
    mdadm: /dev/sdd is busy - skipping
    mdadm: /dev/sdg is busy - skipping

    ?(

  • SOLVED!


    Finally, the problem was located in the kernel.


    I just changed it from the kernel plugin (thanks to this post: Get backport kernel 6.12.9+bpo-amd64 after updating from 7.6.0 to 7.7.0), rebooted, and everything is working like always. :)
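
    For completeness, a small sketch of how the running kernel and the installed kernel images can be confirmed from a shell (generic Debian commands, not taken from the kernel plugin itself):

    Code
    # Kernel currently running
    uname -r

    # Kernel images installed on the system
    dpkg --list 'linux-image*' | grep ^ii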


    Thank you Krisbee for your help.



    That's impossible because none of the updates have touched MD devices. But somehow your RAID arrays became inactive. You need to activate them again.

    I cannot understand why, but changing the kernel was the only thing that resolved the problem. The problem appeared after the last update, as you can see above. :/

    • Official Post

    I cannot understand why, but changing the kernel was the only thing that resolved the problem. The problem appeared after the last update, as you can see above. :/

    Looks like this is the reason, but the kernel does not come from the OMV project; it comes from the Debian project and therefore this is out of the scope of OMV.

    I have an MD RAID in my production system and have no problems with the latest Debian kernel images.

  • Faquir On a system with a pre-existing MD array, a kernel upgrade from the latest stable to the latest backport kernel does not appear to cause any problems. You would not expect the contents of "/etc/mdadm/mdadm.conf" to change, and it should already have the correct array definition entries for the working arrays. Part of any new kernel install is the generation of the associated initrd; you can see this action was carried out in your last screenshot in #9 above. That initrd should contain the mdadm info that matches your system.


    As a test/check, could you please do the following (a combined sketch of these commands follows the list):


    1. As root create a temp dir : mkdir /root/temp

    2. cd /root/temp

    3. unmkinitramfs /boot/initrd.img-6.12.9+bpo-amd64 . (note the final . )

    4. While in /root/temp, post the output of: cat ./main/etc/mdadm/mdadm.conf

    5. Remove /root/temp and its contents: rm -rf /root/temp
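
    Put together, the check could look like the sketch below (the initrd file name assumes the backports kernel 6.12.9+bpo-amd64 mentioned above is the one in use):

    Code
    mkdir /root/temp
    cd /root/temp
    # Unpack the initrd into the current directory (note the final .)
    unmkinitramfs /boot/initrd.img-6.12.9+bpo-amd64 .
    # The mdadm config inside the initrd should match the system's /etc/mdadm/mdadm.conf
    cat ./main/etc/mdadm/mdadm.conf
    # Clean up
    cd /root && rm -rf /root/temp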
