File System (RAID1) "Missing" after Upgrade

    • Official Post

    Yes, I tried that; it was already a troubleshooting step earlier in this thread. It still does not show in OMV, or mount.

    Two of your drives are marked as spares; that should be fixed, but it shouldn't keep a RAID 6 array from showing up as a filesystem. What is the output of blkid?
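    For reference, spares show up tagged (S) in /proc/mdstat. A quick sketch of how to spot them, using a made-up mdstat line rather than the poster's actual output:

    ```shell
    # Illustrative /proc/mdstat line (made up for this example); on a real
    # system, read the live status with: cat /proc/mdstat
    mdstat='md0 : active raid6 sdg[5](S) sdf[4](S) sde[3] sdd[2] sdc[1] sdb[0]'

    # Members tagged (S) are spares rather than active array members:
    echo "$mdstat" | grep -o '[a-z]\+\[[0-9]\+\](S)'

    # If blkid prints no line for the array device, OMV has no filesystem
    # to display (device name /dev/md0 is an assumption):
    # blkid /dev/md0
    # mdadm --detail /dev/md0   # full breakdown of active/spare/failed members
    ```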

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Something I noticed last night: could it have to do with the UUID being different? (It was different; now that I've run omv-mkconf mdadm it's the correct UUID.) I'm not too sure of myself when it comes to the underlying RAID stuff (that's why I used OMV :) )


    I thought RAID6 was supposed to have 2 spares? My thought process is that it can lose 2 drives?

    • Official Post

    I thought RAID6 was supposed to have 2 spares? My thought process is that it can lose 2 drives?

    RAID 6 has two parity drives, which allows you to lose two drives without losing data. Spares are different; they are there to replace failed drives.


    • Official Post

    Nope. With the mdadm commands, try removing the drive and then adding it back.
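    A sketch of that remove/re-add cycle. The device names /dev/md0 (array) and /dev/sdf (the drive to cycle) are placeholders, not taken from the poster's system; it is written as a dry-run that only prints the commands:

    ```shell
    # Placeholders: /dev/md0 and /dev/sdf are assumptions -- substitute your
    # real array and drive names. With run=echo this only prints each command;
    # change to run=eval to actually execute them.
    run=echo

    # Mark the member faulty, remove it, then add it back so mdadm
    # re-evaluates its role (active member vs. spare):
    $run mdadm /dev/md0 --fail /dev/sdf
    $run mdadm /dev/md0 --remove /dev/sdf
    $run mdadm /dev/md0 --add /dev/sdf

    # Watch the rebuild afterwards:
    $run cat /proc/mdstat
    ```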


  • Growing the RAID to 6 devices seems to be the trick to get rid of the extra spare:


    mdadm --grow /dev/md0 --raid-devices=6


    Do you think that by changing something like this in the RAID configuration, OMV will be able to see it in the GUI? Once this process is done (800 minutes estimated), should I recreate mdadm.conf?


    Thanks for your help so far, it's appreciated!
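    While the grow runs, /proc/mdstat reports the reshape progress and a finish estimate; a sketch of pulling the percentage out of a status line (the line below is made up for illustration, not the poster's actual output):

    ```shell
    # Made-up reshape status line; on the live system read it with:
    #   cat /proc/mdstat
    line='[=======>.............]  reshape = 38.2% (1496310784/3906886656) finish=812.4min speed=49440K/sec'

    # Extract the completion percentage:
    echo "$line" | grep -o '[0-9.]\+%'
    ```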

    • Official Post

    Do you think that by changing something like this in the RAID configuration, OMV will be able to see it in the GUI? Once this process is done (800 minutes estimated), should I recreate mdadm.conf?

    OMV will never be able to "see" it unless blkid shows it in the output. Once it is done, I would execute: omv-mkconf mdadm. This will recreate mdadm.conf and update the initramfs.
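    For reference, a filesystem the GUI can pick up shows in blkid with a UUID and TYPE; a sketch of checking for that, using a made-up blkid line (the UUID and TYPE here are invented; the real values come from running blkid against your array device):

    ```shell
    # Made-up blkid output line for illustration (UUID and TYPE are invented);
    # on the real system: blkid /dev/md0
    blk='/dev/md0: UUID="0f1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8" TYPE="ext4"'

    # If grep finds no TYPE= here, OMV has no filesystem to show:
    echo "$blk" | grep -o 'TYPE="[^"]*"'
    ```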

