How to remove a missing file system after RAID 0 failure?

  • After losing my RAID 0 array, and while building a new RAID 5 array to replace it, the old file system is still listed in Storage > File Systems.


    It shows:


    Device: n/a
    Label: n/a
    File system: XFS
    Total: n/a
    Available: n/a
    Used: n/a
    Mounted: No
    Referenced: Yes
    Status: Missing


    I've removed or disabled all the plugins that would have been referencing the array, stopped sharing over SMB and NFS, and stopped those services. The only things the dashboard shows running are UPS and SSH. But I cannot get the Referenced status to change, and Delete is greyed out.


    I'm hoping to reuse my OMV install and just create the new array, but first I would like to remove the missing file system. How do I do this, please?


    Edit: I've just reviewed my config.xml, and even though it's disabled, the old file system is still referenced in the mntentref and path entries of one plugin that is still installed. If I wait for the new array to be initialised and then change the path in that plugin, I assume I should be able to kill that last reference and remove the old file system.
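For anyone hunting down the same kind of lingering reference, a quick way to find every element still pointing at the old array is to grep config.xml for the mount entry's UUID. The sketch below runs against a minimal mock-up of the file; on a real system the file is /etc/openmediavault/config.xml, the plugin element name will differ, and the UUID 'aaaa-bbbb' is a made-up placeholder (back the real file up before touching it):

```shell
# Mock-up of the relevant parts of /etc/openmediavault/config.xml.
# 'someplugin' and the UUID 'aaaa-bbbb' are placeholders, not real OMV names.
cat > /tmp/config-sample.xml <<'EOF'
<config>
  <system>
    <fstab>
      <mntent>
        <uuid>aaaa-bbbb</uuid>
        <fsname>/dev/md0</fsname>
        <dir>/srv/dev-disk-by-id-md0</dir>
      </mntent>
    </fstab>
  </system>
  <services>
    <someplugin>
      <mntentref>aaaa-bbbb</mntentref>
      <path>/srv/dev-disk-by-id-md0/data</path>
    </someplugin>
  </services>
</config>
EOF

# Every element still referencing the old array shows up here; on a real
# box, grep for the UUID found in the fstab <mntent> entry.
grep -n 'aaaa-bbbb' /tmp/config-sample.xml
```

This prints the line numbers of both the `<uuid>` in the fstab section and the plugin's `<mntentref>`, which is exactly the kind of leftover reference that keeps the file system marked as Referenced.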

    • Official Post

    I assume I should be able to kill that last reference and remove the old file system.

    Should be able to. Otherwise, it isn't difficult to remove the reference from config.xml directly.
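A minimal sketch of that manual removal, assuming a standard config.xml layout (the UUID and paths below are placeholders, and the mock file stands in for /etc/openmediavault/config.xml — always back the real file up first):

```shell
# Mock-up standing in for /etc/openmediavault/config.xml; on a real
# system, back it up first (e.g. cp config.xml config.xml.bak).
cat > /tmp/omv-config.xml <<'EOF'
<config>
  <system>
    <fstab>
      <mntent>
        <uuid>aaaa-bbbb</uuid>
        <fsname>/dev/md0</fsname>
        <dir>/srv/dev-disk-by-id-md0</dir>
      </mntent>
    </fstab>
  </system>
</config>
EOF

# Surgically drop the <mntent> whose <uuid> matches the missing file
# system, using only the Python standard library (no extra packages).
python3 - <<'EOF'
import xml.etree.ElementTree as ET

STALE = 'aaaa-bbbb'  # placeholder for the old array's UUID
tree = ET.parse('/tmp/omv-config.xml')
fstab = tree.getroot().find('./system/fstab')
for mntent in list(fstab.findall('mntent')):
    if mntent.findtext('uuid') == STALE:
        fstab.remove(mntent)
tree.write('/tmp/omv-config.xml')
EOF

# Confirm the stale entry is gone:
! grep -q 'mntent' /tmp/omv-config.xml && echo "stale entry removed"
```

On a real system you would also remove any plugin elements that still carry the old mntentref, then reboot so OMV re-reads the config.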

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
