Unable to delete 2 missing zfs file systems listed in OMV8

  • Hello,


I'm still a bit of a newbie with OMV and have been learning a lot. I'm using a ZimaBlade 7700 DeskBuild NAS (x86 4-core DIY NAS) with two 500 GB SATA hard drives and a fresh install of OMV8 (not an upgrade).


    Following: https://wiki.omv-extras.org/do…?id=omv8:omv8_plugins:zfs

    I basically created a zfs mirror, a filesystem, a shared directory, and a snapshot. I, of course, made mistakes along the way. I've noticed that there were 2 missing ZFS file systems listed in Storage/File Systems, but can't seem to "delete/remove" them, even after deleting the snapshot, shared directory, zfs filesystem and zfs mirror. Unfortunately, I don't know what I did just before these 2 lines were listed. I can't seem to get back to a "clean" state in OMV. I've rebooted and shutdown/booted the system. The lines remain listed with no change.


    There are no details on either of these 2 file systems, except that their status is "Missing."

    Via ssh,

    "# zpool list" and "# zpool status" both return "no pools available"

    "# mount" doesn't show zfs anywhere


    I found several links to similar issues, but the suggestions don't seem to resolve my issue e.g.

    Remove missing file system and change/remove shared folder #### — the mounteditor and resetperms plugins don't list any items.


    Would anyone happen to know how I can delete the missing file systems, or have any suggestions that I can try? Please let me know if I can provide any other information.


    TIA!

    Mark


    [Edit]

    I reinstalled OMV just before post #21 and updated my configuration. The following was my configuration before #21.

    Version: 8.1.0-2 (Synchrony)

    Kernel: Linux 6.17.9-1-pve

    Plugins: compose 8.1.5 | cputemp 8.0 | cterm 8.0 | filebrowser 8.0-6 | kernel 8.0.6 | md 8.0.3-1 | mounteditor 8.0 | omvextrasorg 8.0.2 | resetperms 8.0.1 | sharerootfs 8.0-1 | zfs 8.0.4

    Version: 8.1.2-1 (Synchrony)

    Kernel: Linux 6.14.11-5-pve

    Plugins: kernel 8.0.7 | omvextrasorg 8.0.2 | zfs 8.0.4


  • The same thing happened to me; you need to manually edit the file /etc/openmediavault/config.xml, removing traces of the old ZFS filesystems; you will recognise them because, if you have others, the 'missing' ones will not have a description or other references.


    Take extreme care with what you delete from the file, as you could cause permanent damage to the installation. Make a backup before doing anything.
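The backup advice above can be sketched as a short script. This is a minimal, hedged example: on a real OMV system the config lives at /etc/openmediavault/config.xml and you would run this as root; the demo default path below is only so the snippet can run anywhere.

```shell
# Sketch: timestamped backup before touching config.xml.
# On a real OMV system: CFG=/etc/openmediavault/config.xml (run as root).
CFG="${CFG:-/tmp/demo-config.xml}"
[ -f "$CFG" ] || printf '<config/>\n' > "$CFG"   # demo file only, not real OMV config

# Timestamped copy, preserving mode/ownership with -a.
BACKUP="${CFG%.xml}-$(date +%Y%m%d-%H%M%S).xml.bak"
cp -a "$CFG" "$BACKUP"
echo "Backed up $CFG to $BACKUP"
```

Keeping the timestamp in the filename means repeated backups never overwrite each other, which matters when you are editing the file in several passes.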

  • Manually editing the OMV config file is to be avoided if at all possible, and in any case should always be preceded by making a backup of the file. The safest way to remove traces of an old zfs filesystem is to use "omv-confdbadm" as root at the CLI (see: https://docs.openmediavault.or…/tools/omv_confdbadm.html).


    To list all zfs filesystems registered in the config file use:


    omv-confdbadm read conf.system.filesystem.mountpoint  | jq -r '.[]|select(.type=="zfs")'


    Identify the "missing" filesystems in the command output, then delete them individually using this command:


    omv-confdbadm delete --uuid XXXXXXXXXXXXXXXXXXXXXX conf.system.filesystem.mountpoint

    Substitute the correct uuid for each filesystem you wish to remove from the config file.
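To see what the jq filter in the read command above is doing, here is a standalone demo against fabricated JSON shaped roughly like the mountpoint database output (the field names kept here are real OMV ones, but the uuids and values are made up for illustration):

```shell
# Fabricated sample shaped like omv-confdbadm's mountpoint output.
cat > /tmp/mntents.json <<'EOF'
[
  {"uuid": "aaaa-1111", "fsname": "tank/data", "type": "zfs",  "dir": "/tank/data"},
  {"uuid": "bbbb-2222", "fsname": "/dev/sda1", "type": "ext4", "dir": "/srv/disk1"}
]
EOF

# The same select() used above: keep only zfs entries, print their uuids.
jq -r '.[] | select(.type == "zfs") | .uuid' /tmp/mntents.json
# -> aaaa-1111
```

Piping to `.uuid` like this gives you exactly the value to paste into the `--uuid` option of the delete command.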

  • Thormir84 and Krisbee, thank you for your replies. I did take a look at the /etc/openmediavault directory. My config.xml only references 2 zfs file systems.


    Unfortunately, my working knowledge of Linux is limited. I'm running into an error that I can't figure out how to resolve. The "omv-confdbadm delete" command line failed, so I ran the first command to try to get a better idea of what's not working. It seems to be referring to an object, but I have no idea how to interpret that object: .//system/fstab/mntent[uuid='b5c50509-c443-431f-900e-233be7d58414']


    Could I request help with this error?


    At bottom is what's in my /etc/fstab. (I don't understand what ".//system" refers to.)


    Version: 8.1.2-1 (Synchrony)

    Kernel: Linux 6.14.11-5-pve

    Plugins: kernel 8.0.7 | omvextrasorg 8.0.2 | zfs 8.0.4


    • Official Post

    The Discover button in the zfs plugin didn't fix this issue?

    omv 8.1.1-1 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.7 | compose 8.1.5 | cterm 8.0 | borgbackup 8.1.7 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hello and thank you!


    After submitting my last post #4, I shut down the OMV/NAS. I've just powered up the NAS, and after OMV started, I listed the file systems. Now it only showed 1 ZFS file system, so I listed the ZFS file systems registered in the config file and got the following:


    I went to Storage | zfs | Pools. It still said "No data to display." I clicked the Discover button and got the following (copied from Notifications). I went back to Storage | File Systems and the same 1 ZFS file system was listed.


    I looked in /etc/openmediavault/. I have never edited config.xml. I pasted the following 3 files from "ls -l". There are a number of diffs, including removal of the "other" ZFS file system with uuid "*58414".

    Code
    -rw-rw-rw- 1 root root                  21037 Feb 27 00:13 config-zfs-20260227-001342.xml
    -rw-rw-rw- 1 root root                  19926 Mar  1 16:40 config-zfs-20260301-164049.xml
    -rw-rw---- 1 root openmediavault-config 19926 Feb 28 20:46 config.xml


  • The Discover button in the zfs plugin didn't fix this issue?

    Delete a zfs filesystem at the CLI and it naturally disappears from the main WebUI "pools data table". Hitting "Discover" generates an error, shown briefly on screen, as the system cannot reconcile zfs list with the contents of config.xml.



    In short, adding zfs filesystems at the CLI and then hitting the "Discover" button syncs zfs list to config.xml. But deleting zfs filesystems at the CLI and then hitting "Discover" does not attempt to reconcile zfs list to config.xml. The current restriction is that ALL deletes of zfs data objects can only be "safe" if done via the WebUI, which will check whether the data object is referenced, etc.

    • Official Post

    In short, adding zfs filesystems at the CLI and then hitting the "Discover" button syncs zfs list to config.xml. But deleting zfs filesystems at the CLI and then hitting "Discover" does not attempt to reconcile zfs list to config.xml. The current restriction is that ALL deletes of zfs data objects can only be "safe" if done via the WebUI, which will check whether the data object is referenced, etc.

    Ok, I will try to reproduce.


  • Thank you! After reading these latest posts, I now recall that I did create a zpool via CLI. I'd like to apologize for not being more complete. I was in "learning" mode (OMV and ZFS) where I try to only use "instructions" and the WebUI. At first, I tried to create a zpool using the WebUI, but don't quite recall why I ended up creating a zpool via CLI. When I switch to "debug" mode, I try to be much more deliberate on what I do. Really wish I could be of better/more help (I'm a former HW/FPGA engineer with only very limited script/coding experience.)


  • Thank you! After reading these latest posts, I now recall that I did create a zpool via CLI. I'd like to apologize for not being more complete. I was in "learning" mode (OMV and ZFS) where I try to only use "instructions" and the WebUI. At first, I tried to create a zpool using the WebUI, but don't quite recall why I ended up creating a zpool via CLI. When I switch to "debug" mode, I try to be much more deliberate on what I do. Really wish I could be of better/more help (I'm a former HW/FPGA engineer with only very limited script/coding experience.)

    This is not so much about script/coding but about the internals of OMV8 and what you can and cannot get away with when using the CLI. If you wish to continue using ZFS, it's always best to create pools via the WebUI, as under the hood the pools are created with certain parameters, as the pool history would show. Here is an example for pool "tankB", consisting of a single mirror vdev:


    Code
    zpool create -o ashift=12 -o failmode=continue -o autoexpand=on -O atime=off -O acltype=posix -O xattr=sa tankB mirror scsi-0QEMU_QEMU_HARDDISK_3333 scsi-0QEMU_QEMU_HARDDISK_1111



    • Official Post

    If someone wants to test a change on an unimportant system, the deb is here - https://omv-extras.org/testing…vault-zfs_8.0.5_amd64.deb


    • Official Post

    What's the expected new behaviour?

    If you delete a filesystem from the command line, it will remove the entry from the database eliminating the error.


  • ryecoaaron


    If you delete a zfs filesystem at the CLI when the filesystem is linked to a shared folder and SMB share, for example, you are left with an orphaned object in the main config.xml file. The WebUI will show "unavailable" under "Shared Folders" and the folder cannot be deleted via the WebUI.


    If the zfs filesystem is unreferenced then deleting it at the CLI and then clicking "discover" will sync zfs list to config.xml.


    Is this proposed change worth implementing? It simply shifts the item that's out of sync in the config.xml file. Leaving the status quo does generate an appropriate WebUI error message, as shown in my example #7 above. After this change there is no error message, and the end user may not realise anything is wrong. My vote is not to make this change. Deletes can only safely be done via the WebUI unless you know how to clean up the problems.

    • Official Post

    Ok. I won't make the change. I didn't realize filesystems were being deleted from the command line that were associated with a shared folder. My goal was only to remove filesystems that were listed in the Filesystems tab (and therefore have a mntent entry in the database) that were deleted from the command line.


  • A kind thank you to all for your time and sharing of information! I will definitely keep to the WebUI.


    • Official Post

    I re-wrote almost all of the zfs plugin. I was tired of maintaining the very old, hard-to-maintain code. The Discover button now has three options. I had Claude Code write a test script to test just about every function of the plugin. If anyone wants to try it...


    wget https://omv-extras.org/testing/openmediavault-zfs_8.1_amd64.deb -O openmediavault-zfs_8.1_amd64.deb

    sudo dpkg -i openmediavault-zfs_8.1_amd64.deb


  • I'd like to share my experience with my setup after installing the .deb file.
    1) Basically, a reference to a missing zfs file system is left in Storage | File Systems each time I delete the file system and zpool.
    2) I could not click on the "Pending configuration change" banner after selecting the zpool and clicking "Delete." It seemed to "assume" that I had clicked the check mark.
    I'm considering doing a fresh/new OMV install (and just creating/deleting a zpool, file system, and shared folder) to see if I caused this missing zfs file system issue. [I'm sure I did, but I guess I'm trying to say I was hoping that deleting the "lines" for the missing zfs file system in config.xml would work for me.]


    Steps below:
    I removed the entry for the "missing" zfs file system by deleting the "lines" in the config.xml. Storage | File Systems updated immediately.
    I followed the documentation again to create a zpool, file system, and shared folder. In Services | File Browser, I selected the shared folder, enabled File Browser, clicked "Open UI" and created some test directories and files. I didn't have to install zfs-auto-snapshot. Snapshots were automatically run. (I now seem to recall that I went off the rails when I was working on snapshots.)


    From ssh, I downloaded the .deb file and installed it.
    At the end of the install, I went to Storage | zfs | Pools. I didn't see any change.
    I pressed Ctrl+Shift+R and the page refreshed and displayed "Pending configuration changes." I clicked the check button.
    When it completed, the page updated to show the updated button bar with 9 buttons.


    In Services | File Browser, I disabled the service and changed the shared folder to "None."
    In Storage | Shared Folders, I deleted the shared folder.
    In Storage | zfs | Pools, I selected the zpool and clicked "Delete." When it started, a spinning circle appeared and then the "Pending configuration changes" appeared, but I could not click anything on it. (See below [zfs_20260307c.png] for the screen capture of the 2nd time I deleted a zpool.)
    When both the circle and Pending changes disappeared, the zpool and dataset/filesystem were removed and I saw "No data to display."
    By chance, I looked at Storage | File Systems and saw the following, which shows 1 missing file system.

    [zfs_20260307a.png]


    I created a new zpool, file system, and shared folder. The following is from Storage | File Systems, which shows the 1 missing file system and the new zpool and new file system.

    [zfs_20260307b.png]


    Then I deleted the shared folder and the zpool. I was not able to click any selection in the "Pending" window.
    [zfs_20260307c.png]


    The following is from Storage | File Systems, which now shows 2 missing file systems.
    [zfs_20260307a.png]


    I looked in config.xml and I saw sections/references for both zpools that I created and deleted today, zmirror and zm1.
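A quick way to spot such leftovers without scrolling through the whole file in an editor is to grep a copy of config.xml for the deleted pool names. The demo below runs against a fabricated fragment (the pool names zmirror and zm1 come from the post above; the uuids and layout are simplified, not a real OMV config):

```shell
# Fabricated fragment with roughly the shape of OMV mntent entries.
cat > /tmp/demo-config.xml <<'EOF'
<config>
  <system><fstab>
    <mntent><uuid>1234</uuid><fsname>zmirror/files</fsname><type>zfs</type></mntent>
    <mntent><uuid>5678</uuid><fsname>zm1/files</fsname><type>zfs</type></mntent>
  </fstab></system>
</config>
EOF

# Print the line numbers of any remaining reference to the deleted pools.
grep -n -E 'zmirror|zm1' /tmp/demo-config.xml
```

On a real system you would grep a *backup copy* of /etc/openmediavault/config.xml; the line numbers tell you which `<mntent>` blocks the orphaned references live in, which is also useful when feeding uuids to omv-confdbadm instead of hand-editing.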

  • I re-wrote almost all of the zfs plugin. I was tired of maintaining the very old, hard-to-maintain code. The Discover button now has three options. I had Claude Code write a test script to test just about every function of the plugin. If anyone wants to try it...


    wget https://omv-extras.org/testing/openmediavault-zfs_8.1_amd64.deb -O openmediavault-zfs_8.1_amd64.deb

    sudo dpkg -i openmediavault-zfs_8.1_amd64.deb

    I've installed this new zfs plugin version and I'm testing it now. Comments to follow. But do you want to continue on this thread, move to a new thread, or move to PM?

    • Official Post

    But do you want to continue on this thread, move to a new thread, or move to PM?

    Probably a new thread. I created a new thread - openmediavault-zfs 8.1 test version


