ZFS Plugin: there are some changes every time I open "Pools" tab

  • After updating to 7.0 I've got this strange behavior: every time I open the "Pools" tab on the zfs plugin page, OMV says there are changes that I must apply:


    Quote

    Pending configuration changes. You must apply these changes in order for them to take effect. The following modules will be updated:

    • samba
    • sharedfolders
    • systemd


    If I hit apply and then refresh/open the page again, this message appears again. If I choose "Undo", there are no errors in samba or anything else.


    Is there any way to find out what's wrong?

  • KM0201

    Approved the thread.
    • Official post

    The plugin tries to import pools (it did this on OMV 5.x and 6.x versions as well) that aren't in the database and then adds an identifier property to the pool. Do you have pools not created by the plugin? If the identifier is not on the pool, it will add it and add an entry to the mntent section of the database. This will trigger the apply banner. Hard to say what is wrong without more info about your pools and mntent section of the database but clicking undo is not going to help.
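
    For reference, one quick way to check whether the identifier property is already present on a pool and its datasets (the pool name below is just an example):

    Code
    # list the plugin identifier on a pool and everything beneath it
    zfs get -r -o name,value omvzfsplugin:uuid MainPool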

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4



  • Sorry for the late reply, it was a hard week.

    I think I've created all the pools in the GUI. Two of them were created in OMV 5, and the last one (FastPool) I added after the update to OMV7.

    I'm not 100% sure, but I think the banner started to appear even before "FastPool" was added to the system, but definitely (100%) after the OMV7 upgrade.

    There were no problems in OMV6...


    I've got 3 pools:

    The property is set on all of them, as far as I can see. I removed snapshots from the command output, but all snapshots also have it set.

    All these pools and filesystems are on the plugin page.


    As I remember, there were only a few things I've done with zfs from the terminal:

    * set com.sun:auto-snapshot

    * run scrub manually

    * delete snapshots, because the plugin only allows deleting one at a time

    Maybe I've deleted some filesystems from the terminal, but I'm not quite sure.
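
    Roughly, those commands would have looked something like this (the dataset and snapshot names here are made-up examples, not the exact ones):

    Code
    # enable zfs-auto-snapshot on a pool
    zfs set com.sun:auto-snapshot=true MainPool
    # start a manual scrub
    zpool scrub MainPool
    # delete a snapshot
    zfs destroy MainPool/Media@zfs-auto-snap_daily-2023-06-01-0000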


    One big change I made when updating from OMV6 to OMV7: there were problems with ZFS DKMS and new kernels, so I deleted it and switched to Proxmox Kernel 6.5.

    While I was trying to find out what was wrong I reinstalled the zfs plugin several times and also re-imported the pools a few times.

    Maybe it broke something?

  • petrovich666


    Have you cross-checked the output of zfs get all | grep 'omvzfsplugin:uuid' against the mntent entries in the config.xml file? There should be an entry per zfs filesystem with matching UUID values. Use:

    Code
    # omv-confdbadm read conf.system.filesystem.mountpoint  | jq -r '.[]|select(.type=="zfs")'


    (ref: https://docs.openmediavault.or…/tools/omv_confdbadm.html) Is the cross check correct?
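
    For example, a rough way to line the two sides up (a sketch; the jq field names fsname and uuid are assumed from a typical mountpoint entry, and the pool name is an example):

    Code
    # plugin property on every dataset of the pool
    zfs get -r -H -o name,value omvzfsplugin:uuid MainPool | sort
    # uuid recorded per zfs entry in the OMV database
    omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")|"\(.fsname) \(.uuid)"' | sort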


    If the cross check is OK, and you don't mind exporting your pools yet again, then the following steps at the CLI will force the zfs plugin to set new values for omvzfsplugin:uuid, which might cure your problem.


    1. Recursively remove omvzfsplugin:uuid property from all pools


    Code
    zfs inherit -r omvzfsplugin:uuid BackupPool
    zfs inherit -r omvzfsplugin:uuid FastPool
    zfs inherit -r omvzfsplugin:uuid MainPool


    2. Export all pools


    Code
    zpool export BackupPool
    zpool export FastPool
    zpool export MainPool


    3. Import all pools


    Code
    zpool import BackupPool
    zpool import FastPool
    zpool import MainPool


    Now go to the zfs pool tab and apply the changes. Test whether the yellow configuration banner still appears every time the zfs pool tab is accessed.
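
    One way to confirm that new property values were applied after the import (just a sketch, using the same pool names):

    Code
    zfs get -r -o name,value omvzfsplugin:uuid BackupPool FastPool MainPool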

  • This one was very helpful: omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")'


    So in the DB there are:

    * three entries for MainPool. One of them is correct, the others are not

    * two entries for MainPool/Backups. One of them is correct, the other is not


    All other entries are correct. I don't know how that happened :(

    Also it's strange that only those two entries have duplicates, and the number of duplicates differs.


    I think I should remove the wrong entries, not re-import the pools.

    So should I just use omv-confdbadm delete --uuid ..... conf.system.filesystem.mountpoint for the wrong entries? It will only modify the DB, right?

    • Official post

    It will only modify the DB, right?

    After modifying the database, execute the changes in salt using:

    omv-salt deploy run zfszed collectd fstab monit quota nfs samba sharedfolders systemd tftpd-hpa

  • petrovich666 There was a typo in my previous post.

    If the cross check is OK, and you don't mind exporting your pools yet again, then the following steps at the CLI will force the zfs plugin to set new values for omvzfsplugin:uuid, which might cure your problem.

    That sentence should have started with "If the cross check is NOT OK ...". Clearly that's the root cause, as you've discovered. It's your choice, but if you edit config.xml directly you need to know what salt commands etc. to issue afterwards. Hence my suggestion to re-generate the zfs plugin property via the listed commands and apply the change via the WebUI, which should ensure a consistent system.

    • Official post

    you need to know what salt commands

    Krisbee

    In case it helps for future reference in other threads, here is a list of the salt modules that are updated by each section of the GUI. At the end of the readme there is a diagram; it is in Spanish but it is easy to follow. https://github.com/xhente/omv-regen

  • After modifying the database, execute the changes in salt using:

    omv-salt deploy run zfszed collectd fstab monit quota nfs samba sharedfolders systemd tftpd-hpa


    This was the right path to take, because I can't export pools that are in use.

    Now I get no messages about pending configuration changes on the zfs tab, but somewhere along the way my BackupPool has disappeared from the Filesystems tab.

    It is mounted, so I can only unmount it from the CLI.


    It is in the list if I try to add an existing filesystem.


    But I get an error:


    What is the best way to fix this issue?

  • petrovich666 I can't remember if just rebooting after removing the plugin property from the pool at the CLI would have been sufficient to regenerate the plugin UUIDs on the ZFS filesystems, instead of trying to export a busy pool.


    Now you can see my dislike of directly editing the config.xml. Maybe it's a simple error in the config file. Try comparing the working pool entries with the BackupPool entry in the config schema. It should be of this form:


    Otherwise, I vaguely remember seeing a similar error on the forum elsewhere.

  • Now you can see my dislike of directly editing the config.xml. Maybe it's a simple error in the config file. Try comparing the working pool entries with the BackupPool entry in the config schema. It should be of this form:

    I must say that BackupPool was lost from the Filesystems tab before any manipulations to the DB were made. I messed things up a little bit because the shared folders were also a bit broken.


    Anyway, thank you very much, because I just figured out what was using my pool and exported it successfully, even without deleting the uuid from the pool properties. After importing in the GUI everything is working now.
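
    For anyone else hitting the "pool is busy" problem, one way to see what is holding a mountpoint open (the path below is just an example):

    Code
    # show processes with files open on the filesystem mounted at /BackupPool
    fuser -vm /BackupPool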


    Problem solved :)

  • chente

    Added the label solved.
  • chente

    Added the label OMV 7.x (RC1).
  • Hi !

    I have similar problems

    Code
    omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")'

    returns the list of my pool and its datasets, where almost all datasets have two entries with different uuids.

    I tried deleting it as recommended with

    Code
    zfs inherit -r omvzfsplugin:uuid raid

    but I got fatal errors from many other services. It doesn't work for me.

    How can I safely fix the duplicate uuids for datasets in the OMV config/database?

  • Shperrung You cannot fix your problem by simply using zfs inherit -r omvzfsplugin:uuid raid in isolation. In fact it makes matters worse. Hopefully you're able to reboot your OMV system and see if your zfs pool is correctly imported. You should have started by deleting the duplicate entries from the config database.


    Only when you can import your pool and a new set of uuid values have been applied can you proceed to compare zfs pool UUID values to those in the OMV config.xml file.


    Then the safe way to delete duplicate entries in the OMV config.xml file is to (a) make a copy of the file, (b) use the appropriate omv-confdbadm command to delete all duplicated entries 1 , and then (c) use the correct omv-salt command to deploy the changes 2 .
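
    As a sketch, those three steps could look like this (the uuid is a placeholder for one duplicate you have identified; the module list is the one from #7):

    Code
    # (a) back up the OMV database first
    cp -a /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
    # (b) delete a duplicated mountpoint entry (repeat for each duplicate uuid)
    omv-confdbadm delete --uuid <duplicate-uuid> conf.system.filesystem.mountpoint
    # (c) deploy the changes
    omv-salt deploy run zfszed collectd fstab monit quota nfs samba sharedfolders systemd tftpd-hpa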


    Rather than attaching files as txt docs, please either post them between code tags or place them on pastebin. I have looked at your config.xml extract. What is noticeable is that many of the pairs of duplicated entries have different mount options and/or long lists of mount options that include duplicated options. Only a few of these entries appear correct; most do not. You can compare the values of the "opts" field with the output of the mount command. It looks like you need to delete all the ZFS entries before trying to generate a new set.
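
    For example, a rough way to put the two side by side (the jq field names fsname and opts are assumed from the confdbadm output above):

    Code
    # mount options recorded in the OMV database for each zfs entry
    omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")|"\(.fsname): \(.opts)"'
    # mount options actually in effect for zfs filesystems
    findmnt -t zfs -n -o SOURCE,OPTIONS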


    It would be useful to know how this problem arose. How was your pool created, inside or outside OMV? Which version of OMV are you using? What action/change to your pool or dataset(s) triggered this problem? Have you made multiple attempts to export and import your pool? Have you been trying to modify zfs filesystem mount options at the CLI?


    As far as I can tell, if you can import your pool and all zfs filesystems have a valid non-zero omvzfsplugin:uuid property, then you should delete all zfs filesystem entries from the config.xml file and then use the correct omv-salt command 2 to get your OMV system into a consistent state. At that point you should be able to apply any outstanding config changes on the zfs page of the WebUI. (A sketch of that bulk clean-up follows the footnotes below.)




    1. https://docs.openmediavault.or…/tools/omv_confdbadm.html - see bottom of page.

    2. See #7 above.
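
    A rough sketch of that bulk clean-up (assuming the uuid field of each zfs mountpoint entry is the one omv-confdbadm expects; back up config.xml first):

    Code
    # remove every zfs mountpoint entry from the OMV database
    for u in $(omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")|.uuid'); do
        omv-confdbadm delete --uuid "$u" conf.system.filesystem.mountpoint
    done
    # redeploy the affected modules (list taken from #7)
    omv-salt deploy run zfszed collectd fstab monit quota nfs samba sharedfolders systemd tftpd-hpa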

  • Krisbee, thank you for your recommendation! I solved my issue with the "Apply changes" banner on the "zfs" page of the OMV GUI.

    The problem was duplicated mountpoints for datasets in the "omv-confdbadm" output. I compared two lists:

    Code
    omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")' > omv-confdbadm.txt

    and

    Code
    zfs get all | grep 'omvzfsplugin:uuid' > omvzfsplugin-uuid.txt

    I deleted every wrong mountpoint in the omv-confdbadm output that had no real dataset with the same uuid, using this:

    Code
    omv-confdbadm delete --uuid fe5746bd-efa2-4a75-b6b4-f6618138744d conf.system.filesystem.mountpoint
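
    After the deletions, re-running the read command is a quick way to confirm that each dataset has exactly one entry left (the fsname field is assumed here):

    Code
    # count database entries per zfs dataset; anything above 1 is a leftover duplicate
    omv-confdbadm read conf.system.filesystem.mountpoint | jq -r '.[]|select(.type=="zfs")|.fsname' | sort | uniq -c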

    I have no idea why it happened. I made a clean OMV7 installation but forcibly imported a pool previously attached in OMV6. I also did "zfs upgrade", but I don't remember whether the issue appeared before or after that. Here are two mountpoints from omv-confdbadm:

    Wrong:

    Right:

    The wrong mountpoint has 3 repeated "opts".


  • I have no idea why it happened. I made a clean OMV7 installation but forcibly imported a pool previously attached in OMV6. I also did "zfs upgrade", but I don't remember whether the issue appeared before or after that. Here are two mountpoints from omv-confdbadm:

    Shperrung I'm glad you were able to resolve your problem. I'll have to test whether a pool created in OMV6 can be correctly imported into a clean install of OMV7, whether it had been properly exported or not. I would have expected that to work.

    • Official post

    I'll have to test whether a pool created in OMV6 can be correctly imported into a clean install of OMV7, whether it had been properly exported or not. I would have expected that to work.

    I have done it many, many times and never have had to export a pool.


  • ryecoaaron  Krisbee

    My pool is very old. It was created in Xigmanas 4 years ago.

    My clean OMV7 installation was followed by incompatibility problems between the zfs plugin installed on the Debian kernel and the PVE kernel installed afterwards.

    I reinstalled OMV7 twice until I figured out that the zfs plugin must be installed after the PVE kernel is enabled.

    Anyway, such a problem is rare. We can close the "ticket". Thank you!
