omvzfs plugin zfs set spamming

      I'm on OMV 4.1.31-1. I installed the Proxmox kernel and ZFS, then created a zpool manually (for more control over how the raidz2 was created). It showed up in the openzfs plugin in OMV, so I haven't touched anything in the plugin itself other than to confirm that my zpool and the underlying datasets were there, and I've been using it with a few bumps (a bad cable causing UDMA CRC errors, which is now fixed). When I look at zpool history there's constant spamming with zfs set commands, which I assume comes from the plugin, and I'm curious how to get it to stop.

      An example of the spam (this is just the latest; it happens every minute or two):

      2020-01-12.09:09:06 zfs set omvzfsplugin:uuid=8fd025c3-c908-4b89-b37f-acae0444081a zdata
      2020-01-12.09:09:10 zfs set omvzfsplugin:uuid=068d155c-3d32-4992-9c5d-57567b1b6a48 zdata/archive
      2020-01-12.09:09:15 zfs set omvzfsplugin:uuid=467afe74-5169-4eea-90c3-6ca0bf9f5392 zdata/backupdockerconf
      2020-01-12.09:09:19 zfs set omvzfsplugin:uuid=8eb92b9a-9a28-453f-aff0-e14a0c9a724e zdata/dockerconf
      2020-01-12.09:09:23 zfs set omvzfsplugin:uuid=980d894d-4bc1-44ae-b04d-ba8d2db43ebf zdata/dockerpath
      2020-01-12.09:09:27 zfs set omvzfsplugin:uuid=fcda2be3-a3fa-415a-9ddb-37c94865c315 zdata/downloads
      2020-01-12.09:09:31 zfs set omvzfsplugin:uuid=26275790-9085-457a-a971-77995f59291a zdata/incomplete
      2020-01-12.09:09:35 zfs set omvzfsplugin:uuid=1e9c735e-9520-4fe7-bcde-f3074e2b939b zdata/mayan
      2020-01-12.09:09:40 zfs set omvzfsplugin:uuid=cda757e9-0dd6-4500-b05e-2e90f061309d zdata/mayanbackup
      2020-01-12.09:09:44 zfs set omvzfsplugin:uuid=4d1dc217-0e99-4170-a937-86939c954f07 zdata/mayanwatch
      2020-01-12.09:09:48 zfs set omvzfsplugin:uuid=1c5d2959-dcfe-470e-807a-c7c36f9c4914 zdata/omvbackup
      2020-01-12.09:09:52 zfs set omvzfsplugin:uuid=48d8e290-4eaa-47a2-a8e5-57913fa73982 zdata/test
      2020-01-12.09:09:57 zfs set omvzfsplugin:uuid=0bba8625-6b20-441a-b029-951f5731e5ac zdata/urbackup
      2020-01-12.09:10:21 zfs set omvzfsplugin:uuid=8fd025c3-c908-4b89-b37f-acae0444081a zdata
      2020-01-12.09:10:25 zfs set omvzfsplugin:uuid=068d155c-3d32-4992-9c5d-57567b1b6a48 zdata/archive
      2020-01-12.09:10:30 zfs set omvzfsplugin:uuid=467afe74-5169-4eea-90c3-6ca0bf9f5392 zdata/backupdockerconf
      2020-01-12.09:10:34 zfs set omvzfsplugin:uuid=8eb92b9a-9a28-453f-aff0-e14a0c9a724e zdata/dockerconf
      2020-01-12.09:10:38 zfs set omvzfsplugin:uuid=980d894d-4bc1-44ae-b04d-ba8d2db43ebf zdata/dockerpath
      2020-01-12.09:10:42 zfs set omvzfsplugin:uuid=fcda2be3-a3fa-415a-9ddb-37c94865c315 zdata/downloads
      2020-01-12.09:10:47 zfs set omvzfsplugin:uuid=26275790-9085-457a-a971-77995f59291a zdata/incomplete
      2020-01-12.09:10:51 zfs set omvzfsplugin:uuid=1e9c735e-9520-4fe7-bcde-f3074e2b939b zdata/mayan
      2020-01-12.09:10:55 zfs set omvzfsplugin:uuid=cda757e9-0dd6-4500-b05e-2e90f061309d zdata/mayanbackup
      2020-01-12.09:10:59 zfs set omvzfsplugin:uuid=4d1dc217-0e99-4170-a937-86939c954f07 zdata/mayanwatch
      2020-01-12.09:11:03 zfs set omvzfsplugin:uuid=1c5d2959-dcfe-470e-807a-c7c36f9c4914 zdata/omvbackup
      2020-01-12.09:11:08 zfs set omvzfsplugin:uuid=48d8e290-4eaa-47a2-a8e5-57913fa73982 zdata/test
      2020-01-12.09:11:12 zfs set omvzfsplugin:uuid=0bba8625-6b20-441a-b029-951f5731e5ac zdata/urbackup
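
      Output like the above can be pulled straight from the pool's history with something along these lines (pool name zdata taken from the listing; adjust the filter and count to taste):

      zpool history zdata | grep omvzfsplugin:uuid | tail -n 30
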
      Also, it set off a zfs scrub last night. Where do I configure when that should happen? I don't see anything for it in the scheduled jobs plugin or in the zfs plugin, and I'd like to make sure it doesn't happen again until I'm done moving all my data onto the pool.
    • There's one automated scrub per month; it's set up as part of the plugin install.

      I looked at my own zpool history and noted the uuid entries as well (recent history attached). Some are less than a minute apart, some are days apart. This could be triggered by something like a write to the pool. Do you have something automated that's writing to the pool on a regular basis?

      Unless you're having problems, I wouldn't worry about it.

      zfs.txt
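
      If you want to see the per-dataset values the plugin is storing, you can query the property directly. A minimal sketch, using the property name from the history above and the pool name zdata from the first post (-r walks the child datasets, -s local shows only locally set values):

      zfs get -r -s local omvzfsplugin:uuid zdata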
    • Look at this location/file:
      /etc/cron.d/zfsutils-linux

      It's set for the second Sunday of the month.
      I left the default scrub in place and set a zpool status command in Scheduled Tasks, for the day after the scrub, to get the scrub results E-mailed to me.
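
      On a Debian-based install that cron file typically contains something along these lines (the exact wording varies between releases, so check your own copy):

      # /etc/cron.d/zfsutils-linux (typical layout, may differ on your system)
      # Scrub the second Sunday of every month:
      # runs at 00:24 on days 8-14, and the date test lets it fire only on the Sunday in that window.
      24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub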
      _____________________________________________
      Some useful CLI commands: (Where the name of the pool is ZFS1)

      zpool scrub ZFS1
      zpool scrub -s ZFS1   # stop a scrub
      zpool status -v ZFS1
      _____________________________________________

      BTW: To get a close approximation of POSIX permissions/ACLs, extended attributes, etc., you might want to run the following commands on your pool before you copy data onto it. The compression line is optional, but it might save you some space. (Where the name of the pool is ZFS1)

      zfs set aclinherit=passthrough ZFS1
      zfs set acltype=posixacl ZFS1
      zfs set xattr=sa ZFS1
      zfs set compression=lz4 ZFS1
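
      A quick way to confirm the settings took (and that child datasets inherit them) is to read the same four properties back:

      zfs get -r aclinherit,acltype,xattr,compression ZFS1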
      ____________________________________________

      Once you're set up, you might be interested in automated snapshots. If so, this may be interesting to you -> autosnap.
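
      As a rough illustration of what such tools automate (the snapshot name here is just an example), a recursive snapshot of the whole pool and a listing of existing snapshots would look like this:

      zfs snapshot -r ZFS1@manual-2020-01-12
      zfs list -t snapshot -r ZFS1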