Posts by Krisbee

    Health Widget

    ==============


    The new health widget reports a single zpool property per pool. The concept of pool health could be expanded to cover pool "state", pool "status" in terms of error count per vdev, and space available, where 80% usage might be considered a "warning" level and >90% a "critical" level. With traffic-light colours you could have a widget like this example:



    Clicking on the widget could take the user to the pool details page.
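The traffic-light idea can be sketched in a few lines of shell. This is only an illustration, assuming the 80%/90% thresholds above and input in the shape of "zpool list -H -o name,health,capacity"; the pool_health name and the sample data are mine, not plugin code.

```shell
# Illustrative only: map pool health/capacity to a traffic-light level.
# Thresholds (80% warning, >90% critical) and the function name are assumptions.
pool_health() {
  awk '{
    cap = $3; sub(/%/, "", cap)            # strip the % sign from the CAP field
    level = "GREEN"
    if ($2 != "ONLINE" || cap + 0 > 90) level = "RED"
    else if (cap + 0 >= 80)             level = "AMBER"
    print $1, level
  }'
}

# Canned sample; a real widget would pipe in: zpool list -H -o name,health,capacity
printf 'tank\tONLINE\t8%%\ntankB\tDEGRADED\t77%%\n' | pool_health
```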


    It's a toss-up whether such a widget should be based on pool capacity or the space available for datasets.

    Pool History Button

    ==================


    Just in case it was not obvious, a pool's history is never cleared. For a long-lived and busy pool this could extend to 1000s of lines. Does this option need a line-limit filter and/or a between-two-dates filter? Should it be searchable in some way, eg:
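A between-two-dates filter need not be heavy. A sketch, assuming the stock "YYYY-MM-DD.HH:MM:SS command" lines that zpool history emits; the history_between name and the canned sample lines are mine:

```shell
# usage: zpool history tank | history_between 2026-03-01 2026-03-31
history_between() {
  awk -v from="$1" -v to="$2" '
    /^[0-9][0-9][0-9][0-9]-/ {
      d = substr($1, 1, 10)               # the YYYY-MM-DD part of the stamp
      if (d >= from && d <= to) print
    }'
}

# Canned sample standing in for real zpool history output:
printf "History for 'tank':\n2026-03-10.16:28:55 zpool create tank\n2026-04-02.09:00:00 zfs snapshot tank@x\n" \
  | history_between 2026-03-01 2026-03-31
```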


    I changed the script to log in the latest upload (also fixed the other issues you reported) just in case they are throwing errors. What is the output of:

    cat /etc/cron.d/openmediavault-zfs-s*

    sudo omv-salt deploy run zfscron

    As requested:


    ARC STATS & WIDGETS

    ===================


    1. Please alter wording on "ZFS Hit/Misses" widget to "ZFS ARC Hit/Misses".


    2. ARC Stats tab.


    This option may be for the more advanced user, but what is the consensus, if any, about the metrics that should be displayed? Opinions vary.


    A typical list would be:


    Cache Hit Metrics

    ARC Size Metrics

    ARC List Metrics

    Eviction and Memory Pressure

    Metadata vs Data Usage


    Should the WebUI provide any kind of information page about ARC metrics, or a more clearly formatted page such as the examples below?



    For those who want to dig deeper, the output of "arc_summary" (debian) or "zarcsummary" (pve) could be used rather than parsing the file

    /proc/spl/kstat/zfs/arcstats
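If the file were parsed anyway, the headline hit-ratio figure is only a few lines of awk. A sketch: "hits" and "misses" are real kstat counter names in that file, but the sample values and the arc_hit_ratio name are made up.

```shell
# Compute the ARC hit ratio from the hits/misses counters in arcstats
# ("name type value" lines). Sample values below are invented.
arc_hit_ratio() {
  awk '$1 == "hits"   { h = $3 }
       $1 == "misses" { m = $3 }
       END { printf "%.1f%%\n", (h + m) ? 100 * h / (h + m) : 0 }'
}

# Canned sample standing in for /proc/spl/kstat/zfs/arcstats:
printf 'hits 4 900\nmisses 4 100\n' | arc_hit_ratio
```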




    ryecoaaron


    Made a start on testing encryption options. Hit a bug where unlocking a dataset does not remount it.


    0. Clear down prior to testing


    1. Create the datasets listed below for tests via the WebUI:

    Can only create tankB/clone1 via the WebUI, rather than tankB/secure/clone1, due to a form restriction on the snapshot tab.


    Check Encryption Roots


    Code
    root@omv8vm:/etc/cron.d# zfs list -H -o name,encryptionroot -r tankB | awk '$1==$2 {print $1}'
    tankB/home/fred
    tankB/home/kate
    tankB/secure
    tankB/secure/private
    root@omv8vm:/etc/cron.d#

    WEBUI

    ======


    1. Test display. Start state: all datasets unlocked -- WebUI correctly shows keystatus = available for all 8 datasets and marks encryption roots correctly.


    2. Test locking dataset fred (not shared) - keystatus = unavail on dataset tab, dataset no longer mounted on filesystem tab, cannot be shared. All correct.


    3. Test unlocking dataset fred (not shared) - Expect keystatus = avail on dataset tab, dataset mounted again on filesystem tab, can be shared - BUG??


    The dataset is not mounted on unlocking.


    Code
    zfs list -o name,encryption,encryptionroot,keystatus,mounted,mountpoint | grep aes
    tankB/clone1               aes-256-gcm  tankB/secure          available    yes      /tankB/clone1
    tankB/home/fred            aes-256-gcm  tankB/home/fred       available    no       /tankB/home/fred <--
    tankB/home/kate            aes-256-gcm  tankB/home/kate       available    yes      /tankB/home/kate
    tankB/secure               aes-256-gcm  tankB/secure          available    yes      /tankB/secure
    tankB/secure/backups       aes-256-gcm  tankB/secure          available    yes      /tankB/secure/backups
    tankB/secure/private       aes-256-gcm  tankB/secure/private  available    yes      /tankB/secure/private
    tankB/secure/private/docs  aes-256-gcm  tankB/secure/private  available    yes      /tankB/secure/private/docs
    tankB/secure/project       aes-256-gcm  tankB/secure          available    yes      /tankB/secure/project
    root@omv8vm:/etc/cron.d#

    Shown as keystatus avail on the dataset tab. Still shown as AVAIL but UNMOUNTED on the Filesystem tab.

    ryecoaaron Split Post part B.


    1.4.1 Expect all datasets to appear on the Dataset tab and Filesystem tab and be selectable in the Shared Folders tab - OK, all correct and counts tally.


    1.4.2 "Add a snapshot / Quick snapshot" on a dataset with children - snapshots are not recursive, is this correct??


    Code
    root@omv8vm:~# zfs list -rt all tank/data1
    NAME                             USED  AVAIL  REFER  MOUNTPOINT
    tank/data1                       288K  9.20G    96K  /tank/data1
    tank/data1@2026-03-12_11-28-43     0B      -    96K  -
    tank/data1@atest                   0B      -    96K  -
    tank/data1/child1                 96K  9.20G    96K  /tank/data1/child1
    tank/data1/child3                 96K  9.20G    96K  /tank/data1/child3
    root@omv8vm:~#



    1.4.1 Delete a dataset. Expected result - datasets with children are treated differently to those with no children. Referenced datasets are treated differently to unreferenced datasets.



    CASE A - Unreferenced Datasets



    1.4.1.1 Delete a dataset with no children. Expected result - it is allowed, the entry is removed from the dataset tab and filesystem tab, and is no longer selectable in the shared folder tab - Bugs??



    Passes when dataset is like "tank/docs"



    Fails when the dataset is like "tankB/backups/prod/archibald" mounted at "/tankB/backups/prod/archibald". An on-screen RED ERROR warning appears briefly before the yellow confirm banner appears.



    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; zfs destroy 'tankB/backups/prod/archibald' 2>&1' with exit code '1': cannot destroy 'tankB/backups/prod/archibald': filesystem has children
    use '-r' to destroy the following datasets:
    tankB/backups/prod/archibald@test-1773312578


    Change can still be applied:

    Code
    The following modules have been updated: collectd, monit, quota
    undefined

    The dataset that should have been deleted is still in the dataset table, but does not appear in the Filesystem table, the Shared Folder tab, or config.xml.



    CASE B - Referenced Datasets.



    1.4.2.1 All OK - a dataset marked as "has shares" is blocked from deletion in the WebUI.

    ryecoaaron Split Post part A.



    End User Testing New Pool & Dataset Tabs - Post commit 26477cc 11/03/2025

    ========================================================================


    1. Test 1 - Add a second pool then degrade it.

    ============================================


    1.1 CLI check


    Code
    root@omv8vm:~# zpool list -v
    NAME                              SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    tank                             9.50G  1.17M  9.50G        -         -     0%     0%  1.00x  DEGRADED  -
      mirror-0                       9.50G  1.17M  9.50G        -         -     0%  0.01%      -  DEGRADED
        virtio-BHYVE-E2E1-0D3F-D0E6  9.92G      -      -        -         -      -      -      -    ONLINE
        virtio-BHYVE-CC6E-3F83-BF12  9.92G      -      -        -         -      -      -      -   OFFLINE
    tankB                            19.5G   278M  19.2G        -         -     0%     1%  1.00x    ONLINE  -
      virtio-BHYVE-40A5-D119-35BA    19.9G   278M  19.2G        -         -     0%  1.39%      -    ONLINE
    root@omv8vm:~#


    1.2 WebUI Check - Expect rows in each of the Pools, Datasets and Filesystems tabs.


    All tabs OK.


    1.3 WebUI Check - Pools tab - Bugs??


    1.3.1 Available cols include values not relevant to a pool - namely: "Encryption, Key Status, Auto-unlock, Has Children, Has Shares". Are they needed on the pools table?





    1.3.3 Expected order of cols does not match output of zpool list command - bug?


    1.3.4 Expected zpool list command values FREE and ALLOC, but zpool property names changed in table - bug?


    1.3.5 Details display is incorrectly formatted for the selected pool - repetition of zpool status - bug? Output of "zpool list -" no longer needs to be included in the details.



    1.3.6 Should a user be able to use the "add options" both here and on the dataset tab?






    1.4 WebUI - Dataset Tabs - Bugs??


    Use a script to generate a mix of filesystems (with children), volumes and snapshots on a pre-existing pool.


    Check CLI:



    Hit the discover button's "Add New" to sync the datasets on the pool with the mntent entries in config.xml.


    continued below:

    End User Testing New Pool & Dataset Tabs

    ========================================


    1. Test 1 - Add a pool - Expect pool to be created along with a pool root filesystem of the same name.


    1.1 CLI check


    Code
    root@omv8vm:~# zpool history
    History for 'tank':
    2026-03-10.16:28:55 zpool create -o ashift=12 -o failmode=continue -o autoexpand=on -O atime=off -O acltype=posix -O xattr=sa tank mirror virtio-BHYVE-E2E1-0D3F-D0E6 virtio-BHYVE-CC6E-3F83-BF12

    root@omv8vm:~# zpool list
    NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
    tank  9.50G   624K  9.50G        -         -     0%     0%  1.00x  ONLINE  -

    root@omv8vm:~# zfs list -rt all
    NAME   USED  AVAIL  REFER  MOUNTPOINT
    tank   624K  9.20G    96K  /tank
    root@omv8vm:~#



    1.2 WebUI check - Expect one row in each of Pools, Datasets and Filesystems Tabs - Errors?


    Pools Tab - ok



    Datasets Tab - missing dataset ??



    Filesystems Tab - ok



    Filesystem "tank" is not listed on the datasets tab. Re-checking on the Pool tab.


    Is the highlighted row a "pool" or a "dataset"? The Props button displays filesystem properties. The Details button displays pool information.



    You can snapshot the selected row. Does a snapshot of a pool mean a recursive snapshot of all filesystems within the selected pool? Or is this an error, because snapshots apply to datasets but not pools? Or is the selected row really the pool root filesystem?


    A key metric zfs end users look for is "pool capacity" as a percentage. It's not obvious whether its value is on screen. It is part of the selected row's details on the pools tab.

    @ryecoaaron Yes, I do have other options, but I'd like to see the OMV zfs plugin on a par with those other options as far as is possible. As you have fluctuated between wanting and not wanting to make extensive changes to the zfs plugin in recent weeks, it seemed reasonable to ask if you'd prefer to revert to a pre-split version. I don't see what the problem is in asking/suggesting this.


    I'll stick to the facts.


    1. I can make no progress with testing encryption, except to say there seems to be a bug when using the encryption option when first adding a pool - a passphrase of length > 10 chars is rejected as not being of the correct length?


    2. Should I expect the "encryption" options to work with this kind of layout?


    Code
    root@omv8vm-test:~# zfs list -o name,encryptionroot | awk '$1==$2'
    tankB/home/fred             tankB/home/fred
    tankB/home/kate             tankB/home/kate
    tankB/secure                tankB/secure
    tankB/secure/private        tankB/secure/private
    root@omv8vm-test:~#


    3. Is it correct that when applying "auto-lock" it can take a passphrase different from the existing passphrase?

    Uploaded a new test version. Details should be fixed. The datasets (filesystems and volumes) have been split off to their own tab. The size, alloc, and free columns might be right now but who knows. Scrub should be fixed.

    I think it's great you've decided to split "zfs pools" and "zfs datasets" into their own tables. However, you are not going to want to hear this, and no doubt will curse me from dawn to dusk, but the info in each table is really not what an end user would expect.


    They would expect the rows in the "pools table" just to reflect the output of the "zpool list .. " command (possibly adding a last scrub date):


    Code
    root@omv8vm-test:~# zpool list
    NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
    tank     39G  3.43G  35.6G        -         -     1%     8%  1.00x    ONLINE  -
    tankB  19.5G  15.2G  4.31G        -         -    13%    77%  1.00x    ONLINE  -
    root@omv8vm-test:~#
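For the suggested last-scrub-date extra, one hedged way to pull it would be to scrape the "scan:" line of zpool status. The wording matched below follows current OpenZFS output and the last_scrub name is mine; the sample line is canned, not live output:

```shell
# Extract the completion date from a "scan: scrub repaired ..." status line
# (the date is the last five fields: day-name month day time year).
last_scrub() {
  awk '/scan: scrub repaired/ { print $(NF-4), $(NF-3), $(NF-2), $(NF-1), $NF }'
}

# Canned status line instead of piping in live "zpool status <pool>" output:
echo "  scan: scrub repaired 0B in 00:00:05 with 0 errors on Sun Mar 10 16:30:00 2026" | last_scrub
```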

    All actions then apply to a selected pool or pools listed in the table.



    Similarly, for the "zfs dataset" table, they would expect the rows to reflect the output of a "zfs list ... " command, e.g:


    I realise this means more work to write separate get methods for pool and dataset, separate delete methods and re-working various workbench components, etc. It also means deciding exactly what cols to put on the dataset table.


    I posted examples of this a few weeks back, at a time when you were contemplating dropping the zfs plugin altogether - RE: ZFS PLUGIN: Bugs (longstanding?) in Deleting Objects from main Pools Datatable. There's no "method calculated data" in these tables.


    By doing this OMV would conform to the displays of every other zfs manager I know - TrueNAS CORE/SCALE, 45Drive's cockpit-zfs-manager, XigmaNAS, Napp-IT, poolsMan, webzfs and zfdash.


    If this is unacceptable to you, then I'm not sure the proposed split table layouts are particularly helpful.


    Mock Up of Pools and Dataset Layout:



    You seem to be taking this rather personally. Nothing I've said was meant as a slight on your integrity, nor have I impugned your obvious high technical skills. Please remember I'm not a coder, only an end user. So using terms like "code moved" is just a literal expression, like something's moved from A to B - not a statement that implies a value judgment on the degree of difficulty or amount of work involved.


    As I stated in my last post - it's obvious once I looked on github what the scale of the re-write was. It goes without saying that involves considerable skill, time, effort and commitment on your part. Only another coder can perhaps fully appreciate the effort that goes into your work. But what I do know is the steep barrier any non-coder faces in making enough sense of how OMV works to contribute anything as small as a new widget, let alone more major changes. I have made the effort to try to better understand OMV so as to be able to communicate my ideas about how the zfs plugin might evolve.


    From an end user perspective, is it not reasonable to expect a zfs manager to provide ready access to both the "physical pool storage accounting" and the "dataset storage accounting"?


    I will move on to "encryption testing" when possible.

    More manual testing on latest test version:


    Error 1 - bug in details display for a "pool". Maybe a string variable not being cleared, causing duplicated "zpool status -v" output in the display:



    Error 2 - Bug in details display for a "filesystem". This is due to a misunderstanding. The requested output of "zfs list .." should be added to the output of "zfs get all .." in the details display, not replace "zfs get all .." for the selected filesystem.


    Error 3 - Scrub Pause/Resume not working. ( Note to self: Pool scrub "stop" & "pause" are determined by flags, "zpool scrub -s ..." & "zpool scrub -p ...". Just issue a "zpool scrub ..." to resume a scrub.)


    3.1 "Resume scrub" option remains greyed out on the WebUI button so the scrub cannot be resumed.


    3.2 Hitting pause again causes a RED ERROR and the "Resume Scrub" option remains greyed out on the WebUI button so the scrub cannot be resumed.


    Error 4 - Table column values for "used" and "avail" are expected to be shown for all rows. Currently it is this:



    Despite how it might come across in the forum, I'm truly appreciative of the time and effort you've devoted to this re-write. Once I looked at the commit on github I understood the scale of the re-write. I had previously only looked to see if the method getZfsFlatArray() was still being used to populate the main pool data table and, as I noted, it is. But I hadn't realised Utils.php, etc. had gone and this and other code had moved to the main rpc.inc file.


    As I've mentioned before, all rows in the main pools data table are either filesystems or volumes and any row marked as type "pool" is a "zfs root filesystem" acting as a "fake pool". As such you cannot display the storage space values as returned by "zpool list .." for "pool" rows, only the "used" and "avail" values given by "zfs list" or "zfs get .." for the "zfs root filesystem".


    The answer you got from AI was incorrect/misleading. The zpool capacity is calculated from these values:


    Command: "zpool list .."


    Field   Meaning                       Simple explanation
    SIZE    Total usable pool capacity    Physical storage available to ZFS after RAID layout
    ALLOC   Space currently allocated     Physical blocks actually used by data, metadata, snapshots, etc.
    FREE    Remaining unallocated space   Physical blocks still available in the pool
    CAP     Pool capacity percentage      ALLOC / SIZE

    Formula

    Code
    SIZE = ALLOC + FREE
    CAP  = ALLOC / SIZE
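Plugging made-up round numbers into the formula shows the arithmetic (the 19.5G/15.2G values are invented for illustration):

```shell
# Made-up values: SIZE 19.5G, ALLOC 15.2G, so FREE = 4.3G and CAP ~ 78%.
awk 'BEGIN { size = 19.5; alloc = 15.2
             printf "FREE=%.2fG CAP=%.0f%%\n", size - alloc, 100 * alloc / size }'
```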


    Whereas Filesystem storage values are:

    Command: "zfs list ... " 


    Field   Meaning                                          Simple explanation
    USED    Space consumed by this dataset and descendants   Includes snapshots, children datasets, etc.
    AVAIL   Space available to this dataset                  Remaining pool space that this dataset could use

    Example:

    Code
    zfs list
    NAME          USED  AVAIL
    tank/home       2T     4T
    tank/backups    3T     4T

    Important point

    AVAIL is not reserved per dataset.

    All datasets typically see the same AVAIL because they share the pool.
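That shared AVAIL is exactly why a per-row "size = used + avail" figure misleads: with the example numbers above, naive per-row sums give two different "pool sizes" for one and the same pool.

```shell
# Made-up figures from the example above: USED 2T and 3T, shared AVAIL 4T.
# Summing per row yields 6T and 7T "sizes" for a single pool.
awk 'BEGIN { printf "tank/home %dT\ntank/backups %dT\n", 2 + 4, 3 + 4 }'
```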


    This is why I was banging on about the "size" column in the pools table. It's also one of the reasons I would still advocate separate tables for zfs pools and zfs datasets.


    No testing of encryption yet.

    1. I wonder if I had some stuff left over from a previous zfs plugin install. I'll need to go back and have another look at why I thought that. As an aside, it's interesting that at a quick glance "Claude Code" did not produce anything that made use of the json output versions of some of the main zpool & zfs commands.


    2. Deleting a fs at the CLI that leaves an orphaned ref in the config is down to the user to clear up.


    3. I have no idea how to make use of the test scenario script, or how to add to it. Perhaps I can make sense of it.


    4. It's not a case of making the "size" col hidden. To be blunt, it's junk and should be gone. I can provide refs on zfs storage space accounting.


    5. Pity about the auto-naming of ad-hoc snapshots.


    I'll start from a clean state and retest. But not until Monday.

    ryecoaaron This post follows on from: RE: Unable to delete 2 missing zfs file systems listed in OMV8



    General Comments:

    ======================


    As far as I can tell, this "re-write" does not extend to changing the underlying OO model & associated code, and the main method for the zfs "pools datatable" is still getZfsFlatArray().


    From the end user perspective, the re-arrangement of "action buttons" on the zfs "pools datatable" is much improved, with corrected navigation, as are the on-screen messages when adding or deleting objects via the WebUI. These are very positive changes. But there are a number of problems/bugs, new and old.



    Specific Test Comments:

    =========================

    1. The new "discover" options:


    CASE A - filesystems with no children


    1.1 Add - OK


    Adding a zfs fs at the CLI and then hitting "Add" adds an entry to config.xml


    1.2 Add + Delete Missing - new bug ?


    Applied to a referenced filesystem, it leaves an orphaned "shared folder" entry in config.xml and an "UNAVAILABLE" status on the "shared folder" tab.


    1.3 Delete Missing - new bug ?


    Delete missing applied to a referenced filesystem leaves an orphaned "shared folder" entry in config.xml and an "UNAVAILABLE" status on the "shared folder" tab.


    CASE B - filesystems with children


    1.4 Add - OK


    Adding a new child fs at the CLI is OK.

    Adding a parent and several children at the CLI works.


    1.5 Add + Delete Missing - new bug?


    As 1.6.


    1.6 Delete Missing - new bug?


    Delete the parent fs at the CLI with "zfs destroy -r ....". It is flagged as missing in the Filesystem tab, and shown as unavailable in the shared folder tab for what was a referenced fs. The option fixes the fs tab but leaves an orphaned "shared folder".


    In summary, it works but leaves the question of how best to handle clean up of CLI deletion of a referenced zfs fs.


    Is "discover" the right name for this newly expanded function? Wouldn't "sync to config", or similar, be a better name? After all, the "discover button" doesn't discover anything. It attempts to sync the fs in zpools to the fs in config, as determined by a set of rules/conditions. It doesn't "discover" whether the pool and config are out of sync.
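What the button really performs reduces to a set difference between the live dataset list and the names recorded in config.xml. A sketch with canned lists; a real pass would read "zfs list -H -o name" and the mntent entries, and the temp-file handling here is purely illustrative:

```shell
# Canned sorted lists standing in for the live pool and the config contents.
live=$(mktemp); conf=$(mktemp)
printf 'tank\ntank/docs\ntank/new\n'  > "$live"   # what zfs list reports
printf 'tank\ntank/docs\ntank/gone\n' > "$conf"   # what config.xml records
comm -13 "$live" "$conf"    # in config only -> candidates for "delete missing"
comm -23 "$live" "$conf"    # on disk only   -> candidates for "add new"
rm -f "$live" "$conf"
```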


    2. Add a snapshot - new bug


    Adding an ad-hoc snapshot has lost the "manual + $(date ...)" naming ???
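For reference, the old naming was along these lines; the exact format string is my guess, inferred from the "manual" prefix and the timestamped snapshot names seen elsewhere in this thread:

```shell
# Assumed reconstruction of the lost auto-name; the date format is inferred,
# not taken from plugin code.
snapname="manual-$(date +%Y-%m-%d_%H-%M-%S)"
echo "$snapname"
```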


    3. Information button - new & old bug


    3.1 Selecting the "information button" for a row with "type = Pool" displays zpool status multiple times (new bug) and includes unnecessary zfs properties (old problem). Displaying top-level filesystem details as part of the "Pool information" is redundant, as selecting the "properties button" for a row with "type = Pool" still displays a zfs properties table screen. Amend getObjectDetails() for case 'Pool' to display only the output of "zpool status -v", "zpool list -v" and "zpool get all".


    3.2 Selecting the "information button" for a row with "type = Filesystem" just duplicates info retrieved when the "properties button" is pressed. Adding the output of the following command to the "details" of the selected dataset will make the function useful:

    Code
    zfs list  -o name,type,used,avail,refer,mountpoint,compression,compressratio,encryption


    4. Scrub button - old bug


    Selecting the "scrub button" when a scrub is already running generates an on-screen RED ERROR (old bug) -


    The scrub function should check the current scrub status and at least provide an option which toggles between start/stop states, or supports start, stop, pause and resume states.
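One way the toggle could be derived is by keying off the "scan:" wording in zpool status ("scrub in progress" / "scrub paused"); the scrub_action name is mine and this is a sketch, not plugin code:

```shell
# Decide which scrub action the button should offer, given a zpool status
# "scan:" line. The matched phrases follow current OpenZFS status wording.
scrub_action() {
  case "$1" in
    *"scrub paused"*)      echo resume ;;  # plain "zpool scrub <pool>" resumes
    *"scrub in progress"*) echo pause  ;;  # "zpool scrub -p <pool>" pauses
    *)                     echo start  ;;  # nothing running: "zpool scrub <pool>"
  esac
}

scrub_action "scan: scrub in progress since Mon Mar 10 16:28:55 2026"
```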


    5. Table Columns - old bug - please remove the "size" col.


    The columns in each row are a composite, neither wholly "pool" nor "filesystem/volume" data. The values in the "size" column ("used" + "avail") are bogus and meaningless in the context of ZFS and should be removed. Storage space accounting in ZFS is not straightforward. A ZFS dataset may have a potential maximum "size" at any given moment, but it's dynamic, dependent on all other datasets in the pool, and factors such as quotas and reservations need to be taken into account.


    If and when the zfs encryption functions become part of the WebUI, removing the "size" col would give space for an "enc" indicator in the pools table data.

    I re-wrote almost all of the zfs plugin. I was tired of maintaining the very old, hard to maintain code. The Discover button now has three options. I had claude code write a test script to test just about every function of the plugin. If anyone wants to try it...


    wget https://omv-extras.org/testing/openmediavault-zfs_8.1_amd64.deb -O openmediavault-zfs_8.1_amd64.deb

    sudo dpkg -i openmediavault-zfs_8.1_amd64.deb

    I've installed this new zfs plugin version and I'm testing it now. Comments to follow. But do you want to continue on this thread, move to a new thread, or move to PM?

    ryecoaaron


    If you delete a zfs filesystem at the CLI when the filesystem is linked to a shared folder and SMB share, for example, you are left with an orphaned object in the main config.xml file. The WebUI will show "unavailable" under "shared folders" and the folder cannot be deleted via the WebUI.


    If the zfs filesystem is unreferenced then deleting it at the CLI and then clicking "discover" will sync zfs list to config.xml.


    Is this proposed change worth implementing? It simply shifts the item that's out of sync in the config.xml file. Leaving the status quo does generate an appropriate WebUI error message, as shown in my example #7 above. After this change there is no error message and the end user may not realise anything is wrong. My vote is not to make this change. Deletes can only be done safely via the WebUI unless you know how to clean up the problems.

    Thank you! After reading these latest posts, I now recall that I did create a zpool via CLI. I'd like to apologize for not being more complete. I was in "learning" mode (OMV and ZFS) where I try to only use "instructions" and the WebUI. At first, I tried to create a zpool using the WebUI, but don't quite recall why I ended up creating a zpool via CLI. When I switch to "debug" mode, I try to be much more deliberate on what I do. Really wish I could be of better/more help (I'm a former HW/FPGA engineer with only very limited script/coding experience.)

    This is not so much about script/coding but about the internals of OMV8 and what you can and cannot get away with when using the CLI. If you wish to continue to use ZFS, then it's always best to create pools via the WebUI, as under the hood the pools are created with certain parameters, as the pool history would show. In this example, pool "tankB" consists of a single mirror vdev:


    Code
    zpool create -o ashift=12 -o failmode=continue -o autoexpand=on -O atime=off -O acltype=posix -O xattr=sa tankB mirror scsi-0QEMU_QEMU_HARDDISK_3333 scsi-0QEMU_QEMU_HARDDISK_1111



    The Discover button in the zfs plugin didn't fix this issue?

    Delete a zfs filesystem at the CLI and it naturally disappears from the main WebUI "pools data table". Hitting "discover" generates an error, shown briefly on screen, as the system cannot reconcile zfs list with the contents of config.xml.



    In short, adding zfs filesystems at the CLI and then hitting the "discover" button syncs zfs list to config.xml. But deleting zfs filesystems at the CLI and then hitting "discover" does not attempt to reconcile zfs list to the config. The current restriction is that ALL deletes of zfs data objects can only be "safe" if done via the WebUI, which will check whether the data object is referenced, etc.