Disk usage tab for my ZFS pool has vanished from the GUI!

  • Hi,


    Just connected to the OMV GUI this morning and saw that the disk usage tab for my ZFS pool has vanished! I'm sure it was there on Friday evening, but I haven't connected to the GUI since.


    The system disk graphs are still here, and the data from the pool is still being collected...



    Code
    root@home-server:/var/lib/rrdcached/db/localhost# ls -al df-StoragePool/
    total 452
    drwxr-xr-x  2 root root   4096 Apr  5 12:38 .
    drwxr-xr-x 48 root root   4096 Apr 16 12:40 ..
    -rw-r--r--  1 root root 148648 Apr 16 12:45 df_complex-free.rrd
    -rw-r--r--  1 root root 148648 Apr 16 12:49 df_complex-reserved.rrd
    -rw-r--r--  1 root root 148648 Apr 16 12:45 df_complex-used.rrd


    The graphs are still being generated too...



    There was an update of the ZFS plugin last week, but I don't think it's the cause.


    What could cause the GUI to change like this?
    Where have this tab and its graph gone?


    Thanks


    EDIT: There was also an openmediavault update last week, I think, and that could have broken the tab...
    Does anybody know where the code for those tabs lives? Volker ;)

    Lian Li PC-V354 (with Be Quiet! Silent Wings 3 fans)
    ASRock Rack x470D4U | AMD Ryzen 5 3600 | Crucial 16GB DDR4 2666MHz ECC | Intel x550T2 10Gb NIC

    1 x ADATA 8200 Pro 256GB NVMe for System/Caches/Logs/Downloads
    5 x Western Digital 10TB HDD in RAID 6 for Data
    1 x Western Digital 2TB HDD for Backups

    Powered by OMV v5.6.26 & Linux kernel 5.10.x


    • Official Post

    Upgrading to the 4.15 kernel will break anything that requires a compiled kernel module - zfs, virtualbox, etc.


    VirtualBox and ZFS plugins installation issue after fresh OMV install

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks Aaron for the warning! You saved my day ;)


    Meanwhile, I'm searching for the source code of the diskstats plugin to solve my new problem.
    Any idea where I can find the UI code for those stats tabs?


    • Official Post

    Any idea where I can find the UI code for those stats tabs?

    Don't know much about the stats tab. Here is the source code though - https://github.com/openmediava…/openmediavault-diskstats


  • ok...


    I have had a look at:
    /var/www/openmediavault/js/omv/module/admin/diagnostic/system/plugin/DiskUsage.js
    /var/www/openmediavault/js/omv/module/admin/diagnostic/system/plugin/DiskIO.js


    I don't understand all of it, but from what I can tell, the ZFS usage graph doesn't show because of its mount point (/StoragePool rather than /dev/xxxx). I don't know why it used to work until last week.
    The fact is that a storage pool's free space is pretty basic information to have on a NAS! Having only the free space of the system disk is nice, but not enough (see the illustrative sketch below).
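
    To illustrate what I mean (a guess at the shape of the check, not the actual plugin code): a filter like the one below over the enumerateMountedFilesystems result would silently drop the ZFS pool, because its devicefile is the pool name rather than a /dev node.

    Code
    // Illustrative only: a check of this shape hides the ZFS pool, whose
    // devicefile is "StoragePool" instead of "/dev/...".
    var visible = filesystems.filter(function(fs) {
        return fs.devicefile && fs.devicefile.indexOf("/dev/") === 0;
    });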


    Same thing for the disk I/O: as those disks are not mounted directly with a filesystem or mdadm, no I/O stats are shown, even though all the data and graphs are generated...


    Maybe a little patch could help...


  • Aaron,


    Maybe you could help me.


    How can I check what this call returns, and what format is it in?
    How can I replace this code with a static response to do some tests?


    DiskIO.js near line 90:


    Code
    rpcData: {
      service: "FileSystemMgmt",               // the RPC service to query
      method: "enumerateMountedFilesystems",   // lists only mounted filesystems
      params: {
        includeroot: true                      // include the root filesystem too
      }
    }

    Thanks for your help


    • Official Post

    How can I check what this call returns, and what format is it in?

    Here is the function - https://github.com/openmediava…c/filesystemmgmt.inc#L166


    It returns this array for each filesystem - https://github.com/openmediava…c/filesystemmgmt.inc#L221


    You could change this function on a test machine. As long as it returns the array with the proper structure, it should keep working.
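
    If you just want to see the raw result, you can run the RPC from the shell with omv-rpc (see the dumps later in this thread), or from the browser's developer console on the admin page with something like this (a minimal sketch; OMV.Rpc.request is the helper the stock workbench JS uses):

    Code
    // Minimal sketch: dump the RPC result to the browser console.
    // Run on the OMV admin page, where the ExtJS workbench is loaded.
    OMV.Rpc.request({
        callback: function(id, success, response) {
            // "response" is the decoded result, i.e. the same array of
            // filesystem objects the UI consumes.
            console.log(success, response);
        },
        rpcData: {
            service: "FileSystemMgmt",
            method: "enumerateMountedFilesystems",
            params: {
                includeroot: true
            }
        }
    });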


  • Maybe the best way is to "add" a mount point in the OMV config.xml file... as it is a mount point!


    I don't know if there are side effects to having a mount point in config.xml without the corresponding line in /etc/fstab?


    EDIT: I have just set up a fresh VM with OMV + the ZFS plugin, and the tab entry is there!



  • OK !


    For whatever reason, the ZFS pool's mount entry (the <mntent> node under <fstab> in /etc/openmediavault/config.xml) has disappeared from the OMV config file!
    I don't know why or what happened.


    Luckily, thanks to the OMV backup plugin, I had a backup of the file from last Friday evening, and managed to copy the right section of the config back into the file; the graph is showing again.


    EDIT: Gone again several minutes after the edit... something is writing to the config.xml file and removing the ZFS pool mount point...


    Sometimes computers make me mad :(




  • So...


    On one hand, this morning I copied the saved config file back once more and rebooted the server right after.
    The problem seems to be gone!


    I still don't know what kept modifying my config file... but it "seems" to be gone.




    On the other hand, I was asking myself why DiskIO.js in the diskstats plugin uses the mounted filesystems to build the UI for the I/O graphs.
    A disk doesn't have to be mounted in/by OMV for its I/O and throughput to be worth checking; collectd gathers those stats for every block device anyway, which is why all the data and graphs are generated (e.g. the ZFS member disks are not mounted by OMV, but it's nice to have some read/write info on them!).
    Maybe the diskstats plugin needs a little improvement...


    Cheers,


  • Here is the function - https://github.com/openmediava…c/filesystemmgmt.inc#L166
    It returns this array for each filesystem - https://github.com/openmediava…c/filesystemmgmt.inc#L221


    You could change this function on a test machine. As long as it returns the array with the proper structure, it should keep working.

    I have tried to replace the former with the latter, as it seems only the device name from the result is actually used:
    https://github.com/openmediava…ined/rpc/diskmgmt.inc#L55


    Here are both results to compare:
    # omv-rpc 'FileSystemMgmt' 'enumerateMountedFilesystems' '{}'

    Code
    [{"devicefile":"\/dev\/sda1","parentdevicefile":"\/dev\/sda","uuid":"052263f1-d951-4a32-bdbf-052649fe60da","label":"System","type":"ext4","blocks":"11563616","mountpoint":"\/","used":"4.80 GiB","available":"6546513920","size":"11841142784","percentage":45,"description":"System (6.09 GiB available)","propposixacl":true,"propquota":true,"propresize":true,"propfstab":true,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":["\/dev\/sda1"]},
    {"devicefile":"\/dev\/sda5","parentdevicefile":"\/dev\/sda","uuid":"2cc50fbd-d0ca-4630-8666-abcd673cb0ac","label":"Download","type":"ext4","blocks":"479576752","mountpoint":"\/srv\/dev-disk-by-id-ata-SAMSUNG_MZ7LN512HCHP-000L1_S1ZKNXAG526958-part5","used":"86.04 GiB","available":"398678654976","size":"491086594048","percentage":19,"description":"Download (371.29 GiB available)","propposixacl":true,"propquota":true,"propresize":true,"propfstab":true,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":["\/dev\/sda5"]},
    {"devicefile":"StoragePool","parentdevicefile":null,"uuid":null,"label":"StoragePool","type":"zfs","blocks":8750995865.6,"mountpoint":"\/StoragePool","used":"4.56 TiB","available":3947246743715.8,"size":8961019766374.4,"percentage":55,"description":"StoragePool (3.58 TiB available)","propposixacl":true,"propquota":false,"propresize":false,"propfstab":false,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":"StoragePool"}]

    # omv-rpc 'DiskMgmt' 'enumerateDevices' '{}'


    Code
    [
    {"devicename":"sda","devicefile":"\/dev\/sda","devicelinks":["\/dev\/disk\/by-id\/ata-SAMSUNG_MZ7LN512HCHP-000L1_S1ZKNXAG526958","\/dev\/disk\/by-id\/wwn-0x5002538d00000000","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-1"],"model":"SAMSUNG MZ7LN512","size":"512110190592","description":"SAMSUNG MZ7LN512 [\/dev\/sda, 476.93 GiB]","vendor":"","serialnumber":"S1ZKNXAG526958","israid":false,"isroot":true},
    {"devicename":"sdb","devicefile":"\/dev\/sdb","devicelinks":["\/dev\/disk\/by-id\/wwn-0x50014ee00387feef","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC301141384","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-2"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdb, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC301141384","israid":false,"isroot":false},
    {"devicename":"sdc","devicefile":"\/dev\/sdc","devicelinks":["\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-3","\/dev\/disk\/by-id\/wwn-0x50014ee0ae32d2f4","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300977122"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdc, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300977122","israid":false,"isroot":false},
    {"devicename":"sde","devicefile":"\/dev\/sde","devicelinks":["\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-4","\/dev\/disk\/by-id\/wwn-0x50014ee0ae32d2b9","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300905426"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sde, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300905426","israid":false,"isroot":false},
    {"devicename":"sdf","devicefile":"\/dev\/sdf","devicelinks":["\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300979172","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-5","\/dev\/disk\/by-id\/wwn-0x50014ee00387ffcd"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdf, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300979172","israid":false,"isroot":false},
    {"devicename":"sdg","devicefile":"\/dev\/sdg","devicelinks":["\/dev\/disk\/by-id\/wwn-0x5000c5008aa0cf4b","\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_W4Z2AWC6","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-6"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdg, 1.81 TiB]","vendor":"","serialnumber":"W4Z2AWC6","israid":false,"isroot":false},
    {"devicename":"sdh","devicefile":"\/dev\/sdh","devicelinks":["\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_Z4Z69EHC","\/dev\/disk\/by-id\/wwn-0x5000c50092743b29","\/dev\/disk\/by-path\/pci-0000:01:00.0-ata-1"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdh, 1.81 TiB]","vendor":"","serialnumber":"Z4Z69EHC","israid":false,"isroot":false},
    {"devicename":"sdi","devicefile":"\/dev\/sdi","devicelinks":["\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_Z560B0EA","\/dev\/disk\/by-path\/pci-0000:01:00.0-ata-2","\/dev\/disk\/by-id\/wwn-0x5000c50092742180"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdi, 1.81 TiB]","vendor":"","serialnumber":"Z560B0EA","israid":false,"isroot":false}]

    Unfortunately, that doesn't work.


    Any idea why?


    EDIT: Found it. The first one returns 'parentdevicefile' and the second one 'devicefile'.
    Yeah baby, it works! But it will be scratched by the next OMV update :(
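
    For reference, my change boils down to this (a local hack in DiskIO.js, not upstream code, and the next update will overwrite it):

    Code
    // Local hack in DiskIO.js: enumerate all block devices instead of
    // only the mounted filesystems.
    rpcData: {
      service: "DiskMgmt",
      method: "enumerateDevices",
      params: {}
    }
    // ...and wherever each item's "parentdevicefile" was read, read
    // "devicefile" instead: enumerateDevices does not return
    // "parentdevicefile".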



    • Official Post

    The ZFS backend does not return a parentdevicefile because there is no block device associated with it, i.e. a Linux command like mount /dev/sdZFSStorage /srv/Storage is not valid.


    I once thought of adding this as an array of the disks that are members of the pool, but due to how ZFS displays information ATM, I dropped the task; hopefully the zfs commands can return JSON data in the future. A PR that makes zfs commands return JSON has been sitting in the ZoL GitHub for about 2 years without any recent news.


    So there will be no I/O stats for disks that are members of a ZFS pool until @votdev changes the RPC call to generate stats for all block devices, not only what comes out of the enumerateMountedFilesystems method.


    https://github.com/openmediava…/plugin/DiskIO.js#L90-L94


    which also shouldn't be a problem IMO


    Also, if you take a look at the zfs plugin on GitHub, you will notice that it will eventually be pulled in the future if no developer steps up to rewrite it (completely). By future I mean OMV 5; ATM the plugin walks on crutches.

  • Hi SubZero,


    I managed to get I/O stats for the disks of my ZFS pool by using the DiskMgmt/enumerateDevices function in place of FileSystemMgmt/enumerateMountedFilesystems. That works: I get the raw block-device stats rather than ZFS iostat results, but that's enough for me ATM. The sad thing is that I must be careful with each OMV update, as I "updated" 2 or 3 JS scripts that will be overwritten... :(



    Volker told me that this function also enumerates USB disks (as they are plugged in and out); since I'm not using any on the NAS (it only has USB 2 ports, and PC USB 3 + gigabit network is more efficient), that is a good solution for me.


    I also added some CPU temperature graphs. I'm searching for a way to get my 2 cores on the same tab, as I discovered that only the cpu-0 core is displayed, even though the data for all CPU cores is collected. I have almost got it, but it needs some GUI tweaks (I'm not good at this :( ).


    I would like to participate in the plugin development, but from what I have seen over the last 2 days, my level with the OMV framework is poor. I'm a sysadmin, and I used to be a dev 15 years ago... I helped develop some of the v0.2/0.3 plugins (greyhole, etc.), but it's hard to understand the framework's details.


    From my point of view, and I'm really new to ZFS (3 weeks!), it's a really good file system. I was using mdadm/ext4 before, but the checksum/snapshot features are a real must-have: I managed to find some "silent corruption" in video files with it.
    More and more NAS OSes are using ZFS for data pools (e.g. MyNAS, etc.), as you get encryption, deduplication, compression, and multiple copies of files natively.
    In my opinion, this should be the default FS for data pools in OMV. Ext4+mdadm+LVM works nicely, but you have to manage 3 pieces of software to do the same job. So let's try to keep this plugin alive, as it is the future of FS storage ;)



  • Unfortunately, something is still modifying my config.xml file... and deleting the ZFS pool from it!


    Any idea what could be doing this?


    Thx


    That's the case (for the export button), but there may be another one somewhere else... it seems I'm not the only one, except that I haven't updated to the backport kernel; I'm still on 4.14.


    Finding the correct mntent UUID for a filesystem not in config.xml


    For me, the problem appeared 5 or 6 days ago... before the new kernel was released.


    • Official Post

    The only thing that can delete the pool from the db is the export button, which should be disabled if shared folders are engaged (current bug)

    Not true :) ZFS device(s) not listed in devices dropdown

