Posts by sbocquet

    The backup plugin is set up to use the second partition (/dev/sda5) of the OS drive (/dev/sda1 on /dev/sda) (I know ;) ), on which I have created a shared folder 'Sauvegardes'.

    The fact is that this entry is removed even after boot... I'm going to re-add it and go to the plugin panel to see what happens...

    EDIT: Bingo! That's it!
    On a currently working config, no reboot:
    - I copy the ZFS pool mount section back into config.xml.
    - In the GUI, I move through several places, tabs, etc... everything except the ZFS panel tab. No change in config.xml.
    - In the GUI, I move to the ZFS panel. config.xml is modified and the ZFS pool mount section is removed...

    Reproduced several times.
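    To catch exactly what changes, you can diff the config file around a visit to the ZFS panel: on the real box, copy /etc/openmediavault/config.xml to /tmp/before.xml, open the panel, then diff against the live file. Here is a sketch that simulates the before/after state with temp files (the mntent content below is made up), so it runs anywhere:

```shell
# Simulate the before/after state of config.xml around a visit to the ZFS panel.
# The real file is /etc/openmediavault/config.xml; the entry below is made up.
before=$(mktemp); after=$(mktemp)
printf '<mntent>\n  <fsname>StoragePool</fsname>\n</mntent>\n<other/>\n' > "$before"
printf '<other/>\n' > "$after"   # pool entry gone after opening the panel
# Lines starting with '-' are what got removed:
removed=$(diff -u "$before" "$after" | grep '^-<')
echo "$removed"
rm -f "$before" "$after"
```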

    The ZFS pool was created 2-3 weeks ago with the plugin.
    It has been running nicely without a problem ever since, and continues to... but something keeps removing the <mntent> section for the pool from config.xml. If I re-add the section to the OMV config file, all the problems are solved.
    So I doubt that ZFS itself is at fault; it's something in the plugin itself or in OMV.

    Same for me
    Disk usage tab for my ZFS pool has vanished from GUI !

    and here too
    Finding the correct mntent UUID for a filesystem not in config.xml

    There is definitely a problem with the ZFS plugin or something related to it, and it's not the kernel version...

    Check if you still have the <mntent> section for your ZFS pool in config.xml. You can copy the section back, but it will be deleted again at some point... I don't know what removes it yet.
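    For example (just a sketch — on the real system, run the grep against /etc/openmediavault/config.xml and replace StoragePool with your own pool name; the demo file, field names and UUID below are only my guess at the layout):

```shell
# Demo on a miniature config; the real file is /etc/openmediavault/config.xml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<mntent>
  <uuid>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</uuid>
  <fsname>StoragePool</fsname>
  <dir>/StoragePool</dir>
  <type>zfs</type>
</mntent>
EOF
# Check whether the pool still has its <mntent> entry:
if grep -q '<fsname>StoragePool</fsname>' "$cfg"; then
  status="mntent entry present"
else
  status="mntent entry MISSING"
fi
echo "$status"
rm -f "$cfg"
```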


    I have just checked my / dir and saw a strange directory named with some kind of UUID:

    root@home-server:~# cd /
    root@home-server:/# ls -al
    total 129
    drwxr-xr-x 3 root root 4096 mars 26 17:40 ~
    drwxr-xr-x 27 root root 4096 avril 18 15:53 .
    drwxr-xr-x 27 root root 4096 avril 18 15:53 ..
    drwxrwxrwx 2 root root 4096 avril 17 14:51 96d9c49e-f4af-48d9-a8bd-40992ea4c63b

    I checked the config.xml file, and it's a shared folder UUID!

    The shared folder is only used in 2 places: the OMV backup plugin and SMB.


    Any idea why this directory is created? If I delete it, it gets recreated at some point.

    From what I can see, the UUID in config.xml seems to have no link to any disk UUID at all:


    /dev/sda5: LABEL="Download" UUID="2cc50fbd-d0ca-4630-8666-abcd673cb0ac" TYPE="ext4" PARTUUID="6e22f66d-05"



    I think it is just a random UUID, but it needs to be referenced by each <sharedfolder>'s <mntentref> to link the two.
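    Roughly, the linkage in config.xml looks like this: the <mntent> gets a random UUID, and each shared folder points back at it through <mntentref>. (The mntent UUID below is made up, the shared folder UUID is the one I found in my / dir, and the field names are just what I see in my own file, so double-check against yours.)

```xml
<mntent>
  <!-- random UUID generated by OMV, not the filesystem UUID from blkid -->
  <uuid>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</uuid>
  <dir>/srv/dev-disk-by-id-ata-SAMSUNG_MZ7LN512HCHP-000L1_S1ZKNXAG526958-part5</dir>
  <type>ext4</type>
</mntent>

<sharedfolder>
  <uuid>96d9c49e-f4af-48d9-a8bd-40992ea4c63b</uuid>
  <name>Sauvegardes</name>
  <!-- links the share to the mount entry above -->
  <mntentref>aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee</mntentref>
</sharedfolder>
```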

    That's standard hardware with an Intel Core processor and LAN; there should be no problem with it. I don't know what the audio chipset is (why audio in a NAS!??? lol), but I assume it will work too, as Linux has good driver support nowadays.
    Go for it, there should be no problem.


    I think that, for one reason or another, I have the same problem here, as my pool keeps getting deleted from the config.xml file.

    Disk usage tab for my ZFS pool has vanished from GUI !

    Unfortunately, something is still modifying my config.xml file... and deleting the ZFS pool from it!

    Any idea what could be doing this?


    Hi SubZero,

    I managed to get the IO stats for the disks of the ZFS pool by using the DiskMgmt/enumerateDevices function in place of FileSystemMgmt/enumerateMountedFilesystems. That works. I get the raw stats rather than the ZFS iostat results, but that's enough for me ATM. The sad thing is that I must be careful with each OMV update, as I "updated" 2 or 3 js scripts that would be overwritten... :(

    Volker told me that this function also enumerates USB disks (as they are plugged in and out), but as I'm not using any on the NAS (it only has USB 2 ports, and gigabit networking to a PC with USB 3 is more efficient), that is a good solution for me.

    I also added a temperature graph for the CPU. I'm searching for a way to get my 2 cores on the same tab, as I have discovered that only the cpu-0 core is displayed, even if you have the data for all your CPU cores. I have almost found it, but it needs some GUI tweaks (I'm not good at this :( ).

    I would like to participate in the plugin dev, but from what I have seen over the last 2 days, my level with the OMV "framework" is poor. I'm a sysadmin, and used to be a dev 15 years ago... I helped dev some of the v0.2/0.3 plugins (greyhole, etc...), but it's hard to understand the framework details.

    From my point of view, and I'm really new to ZFS (3 weeks!), it's a really good file system. I was using mdadm/ext4 before, but the checksum/snapshot features are really a must-have. I managed to find some "silent corruptions" in video files with it.
    More and more NAS OSes are using ZFS for data pools (e.g. MyNAS, etc...), as you get encryption, deduplication, compression, and multiple copies of files natively...
    This should be the default FS for data pools in OMV, in my opinion. Ext4+mdadm+lvm works nicely, but you need to manage 3 pieces of software to do the same thing. So let's try to keep this plugin alive, as it is the future of FS storage ;)

    Here is the function -…c/
    It returns this array for each filesystem -…c/

    You could change this function on a test machine. As long as it returns an array with the proper structure, it should keep working.

    I have tried to replace the former with the latter, as it seems that only the devicename result is used. …ined/rpc/

    Here are both results to compare:
    # omv-rpc 'FileSystemMgmt' 'enumerateMountedFilesystems' '{}'

    [{"devicefile":"\/dev\/sda1","parentdevicefile":"\/dev\/sda","uuid":"052263f1-d951-4a32-bdbf-052649fe60da","label":"System","type":"ext4","blocks":"11563616","mountpoint":"\/","used":"4.80 GiB","available":"6546513920","size":"11841142784","percentage":45,"description":"System (6.09 GiB available)","propposixacl":true,"propquota":true,"propresize":true,"propfstab":true,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":["\/dev\/sda1"]},
    {"devicefile":"\/dev\/sda5","parentdevicefile":"\/dev\/sda","uuid":"2cc50fbd-d0ca-4630-8666-abcd673cb0ac","label":"Download","type":"ext4","blocks":"479576752","mountpoint":"\/srv\/dev-disk-by-id-ata-SAMSUNG_MZ7LN512HCHP-000L1_S1ZKNXAG526958-part5","used":"86.04 GiB","available":"398678654976","size":"491086594048","percentage":19,"description":"Download (371.29 GiB available)","propposixacl":true,"propquota":true,"propresize":true,"propfstab":true,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":["\/dev\/sda5"]},
    {"devicefile":"StoragePool","parentdevicefile":null,"uuid":null,"label":"StoragePool","type":"zfs","blocks":8750995865.6,"mountpoint":"\/StoragePool","used":"4.56 TiB","available":3947246743715.8,"size":8961019766374.4,"percentage":55,"description":"StoragePool (3.58 TiB available)","propposixacl":true,"propquota":false,"propresize":false,"propfstab":false,"propcompress":false,"propautodefrag":false,"hasmultipledevices":false,"devicefiles":"StoragePool"}]

    # omv-rpc 'DiskMgmt' 'enumerateDevices' '{}'

    {"devicename":"sda","devicefile":"\/dev\/sda","devicelinks":["\/dev\/disk\/by-id\/ata-SAMSUNG_MZ7LN512HCHP-000L1_S1ZKNXAG526958","\/dev\/disk\/by-id\/wwn-0x5002538d00000000","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-1"],"model":"SAMSUNG MZ7LN512","size":"512110190592","description":"SAMSUNG MZ7LN512 [\/dev\/sda, 476.93 GiB]","vendor":"","serialnumber":"S1ZKNXAG526958","israid":false,"isroot":true},
    {"devicename":"sdb","devicefile":"\/dev\/sdb","devicelinks":["\/dev\/disk\/by-id\/wwn-0x50014ee00387feef","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC301141384","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-2"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdb, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC301141384","israid":false,"isroot":false},
    {"devicename":"sdc","devicefile":"\/dev\/sdc","devicelinks":["\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-3","\/dev\/disk\/by-id\/wwn-0x50014ee0ae32d2f4","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300977122"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdc, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300977122","israid":false,"isroot":false},
    {"devicename":"sde","devicefile":"\/dev\/sde","devicelinks":["\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-4","\/dev\/disk\/by-id\/wwn-0x50014ee0ae32d2b9","\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300905426"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sde, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300905426","israid":false,"isroot":false},
    {"devicename":"sdf","devicefile":"\/dev\/sdf","devicelinks":["\/dev\/disk\/by-id\/ata-WDC_WD20EZRX-00DC0B0_WD-WMC300979172","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-5","\/dev\/disk\/by-id\/wwn-0x50014ee00387ffcd"],"model":"WDC WD20EZRX-00D","size":"2000398934016","description":"WDC WD20EZRX-00D [\/dev\/sdf, 1.81 TiB]","vendor":"","serialnumber":"WD-WMC300979172","israid":false,"isroot":false},
    {"devicename":"sdg","devicefile":"\/dev\/sdg","devicelinks":["\/dev\/disk\/by-id\/wwn-0x5000c5008aa0cf4b","\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_W4Z2AWC6","\/dev\/disk\/by-path\/pci-0000:00:1f.2-ata-6"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdg, 1.81 TiB]","vendor":"","serialnumber":"W4Z2AWC6","israid":false,"isroot":false},
    {"devicename":"sdh","devicefile":"\/dev\/sdh","devicelinks":["\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_Z4Z69EHC","\/dev\/disk\/by-id\/wwn-0x5000c50092743b29","\/dev\/disk\/by-path\/pci-0000:01:00.0-ata-1"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdh, 1.81 TiB]","vendor":"","serialnumber":"Z4Z69EHC","israid":false,"isroot":false},
    {"devicename":"sdi","devicefile":"\/dev\/sdi","devicelinks":["\/dev\/disk\/by-id\/ata-ST2000DM001-1ER164_Z560B0EA","\/dev\/disk\/by-path\/pci-0000:01:00.0-ata-2","\/dev\/disk\/by-id\/wwn-0x5000c50092742180"],"model":"ST2000DM001-1ER1","size":"2000398934016","description":"ST2000DM001-1ER1 [\/dev\/sdi, 1.81 TiB]","vendor":"","serialnumber":"Z560B0EA","israid":false,"isroot":false}]

    Unfortunately, that doesn't work.

    Any idea why?

    EDIT: Found it. The first one returns 'parentdevicefile' and the second one 'devicefile'.
    Yeah baby, it works! But it will be scratched by the next OMV update :(
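    The mismatch is easy to see by pulling the disk path out of one sample record from each call (records trimmed from the outputs above; plain grep here instead of a JSON tool):

```shell
# One trimmed record from each RPC result:
fs_rec='{"devicefile":"/dev/sda1","parentdevicefile":"/dev/sda","type":"ext4"}'
disk_rec='{"devicename":"sda","devicefile":"/dev/sda","model":"SAMSUNG MZ7LN512"}'

# enumerateMountedFilesystems exposes the parent disk as 'parentdevicefile'...
fs_disk=$(echo "$fs_rec" | grep -o '"parentdevicefile":"[^"]*"')
# ...while enumerateDevices exposes the same disk as 'devicefile'.
dev_disk=$(echo "$disk_rec" | grep -o '"devicefile":"[^"]*"')
echo "$fs_disk"
echo "$dev_disk"
```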


    On one hand: this morning, I copied the saved config file back one more time and rebooted the server right after.
    The problem seems to be gone!

    I still don't know what kept modifying my config file... but it "seems" to be gone.

    On the other hand: I was asking myself why DiskIO.js in the diskstats plugin uses the "mounted" disks to generate the UI for the I/O graph.
    A disk doesn't have to be mounted in/by OMV for you to check its I/O and throughput (e.g. ZFS disks are not mounted by OMV, but it's nice to have some read/write info on them!).
    Maybe the diskstats plugin needs a little improvement...


    OK !

    For whatever reason, the ZFS pool mount entry has disappeared from the OMV config file!!!
    I don't know why or what happened.

    Luckily, thanks to the OMV backup plugin, I had a backup of the file from last Friday evening, and managed to copy the right section of the config back into the file; the graph then came back.

    EDIT: Gone again several minutes after the edit... something is writing to the config.xml file and removing the ZFS pool mount point...

    Sometimes, computers make me mad :(

    Maybe the best way is to "add" a mount point in the OMV config.xml file... as it is a mount point!

    I don't know if there are side effects to having a mount point in config.xml without the corresponding line in /etc/fstab?

    EDIT: I have just set up a fresh VM with OMV + the ZFS plugin, and the tab entry is there!!!


    Maybe you could help me.

    How can I check what is returned by this call, and what format is it in?
    And how can I replace this code with static data to do some tests?

    DiskIO.js near line 90:

    rpcData: {
        service: "FileSystemMgmt",
        method: "enumerateMountedFilesystems",
        params: {
            includeroot: true
        }
    }
    Thanks for your help