Posts by nicjo814

    Too bad - Maybe in another release :)


    Just wanted to report that the plugin works really well (I'm running OMV v1.9). AFAIK this is really the only "GUI" or "Web Tool" available for ZFS on Linux - pretty cool! I haven't explored a whole lot other than importing an existing zpool I had on an Ubuntu 14.04.2 Server and creating a new zpool with 3 mirrored vdevs.
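
    For anyone curious, creating a pool with 3 mirrored vdevs boils down to something like this (pool and device names here are placeholders; use your own /dev/disk/by-id paths):

    $ sudo zpool create tank mirror disk1 disk2 mirror disk3 disk4 mirror disk5 disk6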


    Are you able to do snapshots/rollbacks through the plugin, or is that still all CLI as well?


    You can do snapshots and clones in the GUI. Rollback I believe is not there.
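
    Rollback would still be CLI for now. A minimal sketch (the dataset and snapshot names are just examples):

    $ sudo zfs rollback tank/data@before-upgrade

    Note that zfs rollback only goes back to the most recent snapshot unless you add -r, which destroys any snapshots newer than the target.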

    So I have been following this guide and I mistakenly added the drives by path. I'm seeing the disappearing-drives issue after a reboot. When I run zpool export poolname and then zpool import poolname, I do get my drives back, but a status then shows them as still being referenced by path. Is there a way to "alias" /dev/sdx with a certain uuid every time, or should I back up the data and then recreate the pool?


    Thanks in advance!


    Have you tested with the proper path as outlined here?


    http://zfsonlinux.org/faq.html…angeNamesOnAnExistingPool

    Now I'm just thinking about my problem. I'm guessing that the create share page tries to list all the filesystems using something like "zfs list -t snapshot", which would list all the ZFS filesystems plus the snapshots.


    If that is the case, is it possible for me to temporarily change it so that it only lists the filesystems and not the snapshots? If I can do this, it will get me going with creating the share. I only need to create the share once and won't need to touch it again for quite a while.


    If it is possible to do the above, where would I find the bit of code / line to change?


    As I mentioned in my e-mail, I don't think that the number of snapshots is an issue, but the number of datasets probably could be.
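
    For reference, the difference is easy to see on the command line (the pool name 'tank' is just an example):

    $ zfs list -t filesystem -r tank    # datasets only
    $ zfs list -t all -r tank           # datasets plus snapshots and volumes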

    See the attached video. You can see the name scroll across the field. Each time (tried 5 times), it starts an fpm thread that chews up a lot of CPU until a timeout or even a segfault happens. I assume it happens because each time the name moves, it is calling a PHP function. This is a problem with the plugin and OMV 2.x/ExtJS 5.1.


    Also, if I remember correctly, the filesystem backend is a little slow. So, the more ZFS nodes (or whatever) you have, the longer it takes, and it can time out if there are enough of them.


    This looks a bit odd :) Someone with ExtJS experience might want to look at it...

    Now I'm just thinking about my problem. I'm guessing that the create share page tries to list all the filesystems using something like "zfs list -t snapshot", which would list all the ZFS filesystems plus the snapshots.


    If that is the case, is it possible for me to temporarily change it so that it only lists the filesystems and not the snapshots? If I can do this, it will get me going with creating the share. I only need to create the share once and won't need to touch it again for quite a while.


    If it is possible to do the above, where would I find the bit of code / line to change?


    I'm working on an answer, but it will be quite long so it will take some time to compose :)


    Update: I've sent an e-mail with some thoughts on the issue.

    I don't have a solution for this, unfortunately. I would guess that the timeout is specified somewhere in the "core" code of OMV. I'll have to set up a similar scenario to the one you describe in a virtualized environment to see if I can reproduce the problem. However, I won't be able to do this in the next couple of days.

    Really nice to hear that the plugin is working so well for you. Regarding support, I still don't have that much free time due to the current family situation, but that might change sometime in the future :) Until then I would definitely be available to help out anyone who wants to continue working on the plugin if there are any questions on the current implementation. The code is available on GitHub for anyone interested :)
    One thing that I think should be investigated is how to bump the dependencies on the ZFS packages, i.e. how smooth the update to the new 0.6.4.1 release is. I haven't done this myself yet, but will probably try it out quite soon...
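
    A rough sketch of what the update itself could look like, assuming the zfsonlinux.org Debian repository is already configured and the package names below match your setup (verify them first, they are an assumption):

    $ sudo apt-get update
    $ sudo apt-get install --only-upgrade zfs-dkms zfsutils
    $ modinfo zfs | grep -i version    # check which module version is now installed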

    I think I read somewhere that you could accomplish this by exporting the pool and then re-importing it with the proper flags. Might be worth researching?


    Edit: This is from the ZFS on Linux FAQ:
    Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:


    $ sudo zpool export tank
    $ sudo zpool import -d /dev/disk/by-vdev tank


    Maybe test in a VM first :)
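
    For the by-id naming asked about earlier, the same approach should work (sketch only, with 'tank' as an example pool name):

    $ sudo zpool export tank
    $ sudo zpool import -d /dev/disk/by-id tank
    $ zpool status tank    # should now list the by-id names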

    This MUST not happen (it is as if the pool was created using /dev/sdx names).


    I'm not sure I follow you. There is a setting for which type of "alias" to use when you create the pool. The default is "By path", which is the recommendation according to the ZFS on Linux FAQ. Did you change this value when you created your pool?


    PS: Another strange problem detected: I always mount my previous pools on /mnt (I write this in the appropriate field when creating the pool), but I notice that no folder is created in /mnt when I do this.


    My newly recreated pools were created with the default (not using /mnt in the field) and created the appropriate folders under "/" (Tpool & Rpool), as expected. Please, can someone test whether a folder with the name of the pool is really created in /mnt if you create a new pool and use /mnt in the "mountpoint" field?


    If you specify a "mountpoint" when creating the pool, this directory will be used instead of the pool name. In your case you specified /mnt (which already existed in your system) and thus no folder was created. If you want the pool to be mounted in "/mnt/Rpool" you have to specify the full path to this directory when creating the pool. This is how ZFS on Linux handles the "mountpoint" option. The plugin does not do much magic itself, but is merely a simple frontend to ZFS on Linux. If you find some strange behaviour I think that this would be a good place to look for information.
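
    To illustrate, a sketch of creating a pool with an explicit mountpoint (the disk names are placeholders):

    $ sudo zpool create -m /mnt/Rpool Rpool mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2
    $ zfs get mountpoint Rpool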

    Ok, I'll try to answer some questions in the same post :)


    A small feature request: After creating a pool the panel should automatically refresh so that the new pool is displayed in the panel.


    I'll take a look at this. I actually thought it behaved like you want it to...


    Another bug:
    1) Create a RaidZ1 pool
    2) Press details for the newly created pool


    Could you point out the issue here?


    Another bug: the ZFS plugin and the Filesystem page do not show the same info (free and size):


    This is most likely because I haven't implemented a proper free/size method in the ZFS plugin Filesystem backend. The "default" method is probably based on "df" which doesn't show ZFS values correctly. I'll see what I can do regarding this.
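
    To see the discrepancy yourself, compare the two directly (the pool name 'tank' is just an example):

    $ df -h /tank     # what a df-based backend reports
    $ zfs list tank   # what ZFS itself reports (USED/AVAIL/REFER)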


    This is related to the bug I reported earlier today - caching problem.


    Not sure if this is really the same issue. See above.



    I think this is another issue from deleting pools manually. There is probably some stuff left in your "/etc/monit/conf.d/openmediavault-filesystem.conf" file. Make a backup of this file and delete all duplicate entries, then try mounting the filesystem again.
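
    A cautious way to do that from the shell (sketch only; whether monit needs a reload afterwards depends on your setup, but it usually does not hurt):

    $ sudo cp /etc/monit/conf.d/openmediavault-filesystem.conf /root/openmediavault-filesystem.conf.bak
    $ sudo nano /etc/monit/conf.d/openmediavault-filesystem.conf    # remove the duplicate entries
    $ sudo monit reload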

    Michael, you are most likely correct in that there is a missing include... The OMV.WorkspaceManager is probably not included properly in overview.js. I'll have a look at it later today.


    Edit: I've sent you an e-mail with an updated version of the plugin.