The 2.0 version is progressing in my head only so far... Have some ideas I want to test when there is time.
Posts by nicjo814
-
-
The plugin is not compatible with the 2.0.x branch of OMV. Not sure if/when I'll be able to look at it, unfortunately.
-
Too bad - Maybe in another release
Just wanted to report the plugin works really well (I'm running OMV v1.9) - AFAIK this is really the only "GUI" or "Web Tool" available for ZFS on Linux - pretty cool! I haven't explored a whole lot other than importing an existing zpool I had on an Ubuntu 14.04.2 Server, and creating a new zpool with 3 mirrored vdevs.
Are you able to do snapshot/rollbacks through the plugin or is that all cli still as well?
You can do snapshots and clones in the GUI. Rollback I believe is not there.
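For anyone who needs rollback in the meantime, it is a one-liner on the CLI. A sketch only (run as root; the pool/dataset name here is an example, substitute your own):

```shell
# Example dataset name; substitute your own
DATASET=tank/data

# Take a snapshot (the plugin GUI can also do this)
zfs snapshot "$DATASET@before-change"

# Roll the dataset back to that snapshot - this is the part
# the plugin GUI does not expose
zfs rollback "$DATASET@before-change"
```

Note that rollback discards any changes made after the snapshot, so double-check the snapshot name first.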
-
Great plugin - Thanks for this!
Just a question - is there a way to add cache/log through the "gui" - I have no issues doing it via cli but just thought I'd ask - I couldn't find a way while I was poking around this morning.
Unfortunately that feature is missing from the plugin.
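For reference until the feature lands in the plugin, log and cache devices can be added from the CLI. A sketch only (run as root; the pool name and device paths are examples, use your own /dev/disk/by-id names):

```shell
# Example pool name; substitute your own
POOL=tank

# Add a mirrored log (SLOG) device pair to the pool
zpool add "$POOL" log mirror \
    /dev/disk/by-id/ata-SSD1-part1 /dev/disk/by-id/ata-SSD2-part1

# Add a cache (L2ARC) device - cache devices cannot be mirrored
zpool add "$POOL" cache /dev/disk/by-id/ata-SSD3-part1
```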
-
So I have been following this guide and mistakenly I added the drives by path. I'm seeing the disappearing drives issue after a reboot. When I run zpool export poolname and then zpool import poolname, I do get my drives back, but then a status shows them as still being referenced by path. Is there a way to "alias" /dev/sdx with a certain uuid every time, or should I back up the data and then recreate the pool?
Thanks in advance!
Have you tested with the proper path as outlined here?
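If the pool currently shows /dev/sdx names, you should not need to recreate it; exporting and re-importing with -d pointed at a persistent-name directory is usually enough. A sketch (run as root; "poolname" is a placeholder):

```shell
# Placeholder pool name; substitute your own
POOL=poolname

# Export the pool, then re-import it using persistent by-id names
zpool export "$POOL"
zpool import -d /dev/disk/by-id "$POOL"

# Confirm the vdevs are now referenced by id rather than /dev/sdx
zpool status "$POOL"
```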
-
Another thing is the ZFS tab does not list my zpool! (It's just empty, as if no zpools are available.)
Could you post a screenshot of this?
-
Now I'm just thinking about my problem. I'm guessing that the create share page tries to list all the filesystems using something like "zfs list -t snapshot", which would list all the ZFS filesystems plus the snapshots.
If that is the case, is it possible for me to temporarily change it so that it only lists the filesystems and not the snapshots? If I can do this it will get me going creating the share. I only need to create the share once, and then not touch it again for quite a while.
If it is possible to do the above, where would I find the bit of code/line to change?
As I mentioned in my e-mail I don't think that the number of Snapshots is an issue, but the number of Datasets could probably be.
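For what it's worth, a plain "zfs list" excludes snapshots by default; restricting the type just makes this explicit. A sketch of what such a listing could look like from the CLI:

```shell
# List only filesystems - snapshots are not included
zfs list -t filesystem

# Include volumes as well, which is roughly the set a share
# creation page would need to enumerate
zfs list -t filesystem,volume
```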
-
See attached video. You can see the name scroll across the field. Each time (tried 5 times), it starts an fpm thread that chews up a lot of CPU until a timeout or even a segfault happens. I assume it happens because each time the name moves, it is calling a PHP function. This is a problem with the plugin and OMV 2.x/ExtJS 5.1.
Also, if I remember correctly, the filesystem backend is a little slow. So, the more ZFS nodes (or whatever) you have, the longer it takes, and it can time out if there are enough of them.
This looks a bit odd. Someone with ExtJS experience might want to look at it...
-
Now I'm just thinking about my problem. I'm guessing that the create share page tries to list all the filesystems using something like "zfs list -t snapshot", which would list all the ZFS filesystems plus the snapshots.
If that is the case, is it possible for me to temporarily change it so that it only lists the filesystems and not the snapshots? If I can do this it will get me going creating the share. I only need to create the share once, and then not touch it again for quite a while.
If it is possible to do the above, where would I find the bit of code/line to change?
I'm working on an answer, but it will be quite long so it will take some time to compose.
Update: I've sent an e-mail with some thoughts on the issue.
-
I don't have a solution for this, unfortunately. I would guess that the timeout is specified somewhere in the "core" code of OMV. I'll have to set up a similar scenario to the one you describe in a virtualized environment to see if I can reproduce the problem. However, I won't be able to do this for the next couple of days.
-
It should update automatically unless the repo location changed. The dependencies in the control file don't specify a version.
That's true. I thought we had specified the 0.6.3 version in the control file, but that was wrong. Guess I just have to upgrade my system then to test how the ZFS package update behaves...
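If a version pin were ever wanted, the Depends field in debian/control would need explicit versioned dependencies. A sketch only - the exact package names (zfsutils, zfs-dkms) are assumptions based on the ZFS on Linux Debian packaging, not taken from the plugin's actual control file:

```
Depends: zfsutils (>= 0.6.3), zfs-dkms (>= 0.6.3), ${misc:Depends}
```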
-
Really nice to hear that the plugin is working so well for you. Regarding support, I still don't have that much free time due to the current family situation, but that might change sometime in the future. Until then, I would definitely be available to help anyone who wants to continue working on the plugin, if there are any questions about the current implementation. The code is available on GitHub for anyone interested.
One thing that I think should be investigated is how to bump the dependencies on the ZFS packages (how smooth it is to update to the new 0.6.4.1 release). I haven't done this myself yet, but will probably try it out quite soon...
-
Really good info you have provided here! I think this will help out everyone who is looking at implementing ZFS on their OMV system.
-
Thanks for the info! I will most definitely have a look at this on my NAS.
-
But I think that once you select one at creation, this selection is used and can't be changed.
Please see the post just before yours regarding how to change this for created pools.
-
I think I read somewhere that you could accomplish this by exporting the pool and then re-importing it with the proper flags. Might be worth researching?
Edit: This is from the ZFS on Linux FAQ:
Changing the /dev/ names on an existing pool can be done by simply exporting the pool and re-importing it with the -d option to specify which new names should be used. For example, to use the custom names in /dev/disk/by-vdev:

$ sudo zpool export tank
$ sudo zpool import -d /dev/disk/by-vdev tank

Maybe test in a VM first.
-
This MUST not happen (it is as if the pool was created using /dev/sdx names).
I'm not sure I follow you. There is a setting for which type of "alias" to use when you create the pool. The default is "By path", which is the recommendation according to the ZFS on Linux FAQ. Did you change this value when you created your pool?
PS: Another strange problem detected: I always mount my previous pools on /mnt (I enter this in the appropriate field when creating the pool), but I notice that no folder is created in /mnt when doing this.
My newly recreated pools are created with the default (i.e. not using /mnt in the field) and create the appropriate folder on "/" (Tpool & Rpool) as expected. Please, can someone test that if you create a new pool and use /mnt in the "mountpoint" field, a folder with the name of the pool is really created in /mnt?
If you specify a "mountpoint" when creating the pool, this directory will be used instead of the pool name. In your case you specified /mnt (which already existed on your system), and thus no folder was created. If you want the pool to be mounted at "/mnt/Rpool", you have to specify the full path to this directory when creating the pool. This is how ZFS on Linux handles the "mountpoint" option. The plugin does not do much magic itself; it is merely a simple frontend to ZFS on Linux. If you find some strange behaviour, I think that the ZFS on Linux documentation would be a good place to look for information.
-
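To illustrate the mountpoint behaviour described above (run as root; pool and device names are examples):

```shell
# Example pool name; substitute your own
POOL=Rpool

# Without -m, the pool is mounted at /<poolname>, i.e. /Rpool here
zpool create "$POOL" mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Alternatively, with -m the given directory itself becomes the
# mountpoint, so specify the full path if you want /mnt/Rpool
# rather than /mnt
zpool create -m "/mnt/$POOL" "$POOL" mirror \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
```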
Ok, I'll try to answer some questions in the same post.
A small feature request: After creating a pool the panel should automatically refresh so that the new pool is displayed in the panel.
I'll take a look at this. I actually thought it behaved like you want it to...
Another bug:
1) Create a RaidZ1 pool
2) Press details for the newly created pool
Could you point out the issue here?
Another bug: the ZFS plugin and Filesystem do not show the same info (free and size):
This is most likely because I haven't implemented a proper free/size method in the ZFS plugin filesystem backend. The "default" method is probably based on "df", which doesn't show ZFS values correctly. I'll see what I can do regarding this.
This is related to the bug I reported earlier today - a caching problem.
Not sure if this is really the same issue. See above.
Ok, I'll continue testing: I added a volume to my TPool and formatted this volume as ext4 - no problem at this point - but when I try to mount it an error appears. The backup volume is really mounted (I tested it in the backup plugin), but I have a yellow warning:
"The configuration has been changed. You must apply the changes in order for them to take effect." - and if I click on the "Apply" button an error appears:

Failed to execute command 'export LANG=C; monit restart collectd 2>&1': /etc/monit/conf.d/openmediavault-filesystem.conf:14: Error: service name conflict, fs_mnt already defined '"/mnt"'
Error #4000: exception 'OMVException' with message 'Failed to execute command 'export LANG=C; monit restart collectd 2>&1': /etc/monit/conf.d/openmediavault-filesystem.conf:14: Error: service name conflict, fs_mnt already defined '"/mnt"'' in /usr/share/php/openmediavault/monit.inc:113
Stack trace:
#0 /usr/share/php/openmediavault/monit.inc(70): OMVMonit->action('restart', 'collectd', false)
#1 /usr/share/openmediavault/engined/module/collectd.inc(53): OMVMonit->restart('collectd')
#2 /usr/share/openmediavault/engined/rpc/config.inc(206): OMVModuleCollectd->startService()
#3 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
#4 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
#5 /usr/share/php/openmediavault/rpcservice.inc(158): OMVRpcServiceAbstract->callMethod('applyChanges', Array, Array)
#6 /usr/share/openmediavault/engined/rpc/config.inc(224): OMVRpcServiceAbstract->callMethodBg('applyChanges', Array, Array)
#7 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
#8 /usr/share/php/openmediavault/rpcservice.inc(125): call_user_func_array(Array, Array)
#9 /usr/share/php/openmediavault/rpc.inc(79): OMVRpcServiceAbstract->callMethod('applyChangesBg', Array, Array)
#10 /usr/sbin/omv-engined(500): OMVRpc::exec('Config', 'applyChangesBg', Array, Array, 1)
#11 {main}
I used /mnt to mount all my pools as you can see here: [HOWTO] Instal ZFS-Plugin & use ZFS on OMV
I think this is another issue from deleting pools manually. There is probably some stuff left in your "/etc/monit/conf.d/openmediavault-filesystem.conf" file. Make a backup of this file and delete all duplicate entries, then retry mounting the filesystem.
-
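A cautious way to do the cleanup described above (the file path comes from the error message; the grep is just to locate the duplicates):

```shell
# Path taken from the monit error message above
CONF=/etc/monit/conf.d/openmediavault-filesystem.conf

# Back up the file before touching it
cp "$CONF" /root/openmediavault-filesystem.conf.bak

# Locate the duplicate fs_mnt entries named in the error
grep -n fs_mnt "$CONF"

# After removing the duplicates in an editor, reload monit
monit reload
```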
Perfect! I'll push the changes and ask Aaron to update the package in the repo. Thanks for the help!
-
Michael, you are most likely correct in that there is a missing include... The OMV.WorkspaceManager is probably not included properly in overview.js. I'll have a look at it later today.
Edit: I've sent you an e-mail with an updated version of the plugin.