With the ZoL project being sponsored and run by Lawrence Livermore National Laboratory (taxpayer funded), that's really powerful backing. It's no wonder so much is going on in ZoL. This hiccup with kernel 5 should be resolved in no time.
WARNING: Do NOT upgrade to Kernel 4.16 if you use ZFS
-
- OMV 4.x
- Update
- vln0x
-
-
Hi,
two days ago I updated to latest kernel 4.19:
```
root@omv4:~# uname -a
Linux omv4 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64 GNU/Linux
root@omv4:~# dpkg --list | grep zfs
ii  libzfs2linux        0.7.12-1~bpo9+1  amd64  OpenZFS filesystem library for Linux
ii  openmediavault-zfs  4.0.4            amd64  OpenMediaVault plugin for ZFS
ii  zfs-dkms            0.7.12-1~bpo9+1  all    OpenZFS filesystem kernel modules for Linux
ii  zfs-zed             0.7.12-1~bpo9+1  amd64  OpenZFS Event Daemon
ii  zfsutils-linux      0.7.12-1~bpo9+1  amd64  command-line tools to manage OpenZFS filesystems
```
Everything seems to work as expected with zfs 0.7.12.

Regards, Hoppel
-
two days ago I updated to latest kernel 4.19:
Did you do a pool upgrade?
-
-
It was only the kernel update to 4.19, zfs 0.7.12 was already installed.
I've just executed "zpool status" to check whether there is an upgrade available for my pool. But there is no zpool upgrade available.
Why do you think there should be an upgrade?
Regards Hoppel
-
Not a ZFS upgrade - a pool upgrade. To upgrade from OMV3, I did a clean OMV 4.0 build, upgraded to kernel 4.19 and then installed the ZFS plugin.
Apparently, the newest plugin supports a newer version of ZOL.
zpool status returns the following:
```
  pool: ZFS1
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(5) for details.
```
etc., etc.
There's no hurry, it will work fine like this indefinitely, so I don't plan to upgrade the pool right away.
I was just wondering if you upgraded your pool along the way.
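For reference, the feature upgrade being discussed can be sketched like this. The pool name ZFS1 comes from this thread; note that the upgrade is one-way, so older ZoL versions may no longer be able to import the pool afterwards. The sketch guards itself so it is harmless on a machine without ZFS or without this pool:

```shell
# Hedged sketch: enable all supported pool features, as 'zpool status' suggests.
if ! command -v zpool >/dev/null; then
  status="zpool not installed; commands shown for reference only"
elif ! zpool list ZFS1 >/dev/null 2>&1; then
  status="pool ZFS1 not present on this machine"
else
  zpool status ZFS1    # shows the "Some supported features are not enabled" notice
  zpool upgrade -v     # list the features an upgrade would enable
  zpool upgrade ZFS1   # enable all features on this pool (one-way!)
  status="pool ZFS1 upgraded"
fi
echo "$status"
```

Keeping the pool un-upgraded, as discussed above, preserves the ability to import it with an older ZoL release.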
-
Yes, we are talking about the same thing.
In January 2018 I recognized the upgrade and documented it here:
Is ZFS supported in Kernel 4.13-4.15?
In May 2018 I recognized that I still did not upgrade my pool:
Is ZFS supported in Kernel 4.13-4.15?
and I did the upgrade the same day:
Is ZFS supported in Kernel 4.13-4.15?
It took only 3 seconds and was not a big deal!
Regards Hoppel
-
-
It took only 3 seconds and was not a big deal!
Keeping the pool at the earlier version lets me back out to OMV 3.X if needed. I'm configuring OMV4 on my main server, and I'm waiting for the other shoe to drop. ("UrBackup doesn't work!", or something like that.) That's overly cautious, I know.
-
Do you really think you'll ever go back to OMV3? For me, the only way nowadays is in the other direction, to OMV5.
Regards Hoppel
-
Do you really think you'll ever go back to OMV3?
While there's no going back in any real sense, I would back out until I could find a workaround or patch for something I deem essential. (In any case, that's a very narrow possibility. I suspect all will be fine.)
For me, the only way nowadays is in the other direction, to OMV5.
You may find that you don't care for that direction. -> OMV5
I chimed in -> here, but there were many cautionary comments. -
-
Hm... I am not ready for that step yet. At the moment I still hope for an OMV ZFS plugin and a general solution to the GPL trouble.
What about the Btrfs RAID 5/6 errors? Are they solved in the latest kernel updates?
Quote: "The parity RAID code has multiple serious data-loss bugs in it. It should not be used for anything other than testing purposes."
https://btrfs.wiki.kernel.org/index.php/RAID56
Regards Hoppel
-
Hi,
today I updated to the latest kernel 4.19 in debian backports:
```
root@omv4:~# uname -a
Linux omv4 4.19.0-0.bpo.2-amd64 #1 SMP Debian 4.19.16-1~bpo9+1 (2019-02-07) x86_64 GNU/Linux
```
zfs 0.7.12 was already installed:

```
root@omv4:~# dpkg --list | grep zfs
ii  libzfs2linux        0.7.12-1~bpo9+1  amd64  OpenZFS filesystem library for Linux
ii  openmediavault-zfs  4.0.4            amd64  OpenMediaVault plugin for ZFS
ii  zfs-dkms            0.7.12-1~bpo9+1  all    OpenZFS filesystem kernel modules for Linux
ii  zfs-zed             0.7.12-1~bpo9+1  amd64  OpenZFS Event Daemon
ii  zfsutils-linux      0.7.12-1~bpo9+1  amd64  command-line tools to manage OpenZFS filesystems
```
Everything seems to work as expected so far.

Regards, Hoppel
-
Debian will never stop having ZFS and I can’t see the plugin disappearing either. If the default for OMV 5 is Btrfs, that’s not the end of the world.
-
-
Debian will never stop having ZFS and I can’t see the plugin disappearing either.
I hope so!
-
Debian will never stop having ZFS and I can’t see the plugin disappearing either.
Even if Debian does, I doubt Ubuntu will, which in turn helps Proxmox. And since we can use the Proxmox kernel on Debian, everything is good.
-
I upgraded all packages, which upgraded my kernel as well.
After reboot I noticed that none of my shared folders were accessible. If I try to edit them via the web UI, I get:

```
Error #0: OMV\Config\DatabaseException: Failed to execute XPath query
'//system/fstab/mntent[uuid='2368cce0-fb6e-45e8-a354-121d8c624d15']'. in
/usr/share/php/openmediavault/config/database.inc:78
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(231): OMV\Config\Database->get('conf.system.fil...', '2368cce0-fb6e-4...')
#1 [internal function]: OMVRpcServiceShareMgmt->get(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
#4 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ShareMgmt', 'get', Array, Array, 1)
#5 {main}
```
I'm running kernel 4.19.0-0.bpo.1-amd64 and zfs 0.7.12-1~bpo9+1. My pool is visible in the web UI (status OK) and zfs-share.service is running.
Does anybody have a clue how I can get out of this mess?
-
-
Update: I figured out that the first time I booted the new kernel, the zfs modules were not loaded.
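A hedged sketch of one way to make sure the module comes up on every boot. The assumption here is systemd's modules-load.d mechanism, which stock Debian/OMV4 uses; writing to /etc needs root, so the sketch guards for that:

```shell
# Check whether the zfs module is currently loaded.
if lsmod | grep -qw zfs; then
  module_state="zfs module loaded"
else
  module_state="zfs module not loaded"
fi
echo "$module_state"

# Ask systemd to load the module early on every boot (needs root).
if [ -w /etc/modules-load.d ]; then
  echo zfs > /etc/modules-load.d/zfs.conf
else
  echo "no write access to /etc/modules-load.d; skipping"
fi
```

With the module loaded before docker and the share services start, the containers should not get a chance to write into an unmounted mountpoint.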
However, I had some docker containers which had a volume mapped to a directory on the zfs pool. These were automatically re-created in the pool's mount point.
So somehow, the next time I rebooted, the zfs modules were loaded and the zfs-share service started, but the mount failed because the directory was not empty. I emptied the directory and the pool mounted, but the uuid had changed.
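That "directory not empty" failure mode can be reproduced and cleared with plain shell. This demo runs against a local directory; on the real system the directory would be the pool's mountpoint, and `zfs mount -a` would follow:

```shell
# Stand-in for the pool mountpoint that docker wrote into while the pool was down.
mkdir -p demo-mnt
touch demo-mnt/leftover-docker-volume

# ZFS refuses to mount over a non-empty directory; clearing it is what
# resolved the issue here (move the files elsewhere first if you need them).
find demo-mnt -mindepth 1 -delete

[ -z "$(ls -A demo-mnt)" ] && echo "mountpoint empty, zfs mount -a can succeed"
```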
I manually updated the uuid in /etc/openmediavault/config.xml and now I can edit the shared folders. However, my samba shares for those shared folders still fail.
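The config.xml edit can also be scripted. A minimal sketch: the stale uuid is the one from the XPath error above, `NEW-UUID` is a placeholder for the pool's new uuid, and the demo runs against a local sample file rather than the live /etc/openmediavault/config.xml (back that one up before touching it):

```shell
old='2368cce0-fb6e-45e8-a354-121d8c624d15'   # stale mntent uuid from the XPath error
new='NEW-UUID'                               # placeholder: the pool's new uuid

# Sample file standing in for /etc/openmediavault/config.xml.
printf '<mntent><uuid>%s</uuid></mntent>\n' "$old" > config-sample.xml

# Swap every occurrence of the stale uuid for the new one.
sed -i "s/$old/$new/g" config-sample.xml
grep "$new" config-sample.xml
```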
Does somebody know how I can restore the samba sharing?
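One thing that may help here (an assumption on my part, not something confirmed in this thread): OMV 4.x generates service configs from config.xml with omv-mkconf, so after fixing the uuid the samba config can be regenerated and the services restarted. The sketch guards itself on machines without OMV:

```shell
# Hedged sketch: rebuild /etc/samba/smb.conf from OMV's config.xml.
if command -v omv-mkconf >/dev/null; then
  omv-mkconf samba                # regenerate smb.conf from config.xml
  systemctl restart smbd nmbd     # pick up the regenerated config
  result="samba config regenerated"
else
  result="omv-mkconf not available; commands shown for reference only"
fi
echo "$result"
```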