Is ZFS supported on kernels 4.13-4.15?

  • Would it be possible to add a freeze/unfreeze checkbox etc. to the omv-extras kernel tab?

    The 4.1.7 version of omv-extras has buttons to hold/unhold the current kernel/headers (linux-image-$arch and linux-headers-$arch) and disable/enable the backports repo.
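    What those hold buttons do underneath is plain `apt-mark`; a minimal sketch, assuming the meta-package names from the post above (shown via echo here so nothing is changed; drop the echo to actually apply the hold):

    ```shell
    # Pin the kernel meta-packages so apt dist-upgrade won't replace them
    # (linux-image-amd64 / linux-headers-amd64 on a 64-bit install):
    arch="amd64"
    echo "apt-mark hold linux-image-$arch linux-headers-$arch"

    # Inspect or release the holds later with:
    #   apt-mark showhold
    #   apt-mark unhold linux-image-$arch linux-headers-$arch
    ```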

    omv 5.5.17-3 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.4.2
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • Hi, back from holiday and trying to solve my problem. Does anyone have an idea why the folders in my pool are empty, while the pool size still suggests the data is there?


    root@nas:~# zfs list
    NAME USED AVAIL REFER MOUNTPOINT
    onepool 1.22T 2.29T 1.22T /onepool
    root@nas:~# cd /onepool
    root@nas:/onepool# ls
    docker downloads kids movies music photos series timemachine videos
    root@nas:/onepool# ls *
    docker:
    downloads:
    deluge transmission
    kids:
    movies:
    music:
    photos:
    series:
    timemachine:
    videos:


    root@nas:/onepool# uname -a
    Linux nas 4.16.0-0.bpo.1-amd64 #1 SMP Debian 4.16.5-1~bpo9+1 (2018-05-06) x86_64 GNU/Linux


    Or: how can I get back to my old situation without reinstalling OMV?


    Thanks


    EDIT:


    Removed the ZFS plugin, moved back to the 4.15 kernel and rebooted.
    Disabled the test repo.
    When trying to install the ZFS plugin I get this error:


    Setting up openmediavault-zfs (4.0.2-1) ...
    modprobe: FATAL: Module zfs not found in directory /lib/modules/4.15.0-0.bpo.2-amd64
    dpkg: error processing package openmediavault-zfs (--configure):
    subprocess installed post-installation script returned error exit status 1
    Processing triggers for openmediavault (4.1.6) ...
    locale: Cannot set LC_CTYPE to default locale: No such file or directory
    locale: Cannot set LC_ALL to default locale: No such file or directory
    Restarting engine daemon ...
    Errors were encountered while processing:
    openmediavault-zfs
    E: Sub-process /usr/bin/dpkg returned an error code (1)


    root@nas:~# zfs list
    The ZFS modules are not loaded.
    Try running '/sbin/modprobe zfs' as root to load them.
    root@nas:~# /sbin/modprobe zfs
    modprobe: FATAL: Module zfs not found in directory /lib/modules/4.15.0-0.bpo.2-amd64
    root@nas:~# uname -a
    Linux nas 4.15.0-0.bpo.2-amd64 #1 SMP Debian 4.15.11-1~bpo9+1 (2018-04-07) x86_64 GNU/Linux


    How can I get ZFS to work again on the 4.15 kernel?

  • Probably because you don't have the kernel headers installed for that version of the kernel. They're essential, since they're needed to build modules. The headers change with the kernel, so they would normally be updated alongside it. I'd imagine that because you've been jumping around a bit, they've been removed somehow.


    With that said, the Debian repos are only listing one package matching the 4.15 headers right now... not sure why, when there are lots for 4.14 and 4.16: https://packages.debian.org/stretch-backports/kernel/


    Try this:


    Code
    apt search linux-headers-$(uname -r)


    See if you get results.



    Then try this:



    Code
    apt install -t stretch-backports linux-headers-$(uname -r)

    Then if that succeeds, reinstall the plugin to make sure the module is built:



    Code
    apt install --reinstall openmediavault-zfs


    You probably won't need this, but you can then reboot, or:


    Code
    modprobe zfs


    Joy? :)



    After you get this working, I would highly suggest holding the kernel using the modifications that @ryecoaaron has added to the OMV extras tab.

  • @ellnic thanks for your help, but this is the result:


    root@nas:~# apt search linux-headers-$(uname -r)
    Sorting... Done
    Full Text Search... Done
    root@nas:~#



    root@nas:~# apt install -t stretch-backports linux-headers-$(uname -r)
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package linux-headers-4.15.0-0.bpo.2-amd64
    E: Couldn't find any package by glob 'linux-headers-4.15.0-0.bpo.2-amd64'
    E: Couldn't find any package by regex 'linux-headers-4.15.0-0.bpo.2-amd64'
    root@nas:~#


    EDIT: On the Debian site I found this:


    There is no maintainer for linux-headers-4.15.0-0.bpo.2-amd64. This means that this package no longer exists (or never existed). Please do not report new bugs against this package.
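    Rather than guessing the exact version string, you can ask apt which header packages are actually visible from your configured repos; a sketch (guarded so it's a no-op where apt isn't available):

    ```shell
    # List every linux-headers package apt can see; the ".bpo." suffix
    # marks stretch-backports builds:
    if command -v apt-cache >/dev/null 2>&1; then
        apt-cache search '^linux-headers' | grep bpo || true
        # The meta-package always tracks the newest backports kernel:
        apt-cache policy linux-headers-amd64
    fi
    ```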

  • Hrm... Well, it was there, as I'm using 4.15 myself, and not long ago we were discussing manually installing the 4.15 headers here. I can only think it's been removed due to some bug. I don't really follow the kernel news, but the Debian team wouldn't have ditched it without good reason.


    So, from here your choices are: Move back to 4.14 or go to 4.16.

    Let's go to 4.14 if you have no objections: select it from the OMV-Extras GUI and reboot, then issue the same commands as above. That should get you up and running.

    EDIT: Can't find 4.14 either.

    @ryecoaaron am I just being dumb, or have the 4.14 and 4.15 kernels disappeared from the repos?


    I can only see 4.9 in stretch and 4.16 in backports?

  • @ellnic thanks again. I went back to 4.9, installed the headers, and reinstalled the ZFS plugin; that all went well.



    root@nas:~# zpool list
    NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
    onepool 3.62T 1.22T 2.41T - 12% 33% 1.00x ONLINE -


    root@nas:~# cd /onepool
    root@nas:/onepool# ls
    docker downloads kids movies music photos series timemachine videos
    root@nas:/onepool# cd movies
    root@nas:/onepool/movies# ls
    root@nas:/onepool/movies#


    So I still have the same problem I had on 4.16: I cannot see my data. All these folders have data and I hope not to lose it. Everything was working fine on 4.15 until I upgraded to 4.16; I'm now running 4.9.



    EDIT: OK, I solved it. I hadn't checked whether the ZFS pool was mounted; trying to mount it revealed that the pool folder was not empty. I have no idea how the same directory structure ended up there. I deleted it, could mount the pool, and can access all my data again. Happy again.
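    For anyone hitting the same symptom: an import can look fine while the datasets stay unmounted, because a leftover directory tree shadows the mountpoint. A hedged sketch of the checks, using the pool name from this thread (guarded so it's a no-op on systems without ZFS):

    ```shell
    if command -v zfs >/dev/null 2>&1; then
        # Is the dataset actually mounted? "no" means the folders you
        # see are plain directories on the root filesystem, not the pool:
        zfs get -H -o value mounted onepool

        # If not mounted, move the shadowing directory aside (safer than
        # deleting it outright), then mount everything:
        mv /onepool /onepool.stale
        zfs mount -a

        # Once the data is confirmed back, remove the stale copy:
        # rm -r /onepool.stale
    fi
    ```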



    Thanks to all who helped, especially @ellnic

  • There is a new version (4.0.3) of the zfs plugin in the omv-extras testing repo. Hopefully it fixes some things (see commits for changes). Thanks to @subzero79 for all of the work on this update.


  • am I just being dumb, or have the 4.14 and 4.15 kernels disappeared from the repos?


    I can only see 4.9 in stretch and 4.16 in backports?

    Nope, they are definitely gone. Holding the kernel and headers will only help if you already have them installed; otherwise, you can't download the appropriate version anymore. I guess I need to ask Volker to update the ISO to include the 4.16 kernel, or to disable backports by default and use the 4.9 kernel on the ISO. I really wish we could have switched to Ubuntu for the base OS...


  • I'm sure this kind of thing doesn't happen often; it's the first time I can remember it happening in however many years of using Debian. There must be a Godzilla bug in the 4.14/4.15 kernels that they've discovered. The hold buttons will still be of great use once everyone is upgraded and settled on 4.16; then it's time to press hold and leave it alone unless you have good reason.


    I don't see a problem with having the 4.9 stable kernel by default, but I think the GUI option to move to backports should be kept: stable by default, and move to backports if need be.

  • I don't see a problem with having the 4.9 stable kernel by default, but I think the GUI option to move to backports should be kept: stable by default, and move to backports if need be.

    I will definitely leave the hold buttons and enable/disable backports buttons in omv-extras.


  • I really wish we could have switched to Ubuntu for the base OS...

    I think Debian is a better choice for stability. Using backports is the issue: backports is almost bleeding edge. Same for Ubuntu.


    One of the main goals (at least in the past) was to not have to maintain packages in the distro. Maybe look at what Proxmox has to do to use the Ubuntu kernel or newer ZFS versions. They are on the 4.15 kernel and ZFS 0.7.9.

  • I think Debian is a better choice for stability. Using backports is the issue: backports is almost bleeding edge. Same for Ubuntu.

    A few years ago I would have said the same thing. Now, Ubuntu 16.04 LTS is just as stable as Debian. I maintain a lot of Ubuntu 16.04 boxes and have zero issues. Debian Stretch actually uses newer packages than Ubuntu 16.04. I wouldn't call Ubuntu LTS releases bleeding edge.


    One of the main goals (at least in the past) was to not have to maintain packages in the distro. Maybe look at what Proxmox has to do to use the Ubuntu kernel or newer ZFS versions. They are on the 4.15 kernel and ZFS 0.7.9.

    It isn't hard to add support for the Proxmox kernel again (as in omv-extras for OMV 3.x). It uses the Ubuntu 16.04 HWE/18.04 kernel with ZFS built in. The problem with doing that is that the repo can't be left enabled for updates. I am pretty sure this is why Proxmox compiles its own kernel (maybe with a few extra tweaks). I will do some testing...


  • I used to think Ubuntu was unstable, but it really has improved in recent years. I have a box here running 16.04, and it's solid. A simple apt-get and ZFS is up and running too ;) I wouldn't be [entirely] opposed to a change to Ubuntu, but I still think Debian is better :)

  • Wasn't thinking of LTS but ok. Many of us always want to be on the bleeding edge. LOL


    Won't being on LTS make it much harder to move to a newer version once support ends? It seems hard enough just going 7 -> 8 -> 9 -> 10, versus 16.x to whatever at EOL.

  • Won't being on LTS make it much harder to move to a newer version once support ends? It seems hard enough just going 7 -> 8 -> 9 -> 10, versus 16.x to whatever at EOL.

    Debian's and Ubuntu's releases happen at about the same frequency. I have quite a few boxes at work that have been upgraded from 10 -> 12 -> 14 -> 16, and they will be going to 18 soon.


  • Hi guys,


    today I found the time to try the upgrade to kernel 4.16, openmediavault-zfs 4.0.3 and ZFS 0.7.9-3. The kernel and the headers were already installed some time ago.


    After installing the latest ZFS and the latest plugin, I had problems importing my pool, maybe because I didn't export it before the update procedure. I'd never had this issue before:



    So I did the following on the command line:


    1. export the pool:

    Code
    root@omv4:~# zpool export mediatank


    2. unload the zfs modules:

    Code
    root@omv4:~# rmmod zfs


    3. check if the mountpoints are still available:


    4. remove the mountpoints:

    Code
    root@omv4:~# rm -R /mediatank/


    5. load the zfs modules:

    Code
    root@omv4:~# modprobe -a zfs


    6. import the pool again

    Code
    root@omv4:~# zpool import mediatank
    root@omv4:~#


    This time the import worked as expected. After a reboot, ZFS also seems to work as expected.
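    The six steps above can be condensed into one sequence (step 3 had no command in the original post, so the ls there is my assumption; guarded so the sketch is a no-op on machines without ZFS):

    ```shell
    if command -v zpool >/dev/null 2>&1; then
        zpool export mediatank       # 1. export the pool
        rmmod zfs                    # 2. unload the zfs module
        ls -la /mediatank            # 3. check for leftover mountpoint dirs
        rm -R /mediatank/            # 4. remove the stale mountpoints
        modprobe -a zfs              # 5. load the zfs modules again
        zpool import mediatank       # 6. import the pool
    fi
    ```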


    Maybe this helps other guys with the same issue. But one question remains:


    • Why did I have to delete the mountpoints manually?



    Nevertheless, thanks to @subzero79 and @ryecoaaron for all the work you invested in this.



    Regards Hoppel

    ---------------------------------------------------------------------------------------------------------------
    frontend software - tvos | android tv | libreelec | win10 | kodi krypton
    frontend hardware - appletv 4k | nvidia shield tv | odroid c2 | yamaha rx-a1020 | quadral chromium style 5.1 | samsung le40-a789r2
    -------------------------------------------
    backend software - debian | openmediavault | latest backport kernel | zfs raid-z2 | docker | emby | unifi | vdr | tvheadend | fhem
    backend hardware - supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • By the way, I haven't done a pool upgrade since my upgrade from OMV3 to OMV4. If I look at my pool status, I see the following message:


    Now that OMV4 is stable, I want to upgrade my pool. I don't think I'll ever go back to OMV3. ;)


    Are there any known issues with upgrading the pool? How long does 'zpool upgrade' take (minutes, hours, or days)?


    Regards Hoppel


  • I did the "zpool upgrade" some time ago: if I remember correctly, the process is a matter of minutes (even less), and everything went well without issues. See post #69.
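    For reference, the upgrade itself is short; a sketch using the pool name from the question, guarded so it's a no-op without ZFS installed. Note that enabling the new feature flags is one-way: older ZFS versions can no longer import the pool afterwards.

    ```shell
    if command -v zpool >/dev/null 2>&1; then
        # With no arguments this only lists pools missing features
        # (read-only, safe to run any time):
        zpool upgrade

        # Enable all supported features on the pool (irreversible):
        zpool upgrade mediatank
    fi
    ```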

    HP Microserver Gen8 - 16GB RAM - 1x Kingston 120 GB SSD - 4x 3TB WD Red / ZFS - OMV 5.2.x bare metal - Docker running Plex, TimeMachine - Synology DS214 with 2x 4TB WD Red for rsync backup
