Is ZFS supported in Kernel 4.13-4.15?

    • Official Post

    So does that imply the upgrade script has a bug?

    Maybe. I think it should change jessie to stretch in the backports file, but I haven't tested enough to know whether having the stretch-backports repo enabled would cause issues when upgrading. I don't think it would, but I need to test. If the backports repo is disabled before upgrading, there is definitely no bug.
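
    Something along these lines should be the fix (the filename is my guess at what omv-mkconf generates; check your own system first):

    ```
    # Hypothetical filename - verify what exists under /etc/apt/sources.list.d/
    sed -i 's/jessie-backports/stretch-backports/' \
        /etc/apt/sources.list.d/openmediavault-kernel-backports.list
    apt-get update
    ```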


  • @ryecoaaron I can't speak for others, but I've been reading what you've written very carefully.


    Quote from ryecoaaron

    So, if you want an OMV 4.x install with working ZFS, set OMV_APT_USE_KERNEL_BACKPORTS="no" in /etc/default/openmediavault before upgrading to 4.x. Then you will keep the 4.9 kernel and never install the 4.13 kernel. When you install OMV extras and the zfs plugin, the 4.9 Linux headers will be installed, making the ZFS module compile.


    I can only get the method you described to work with these additional steps, using a fresh & updated OMV3 install:



    1. Prior to upgrading from OMV 3.0.91, edit /etc/default/openmediavault and execute both "omv-mkconf apt" and "apt-get update" before using "omv-release-upgrade" to move to OMV4. This ensures OMV4 boots a stable stretch kernel and there is no kernel pinning.


    2. Immediately after OMV4 first boots, edit /etc/apt/sources.list to include contrib and non-free for the Debian sources. This ensures the zfs plugin can find its dependency packages in stable stretch.


    3. Execute apt-get update && apt-get upgrade. Any remaining backports packages from OMV3 should now be replaced, except for previously installed jessie-backports kernels (optionally remove these now or later).


    4. After installing OMV4 extras, remove the stretch-backports pinning from /etc/apt/preferences.d/omv-extras-org.


    5. Install the zfs plugin via the web UI.


    6. Check status after the plugin install:
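
    The original screenshot isn't preserved; hedged examples of the kind of checks meant here, using standard zfs/kmod tooling:

    ```
    modprobe zfs && lsmod | grep zfs   # module built and loads cleanly
    modinfo zfs | grep -iw version     # confirm the zfs module version
    zpool status                       # tooling can talk to the kernel module
    ```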



    A quick test of the zfs plugin:
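
    The original output is also missing; a throwaway test along these lines would exercise the stack (sparse files only, purely illustrative):

    ```
    # Create a scratch mirror pool on sparse files, then tear it down
    truncate -s 1G /tmp/zd1 /tmp/zd2
    zpool create testpool mirror /tmp/zd1 /tmp/zd2
    zfs create testpool/data
    zpool status testpool
    zpool destroy testpool && rm /tmp/zd1 /tmp/zd2
    ```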



    I had already tested upgrading a 3.0.91 base where OMV3 extras and the zfs plugin were installed. The same steps result in an OMV4 system with working ZFS and no stretch-backports. I don't know whether doing this has any unwanted side effects.


    • Official Post

    Prior to upgrading from OMV 3.0.91, edit /etc/default/openmediavault and execute both "omv-mkconf apt" and "apt-get update" before using "omv-release-upgrade" to move to OMV4. This ensures OMV4 boots a stable stretch kernel and there is no kernel pinning.

    Yes, I forgot to put omv-mkconf apt (or just delete the backports list) in my process overview. apt-get update isn't necessary since it is run by omv-release-upgrade.
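
    For reference, a hedged sketch of that pre-upgrade sequence (append the variable if it isn't already in the file):

    ```
    # Keep the stock 4.9 kernel: disable kernel backports before the upgrade
    echo 'OMV_APT_USE_KERNEL_BACKPORTS="no"' >> /etc/default/openmediavault
    omv-mkconf apt        # regenerate the APT source lists
    omv-release-upgrade   # runs apt-get update itself
    ```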


    Immediately after OMV4 first boots, edit /etc/apt/sources.list to include contrib and non-free for the Debian sources. This ensures the zfs plugin can find its dependency packages in stable stretch.

    You could do this before the upgrade as well. I didn't have to do this since I already had it in my sources.list.


    After installing OMV4 extras, remove the stretch-backports pinning from /etc/apt/preferences.d/omv-extras-org.

    If the backports repo isn't enabled, the pinning doesn't hurt anything, and it will be added back any time you do something in omv-extras. I guess I could make omv-extras check the same backports variable, but this won't matter once the zfs packages are updated to compile on 4.13.
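
    For anyone looking for the file, the pinning in question is an ordinary APT preferences stanza, roughly of this shape (illustrative only; the real file may differ):

    ```
    Package: *
    Pin: release n=stretch-backports
    Pin-Priority: 500
    ```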


  • I have ZFS working quite well with kernel 4.13 and spl-dkms / zfs-dkms 0.7.3, installed with sid temporarily enabled.
    Stupidly, I also upgraded the pool, so I could not go back to my backed-up 3.0.91 due to the incompatibility.


    I am on Arrakis 4.0.9-1 and unfortunately still have the issue that Mantis reports as solved since 4.0.7 (see Mantis #0001827).


    The pool itself and random datasets can't mount because the mount points are not empty. This can be fixed manually after each restart.
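
    The manual fix is roughly this (hedged sketch; the dataset path is an example, and rmdir only removes empty directories, so no real data is lost):

    ```
    zfs mount -a 2>&1 | grep 'not empty'   # list the datasets that refuse to mount
    rmdir /tank/dataset                    # remove the stale, empty mount-point dir (example path)
    zfs mount -a                           # retry the mounts
    ```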


    Otherwise, everything works perfectly fine with ZFS and kernel 4.13 on an HP ProLiant Gen8. FTP transfer speed, for example, is better than ever.


    @Skaronator: Do you also still face this issue?


  • @Skaronator: Do you also still face this issue?

    Kinda, maybe? Well, I created filesystems inside my pool, as you can see here.


    So I have /StorageZFS/Multimedia as the path, and at the ZFS level it is the StorageZFS pool with a Multimedia filesystem.


    My shared folder now points to the `/StorageZFS/Multimedia` FS with `/` as the path. When I reboot, it creates random folders on the root of my OS drive, e.g. `/Multimedia` under `/`, but after the ZFS mount it uses the ZFS folders.



    Badly explained, but I'm currently not at home.



    Edit: The main problem now is that StorageZFS won't mount because the folders are not empty. I haven't dug deep into the problem; I just noticed it because I got a mail saying something about StorageZFS_fs. The folders and sub-filesystems work just fine.



    • Official Post

    What if I put the zfs 0.7.3 modules in the omv-extras repo? Then contrib wouldn't be needed in the main sources.list and it would compile on 4.13.


  • What if I put the zfs 0.7.3 modules in the omv-extras repo? Then contrib wouldn't be needed in the main sources.list and it would compile on 4.13.

    As a temp fix, or...? I would think die-hard ZFS users would welcome the chance to use zfs 0.7.3, being eager to try out all the new features and improvements. Using Debian sid + OMV4 might be OK as a test, but to me it sounds like playing with fire on real data.


    From OMV's point of view, do you want to be saddled with another maintenance task of providing zfs modules that compile and stay in sync with backports kernels?

  • What about using the Proxmox kernel?



    Proxmox 5.1 (Debian stretch) also uses kernel 4.13 and ZFS 0.7.2. You can read about it here, in German:


    https://www.proxmox.com/de/new…tteilungen/proxmox-ve-5-1


    Maybe it's an alternative... At the moment I don't use Proxmox or OMV4.


    The ZFS modules are integrated into kernel 4.4 of Proxmox 4 (Debian jessie). The same should be true for Proxmox 5.1.
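
    For the curious, pulling in the Proxmox kernel on stretch would look roughly like this (repo line and key URL per Proxmox's docs of that era; the exact kernel package name is my assumption, so check what the repo actually offers):

    ```
    # Add the no-subscription Proxmox repo and its release key (PVE 5.x / stretch)
    echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
        > /etc/apt/sources.list.d/pve.list
    wget -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg \
        http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg
    apt-get update
    apt-get install pve-kernel-4.13.4-1-pve   # 4.13 kernel with ZFS built in (name may differ)
    ```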


    Greetings Hoppel


    • Official Post

    Proxmox is a good example of maintaining your own repo. The ZFS bits and kernels come from their repo, not stock Debian. I think it is probably a lot of work, and the goal of OMV is not to maintain the base repo, just the added features.


    I am just going to wait for it to work itself out.


    Thanks

  • Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
    I haven't done it yet, but I probably will this Saturday or Sunday.


    Do you suggest upgrading to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?


  • Do you suggest upgrading to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?

    IMO, just disable the backports and use the 4.9 kernel for now.


  • Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
    I haven't done it yet, but I probably will this Saturday or Sunday.


    Do you suggest upgrading to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?

    Assuming there is no compelling reason to use the openmediavault-zfs plugin from OMV4 extras, it's not essential to upgrade to OMV4 to use ZFS. You could just fully update your current OMV3 install to version 3.0.91, then install the zfs plugin from OMV3 extras. You would essentially be using the same 4.9 kernel and the same zfs package versions as if you upgraded to OMV4 without backports.
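
    A quick way to convince yourself both routes land in the same place (standard commands; run on either install):

    ```
    uname -r              # expect a 4.9.x kernel in both cases
    dpkg -l | grep zfs    # compare the zfs package versions
    ```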

    • Official Post

    As a temp fix

    I would just leave them in the repo. If Debian released a new version, it would automatically use that version.

    Using Debian sid + OMV4 might be OK as a test, but to me it sounds like playing with fire on real data.

    It wouldn't be Debian sid. Yes, the source code would be the same as sid's, but it would be compiled on stretch. Considering OMV 4.x isn't even beta, you shouldn't be using it in production anyway.


    From OMV's point of view, do you want to be saddled with another maintenance task of providing zfs modules that compile and stay in sync with backports kernels?

    I'm not providing modules, just the packages, which don't change very often. Your system would compile its own modules.
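
    In other words, the repo would carry DKMS source packages rather than prebuilt binaries; a sketch of what happens on the user's machine (standard Debian package names):

    ```
    # apt pulls the source packages; dkms compiles them against the installed headers
    apt-get install spl-dkms zfs-dkms
    dkms status        # should list spl and zfs built for the running kernel
    ```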


    What about using the Proxmox kernel?

    I will look at adding that back into the code.

    Proxmox is a good example of maintaining your own repo. The ZFS bits and kernels come from their repo, not stock Debian. I think it is probably a lot of work, and the goal of OMV is not to maintain the base repo, just the added features.

    If you add the Proxmox repos to your system, it will update automatically when Proxmox releases updates. No work for us. It probably is a lot of work for them, but they make money from their work.


  • @ryecoaaron


    Understood, it would just be deb packages for zfs 0.7.3 and not modules; that was just my careless use of language. My comment about sid was aimed at those using it now, not at you.


    @Blabla

    Quote from Blabla

    Hi everyone. Since mdadm RAID is giving me a lot of trouble, I want to build a ZFS mirror.
    I haven't done it yet, but I probably will this Saturday or Sunday.


    Do you suggest upgrading to OMV4 first and then, once I'm on Debian 9 with kernel 4.13 and everything else updated, installing the zfs plugin?


    I hesitate to ask, but what are (or have been) your mdadm RAID troubles? Fixing those may mean you don't need ZFS, which, if you believe what you read, should not be used on systems like yours that don't support ECC memory.

  • I created a topic in the RAID section :) My first RAID1 (WD Red 4TB) is working perfectly, while the new one (Seagate IronWolf 6TB) disappears every time I reboot :/


    Sent from my Sony XZ1 using Tapatalk



  • After reading your RAID thread, @Blabla, I can see why you're considering a ZFS mirror for your new 6TB drives. Putting the debate about Btrfs to one side, @ryecoaaron has a point when he asks whether you really need RAID at all.


    I don't think it can be said too often that RAID is not backup, whether it's mdadm-based or ZFS, and I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for ZFS. Even with the latest zfs 0.7.3, scrub and resilver times for a 6TB mirror will run into multiple hours. How are you going to back up your data?


    You have to ask yourself how you use your data: is it, for example, write-once-read-many, or constant reads and writes that require real-time duplication across some form of RAID to ensure uptime in case of disk failure? Would timed or on-demand rsync between separate drives suffice, as @ryecoaaron suggested? Do you even want the two 6TB drives in the same box?
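
    If rsync turns out to be enough, a minimal sketch of an on-demand one-way mirror between two drives (the paths are placeholders):

    ```
    # Mirror the data drive onto the backup drive; --delete makes the
    # destination match the source exactly, so use it with care.
    rsync -aHAX --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/

    # Timed variant: the same command as a nightly cron entry (02:00)
    # 0 2 * * * root rsync -aHAX --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/
    ```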
