Is ZFS supported in Kernel 4.13-4.15?

  • The biggest issue for me is time.
    RAID1 is not a backup, but at least your data are on two different hard drives. Everything that is really important is saved on an external hard drive.


    RAID1 is for everything that I don't want on only one HDD, but that is also not so important that I need to back it up every time.

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • @ryecoaaron has a point when he asked whether you really need RAID at all.

    Yes, he has. Yes, every OMV user has to do his homework on his own and has to think about what he's doing: what's availability, what's data integrity, what's data protection/security. And how different technical approaches deal with these concepts, providing either this or that or everything at the same time.


    While I'm not talking about the individual situation at all (though I agree that there's zero need for availability), we should always keep in mind what a massive difference there is between a stupid/useless RAID1 with ext4 or XFS on top and a zmirror. The latter, unlike RAID1, provides data integrity with self-healing capabilities and, since ZFS is involved and snapshots are possible, also some level of data protection. So while a RAID-1 can be considered useless in almost every situation, a zmirror in the same situation can make a lot more sense: while wasting the same number of disks, it provides a lot more benefits.
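
    A minimal sketch of that difference (pool name 'tank' and the device names are placeholders, nothing from this thread):

        # mdraid RAID1: redundancy only, no idea whether the data itself is intact
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

        # zmirror: same two disks, but every block is checksummed and a bad
        # copy is repaired automatically from the good mirror side on read/scrub
        zpool create tank mirror /dev/sda /dev/sdb
        zpool status tank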


    I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs

    It's not. The 'scrub of death' is a myth. Please stop repeating this over and over again (same with the 'ZFS memory requirements' that are in reality 'ZFS deduplication memory requirements').


    It's as easy as this: if you really love your data then invest the little bit in ECC memory too; if you don't care about data integrity then do not, and accept silent bit rot. The filesystem in question doesn't really matter; if anything, ZFS and btrfs are even better in non-ECC situations since they will inform you earlier that silent data corruption is happening (regular scrubs will tell).
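
    To make the 'regular scrubs will tell' part concrete, a sketch ('tank' is a placeholder pool name):

        # read every block in the pool and verify it against its checksum
        zpool scrub tank
        # afterwards: per-device CKSUM counters and, with -v, the affected files
        zpool status -v tank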

  • @tkaiser


    You know what? I think I’ll go on repeating my statement as often as I like:


    “I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs.”


    “I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs.”


    “I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs.”


    “I think it was fair to point out that your system does not have ECC memory, which some regard as mandatory for zfs.”


    It is an irrefutable fact that “some regard the use of ECC memory with zfs as mandatory”. Of course they may be right or wrong, and that’s not to say zfs cannot function with non-ECC memory. But in the words of the great @tkaiser:


    if you really love your data then invest the little bit in ECC memory too; if you don't care about data integrity then do not, and accept silent bit rot.


    So it seems you’d advise using ECC memory too. And you are simply repeating the oft-quoted Matt Ahrens, who said back in 2014:


    “There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.


    I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.” (https://arstechnica.com/civis/…5679&p=26303271#p26303271)
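
    For reference, on ZFS on Linux the unsupported flag Ahrens mentions is exposed as a module parameter; a sketch, not a recommendation:

        # set ZFS_DEBUG_MODIFY (0x10) at runtime
        echo 0x10 > /sys/module/zfs/parameters/zfs_flags
        # or persistently, applied whenever the zfs module is loaded
        echo "options zfs zfs_flags=0x10" > /etc/modprobe.d/zfs.conf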


    I mentioned ECC memory just in case Blabla was not aware of the ECC v. non-ECC debate. His choice of course. You also repeated the same superficial statement about hardware costs: “invest the little bit in ECC memory too”. Apart from the fact that there is a significant premium to pay for ECC memory in the current market, you need both a CPU and a motherboard that support ECC memory. The right motherboard is another potentially costly item.


    Louwrentius blogged about the hidden cost of ZFS for home users nearly two years ago. Lots of argument followed, but it's worth a read for those thinking about using zfs at home:


    http://louwrentius.com/the-hid…fs-for-your-home-nas.html

  • So it seems you’d advise using ECC memory too.

    Sure, but that's not related to ZFS at all. It's quite the opposite: ZFS or btrfs (or any other filesystem with data integrity features) is even more beneficial when used on systems without ECC DRAM. Let's stop using ZFS and 'ECC DRAM' in the same sentence since inexperienced users usually get it wrong (just like @Blabla, who came away from this thread thinking he would need ECC DRAM -- but I might be wrong here)

  • Sure, but that's not related to ZFS at all. It's quite the opposite: ZFS or btrfs (or any other filesystem with data integrity features) is even more beneficial when used on systems without ECC DRAM. Let's stop using ZFS and 'ECC DRAM' in the same sentence since inexperienced users usually get it wrong (just like @Blabla, who came away from this thread thinking he would need ECC DRAM -- but I might be wrong here)

    So let's be absolutely clear. On the one hand you're happy for a user like @Blabla to use zfs on a system that has non-ECC memory, but for other reasons you'd advise using ECC memory, irrespective of what filesystem is in use. So should he use ECC memory or not?

  • So should he use ECC memory or not?

    He should do whatever he wants. It's just important that he understands what he's doing. If his NAS contains data that has value (and not just TV shows and DVD rips) and he doesn't like silent bit rot, then ECC memory is always a great idea regardless of the filesystem used. If he doesn't care about data integrity (which seems to be the case) then that's fine too.


    This decision is not at all related to ZFS, since ZFS on systems without ECC memory doesn't cause any more harm than any other filesystem (the 'Scrub of death' myth, which I also believed in for too long). It's exactly the opposite: ZFS and btrfs can help identify bit rot that has already happened EARLY enough (when regular scrubs are running) to hopefully restore the corrupted data intact from the latest BACKUP (which is mandatory. Always!)


    IMO the real problem is that people are somewhat aware that 'data is at risk' or have already experienced an HDD failing combined with data loss. Now they think about what to do and run in the wrong direction: RAID instead of backup, since they heard RAID 'protects from failing disk(s)', while backup would protect from data loss. And instead of an old filesystem + RAID-1, using ZFS/btrfs (or approaches that implement external filesystem checksumming) would also allow checking for or implementing data integrity, in the best case even with self-healing capabilities (zmirror) [1].


    For whatever reason he wants to use RAID-1, which I consider almost completely useless (especially at home). In his situation I would focus on backup first, or, if the data to be stored has no value anyway, give up on any redundancy here at all (since why?). I also DO NOT recommend using a zmirror, even if it's a magnitudes better alternative to mdraid's RAID1. I just wanted to point out that, one more time, the 'Scrub of death' myth struck, and 'ECC memory' and ZFS mentioned in the same sentence led to wrong conclusions.


    [1] Small addendum: When using RAID with old filesystems, at least the RAID modes with parity (RAID 5 and 6) also allow for some sort of data integrity, but at the block device layer below the filesystem (so a RAID scrub repairing things at the block device layer can then result in the filesystem above running into inconsistencies -- for obvious reasons the RAIDZ and zmirror approach is superior since it allows for a higher data integrity level).
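
    To illustrate the block-device-layer point, this is what an mdraid 'scrub' looks like (a sketch; md0 is a placeholder):

        # compare the RAID copies ('repair' instead of 'check' would also rewrite
        # mismatches, but md cannot tell which copy was the good one)
        echo check > /sys/block/md0/md/sync_action
        cat /proc/mdstat
        # mismatch counter afterwards, with no mapping to affected files
        cat /sys/block/md0/md/mismatch_cnt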

  • I have zfs working quite well with Kernel 4.13 and spl-dkms / zfs-dkms 0.7.3, installed with sid temporarily enabled.
    Stupidly, I also upgraded the pool and could not go back to the backed-up 3.0.91 due to incompatibility.
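
    For context, 'sid temporarily enabled' typically means something like the following sketch (the package names are the ones quoted above, the rest is an assumption):

        # temporarily add the sid repo (zfs lives in contrib)
        echo 'deb http://deb.debian.org/debian sid main contrib' > /etc/apt/sources.list.d/sid.list
        apt-get update
        apt-get install -t sid spl-dkms zfs-dkms
        # then drop sid again so the rest of the system stays on stable
        rm /etc/apt/sources.list.d/sid.list && apt-get update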


    Do I understand it right that there is no backward compatibility? If you go the route of zfs 0.7.3, you can't get back to 0.6.x?


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------


  • Do I understand it right that there is no backward compatibility? If you go the route of zfs 0.7.3, you can't get back to 0.6.x?


    Greetings Hoppel

    During the installation of zfs 0.7.3 you will be asked if you want to upgrade (enhance) the pool with new features. If you go this way, there is no chance to get back to 0.6.x. But it's optional and you can easily decline.
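
    In other words, nothing happens to the pool until you run the explicit upgrade command yourself; a sketch ('tank' is a placeholder):

        # list pools whose on-disk format is older than what the running zfs supports
        zpool upgrade
        # zpool status also prints a hint when newer features are available
        zpool status tank
        # ONLY this step enables the new feature flags, and it is one-way
        zpool upgrade tank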

    HP Microserver Gen8 - 16GB RAM - 1x Dogfish 64GB SSD - 4x 3TB WD Red / ZFS raid1 - OMV 7.x bare metal - Docker - Synology DS214 with 2x 4TB WD Red for rsync backup

  • Ok, thank you. That’s great!


    • So I installed 4.0.9 from here: Sourceforge. Updated to the current 4.0.11. Installed extras from here: Extras. Updated.


    Enabled the OMV-Extras.org Testing repo. Updated again.


    Installed zfs plugin. Seems to work properly. Thanks



    Some unfair questions: Will Debian go to the 0.7.3 zfs? If backports gets an update to zfs, will it be updated in the extras repo?



    Thanks

    • Will Debian go to the 0.7.3 zfs?

    Debian has. I build these from Debian Sid (unstable).

    If backports gets an update to zfs, will it be updated in the extras repo?

    Not sure what you mean. 0.7.3 came from sid. If 0.7.3 comes to stretch-backports, then I don't need to maintain them in the omv-extras repos. If sid gets a newer version (and it probably will), then I will build the packages again and upload them to the omv-extras testing repo.

    • @Skaronator, thank you for filing the zfs bug right before I could :) I will release the 0.7.3-2 packages for OMV 4.x once the bug is fixed.


    • @ryecoaaron hehe :P It's fixed in 0.7.3-3

    I put the 0.7.3-3 packages in the repo yesterday 8)

  • For whatever reason he wants to use RAID-1, which I consider almost completely useless (especially at home). In his situation I would focus on backup first, or, if the data to be stored has no value anyway, give up on any redundancy here at all (since why?). I also DO NOT recommend using a zmirror, even if it's a magnitudes better alternative to mdraid's RAID1. I just wanted to point out that, one more time, the 'Scrub of death' myth struck, and 'ECC memory' and ZFS mentioned in the same sentence led to wrong conclusions.


    I'm a bit late to the party but:


    >> especially at home


    I disagree. I like my data available. So do the wife.. and kids. If it's not, it's EXTREMELY irritating. I know you mean the average Joe at home, but people on this forum are here because we like our data, right? I know plenty of people 'at home' who like data availability. Personal data is just as important as business data IMO: music, family photos and videos, calendar info, etc. If this is hosted internally there is absolutely every reason for someone to want it to be available. Who wants to wait for an HDD to conk out, grab another from a cupboard, get the on-site (or off-site) backups, fit the new HDD and restore from backup? That's not a 5-minute process even if your workflow is streamlined. All the time you have some very disappointed children and a disapproving wife. Idk.. maybe some just tell their family that "there's no movies and music today and you can't look at the photos from yesterday's trip because Daddy/Hubby has to fix the computer"


    Not trying to be abrasive. Just my POV


    >> I would focus on backup first



    Totally agree with this. Redundancy after backup. I've become a bit backup-mad in recent years when it comes to family data... precious family photos and videos will do that to you. I sync the contents of my family data pool to a 2nd server (which is not at this location) and have both on- and off-site backups, so my data is at 3 separate locations. Some stuff also goes to the cloud. A friend of mine does pretty much the same thing and also has 2 HDDs in 2 separate bank vaults that include his business data as well.



    >> I also DO NOT recommend using a zmirror


    OOI, why?

  • I disagree

    I was talking only about mdraid's RAID-1. If availability is a goal then RAID-1 could be an option (and ten years ago it was even an interesting or good option). But since it provides just this (and not that well anyway), the far better alternatives are either a zmirror or btrfs' RAID-1 implementation, since both allow for data integrity checking and even self-healing, unlike mdraid's RAID-1. And both ZFS and btrfs allow taking snapshots and sending them to another disk/location, so with really limited effort you get the equivalent of a really well done backup FOR FREE on top.
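
    A sketch of the snapshot part (pool/dataset names are placeholders):

        # cheap, read-only point-in-time snapshot
        zfs snapshot tank/data@2017-12-01
        # full copy to a second pool/disk
        zfs send tank/data@2017-12-01 | zfs recv backup/data
        # a week later: transfer only what changed in between
        zfs snapshot tank/data@2017-12-08
        zfs send -i @2017-12-01 tank/data@2017-12-08 | zfs recv backup/data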


    The 'I also DO NOT recommend using a zmirror' comment was only meant wrt another thread and someone obviously thinking the moderators here in this forum would be 1st-level support monkeys having to work for free. In this specific situation availability was IMO not needed, so no need to waste 100% of the disk capacity (if I'm willing to throw 2 disks at a problem, I use one for data and one for backup and don't waste them on redundancy when there's no reason).

  • I was talking only about mdraid's RAID-1. If availability is a goal then RAID-1 could be an option (and ten years ago it was even an interesting or good option). But since it provides just this (and not that well anyway), the far better alternatives are either a zmirror or btrfs' RAID-1 implementation, since both allow for data integrity checking and even self-healing, unlike mdraid's RAID-1. And both ZFS and btrfs allow taking snapshots and sending them to another disk/location, so with really limited effort you get the equivalent of a really well done backup FOR FREE on top.
    The 'I also DO NOT recommend using a zmirror' comment was only meant wrt another thread and someone obviously thinking the moderators here in this forum would be 1st-level support monkeys having to work for free. In this specific situation availability was IMO not needed, so no need to waste 100% of the disk capacity (if I'm willing to throw 2 disks at a problem, I use one for data and one for backup and don't waste them on redundancy when there's no reason).

    Fair enough, makes sense. Thanks for deleting the duplicate post :)

  • @ryecoaaron


    0.7.4 is now in Debian sid, and 0.7.5 has just been released with some minor fixes. It's coming soon to Debian sid (maybe end of this week, early next week).


    0.7.4 adds support for the 4.14 kernel, which is important for when backports updates the kernel to 4.14.

    OMV 4 - Ryzen 7 1700 (8 Cores / 16 Threads 65W TDP) - 32 GB DDR4 ECC
    128 GB OS SSD - 256 GB Plex SSD - 32 TB RAIDZ2 (6x8TB HGST NAS)
