Is ZFS supported in Kernel 4.13-4.15?

    • Official Post

    The LTS offers 5 years of updates to the OS. Does that mean OMV would be expected to as well? I doubt it. That kind of defeats it as a plus.

    Users in production environments, small businesses and the like, don't like doing version updates every year. That's what LTS is all about: setting up the server or workstation and actually using it for a while.


    With the configuration worked out and without the need for more features, do you think users are still running OMV2?
    (I think all would be surprised at how many are still running OMV1.)
    I'm guessing a good part of the user base is still on OMV2, and there's nothing wrong with that. With that noted, being able to update the underlying OS of OMV2, Samba, and other packages keeps older versions more secure and viable for longer.


    Bottom line: if older versions are still in use (and they are), it's better if the internals are up to date and secure.

    • Official Post

    I still use OMV2 for a plugin or two, not for anything sensitive. 3.x will soon stop being updated as well. So the point is not that people don't update, but that they should. And by the time a new version is out, it will probably be on a newer version of the OS, maybe even a new LTS.


    So being on an LTS will give a false sense of security if OMV does stop updates. If OMV itself went LTS, I would be all in.

    • Official Post

    The LTS offers 5 years of updates to the OS. Does that mean OMV would be expected to as well? I doubt it. That kind of defeats it as a plus.

    After a couple of years, the services that OMV configures should stop changing. So OMV really shouldn't need many updates, but the OS would still need security fixes. I would rather have the option of five years than maybe three years with no updates for a few packages you use.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I'm not trying to pick apart your post, but it's easier to multi-quote a large chunk:


    Frankly, I'm not a fan of ZFS. Out of the box, ZoL requires a handful of tweaks to "approximate" Linux permissions.


    Do you mean permissions for data in your pool, i.e. files and folders - POSIX permissions? If so, these don't differ whether you use FreeNAS or OMV. They're just POSIX permissions.


    However, in the search for something that would protect long term data stores; if one wants integrated protection from bitrot and other silent corruption, ZFS is the only viable option available.


    Well, BTRFS will do this, it just doesn't have RAID5/6 and isn't as mature (and VM performance sucks the last I checked) so isn't suitable for all. But yes, if you need RAID5/6, want a proven track record and use VMs, then ZFS is it.


    It's not even necessary to have a Z-array for protection - a basic volume would work, with copies=2 set, on sensitive filesystems.


    For pure bit rot, yes. But you'd be better off with a mirror for some redundancy.


    I'm not making more out of bitrot than it actually is. A flipped bit in a picture and a single pixel changes color. In a document an extraneous character may appear, a word may be spelled incorrectly, etc. However, the effects (cumulative) add up over the long haul which, I believe, should be given due consideration.


    If only this were true. Sadly, a single pixel or character equates to far more than one bit (a pixel is usually at least 1 byte = 8 bits, and in compressed formats one flipped bit can cascade through everything after it). With characters, you have size, font, etc. See this: https://commons.wikimedia.org/wiki/File:Bitrot_cascade.png


    The 2nd image is pretty much half gone, and that's just one bit flip. The worst is 3 bits. Think of it more like a Jenga tower of neodymium magnets, all aligned with their poles in the 'correct' direction for the tower's stability. Flip a magnet and...... :cursing:



    For businesses of any size, I can't see how they could choose anything else (other than ZFS) at this point in time. Automated snapshots allow for the retrieval of business-sensitive files, accidentally or deliberately deleted, for up to a year. And snapshot retention can be adjusted for even longer periods. For that reason alone, ZFS should be supported for small and medium-sized business use cases.
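    The retention scheme described above is usually automated by a tool (zfs-auto-snapshot, sanoid and the like), but the underlying commands are simple. A minimal sketch, assuming a hypothetical pool `tank` with a `documents` dataset:

    ```shell
    # Hypothetical pool/dataset names; scheduling tools run the
    # equivalent of this periodically.
    zfs snapshot tank/documents@2018-06-30    # point-in-time snapshot
    zfs list -t snapshot -r tank/documents    # review retained snapshots
    # recover one deleted file without rolling anything back:
    cp /tank/documents/.zfs/snapshot/2018-06-30/report.odt /tank/documents/
    zfs destroy tank/documents@2017-06-30     # prune when retention expires
    ```

    The hidden `.zfs/snapshot` directory is what makes single-file retrieval painless - no full rollback needed.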


    Don't forget Windows admins. They have ReFS and shadow copies. I wouldn't trust it, but that's the Windows equivalent.


    BTRFS? It "sounds good" but what has been delivered is nowhere near living up to what has been promised. Because it's native to Linux, provides bitrot protection and has great (theoretical) features, I really "wanted" BTRFS for data storage. However, any objective assessment of the project leaves one with the idea that it's going to be a long time before the issues are worked out, if ever. (After being on the mailing list for some months, in times past, it seemed as if the clean-up of the issues plaguing BTRFS was going nowhere.) In any case, BTRFS wouldn't be the first file system to be abandoned due to development delays and the loss of interest during/after an extended development period.


    I totally agree that it's a bit of a disappointment in terms of its current state and the time it's taken to get there. But it's got almost zero chance of ever being abandoned and will undoubtedly mature to the point where it's like ZFS - yet brings its own goodness (shrinking pools, for one). With the likes of Facebook using BTRFS and contributing heavily to it, it's not going away any time soon. The problem is, companies like Facebook don't use RAID5/6, so that's getting sorted at a snail's pace.


    And it's not as if I "like" ZFS, now that I'm familiar with it.


    I think it's the best of a 'bad bunch'. ZFS takes a bit of setup and doesn't integrate as well, yet it's proven and solid. BTRFS plays very well (only with Linux) but still has kinks. I also like the command structure and command output of ZFS over BTRFS. As I mentioned before though, if BTRFS matures more and gets ported to Unix, I'll jump. What I really want is Bcachefs... wouldn't it be lovely if that matured and was available on Unix/Linux/Windows? :P<3



    Users in production environments, small businesses and the like, don't like doing version updates every year. That's what LTS is all about: setting up the server or workstation and actually using it for a while.


    With the configuration worked out and without the need for more features, do you think users are still running OMV2?
    (I think all would be surprised at how many are still running OMV1.)
    I'm guessing a good part of the user base is still on OMV2, and there's nothing wrong with that. With that noted, being able to update the underlying OS of OMV2, Samba, and other packages keeps older versions more secure and viable for longer.


    Bottom line: if older versions are still in use (and they are), it's better if the internals are up to date and secure.


    This is purely from a Debian point of view, but Wheezy was officially EOL on 31st May, so it isn't secure any more. It would be advisable for anyone who does use OMV 1 or 2 to jump. OMV 3 will be EOL, from Debian's POV, on 30th June 2020. Since there's no upgrade path from OMV1/2 onwards, it would be advisable to go straight to OMV 4 if you don't need OMV 3 plugins.


    @ryecoaaron so is this something that's being considered? Ubuntu, I mean. Or is this just a wish-list thing at the moment? :)

    • Official Post

    so is this something that's being considered? Ubuntu, I mean. Or is this just a wish-list thing at the moment?

    If it's being officially considered by me, then yes :) I tried to convince Volker to switch for OMV 4.x.


    • Official Post

    Do you mean permissions for data in your pool, i.e. files and folders - POSIX permissions? If so, these don't differ whether you use FreeNAS or OMV. They're just POSIX permissions.

    You're right about ZoL permissions being the same on FreeNAS and OMV, but the permissions that result from tweaking ZFS parameters (while close) are not actual POSIX permissions. (That's assuming native Linux file systems were actually "POSIX" compliant. They're "mostly" compliant, which is more along the lines of a fuzzy "generally accepted" standard.)


    Linux permissions, such as they are, come in two varieties: basic (owner, group, others) and extended (ACL-based). The ACL add-on is a "patch" that's stored in extended file attributes. There can be interesting and odd effects when basic and extended permissions clash.


    Fortunately, the pool attributes I'm using (acltype=posixacl, aclinherit=passthrough and xattr=sa) are close enough to where everything seems normal. On the other hand, my pool is just static storage.
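    For reference, a sketch of setting those three properties on a hypothetical pool named `tank` (they can also be set at creation time with `-O`):

    ```shell
    zfs set acltype=posixacl tank        # enable POSIX ACL support
    zfs set aclinherit=passthrough tank  # don't restrict inherited permission bits
    zfs set xattr=sa tank                # store xattrs as system attributes (faster ACL lookups)
    zfs get acltype,aclinherit,xattr tank   # verify the result
    ```

    Properties set on the pool root are inherited by child datasets unless overridden.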


    For pure bit rot, yes. But you'd be better off with a mirror for some redundancy.

    I'd argue this one with you until the CoW (file systems) come home. :) I don't see any array as "redundancy". I see an array (regardless of flavor) as a single disk and a single point of failure. I'm only using a ZFS mirror (2x4TB) for convenience, so I can have 4TB of storage with 100% bitrot protection. Functionally (while I'll grant that a mirror is better), I'd see a single 8TB disk with copies=2 as the rough equivalent of a 4TB Z-mirror. In either case, using a Z-mirror or a single 8TB basic volume, I'd have a full data backup on an external host. The backup on the external host is what I call redundancy.
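    A minimal sketch of the copies=2 arrangement described above, with hypothetical device and pool names:

    ```shell
    zpool create single /dev/sdc              # one-disk pool, no mirror
    zfs create -o copies=2 single/documents   # every block written twice on the same disk
    zpool scrub single                        # verify checksums; a bad block
                                              # self-heals from the second copy
    zpool status -v single                    # lists any files that could not be repaired
    ```

    This protects against bit rot but not against the disk itself dying, which is the trade-off against a real mirror.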


    Well, BTRFS will do this, it just doesn't have RAID5/6 and isn't as mature (and VM performance sucks the last I checked) so isn't suitable for all. But yes, if you need RAID5/6, want a proven track record and use VMs, then ZFS is it.

    A BTRFS mirror (RAID1 and 10) still has issues too, as of the latest kernel, 4.16. From my point of view, if the devs will only admit to "mostly OK", it's far from OK.


    Even in a single-disk scenario, I used the BTRFS tools to recover from just a small bit of file corruption and, well, I'm not sure exactly what happened other than zeroing logs/counts and, maybe, resetting checksums on potentially corrupted files that were still corrupt. (And of all the simple things one would want in such a scenario, it proved to be impossible to find out the names of the corrupted files so they could be replaced.) Fortunately, the disk in question was just one of 3 backups, so a real recovery was easy enough.
    For my purposes, that experience alone will be enough to keep BTRFS at arm's length until the as-yet-undiscovered bugs in the file system and its utilities are unearthed and patched. The utilities themselves need more refinement.
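    For what it's worth, the checking described above looks roughly like this with current btrfs-progs (mount point and inode number are hypothetical):

    ```shell
    btrfs scrub start -B /mnt/backup   # -B: run in foreground, print stats when done
    btrfs scrub status /mnt/backup     # error counters from the last scrub
    # checksum errors land in the kernel log; map a reported inode to a path:
    dmesg | grep -i btrfs
    btrfs inspect-internal inode-resolve 257 /mnt/backup   # 257 = example inode
    btrfs check --readonly /dev/sdd1   # offline check, filesystem unmounted
    ```

    Resolving the logged inode numbers is the (admittedly clunky) way to get the file names that were impossible to find back then.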


    I totally agree that it's a bit of a disappointment in terms of its current state and the time it's taken to get there. But it's got almost zero chance of ever being abandoned and will undoubtedly mature to the point where it's like ZFS - yet brings its own goodness (shrinking pools, for one). With the likes of Facebook using BTRFS and contributing heavily to it, it's not going away any time soon. The problem is, companies like Facebook don't use RAID5/6, so that's getting sorted at a snail's pace.

    While I'll be the first to admit that I can't predict the future, I still think the future of BTRFS is far from certain. There's a principle concerning "time" that comes into play when excessive amounts of it are wasted: it's called "being overtaken by events". There have been file systems with great promise that didn't deliver and fell by the wayside while others took their place. ReiserFS/Reiser4 comes to mind. It had SUSE, a heavy hitter, and other corporate sponsors as well. (On the other hand, when the developer murdered his wife, "that" didn't do much to help the project. :) )


    Bottom line: I'm going to stick with ZFS because, for my purposes, there's no other viable choice.


    Still, I can't help but wonder at the state of file systems in general. As areal densities keep increasing, with OEMs stuffing more bits onto platters by packing them in tighter, one would think that bitrot protection would have come to the forefront when drive capacities exceeded 2TB. With areal densities now reaching insane levels, in the 8TB-and-up range, I believe "bitrot" will become a much better-known term in times to come.

  • Hi,


    Just to let you know that I have successfully updated my ZFS pool from kernel 4.14/ZFS 0.7.6 to kernel 4.16/ZFS 0.7.9.


    1st => Update ZFS from 0.7.6 to 0.7.9, then reboot the NAS. The pool cannot be mounted (as expected), but don't panic ;)
    2nd => Update from kernel 4.14 to kernel 4.16, then reboot the NAS. The pool is mounted and healthy.


    Thanks all for your advice.

    Lian Li PC-V354 (with Be Quiet! Silent Wings 3 fans)
    ASRock Rack x470D4U | AMD Ryzen 5 3600 | Crucial 16GB DDR4 2666MHz ECC | Intel x550T2 10Gb NIC

    1 x ADATA 8200 Pro 256GB NVMe for System/Caches/Logs/Downloads
    5 x Western Digital 10TB HDD in RAID 6 for Data
    1 x Western Digital 2TB HDD for Backups

    Powered by OMV v5.6.26 & Linux kernel 5.10.x

  • If it's being officially considered by me, then yes :) I tried to convince Volker to switch for OMV 4.x.

    I wonder if this will happen for OMV 5.... @votdev is this a possibility? :)


    The ACL add-on is a "patch" that's stored in extended file attributes. There can be interesting and odd effects when basic and extended permissions clash.


    I don't use ACLs. :) That might be why I see the permissions in my pools as less of a headache.


    I'd argue this one with you until the CoW (file systems) come home. :) I don't see any array as "redundancy". I see an array (regardless of flavor) as a single disk and a single point of failure. I'm only using a ZFS mirror (2x4TB) for convenience, so I can have 4TB of storage with 100% bitrot protection. Functionally (while I'll grant that a mirror is better), I'd see a single 8TB disk with copies=2 as the rough equivalent of a 4TB Z-mirror. In either case, using a Z-mirror or a single 8TB basic volume, I'd have a full data backup on an external host. The backup on the external host is what I call redundancy.


    But you're sacrificing a ton of possible speed by not using multiple drives. Also, a failing drive is more likely to have both copies wrecked. copies=2 is a special-use-case parameter, if you ask me. I only use it for drives that are offline the majority of the time and stored in a space-restricted location - i.e. the bank.


    Even in a single-disk scenario, I used the BTRFS tools to recover from just a small bit of file corruption and, well, I'm not sure exactly what happened other than zeroing logs/counts and, maybe, resetting checksums on potentially corrupted files that were still corrupt. (And of all the simple things one would want in such a scenario, it proved to be impossible to find out the names of the corrupted files so they could be replaced.) Fortunately, the disk in question was just one of 3 backups, so a real recovery was easy enough.
    For my purposes, that experience alone will be enough to keep BTRFS at arm's length until the as-yet-undiscovered bugs in the file system and its utilities are unearthed and patched. The utilities themselves need more refinement.


    While I'll be the first to admit that I can't predict the future, I still think the future of BTRFS is far from certain. There's a principle concerning "time" that comes into play when excessive amounts of it are wasted: it's called "being overtaken by events". There have been file systems with great promise that didn't deliver and fell by the wayside while others took their place. ReiserFS/Reiser4 comes to mind. It had SUSE, a heavy hitter, and other corporate sponsors as well. (On the other hand, when the developer murdered his wife, "that" didn't do much to help the project. :) )
    Bottom line: I'm going to stick with ZFS because, for my purposes, there's no other viable choice.


    Still, I can't help but wonder at the state of file systems in general. As areal densities keep increasing, with OEMs stuffing more bits onto platters by packing them in tighter, one would think that bitrot protection would have come to the forefront when drive capacities exceeded 2TB. With areal densities now reaching insane levels, in the 8TB-and-up range, I believe "bitrot" will become a much better-known term in times to come.


    Yeah, it's BTRFS headaches like that which make me stick with ZFS for the time being. A friend of mine thinks HAMMER is great, but he's not a Linux guy. I know very little about it, tbh, but from what I can see it looks fairly good.

  • The 4.1.7 version of omv-extras has buttons to hold/unhold the current kernel/headers (linux-image-$arch and linux-headers-$arch) and disable/enable the backports repo.

    How can I know if the current kernel is in hold status or not?
    There is no visual indication in the GUI.


    Maybe you could disable the button that doesn't apply, i.e.:
    - if the kernel is in hold status, only the Unhold button is enabled, and vice versa.
    Regards

    OMV 4.x. OMV-Extras ZFS iSCSI Infiniband. Testing OMV 5.1. Testing OMV arm64

    Edited once, last by vcp_ai ()

  • How can I know if the current kernel is in hold status or not? There is no visual indication in the GUI.


    Maybe you could disable the button that doesn't apply, i.e.:
    - if the kernel is in hold status, only the Unhold button is enabled, and vice versa.
    Regards


    You can see what is held with:



    Code
    apt-mark showhold


    For example, I see:



    Code
    linux-headers-amd64
    linux-image-amd64


    Not sure how easy the variable hold button would be to implement, but another option is a "Show Status" or "Show Holds" button that gives the output of the above command.
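    The buttons boil down to `apt-mark` (as far as I can tell); a sketch of doing the same by hand, using the package names mentioned earlier in the thread:

    ```shell
    apt-mark hold linux-image-amd64 linux-headers-amd64     # pin the current kernel
    apt-mark showhold                                       # list held packages
    apt-mark unhold linux-image-amd64 linux-headers-amd64   # release them again
    ```

    Handy when you want to check or change the hold state from SSH instead of the GUI.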

    • Official Post

    How can I know if the current kernel is in hold status or not?

    Click it again.


    Maybe you could disable the button that doesn't apply, i.e.:
    - if the kernel is in hold status, only the Unhold button is enabled, and vice versa.

    Way too much work for something that will be rarely used.


    • Official Post

    How easy would a 'Show holds' button be?

    What if I add a tab to the apttool plugin that shows packages on hold?


    • Official Post

    Perfect

    apttool 3.5 in repo.


    • Official Post

    I don't use ACLs. :) That might be why I see the permissions in my pools as less of a headache.

    I came to the same conclusion a while back - use basic permissions only. When I noticed the two could conflict (basic denies where ACL allows), I read a white paper on the possibilities and potential consequences. Bottom line: if both are used, the two permission types (basic and ACLs) can create access "weirdness". Regardless, it's my belief that basic permissions are sufficient if data sets are segregated properly.


    But you're sacrificing a ton of possible speed not using multiple drives. Also, a failing drive is more likely to have both copies wrecked. Copies=2 is a special use case parameter, if you ask me. I only use it for drives that are offline the majority of the time, where they are stored in a space restricted location - ie. the bank.

    copies=2 could be used for a single sensitive filesystem like "Documents", with the rest of a basic volume not getting the benefit of bitrot protection. That would save disk space but, if that's the goal, the user is already on a precarious path.


    Until something better and proven is available, for now and the foreseeable future, I'll be using ZFS.

  • Guys, looking at the status of the scrubs, I noticed that I can upgrade my pool. Should I do it after the scrub? I don't have a backup of the data on my ZFS mirror - is it a risky operation?

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • Yes, you can upgrade.


    It's not risky; just be sure you don't need to mount the pool on another OS that doesn't support the upgraded pool, e.g. BSD.
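    A sketch of the sequence (the pool name `tank` is hypothetical; substitute your own):

    ```shell
    zpool status tank    # confirm the scrub finished with no errors first
    zpool upgrade        # list pools with feature upgrades available
    zpool upgrade tank   # enable the new feature flags on this pool
    # note: this is one-way - older ZFS implementations (e.g. an older BSD)
    # can no longer import the pool afterwards
    ```

    Waiting for the scrub to finish before upgrading is the safe order, since a clean scrub confirms the pool is healthy going in.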
