[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0
    • Is there an updated procedure somewhere in this 40 page thread for installing ZFS? If so, can someone save me the trouble of flipping through 40 pages?

      The original procedure as documented 3 years ago does not seem to apply to the most recent release. I go into the update management and don't see any "extras"

      thanks
    • When I've run into "Could not get a lock", a reboot cleared the issue. There may be other ways to do it.

      Video Guides | New User Guide | Docker Guides | Pi-hole in Docker
      Good backup takes the "drama" out of computing.
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      OMV 4.1.13, Intel Server SC5650HCBRP, 32GB ECC, 16GB USB boot, UnionFS+SNAPRAID
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
    • GreatBigGeek wrote:

      Is there an updated procedure somewhere in this 40 page thread for installing ZFS? If so, can someone save me the trouble of flipping through 40 pages?

      The original procedure as documented 3 years ago does not seem to apply to the most recent release. I go into the update management and don't see any "extras"

      thanks
      In basic terms, installing the ZFS plugin will give you "ZFS". (And the installation may take a while.) After that, it's a matter of having wiped drives available and creating a pool type of your choice.
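
      (For reference, a pool can also be created from the command line. The sketch below is a minimal example of a mirrored pool; the pool name and device names are placeholders, and the plugin does the equivalent through the web UI. Using /dev/disk/by-id paths is generally safer than the short device names shown here.)

      Source Code

      # Minimal sketch only - pool name and devices are placeholders, adjust to your wiped drives
      zpool create ZFS1 mirror /dev/sdb /dev/sdc
      zpool status ZFS1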

      When the pool is created, I run the following command lines as root. (Substitute the name of your pool for ZFS1.)

      Source Code

      zfs set aclinherit=passthrough ZFS1
      zfs set acltype=posixacl ZFS1
      zfs set xattr=sa ZFS1
      zfs set compression=lz4 ZFS1
      (Note: The above can also be done in the ZFS plugin, in the Overview tab, by clicking on the pool name and the Edit button.)

      The above gives you the equivalent of Linux extended file attributes and permissions along with compression. This should be done after the pool is created, but before data is copied, to avoid having files with mixed attributes.

      To get the best utility from ZFS, it's better to create child file systems, under the pool, in the plugin. ("File systems", as ZFS terms them, are the equivalent of root folders but have assignable ZFS properties.) This is done in the Overview tab by clicking on your pool name, then the Add Object button, and naming the filesystem.
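
      (If you prefer the command line, a rough equivalent is sketched below; the dataset names are only examples.)

      Source Code

      # Create child file systems (datasets) under the pool - names are examples only
      zfs create ZFS1/media
      zfs create ZFS1/documents
      # Pool properties are inherited, but can be overridden per dataset if needed
      zfs set compression=off ZFS1/media
      # List the pool and its child file systems
      zfs list -r ZFS1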

      Hope this helps.


    • flmaxey wrote:

      When I've run into "Could not get a lock", a reboot cleared the issue. There may be other ways to do it.
      That’s pretty much the easiest way to sort it unless you know what process is holding the lock. You can have a look with htop or ps, but if the system can be rebooted while you get a coffee, why not :)
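
      (If you’d rather look before rebooting, a sketch like the one below usually shows what’s holding things up - the lock file paths can vary between releases, so treat them as examples.)

      Source Code

      # Show which process, if any, holds the dpkg/apt locks (paths may differ by release)
      sudo fuser -v /var/lib/dpkg/lock /var/lib/apt/lists/lock
      # Or simply look for running apt/dpkg processes
      ps aux | grep -E 'apt|dpkg' | grep -v grep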

      Just for info: On Ubuntu, a common cause of this is the apt-daily services (which I always disable). They don’t come as standard with Debian though, so it’ll be something else.
    • ellnic wrote:

      They don’t come as standard with Debian though, so it’ll be something else.
      OMV adds a daily apt update.
      omv 4.1.15 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      ellnic wrote:

      They don’t come as standard with Debian though, so it’ll be something else.
      OMV adds a daily apt update.
      I didn’t realise that. All this time using it, and I never realised. :D

      Is it still apt-daily? Just for info purposes:

      Source Code

      sudo systemctl mask apt-daily.service
      sudo systemctl mask apt-daily.timer
      sudo systemctl mask apt-daily-upgrade.service
      sudo systemctl mask apt-daily-upgrade.timer
    • ellnic wrote:

      Is it still apt-daily?
      Nope, cron-apt. Look at /etc/cron.d/cron-apt
    • So yeah, basically comment out line 5 or adjust it. I personally do most stuff manually, so I have no need for it. I’ll probably disable it by commenting it out and leave it installed in case I change my mind. Still can’t believe I didn’t notice this in 3 or so years of OMV use. X/ Or wait, it’s been more than 3 years. See, this is how fried I am right now. Too much on and no sleep does not make an attentive ellnic :S
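
      (For anyone who wants to do the commenting-out non-interactively, something like the sketch below would work - it assumes, as above, that line 5 of /etc/cron.d/cron-apt is the entry to disable, so check the file first.)

      Source Code

      # Keep a copy of the original, then comment out line 5 (verify the line number first!)
      sudo cp /etc/cron.d/cron-apt /root/cron-apt.bak
      sudo sed -i '5 s/^/#/' /etc/cron.d/cron-apt
      # Confirm the result
      cat /etc/cron.d/cron-apt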
    • There are lots of ways to do this, with or without ZFS on the separate hard drive. Rsnapshot is probably the best of the simpler approaches. It provides versioned backup, similar to ZFS snapshots, with targets that can be set on another drive.

      Unfortunately, it's a bit on the labor-intensive side to set up, in that rsnapshot requires pairs of shared folders for source and target - one from a filesystem in your ZFS pool and another shared folder on the destination drive.
      Outside of the entire server crashing, with ZFS snapshots and rsnapshot backups on another drive you should have a recovery path from nearly any data disaster.
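
      (For a rough idea of what the rsnapshot side looks like, a minimal /etc/rsnapshot.conf sketch follows - the paths and retention counts are placeholders, fields must be separated by tabs, and the OMV plugin generates the equivalent from the shared folder pairs described above.)

      Source Code

      # Minimal rsnapshot.conf sketch - fields are TAB-separated; paths are placeholders
      snapshot_root   /srv/dev-disk-by-label-backup/rsnapshot/
      retain  daily   7
      retain  weekly  4
      # Source (a filesystem in the ZFS pool) backed up into a "localhost" subdirectory
      backup  /ZFS1/documents/        localhost/

      A cron job then runs "rsnapshot daily" and "rsnapshot weekly" to rotate the versioned copies.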

      If you have questions, others might offer an opinion. Otherwise, I'll be back next weekend.

      (BTW: Regrets, from your other post, regarding the data loss.)

    • flmaxey wrote:

      As is the case with many articles out there, the above is just an opinion piece written by someone with a reasonable grasp of English, but a lack of decorum. (It's a shame when someone tries to emphasize points using crass language, achieving the exact opposite effect.)

      In this article, there's no verifiable data present and no peer reviewed white papers referenced. It's simply a compilation of "like minded" opinion pieces, where all jump to baseless conclusions. With a few years of experience to draw on, I've noticed that just because like minded people get together and express a common opinion or belief, it doesn't make them correct. History is littered with plenty of examples - Jonestown and Kool-aid come to mind.

      Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades. The problem only gets worse as today's stupendous sized drives grow even larger, with areal densities reaching insane levels. This is why integrating error detection into the file system itself is not only a good idea, as storage media and data stores grow, it will soon be a requirement. EXT4 and the current version of NTFS, as examples, will either become history or they'll be modified for error detection and some form of correction. While that's my opinion, I see this as inevitable. The only real question is, as I see it, what can be done now?

      Where ZFS is concerned, it was developed by Sun Corp specifically for file servers, by Computer Scientists who don't simply express an unsupported opinion. They developed something that works both mathematically and practically. Their work is supported by reams of data, has been peer reviewed and the physical implementations have been exhaustively tested. So, if one is to believe a group of Sun Corp Computer Scientists or "Jody Bruchon", I think I'll go with the scientists.
      On the "damage in RAM" topic - ECC RAM does a reasonable job of correcting memory error issues and server grade hardware is always a good idea. This is nothing new and there's plenty of data to support.
      ____________________________________________________

      The point on backup is well taken however. While I have a couple zmirrors strictly for bitrot correction (yes - it's a verified phenomenon that can be controlled), I have a total of 4 complete copies of data on three different hosts. (One host to be relocated, soon, to an outbuilding.)

      On the other hand, I'm sure there's an article out there, somewhere, that makes the case that "backups" don't really protect data - that it's just false peace of mind for "backup fan-boys" or "techno-nerds".
      Attacking my presentation and my intelligence doesn't attack my facts. Do you have some sort of substantive complaint about what I said? Let's find out...ah, immediately we barrel into "citation needed" territory, so I'm betting the answer is "no." Most of my statements come from practical experience in the field rather than reading something on the internet. What do you expect me to cite a peer-reviewed scientific paper for, exactly? What research have you done into what research exists, and can you cite anything within your strict standards that was published in the last 16 years on the subject of bit rot and data corruption, regardless of which "side" it may seem to be arguing for? I'd love to see you show off the amazing research chops you're flexing and produce some proof, but it's easier to sit back and say "pssshhh, that person doesn't know what they're talking about, what a fool!" and go back to whatever you were doing, emotionally validated with minimal effort.

      "Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion, specifically for the implied part where said media degradation happens within a short enough time span to be of significance, and also for the next statement about how "the problem only gets worse as today's stupendous sized drives grow even larger."

      "Where ZFS is concerned, it was developed by Sun Corp specifically for file servers, by Computer Scientists who don't simply express an unsupported opinion." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion. Your logical fallacy of appeal to authority doesn't hold any water. Computer scientists are just as capable of being wrong as anyone else.

      To quench your thirst for a research paper, here's one that's at the bottom of the Wikipedia page on ZFS that puts the level of risk into clear perspective: An Analysis of Data Corruption in the Storage Stack, by Lakshmi N. Bairavasundaram et al. Some relevant quotes:

      "Of the total sample of 1.53 million disks, 3855 disks developed checksum mismatches." Note that this is 0.25% of all disks.

      "There is no clear indication that disk size affects the probability of developing checksum mismatches."

      "RAID reconstruction encounters a non-negligible number of checksum mismatches." Someone will bring this line up anyway, so I'm addressing it now: the 0.25% of disks with checksum mismatches will have those mismatches picked up by a RAID scrub, meaning a regularly scrubbed array will catch but not fix errors, just like ZFS without RAID-Z! A RAID-Z setup will auto-fix this, but it has already been established that the probability of having a checksum mismatch at all is extremely low.

      "There are very few studies of disk errors. Most disk fault studies examine either drive failures or latent sector errors." How can you cite studies that probably do not exist? If they exist, are they working with a statistically significant data set? More importantly, this study is from February 2008, over 10 years ago; is it still relevant given the drastic differences in technologies used between 2008 and 2018?

      The scientists support what I've said. Keep in mind that at no point did I say that ZFS should not be used. The original point of my article was to denounce ZFS zealots and fanboys that run around data storage forums advocating for ZFS inappropriately and even dangerously. If you want to use ZFS then do it. It's not my decision to make, and frankly, I have no reason to care what your personal choice is. My problem is when you tell newbies who want advice about filesystem options to use ZFS because it has checksums and fixes your data errors and bit rot is a thing that happens all the time, but you leave out that it can't fix errors with only checksums, requires RAID-Z to fix errors on-the-fly, and the actual chances of being slammed with bit rot within 14 months of deployment is roughly 0.003855 million out of every 1.53 million disks. It's an irresponsible thing to do and sneering at me for calling others out for such irresponsible advocacy doesn't contribute anything meaningful to the discussion.

      I want to thank you for at least making an effort to discuss some real points, unlike the next person being quoted.

      ellnic wrote:

      Well IDK who Jody Bruchon is... nor does anyone else. He's certainly no one the industry recognises as a file system expert - or has even heard of for that matter. He looks like some guy with his own name.com, a blog, and he likes youtube.. and a bad opinion on everything. Don't forget the awesome films he's directed. It was quite comical quoting such an awful reference.. I think it made the point perfectly.
      [IMG:https://i.imgflip.com/1mszxi.jpg]

      Here's the big secret you missed in the school of life: it doesn't matter who Jody Bruchon is because the truth of a statement does not hinge on the popularity or reputation of the person saying it (logical fallacies of appeal to popularity, ad hominem, and appeal to authority). You were so offended by my disagreement with something that you are too emotionally invested in that you went to the trouble to actively search for things unrelated to my ZFS article to sneer at (thanks for increasing my YouTube view counts, by the way!), yet you haven't actually said anything about ZFS, RAID, data integrity, etc. nor refuted one single thing I wrote. At least flmaxey tried to present some arguments, lacking in scientific citations as they were. As for your "not a file system expert" nose-thumbing, it would be a real shame if you found out that in addition to building Linux RAID servers that measure their uptime in years for SMEs and making low-quality YouTube videos for you to mock copiously, I also write file systems. I'd like to see a few links to the file system code you've worked on since you're so much smarter than the rest of us. Come on, let's see your work, big shot! If you're going to play the ad hominem game, put something solid behind your side of the wang-measuring contest you're trying to organize.

      Or--and I know this might sound crazy--you could actually point out what I've said that you disagree with and explain why you disagree with some technical details. That would be more useful (and less publicly embarrassing for you) than talking trash about me right after saying that you know nothing about me.
    • The first thing all must understand - this is a forum of opinions, not computer science. Opinions are what they are. Everyone has one.

      jodybruchon wrote:

      flmaxey wrote:

      As is the case with many articles out there, the above is just an opinion piece written by someone with a reasonable grasp of English, but a lack of decorum. (It's a shame when someone tries to emphasize points using crass language, achieving the exact opposite effect.)
      In this article, there's no verifiable data present and no peer reviewed white papers referenced. It's simply a compilation of "like minded" opinion pieces, where all jump to baseless conclusions. With a few years of experience to draw on, I've noticed that just because like minded people get together and express a common opinion or belief, it doesn't make them correct. History is littered with plenty of examples - Jonestown and Kool-aid come to mind.
      Attacking my presentation and my intelligence doesn't attack my facts. Do you have some sort of substantive complaint about what I said? Let's find out...ah, immediately we barrel into "citation needed" territory, so I'm betting the answer is "no." Most of my statements come from practical experience in the field rather than reading something on the internet. What do you expect me to cite a peer-reviewed scientific paper for, exactly? What research have you done into what research exists, and can you cite anything within your strict standards that was published in the last 16 years on the subject of bit rot and data corruption, regardless of which "side" it may seem to be arguing for? I'd love to see you show off the amazing research chops you're flexing and produce some proof, but it's easier to sit back and say "pssshhh, that person doesn't know what they're talking about, what a fool!" and go back to whatever you were doing, emotionally validated with minimal effort.
      Using the word "attack" indicates emotional involvement and a propensity toward hyperbole. Being a military vet and having a good understanding of the word, I can assure you, you were not "attacked". Nor did I mention the word "intelligence". But I will say this again: those who express opinions using foul or crass language and slang jargon, while trying to emphasize a point more strongly, do themselves and their readers a disservice.

      Regarding facts:
      I've found, in the fullness of time, that the word "fact" is loosely used in modern times. It seems to have differing definitions depending on who is asked, as opposed to the word as defined by Merriam-Webster. Many seem to have a personal set of "facts" which, in reality, are their own opinions in disguise.

      I stand by what I said above, exactly as written. As to the content, readers of this forum can decide for themselves.

      jodybruchon wrote:

      flmaxey wrote:

      1. Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades. The problem only gets worse as today's stupendous sized drives grow even larger, with areal densities reaching insane levels. This is why integrating error detection into the file system itself is not only a good idea, as storage media and data stores grow, it will soon be a requirement. EXT4 and the current version of NTFS, as examples, will either become history or they'll be modified for error detection and some form of correction. While that's my opinion, I see this as inevitable. The only real question is, as I see it, what can be done now?


      2. Where ZFS is concerned, it was developed by Sun Corp specifically for file servers, by Computer Scientists who don't simply express an unsupported opinion. They developed something that works both mathematically and practically. Their work is supported by reams of data, has been peer reviewed and the physical implementations have been exhaustively tested. So, if one is to believe a group of Sun Corp Computer Scientists or "Jody Bruchon", I think I'll go with the scientists. /---/
      1. "Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion, specifically for the implied part where said media degradation happens within a short enough time span to be of significance, and also for the next statement about how "the problem only gets worse as today's stupendous sized drives grow even larger."

      2. "Where ZFS is concerned, it was developed by Sun Corp specifically for file servers, by Computer Scientists who don't simply express an unsupported opinion." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion. Your logical fallacy of appeal to authority doesn't hold any water. Computer scientists are just as capable of being wrong as anyone else.
      1. On the first point:
      The statements made regarding storage media and file systems are clearly labeled, correctly, as my opinion (highlighted). An opinion is what it is - it doesn't come with a white paper. Need anything more be said?
      Regarding your statement, "the implied part where said media degradation happens within a short enough time span to be of significance". Did you read my paragraph? Where did that come from? (That's a rhetorical question.)

      2. On the second point: there are so many ZFS references that it would be pointless to try to cite them all. But a good start can be found -> here. 171 references can be found at the bottom, complete with a bibliography and external links that lead to still more credible references among ZFS subject matter experts and academia.

      - Where Sun Corp's computer scientists are concerned:
      In addition to the normal constraints of the scientific method and defense of reputation, the scientists involved in designing ZFS had additional checks and balances on their work, such as a Board of Directors and the corporate bottom line. To even suggest that a set of experienced computer scientists and the Board of Directors at Sun Corp., with their work reviewed collectively by their peers and the computer science community as a whole, all somehow got it wrong defies common sense and logic. Again, my opinion.

      ____________________________________________________________________________________________

      To get back to the focus of this forum - where practical solutions are preferred.

      I will concede what I perceive (?) to be your main point:
      That seems to be that having a checksum assigned to a file is, in itself, of questionable value. If any "scrubbing" file system encounters a checksum mismatch, what is done about it? In many cases, nothing. As a consequence, some feel that there's little benefit in file checksums. I disagree. Even if the outcome is only "corruption detection", that's still far better than a file system that will allow silent data corruption with no warning at all.

      Addressing and correcting magnetic media bitrot with ZFS is a matter of implementation:
      My approach is to use a ZFS mirror. During a scrub, if a checksum mismatch is detected on one file, the second copy on the mirrored drive will, to a near certainty, still match its checksum. In a zmirror scrub, the clean copy overwrites the corrupted file. And with the scrub's statistics noted, I'd have a heads-up that a drive might need attention in the near future.

      This can be tested. Boot with a live distro and use a sector editor to change a byte in one file, on one disk. Afterward, a scrub will correct it. This was an anecdotal test, to be sure, but it works and that's all the proof I need. While there's no such thing as "perfect", I know of no better solution for addressing silent data corruption than ZFS.
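
      (For anyone who wants to try it, the scrub itself is only a couple of commands - ZFS1 is the example pool name from earlier in the thread.)

      Source Code

      # Start a scrub on the example pool, then check for repaired and checksum errors
      zpool scrub ZFS1
      zpool status -v ZFS1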

      _______________________________________________________________________________________________

      For bitrot protection that can be realistically implemented in a home or a small business, I've looked at what's out there. BTRFS is still a mess (my opinion) and I haven't tested SNAPRAID's checksummed files, to see if files are recovered after being slightly corrupted.

      So, until something better comes along, ZFS appears to be "it". (Again, my opinion.)

      To wrap this up:
      If you know of something better for the detection and correction of magnetic media data corruption, that can be implemented at home or in small business environments, I'm interested.

      Otherwise, I think we're done here.


    • "The first thing all must understand - this is a forum of opinions, not computer science." - You don't get to say this after you appealed so heavily to the authority of "computer scientists." If you think that a statement's validity hinges on the credentials of the person making it then you are objectively incorrect. Peter Gutmann's paper on recoverability of data from magnetic media from which the "DoD secure wipe" ritual was spawned aged rather poorly, and eventually Gutmann himself penned a retrospective that clarified where the paper wasn't quite right and how changes in hard drive technology (specifically using PRML instead of MFM, and everything) made a single wipe pass sufficient to destroy the data on any modern hard drive beyond retrieval, yet multi-pass secure wipes on PRML and EPRML disk drives are strongly advocated by data security experts of all kinds even today.

      Regarding your statements on "attack" and "facts," you are attempting to argue semantics to the point of splitting hairs. This is a clear exposition of bad faith. If you do not understand that the word "attack" may have multiple context-sensitive meanings then that's not my problem. Blathering on and on about unspecified people using "facts" incorrectly does nothing to bolster your argument. Nothing up to this point in your response has any relevance to the topic being discussed. It is a thinly veiled attempt to shift perception of the discussion which also means it was a waste of time for both of us.

      Regarding your declaration of opinion as a shield: you gave no such charitable reading to my original article and derided it for not presenting scientific papers to back it up. This absolutely reeks of a double standard: when you say something it's just an opinion that doesn't need to be held to a rigorous standard, yet when I say something that you don't agree with we're now apparently in the realm of hard scientific facts that require a very high standard of peer-reviewed scientific papers to hold any muster. If we're simply trading opinions here then nothing I've ever said needs peer-reviewed scientific papers to back it up either.

      You want to know where I pulled out this implied statement: "the implied part where said media degradation happens within a short enough time span to be of significance." First of all, implied means you didn't explicitly say it. Asking where I got that from is disingenuous. Saying your question is rhetorical won't save you here either. You said that increasing media sizes meant that degradation of that media was inevitable. If that degradation is 50 years away from bringing about any harmful effect on the stored data then it's well beyond the point of physical drive failure and therefore doesn't matter in the slightest. Your statement about degradation implies that the degradation happens soon enough to fall within the useful life of the drive. The study that I cited explicitly says that there is no distinguishable correlation between drive capacity and data loss.

      ZFS references: did you even read my post? I explicitly told you that the study I cited that supports my statements is one of the references at the bottom of the article you linked! I'm starting to think that you only read what you wanted to read and ignored everything else that I typed.

      The entirety of the "computer scientists at Sun" section is the logical fallacy of appeal to authority all over again. If you don't understand what that is, look it up. If you still think I'm wrong, go to Slashdot and read the comments for a while. There is no shortage of accounts about corporations and managers being disconnected in varying degrees from the reality that the technical workers face. I get that you have a lot of respect for Sun Microsystems but the fact is that everyone makes mistakes, even the most highly regarded scientists in any given field. Regardless, I don't see what Sun has to do with any of this. I wrote an article with a defined purpose and I've re-stated that purpose here for extra clarity. I have not said that ZFS is a bad thing or that people should not use it at all, and I've written several times in my article about what benefits ZFS is supposed to bring to the table and what is needed to actually reap those benefits. I've also pointed out several aspects of a computer system that already provide built-in error checking and correction which greatly reduces the chances of ZFS checksum-based error detection ever having an error to detect in the first place, and to top it all off I've pointed out parts of a computer system that lack these integrity checks and that can lose data in ways that ZFS cannot possibly protect against and the study I cited points out all of these too. Your discussion about Sun Microsystems is a red herring; it has nothing to do with ZFS today and addresses zero of the points made in my article.

      Look, the bottom line is that you like ZFS and didn't actually read my article in good faith and have no intentions to do so, and I have no reason to care if you use ZFS or not. At the risk of becoming redundant, my entire purpose in writing my original article was solely to push back against irresponsible advocacy for ZFS. If you're going to tell a noob to use ZFS, you need to tell them about more than "it has checksums and protects against bit rot." Some enterprising young fellow building a storage server for the first time will slap ZFS on there and assume that they're protected against bit rot when it is NOT that simple. ZFS detects errors but does not correct them at all if RAID-Z isn't in use, but the new guy doesn't know that since few people screaming about the virtues of ZFS mention anything beyond the marketing points. He'll build a machine, end up with bit rot, and permanently lose data because "it was protected" except it wasn't. If you're going to tell people to use ZFS, you must tell them the entire story. That's ignoring the performance, complexity, and licensing downsides which can be annoying but won't lead to irreparable data loss.

      Do I know of something better? No, I don't. As with all things in life, the decision to use ZFS or not is a risk management decision, and ZFS checksums and RAID-Z auto-healing are only part of the ZFS risk equation. To illustrate: ZFS is a complex beast that comes with radically different administration tools and functionality than most other filesystems available, so one concern is that of user-centric risks such as lack of knowledge or experience with the tools and mistakes made with those tools that lead to data loss. No, btrfs isn't any better; I would never trust btrfs with anything important or performance-critical, so there is no need to bring btrfs up any further because I would be the last person to defend btrfs. My personal choice is clear: XFS on top of Linux md RAID-5 with external backups and automatic array scrubs. My original article explains why; I won't duplicate any of it here.
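
      (For anyone curious, a bare-bones sketch of that kind of setup is below - the device names and array layout are hypothetical, and a real build obviously needs more planning than a handful of commands run as root.)

      Source Code

      # Hypothetical 3-disk md RAID-5 with XFS on top - adjust device names to your hardware
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
      mkfs.xfs /dev/md0
      mkdir -p /mnt/storage && mount /dev/md0 /mnt/storage
      # Periodic array scrub (normally scheduled via cron or a systemd timer)
      echo check > /sys/block/md0/md/sync_action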

      You're right. We're done here. Unless another user wants to chime in, I think I'll let these two posts speak for themselves, for better or worse. If I have anything else to say, I'll add it to the original article where it belongs.