[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • When I've run into "Could not get a lock", a reboot cleared the issue. There may be other ways to do it.

That's pretty much the easiest way to sort it unless you know what process is using it. You can have a look with htop or ps, but if the system can be rebooted while you get a coffee, why not :)


    Just for info: On Ubuntu, a common cause of this is the apt-daily services (which I always disable). They don’t come as standard with Debian though, so it’ll be something else.
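If you'd rather not reboot, a couple of commands can usually show which process is holding the lock. A minimal sketch, assuming the standard Debian apt/dpkg lock file locations (they may differ slightly on your system):

    Code
    # Show any process holding the apt/dpkg lock files
    sudo lsof /var/lib/dpkg/lock-frontend /var/lib/dpkg/lock /var/lib/apt/lists/lock
    sudo fuser -v /var/lib/dpkg/lock-frontend
    # Or just look for running apt/dpkg processes
    ps aux | grep -E 'apt|dpkg' | grep -v grep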

• Official Post

They don’t come as standard with Debian though, so it’ll be something else.

    OMV adds a daily apt update.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • OMV adds a daily apt update.

    I didn’t realise that. All this time using it, and I never realised. :D


    Is it still apt-daily? Just for info purposes:


    Code
    sudo systemctl mask apt-daily.service
    sudo systemctl mask apt-daily.timer
    sudo systemctl mask apt-daily-upgrade.service
    sudo systemctl mask apt-daily-upgrade.timer
• Official Post

    Is it still apt-daily?

    Nope, cron-apt. Look at /etc/cron.d/cron-apt


So yeah, basically comment out line 5 or adjust it. I personally do most stuff manually so I have no need for it. I'll probably disable it by commenting it out and leave it installed in case I change my mind. Still can't believe I didn't notice this in 3 or so years of OMV use. X/ Or wait, it's been more than 3 years. See, this is how fried I am right now. Too much on and no sleep does not make an attentive ellnic :S
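For anyone else who wants to do the same, a minimal sketch (the exact schedule line and where it sits in the file can vary between installs, so check the file before editing):

    Code
    # Show OMV's daily apt job
    cat /etc/cron.d/cron-apt
    # Disable it by putting a '#' in front of the active (non-comment) line,
    # which leaves the package installed in case you change your mind later
    sudo nano /etc/cron.d/cron-apt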

Hi ellnic,


I have tried this before. No luck either. It looks like OMV didn't properly unmount the pool, so the folders are already there when it boots up again. I used cron to remove the folders and then import the pool as a workaround for now.
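Roughly what that workaround looks like, assuming a pool called zpool01 mounted at /zpool01 (names are placeholders, and rmdir only removes empty directories, so real data is never touched):

    Code
    # Remove the stale, empty mountpoint directories left by the unclean unmount
    rmdir /zpool01/* /zpool01 2>/dev/null
    # Then import the pool so ZFS can mount it cleanly
    zpool import zpool01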

Hello, I'm looking for a tutorial on how to automate backing up snapshots from my ZFS pool to a different drive inside my OMV server. Currently I already run the automated version created by @flmaxey and would like to have this extra security in case there is a catastrophic failure of any of the disks in my ZFS pool ...
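Until someone posts a full tutorial, the usual building block for this is zfs send piped into zfs receive. A minimal sketch, assuming a source pool called tank and a second pool called backup on the other drive (the pool names and snapshot labels are placeholders):

    Code
    # Take a recursive snapshot of the source pool
    zfs snapshot -r tank@backup-1
    # First run: full copy of that snapshot into a dataset on the backup pool
    zfs send -R tank@backup-1 | zfs receive -u backup/tank
    # Later runs only send what changed between two snapshots
    zfs snapshot -r tank@backup-2
    zfs send -R -I tank@backup-1 tank@backup-2 | zfs receive -Fu backup/tank

Wrapping something like this in a script and running it from OMV's scheduled jobs would automate it.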

  • Attacking my presentation and my intelligence doesn't attack my facts. Do you have some sort of substantive complaint about what I said? Let's find out...ah, immediately we barrel into "citation needed" territory, so I'm betting the answer is "no." Most of my statements come from practical experience in the field rather than reading something on the internet. What do you expect me to cite a peer-reviewed scientific paper for, exactly? What research have you done into what research exists, and can you cite anything within your strict standards that was published in the last 16 years on the subject of bit rot and data corruption, regardless of which "side" it may seem to be arguing for? I'd love to see you show off the amazing research chops you're flexing and produce some proof, but it's easier to sit back and say "pssshhh, that person doesn't know what they're talking about, what a fool!" and go back to whatever you were doing, emotionally validated with minimal effort.


    "Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion, specifically for the implied part where said media degradation happens within a short enough time span to be of significance, and also for the next statement about how "the problem only gets worse as today's stupendous sized drives grow even larger."


    "Where ZFS is concerned, it was developed by Sun Corp specifically for file servers, by Computer Scientists who don't simply express an unsupported opinion." - There's no verifiable data present and no peer reviewed white papers referenced. Please cite your sources for your assertion. Your logical fallacy of appeal to authority doesn't hold any water. Computer scientists are just as capable of being wrong as anyone else.


    To quench your thirst for a research paper, here's one that's at the bottom of the Wikipedia page on ZFS that puts the level of risk into clear perspective: An Analysis of Data Corruption in the Storage Stack, by Lakshmi N. Bairavasundaram et al. Some relevant quotes:


    "Of the total sample of 1.53 million disks, 3855 disks developed checksum mismatches." Note that this is 0.25% of all disks.


    "There is no clear indication that disk size affects the probability of developing checksum mismatches."


    "RAID reconstruction encounters a non-negligible number of checksum mismatches." Someone will bring this line up anyway, so I'm addressing it now: the 0.25% of disks with checksum mismatches will have those mismatches picked up by a RAID scrub, meaning a regularly scrubbed array will catch but not fix errors, just like ZFS without RAID-Z! A RAID-Z setup will auto-fix this, but it has already been established that the probability of having a checksum mismatch at all is extremely low.


    "There are very few studies of disk errors. Most disk fault studies examine either drive failures or latent sector errors." How can you cite studies that probably do not exist? If they exist, are they working with a statistically significant data set? More importantly, this study is from February 2008, over 10 years ago; is it still relevant given the drastic differences in technologies used between 2008 and 2018?


The scientists support what I've said. Keep in mind that at no point did I say that ZFS should not be used. The original point of my article was to denounce ZFS zealots and fanboys who run around data storage forums advocating for ZFS inappropriately and even dangerously. If you want to use ZFS then do it. It's not my decision to make, and frankly, I have no reason to care what your personal choice is. My problem is when you tell newbies who want advice about filesystem options to use ZFS because it has checksums and fixes your data errors and bit rot is a thing that happens all the time, but you leave out that it can't fix errors with only checksums, requires RAID-Z to fix errors on-the-fly, and the actual chances of being slammed with bit rot within 14 months of deployment are roughly 0.003855 million out of every 1.53 million disks. It's an irresponsible thing to do, and sneering at me for calling others out for such irresponsible advocacy doesn't contribute anything meaningful to the discussion.


    I want to thank you for at least making an effort to discuss some real points, unlike the next person being quoted.


    Well IDK who Jody Bruchon is... nor does anyone else. He's certainly no one the industry recognises as a file system expert - or has even heard of for that matter. He looks like some guy with his own name.com, a blog, and he likes youtube.. and a bad opinion on everything. Don't forget the awesome films he's directed. It was quite comical quoting such an awful reference.. I think it made the point perfectly.


    Here's the big secret you missed in the school of life: it doesn't matter who Jody Bruchon is because the truth of a statement does not hinge on the popularity or reputation of the person saying it (logical fallacies of appeal to popularity, ad hominem, and appeal to authority). You were so offended by my disagreement with something that you are too emotionally invested in that you went to the trouble to actively search for things unrelated to my ZFS article to sneer at (thanks for increasing my YouTube view counts, by the way!), yet you haven't actually said anything about ZFS, RAID, data integrity, etc. nor refuted one single thing I wrote. At least flmaxey tried to present some arguments, lacking in scientific citations as they were. As for your "not a file system expert" nose-thumbing, it would be a real shame if you found out that in addition to building Linux RAID servers that measure their uptime in years for SMEs and making low-quality YouTube videos for you to mock copiously, I also write file systems. I'd like to see a few links to the file system code you've worked on since you're so much smarter than the rest of us. Come on, let's see your work, big shot! If you're going to play the ad hominem game, put something solid behind your side of the wang-measuring contest you're trying to organize.


    Or--and I know this might sound crazy--you could actually point out what I've said that you disagree with and explain why you disagree with some technical details. That would be more useful (and less publicly embarrassing for you) than talking trash about me right after saying that you know nothing about me.

  • "The first thing all must understand - this is a forum of opinions, not computer science." - You don't get to say this after you appealed so heavily to the authority of "computer scientists." If you think that a statement's validity hinges on the credentials of the person making it then you are objectively incorrect. Peter Gutmann's paper on recoverability of data from magnetic media from which the "DoD secure wipe" ritual was spawned aged rather poorly, and eventually Gutmann himself penned a retrospective that clarified where the paper wasn't quite right and how changes in hard drive technology (specifically using PRML instead of MFM, and everything) made a single wipe pass sufficient to destroy the data on any modern hard drive beyond retrieval, yet multi-pass secure wipes on PRML and EPRML disk drives are strongly advocated by data security experts of all kinds even today.


    Regarding your statements on "attack" and "facts," you are attempting to argue semantics to the point of splitting hairs. This is a clear exposition of bad faith. If you do not understand that the word "attack" may have multiple context-sensitive meanings then that's not my problem. Blathering on and on about unspecified people using "facts" incorrectly does nothing to bolster your argument. Nothing up to this point in your response has any relevance to the topic being discussed. It is a thinly veiled attempt to shift perception of the discussion which also means it was a waste of time for both of us.


    Regarding your declaration of opinion as a shield: you gave no such charitable reading to my original article and derided it for not presenting scientific papers to back it up. This absolutely reeks of a double standard: when you say something it's just an opinion that doesn't need to be held to a rigorous standard, yet when I say something that you don't agree with we're now apparently in the realm of hard scientific facts that require a very high standard of peer-reviewed scientific papers to hold any muster. If we're simply trading opinions here then nothing I've ever said needs peer-reviewed scientific papers to back it up either.


    You want to know where I pulled out this implied statement: "the implied part where said media degradation happens within a short enough time span to be of significance." First of all, implied means you didn't explicitly say it. Asking where I got that from is disingenuous. Saying your question is rhetorical won't save you here either. You said that increasing media sizes meant that degradation of that media was inevitable. If that degradation is 50 years away from bringing about any harmful effect on the stored data then it's well beyond the point of physical drive failure and therefore doesn't matter in the slightest. Your statement about degradation implies that the degradation happens soon enough to fall within the useful life of the drive. The study that I cited explicitly says that there is no distinguishable correlation between drive capacity and data loss.


    ZFS references: did you even read my post? I explicitly told you that the study I cited that supports my statements is one of the references at the bottom of the article you linked! I'm starting to think that you only read what you wanted to read and ignored everything else that I typed.


The entirety of the "computer scientists at Sun" section is the logical fallacy of appeal to authority all over again. If you don't understand what that is, look it up. If you still think I'm wrong, go to Slashdot and read the comments for a while. There is no shortage of accounts about corporations and managers being disconnected in varying degrees from the reality that the technical workers face. I get that you have a lot of respect for Sun Microsystems, but the fact is that everyone makes mistakes, even the most highly regarded scientists in any given field.

Regardless, I don't see what Sun has to do with any of this. I wrote an article with a defined purpose and I've re-stated that purpose here for extra clarity. I have not said that ZFS is a bad thing or that people should not use it at all, and I've written several times in my article about what benefits ZFS is supposed to bring to the table and what is needed to actually reap those benefits. I've also pointed out several aspects of a computer system that already provide built-in error checking and correction, which greatly reduces the chances of ZFS checksum-based error detection ever having an error to detect in the first place. To top it all off, I've pointed out parts of a computer system that lack these integrity checks and can lose data in ways that ZFS cannot possibly protect against, and the study I cited points out all of these too. Your discussion about Sun Microsystems is a red herring; it has nothing to do with ZFS today and addresses zero of the points made in my article.


    Look, the bottom line is that you like ZFS and didn't actually read my article in good faith and have no intentions to do so, and I have no reason to care if you use ZFS or not. At the risk of becoming redundant, my entire purpose in writing my original article was solely to push back against irresponsible advocacy for ZFS. If you're going to tell a noob to use ZFS, you need to tell them about more than "it has checksums and protects against bit rot." Some enterprising young fellow building a storage server for the first time will slap ZFS on there and assume that they're protected against bit rot when it is NOT that simple. ZFS detects errors but does not correct them at all if RAID-Z isn't in use, but the new guy doesn't know that since few people screaming about the virtues of ZFS mention anything beyond the marketing points. He'll build a machine, end up with bit rot, and permanently lose data because "it was protected" except it wasn't. If you're going to tell people to use ZFS, you must tell them the entire story. That's ignoring the performance, complexity, and licensing downsides which can be annoying but won't lead to irreparable data loss.


    Do I know of something better? No, I don't. As with all things in life, the decision to use ZFS or not is a risk management decision, and ZFS checksums and RAID-Z auto-healing are only part of the ZFS risk equation. To illustrate: ZFS is a complex beast that comes with radically different administration tools and functionality than most other filesystems available, so one concern is that of user-centric risks such as lack of knowledge or experience with the tools and mistakes made with those tools that lead to data loss. No, btrfs isn't any better; I would never trust btrfs with anything important or performance-critical, so there is no need to bring btrfs up any further because I would be the last person to defend btrfs. My personal choice is clear: XFS on top of Linux md RAID-5 with external backups and automatic array scrubs. My original article explains why; I won't duplicate any of it here.
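For what it's worth, a rough sketch of that kind of setup (device names and the number of disks are just examples, and the external backups are a separate job):

    Code
    # Build a RAID-5 array from four disks
    sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    # Put XFS on top of it
    sudo mkfs.xfs /dev/md0
    # Kick off an array scrub manually; most distros also ship a periodic checkarray job
    echo check | sudo tee /sys/block/md0/md/sync_action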


    You're right. We're done here. Unless another user wants to chime in, I think I'll let these two posts speak for themselves, for better or worse. If I have anything else to say, I'll add it to the original article where it belongs.

Is it possible to directly share a ZFS filesystem using Samba? Or do I still need to create a shared folder inside the filesystem?


    e.g. /zpool01/Music/Music vs /zpool01/Music/

  • Is it possible to directly share a ZFS filesystem using Samba?

    Nope, a ZFS filesystem is handled like a device.

Or do I still need to create a shared folder inside the filesystem?

    Yes, because SMB/CIFS requires a shared folder in OMV.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


  • Nope, a ZFS filesystem is handled like a device.

    Yes, because SMB/CIFS requires a shared folder in OMV.

    Thanks for the insight. One last question then, should each shared folder have its own filesystem or is it ok to put all the shared folders in one filesystem?

  • One last question then, should each shared folder have its own filesystem or is it ok to put all the shared folders in one filesystem?

I would say, it depends! Personally I have configured several ZFS filesystems on the ZFS pool. For each ZFS filesystem there is one shared folder with "/" as the path entry. But this is just the way I did it; you can also do it differently.
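To illustrate (the dataset names are just examples), each future share gets its own dataset on the pool, and the OMV shared folder then points at the dataset root:

    Code
    # One ZFS filesystem (dataset) per future share
    zfs create zpool01/Music
    zfs create zpool01/Photos
    # In the OMV web UI, create a shared folder on each of these filesystems
    # with "/" as the relative path, then share that folder via SMB/CIFS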


  • The drill is the same. Add a ZFS filesystem (in place of a Linux folder) as a shared folder, but note the relative path is "/". Then create a Samba share, using the shared folder, to put it on the network.

Thanks, this was the piece of information I was missing. Now I've got one filesystem per shared folder.

• Official Post

Is it possible to directly share a ZFS filesystem using Samba? Or do I still need to create a shared folder inside the filesystem?

    You always have to create a shared folder but the shared folder can be the root (/) of the filesystem.

