[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • OMV 1.0
    • NASNoob1 wrote:

      I will test it out with my 1 TB WD Green soon. It's a trash drive, so it's perfect for testing.

      This would be better.

      NASNoob1 wrote:


      Will the ZFS plugin be supported for a long time in OMV? I don't want to set up ZFS and then find that in a few updates I can't use it anymore.

      I don't actually use the plugin to create and administer my pools; I use the command line. I use the plugin because it integrates well with the other settings. I can't see support going anytime soon. A lot of work has been put into it of late, and it's unlikely to just get abandoned.
    • I'm pretty sure 7zip uses LZ4, which is what you would be best choosing as a compression method for ZFS if you go down that route - so at a quick guess, no. But you'd be better off trying it on your WD Green and checking. ZFS can tell you how much space you are saving.

      This is from one of my media pools:

      Source Code

      compressratio 1.01x


      As you would expect, there is not much saving. But you will find that changes depending on the files you put in the pool. LZ4 can give you very good savings with VERY little CPU cost. It's a bit of a miracle algorithm.

      Once you've set up your pool, turn compression on:

      Source Code

      zfs set compression=lz4 [Poolname]

      Copy data in, then:


      Source Code

      zfs get all [Poolname] | grep compressratio
    • It's working, but on a 1 TB drive I can only access 898 GiB. This is way too lossy.
      I will get away from ZFS. Furthermore, I will get away because all the people I know who use ZFS totally stress me out with datasets, snapshots, RAID and all this stuff I absolutely don't need.
      I use rsync once a day or once a week and my data is always safe on an external drive. Snapshots and all this stuff are totally stupid and only for people with too much time.

      These people just think ZFS is the wonder of IT and better than everything else, and that nothing can damage your data. They even say zfs send <whatever> sends 5 TB in one hour. I don't like liars, so I will stay away from that.

      I go back to EXT4.


    • NASNoob1 wrote:

      Snapshots and all this stuff are totally stupid and only for people with too much time.
      Snapshots are awesome where I work. Especially when someone deletes a folder of new files created since the last backup.
    • This guy always talked about metadata and pointers that get saved.
      Snapshots are fine, but not the ZFS ones.

      I use rsync to my second and third disk, and if someone deletes something, I can get it back easily too, no problem :P No over-complicated ZFS needed :P

      I am a normal user, not a pro. All I want is for it to work without being in SSH 10 hours a day.
    • NASNoob1 wrote:

      It's working, but on a 1 TB drive I can only access 898 GiB. This is way too lossy.

      Have you set the correct ashift for your drive?

      NASNoob1 wrote:

      I will get away from ZFS. Furthermore, I will get away because all the people I know who use ZFS totally stress me out with datasets, snapshots, RAID and all this stuff I absolutely don't need.
      I use rsync once a day or once a week and my data is always safe on an external drive. Snapshots and all this stuff are totally stupid and only for people with too much time.

      These people just think ZFS is the wonder of IT and better than everything else, and that nothing can damage your data. They even say zfs send <whatever> sends 5 TB in one hour. I don't like liars, so I will stay away from that.

      I go back to EXT4.

      Fine, you do that. No one is forcing you to use it. You came here asking for help getting it up and running, but what you really have a problem with is the implementation of the filesystem itself. You go back to EXT4; I'll stick with my ZFS datasets, so when I open my family photos that I haven't looked at for a while, they're not half gone and my music remains blip-free. I don't think I am stupid for wanting to use ZFS - what I do think is stupid is coming to a forum where other people are helping you in their free time and then throwing your toys out of the cot.


      NASNoob1 wrote:

      This guy always talked about metadata and pointers that get saved.
      Snapshots are fine, but not the ZFS ones.

      I use rsync to my second and third disk, and if someone deletes something, I can get it back easily too, no problem :P No over-complicated ZFS needed :P

      What you are talking about is normal backup - a backup which will happily back up a corruption if it occurs. But ultimately it's up to you what you choose to use. If you are happy with what you are using, then by all means carry on that way - we stupid ZFS users won't stop you.

      NASNoob1 wrote:

      I am a normal user, not a pro. All I want is for it to work without being in SSH 10 hours a day.

      I don't SSH 10 hours a day, and OMV makes this so simple you don't need to. If you're referring to the commands I pointed you to earlier, they were just to help you figure out if compression was worthwhile and they wouldn't be done more than once.


    • ellnic wrote:

      so when I open my family photos that I haven't looked at for a while, they're not half gone and my music remains blip-free.
      This one I don't understand.
      Why should this happen with Ext4?

      I said to this other person outside of this forum too that ZFS is not some kind of magic that is capable of everything.
      And even if photos are gone: every person should have backups around.


      ellnic wrote:

      we stupid ZFS users won't stop you.
      The other person outside this forum reacted exactly like this too. I never said ZFS users are stupid.

      Whatever. I'm back to Ext4; all my data is safe with that too. Everyone uses what they want to use and what they're capable of administering.
    • NASNoob1 wrote:

      This one I don't understand.
      Why should this happen with Ext4?

      What you need here is to understand bit rot.

      NASNoob1 wrote:

      I said to this other person outside of this forum too that ZFS is not some kind of magic that is capable of everything.

      Really? Damn... I was going to ask it to predict this week's lottery numbers.

      NASNoob1 wrote:

      And even if photos are gone: every person should have backups around.

      Of course, and ZFS helps mitigate the risk of those backups being junk.

      NASNoob1 wrote:

      The other person outside this forum reacted exactly like this too. I never said ZFS users are stupid.

      .... I don’t think this can be explained to you.

      NASNoob1 wrote:

      Whatever. I'm back to Ext4; all my data is safe with that too.

      Then that's all that matters, isn't it? If you are happy that your data is safe, then there's no need to change anything. To quote the Inbetweeners: "You can lead a horse to water, but you can't make it stick Lego up its bum."
    • Of course, and ZFS helps mitigate the risk of those backups being junk.

      Read this:
      jodybruchon.com/2017/03/07/zfs…about-bit-rot-and-raid-5/


      If you are happy that your data is safe, (...)
      Exactly. Everyone uses what they want to use. You like ZFS, and that's it. I just use Ext4 because... I can put my data on it and I don't need to worry about <whatever>.
      Now let's stop talking here. This thread is not for fighting over what's better.

      #ImOut here
    • As is the case with many articles out there, the above is just an opinion piece written by someone with a reasonable grasp of English but a lack of decorum. (It's a shame when someone tries to emphasize points using crass language and achieves the exact opposite effect.)

      In this article, there's no verifiable data present and no peer-reviewed white papers referenced. It's simply a compilation of "like-minded" opinion pieces, all of which jump to baseless conclusions. With a few years of experience to draw on, I've noticed that just because like-minded people get together and express a common opinion or belief, it doesn't make them correct. History is littered with plenty of examples - Jonestown and Kool-Aid come to mind.

      Setting aside ECC correction at the drive level - magnetic media errors are inevitable as media degrades. The problem only gets worse as today's stupendously sized drives grow even larger, with areal densities reaching insane levels. This is why integrating error detection into the file system itself is not only a good idea; as storage media and data stores grow, it will soon be a requirement. EXT4 and the current version of NTFS, as examples, will either become history or be modified for error detection and some form of correction. While that's my opinion, I see it as inevitable. The only real question, as I see it, is: what can be done now?

      Where ZFS is concerned, it was developed at Sun Microsystems specifically for file servers, by computer scientists who don't simply express an unsupported opinion. They developed something that works both mathematically and practically. Their work is supported by reams of data, has been peer reviewed, and the physical implementations have been exhaustively tested. So, if one is to believe a group of Sun computer scientists or "Jody Bruchon", I think I'll go with the scientists.
      On the "damage in RAM" topic - ECC RAM does a reasonable job of correcting memory error issues, and server-grade hardware is always a good idea. This is nothing new, and there's plenty of data to support it.
      ____________________________________________________

      The point on backup is well taken, however. While I have a couple of zmirrors strictly for bit rot correction (yes - it's a verified phenomenon that can be controlled), I have a total of 4 complete copies of data on three different hosts. (One host will be relocated, soon, to an outbuilding.)

      On the other hand, I'm sure there's an article out there, somewhere, that makes the case that "backups" don't really protect data - that it's just false peace of mind for "backup fan-boys" or "techno-nerds".
      Good backup takes the "drama" out of computing
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
      2nd Data Backup: OMV 3.0.99, R-PI 2B, 16GB boot, 4TB WD USB MyPassport - direct connect (no hub)


    • Well, IDK who Jody Bruchon is... nor does anyone else. He's certainly not someone the industry recognises as a file system expert - or has even heard of, for that matter. He looks like some guy with his own name.com, a blog, a liking for YouTube... and a bad opinion on everything. Don't forget the awesome films he's directed. It was quite comical quoting such an awful reference... I think it made the point perfectly.
    • alex.ba wrote:

      Hi all,

      I am quite new to OMV and also to ZFS on Linux. I have used a BSD-based system for years, which worked quite well in regards to ZFS (but has other issues). I installed a fresh OMV 4.13 image, upgraded to the latest kernel 4.17, installed the extras and could install the ZFS plugin without any issues.
      It appeared in the web GUI and seems to work. Now, after some days of testing, I have one question I already asked in another thread here, but maybe this is the best place, where all the ZFS experts are.
      This is the thread from last week: Degraded ZFS not notifications

      So besides the email notifications (that would be the next step), I have the problem that after unplugging one disk (to simulate a disk error) I still see a healthy pool.
      I tested that several times, and as long as I do not execute a scrub it will show that everything is fine. Of course a regular scrub is really important, however for an x TB big pool I only do that every 4-5 days. (Which I think is also the recommendation.)

      What is your experience in the case of an HDD error? I still have the BSD-based system, and there it immediately shows a degraded pool after removing one disk.

      The kernel itself also shows that the disk has been removed (I posted the log in the other thread, where you can see it after only a few secs).
      I am still very much interested in a Linux-based NAS with all the advantages of ZFS, however I am a bit confused about the behaviour.

      Thanks for your help

      S
      This is not something I have experienced, but give me some time today and I will see if I can reproduce it. In the meantime, the sketch below shows what I would check for event handling and notifications on the Linux side.
    • Hi guys,

      I have some problems with my OMV/ZFS NAS.

      I have used this combination since OMV 2.1. I recently upgraded to OMV 4 with some new hardware (a Ryzen 3 1200 and 16 GB of ECC RAM). Everything else is fine, except that the NAS doesn't import the pool automatically after a reboot. I searched here and tried everything I could find, with no luck.

      Does anyone know how to solve this problem?