Solved? OMV and software raid 5

    • Official Post

    I have a different opinion. There are other conceivable reasons to want to export a pool in a situation where someone wants to be absolutely sure not to threaten the data in the pool (mounting the pool read-only, doing some "house keeping" on the filesystem where the pool is normally mounted, and so on).

    Good things to note. Along these lines, I guess I don't have much in the way of imagination. That comes from being so #@$& conservative. (On the other hand, back in the day, being conservative in a production environment served me well. Old habits die hard.)


    Also, I appreciate your note on lz4 compression. While I'm not using ZFS compression at the moment, I'm getting around to reading up on it. From what I've learned, in general, ZFS is very impressive in this regard. It will read/write compressed or uncompressed files transparently. Similarly, a ZFS volume can have mixed compressed/uncompressed content and there's no issue.

  • It was a VM issue!!.... The old saying applies here: it's better to be lucky than smart. :D


    Frankly, I'm amazed that you ferreted out such a detail.

    I have been called a frog but never a ferret.... When something bugs me, I usually find a solution to the problem.... :thumbup:


    bookie56

  • Hi everyone, I have OMV3.0.91 with backported kernel on my NAS.
    When I installed the ZFS plug-in I got this error:

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

    • Official Post

    Hi everyone, I have OMV3.0.91 with backported kernel on my NAS.
    When I installed the ZFS plug-in I got this error:

    I've installed the ZFS plugin on two different hardware platforms, in times past, with zero issues. Today, I attempted to install it and got errors. They were not the same as yours, but they were fatal.


    I'm examining this in a VM that has had the ZFS plugin successfully installed before. This is a "WAG", but it might be something with the install script or a change in sources. I don't know. This sort of thing is beyond my experience.


    I'll check this out in the VM and, maybe, do a fresh rebuild to see what happens.


    More later.

    • Official Post

    Hi everyone, I have OMV3.0.91 with backported kernel on my NAS.
    When I installed the ZFS plug-in I got this error:

    I made two attempts to install the ZFS plugin on the same machine. The first attempt ended in a fatal error.


    The second had a minor error, but was successful. (Maybe some packages loaded the first time around? Who knows.)
    The kernel module build was slow every time. (Probably normal.)
    In any case, the second attempt added the ZFS plugin. I built a mirror to test and verify it.
    _________________________________________________________________


    Here is an excerpt from my installation dialog that you might compare to your own.
    (Why is this important? ZFS is not part of the Linux kernel. It's a module that's built, and plugs into the kernel.)


    Setting up zfs-dkms (0.6.5.9-2~bpo8+1) ...
    Loading new zfs-0.6.5.9 DKMS files...
    Building for 4.9.0-0.bpo.3-amd64
    Building initial module for 4.9.0-0.bpo.3-amd64

    Done.


    Compared to your install dialog:


    Setting up zfs-dkms (0.6.5.9-2~bpo8+1) ...
    Loading new zfs-0.6.5.9 DKMS files...
    Building for 4.9.0-0.bpo.3-amd64 4.9.0-0.bpo.4-amd64

    Module build for kernel 4.9.0-0.bpo.3-amd64 was skipped since the
    kernel headers for this kernel does not seem to be installed.

    Building initial module for 4.9.0-0.bpo.4-amd64
    configure: error:
    *** Please make sure the kmod spl devel <kernel> package for your
    *** distribution is installed then try again. If that fails you
    *** can specify the location of the spl objects with the
    *** '--with-spl-obj=PATH' option.
    Error! Bad return status for module build on kernel: 4.9.0-0.bpo.4-amd64 (x86_64)

    Consult /var/lib/dkms/zfs/0.6.5.9/build/make.log for more information.

    ____________________________________________________________
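    For what it's worth, the "kernel headers ... does not seem to be installed" message in a DKMS build usually means the headers package matching the running kernel is missing. A hedged sketch of a fix, assuming a Debian Jessie backports kernel like the one in the log above (adjust the package name to whatever `uname -r` reports):

    ```shell
    # Check which kernel is actually running
    uname -r

    # Install the matching headers from backports
    # (package name assumed; must match the output of `uname -r`)
    apt-get install -t jessie-backports linux-headers-$(uname -r)

    # Ask DKMS to rebuild any missing modules (including zfs)
    # against the running kernel
    dkms autoinstall
    ```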




    In any case, please do the following on the command line:


    apt-get clean
    apt-get update


    After that, try the ZFS plugin install again. (Note that building the module takes a while.)

  • Great! Now it's working! Tomorrow I'll build a ZFS mirror :)


    • Official Post

    @flmaxey: Very detailed instructions! But one addition: meanwhile it is recommended to set compression not to "on" but to "lz4". The advantage is that it checks whether a file is already compressed; in that case compression is abandoned at an early stage. Therefore it does no harm to use lz4 compression even for already compressed content.

    I have compression set to off, but I've taken a bit of time to read up on it, and of the compression techniques ZFS supports, it seems that lz4 might be a good idea for general use.


    One of the reasons I didn't even look at it initially is that, in every scenario I've ever known (back in the day), compression caused a big hit to performance, or it made recovery difficult if not impossible, or (in some cases) both.


    Keeping in mind that I'm completely uncompressed at this point and that my data stores are largely static files:
    Do you think it's a good idea to turn lz4 compression on? Are there any concerns / considerations for performance and pool recovery? (I'm going to assume that, operationally, ZFS will behave as before - with file checksums and scrubbing working as they do without compression. I've read nothing that would suggest otherwise.)


    Anything would be appreciated.
    Thanks
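    For reference, enabling lz4 on an existing pool only affects blocks written afterwards; existing data stays uncompressed until it is rewritten. A minimal sketch (the pool name `tank` is a placeholder):

    ```shell
    # Check the current compression setting
    zfs get compression tank

    # Enable lz4 for the pool; child datasets inherit it, and only
    # blocks written after this point are compressed
    zfs set compression=lz4 tank

    # Later, see how well the data written since then compresses
    zfs get compressratio tank
    ```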

    • Official Post

    Great! Now it's working! Tomorrow I'll build a ZFS mirror :)

    Good deal. :thumbup:


    Of the automated jobs in the "ZFS How To" post, don't forget to set up a `zpool scrub -s (pool name)` job to stop a scrub 30 minutes before the time of your scheduled maintenance reboot. (If any.)
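    As a sketch, assuming a pool named `tank`, a weekly scrub, and a maintenance reboot at 04:00 (all hypothetical values), the two crontab entries might look like:

    ```shell
    # /etc/crontab fragment (pool name and times are placeholders):
    # start a scrub every Sunday at 01:00
    0 1 * * 0   root  /sbin/zpool scrub tank
    # stop any running scrub 30 minutes before the 04:00 reboot
    30 3 * * *  root  /sbin/zpool scrub -s tank
    ```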


    __________________________________


    If you go this route, using ZFS on OMV 3.0, it might be something of a commitment. I wouldn't upgrade to OMV 4 or another version of ZFS, until you know there won't be any issues in the upgrade. Keep that in mind.

  • Thanks for the suggestion :) I read in another post that the latest version of ZFS is 0.7, but it is still not compatible with OMV.
    In case it becomes compatible, will I be able to update ZFS? I'm not sure how FS updates work :(


    Sent from my Sony XZ1 using Tapatalk


  • I read in an other post that the latest version of zfs is 0.7, but is still not compatible with omv.
    In case it will be compatible will i be able to update zfs?

    Usually OMV uses the distro's packages to provide such basic functionality. As you can see, that's 0.6.$something now. When you look into the thread @macom referenced above, you find packages containing ZoL 0.7.3. So you could use ZFS 0.7.3 if you know what you're doing and are a Linux expert. Otherwise this is a great recipe to end up with ZFS storage that is inaccessible.

  • in every scenario I've ever known (back in the day) compression caused a big hit to performance, or it made recovery difficult if not impossible, or (in some cases) both.

    Experiences with software and hardware from decades ago aren't valid any more. Pretty obvious IMO.


    While storage performance with spinning rust hasn't changed much over the last years, CPUs are much, much more capable, and therefore trading a few CPU cycles for higher storage performance and less storage capacity needed should be considered standard these days.


    Even slow ARM thingies are fast enough (and that's the reason why all OMV images for ARM devices -- except Raspberries -- ship with a zlib-compressed rootfs and lzo compression enabled for normal usage -- if lz4 were available with btrfs on ARM I would of course prefer lz4, since it has the same compression speed but is orders of magnitude faster at decompression).


    You'll notice a performance hit with zlib even on beefy hardware (so it's a reasonable choice for archive/cold storage, for example), but lz4 with ZoL should actually improve storage performance.

    Usually OMV uses the distro's packages to provide such basic functionality. As you can see, that's 0.6.$something now. When you look into the thread @macom referenced above, you find packages containing ZoL 0.7.3. So you could use ZFS 0.7.3 if you know what you're doing and are a Linux expert. Otherwise this is a great recipe to end up with ZFS storage that is inaccessible.

    I don't want to try anything; if ZFS on OMV3 is 0.6.x, I'm fine with it.
    My only curiosity is: if I update ZFS to 0.7.3, will the filesystem that I created with 0.6.x get the new features/fixes?


    My only curiosity is: if I update ZFS to 0.7.3, will the filesystem that I created with 0.6.x get the new features/fixes?

    IMO the greatest features of ZoL 0.7 and above are in another area: more efficient use of available memory and performance improvements (partially needing some hardware support, the most impressive speed gains currently AFAIK require mainboards with Intel's QuickAssist Technology inside the PCH).


    Besides that, the ZoL version and the zpool version are two different things. In case newer ZFS releases support higher zpool versions, you can try to upgrade the pools. But this is something that can fail (I'm a bit cautious here since I ran into a couple of problems with GRUB and zpool version mismatches, though all on Solaris 11 and not Linux).
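    The distinction can be seen from the command line; a hedged sketch (the pool name `tank` is a placeholder, and note that a pool upgrade is one-way -- older ZFS versions can no longer import the upgraded pool):

    ```shell
    # Show the ZoL (software) version of the loaded kernel module
    modinfo zfs | grep -w version

    # List pools whose on-disk format is older than what the
    # installed software supports
    zpool upgrade

    # Show which features a newer pool format would enable
    zpool upgrade -v

    # Irreversibly upgrade one pool to the current format
    zpool upgrade tank
    ```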


    The best idea is to wait until ZoL 0.7 is stable (meaning: installable without problems in OMV), then upgrade Debian/OMV (or activate the relevant backports stuff), then ask again for advice.

  • Thanks a lot for your help!
    Since I'm interested in some features of OMV4 (mostly the possibility to install the latest python-bittorrent needed for the latest Deluge), do you suggest updating to OMV4 before creating the ZFS pool?


    • Official Post

    Experiences with software and hardware from decades ago aren't valid any more. Pretty obvious IMO.

    Umm, thanks for that unsolicited response to a question directed at cabrio_leo.....


    Every time I read your posts, which always contain an opinion but little in the way of something practical or usable, I can't help but smile as my aged, cataract-filled eyes glaze over. (I try to avoid laughing heartily, because it's hard on my arthritic joints and it cracks the barnacles on my arse.)
    This post does make me long, wistfully, for the days of painting on cave walls for information storage. Ahh, yes, true long-term data storage: bitrot-free and only threatened by seismic events. It makes one wonder if compression algorithms like lz4 are really progress.
    ______________________________________________


    With that said, we NOOBs can count our fingers and toes with very little help and passable accuracy. When it comes to inserting yourself into NOOB threads, in the literal middle, I can't help but believe that it's a waste of your expansive skill set and valuable unpaid time. (As you have, more or less, noted before.)


    So, as you have done for another forum member who is an active participant in this thread, perhaps you should put me in your spam filter as well? :thumbup:


    Thanks

  • Umm, thanks for that unsolicited response to a question directed at cabrio_leo.....

    ^^ @tkaiser expressed it much better than I could have. I agree with him completely on this issue. ^^

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    Edited once, last by cabrio_leo () for the following reason: misleading wording

  • Of the automated jobs in the "ZFS How To" post, don't forget to setup a zpool scrub -s (pool name) to stop a scrub 30 minutes before the time of your scheduled maintenance reboot. (In any.)

    If you shut down the system while a scrub is active, it is continued after the next system start. I observed this by accident: at the end, the reported scrub duration was more than 40 hours, while normally the job is done in about 3 hours. In the meantime the system had been powered off.
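    This can be watched with `zpool status`, which reports scrub progress and, afterwards, the total elapsed time (the pool name `tank` is a placeholder):

    ```shell
    # While running: shows "scrub in progress", percent done and
    # elapsed time; when finished: "scrub repaired ... with 0 errors"
    zpool status tank
    ```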


    Since I'm interested in some features of OMV4 (mostly the possibility to install the latest python-bittorrent needed for the latest Deluge), do you suggest updating to OMV4 before creating the ZFS pool?

    I can't answer this since currently I play with OMV and ZFS only on toy-grade hardware (a little ARM thingie with just 1 GB DRAM but a bleeding-edge self-built kernel and ZoL). Others more familiar with ZFS and OMV4 on x86 might advise.


    thanks for that unsolicited response of a direct question to cabrio_leo

    Might surprise you, but this here is a forum. I'm not into conversations, but if it's necessary to correct outdated information (or feelings/assumptions based on it that encourage other users to do bad things), I will simply do so.

    • Official Post

    ^^ @tkaiser expressed it much better than I could have. I agree with him completely ^^

    Yeah, he has good points and, for instance, I agree with his stance on mdadm RAID and hardware RAID adapters in all details. But you have to admit there's something to be said for "finesse". One can answer a question, even if it's asked out of ignorance (especially if it's asked out of ignorance), or provide a data point, without being condescending. Otherwise, a forum becomes a platform for egos and self-indulgence, and soon thereafter, dies.


    In times past, on occasion, I wondered why open source projects died. There were, without doubt, excellent distributions and software packages that simply disappeared. While several factors were probably involved, I'm getting an understanding of what a few of them are.
