What is OMV's "recommended/native" storage solution? (setting up brand-new storage)

  • Even after 8 years of using OMV, the answer is not clear to me. I recently acquired a set of new drives as I outgrew my current storage, so I am essentially starting from zero. I have set up TrueNAS Scale with 2 striped RAIDZ1 vdevs, but I don't really like TrueNAS for the limitations it brings essentially just for (much) better ZFS support in the frontend, so I am thinking about where to move next, considering the following options:


    1) ZFS works in OMV, but it's not supported out of the box and the plugin offers just the bare minimum of options, so for replacing a drive, scheduling scrubs and snapshots, and restoring data from them, I basically have to google the zfs commands and do it from the command line.


    2) MergerFS with SnapRAID is what I have been using for the past years, and while it works well, this is also implemented via external solutions. Also, while I am quite happy that I haven't had to restore any data via SnapRAID, I am not sure what my success rate would be (and it also has to be done via the command line with no GUI support).


    3) Btrfs seems to be supported best (some of us remember that at one point it was supposed to be the only filesystem supported by OMV), with automated regular scrubs, snapshots, etc. However, it seems to me this filesystem is essentially dead for storage arrays and has been superseded by ZFS (despite my being quite a fan of it and having all my current disks under OMV formatted to btrfs; I still think it's a great filesystem for the OS). Btrfs still does not have a stable RAID5 implementation, so I don't see how to implement something similar to my current striped RAIDZ1 config.


    At this moment I am leaning towards importing my TrueNAS pool into OMV, scheduling scrubs via a cron job, and giving up on automated snapshots (as I would need to create multiple cron jobs for scheduling and then destroying snapshots, and I would likely mess that up). Otherwise, there is nothing in the TrueNAS feature set that I could not do with the help of a few Docker apps I am running anyway.
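    For reference, a minimal sketch of what that cron-based approach could look like; the pool name "tank" and the schedule are placeholders, not a recommendation:

        # /etc/cron.d/zfs-maintenance -- hypothetical example, "tank" is a placeholder pool name
        # Scrub on the 1st and 15th of each month at 03:00
        0 3 1,15 * * root /usr/sbin/zpool scrub tank
        # Daily recursive snapshot named after the date (% must be escaped as \% inside a crontab)
        15 3 * * * root /usr/sbin/zfs snapshot -r tank@daily-$(date +\%Y-\%m-\%d)
        # Drop the matching snapshot taken 30 days earlier
        30 3 * * * root /usr/sbin/zfs destroy -r tank@daily-$(date -d '30 days ago' +\%Y-\%m-\%d)

    As the replies below point out, the plugin's built-in scrub schedule and zfs-auto-snapshot can cover most of this automatically.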

    SuperMicro CSE-825, X11SSH-F, Xeon E3-1240v6, 32 GB ECC RAM, LSI 9211-8i HBA controller, 2x 8 TB, 1x 4 TB, 1x3TB, MergerFS+SnapRAID

    Powered by Proxmox VE

    • Official Post

    scheduling scrubs via a cron job

    You don't even need to do that. The plugin automatically sets up monthly scrubs.

  • I've always liked (and still do) mdadm raid (depending on your disk availability, but I'm running raid 10 - 4 drives...two would be a simple mirror), with lvm on top (to allow create/grow/shrink/snapshot), and ext4 as the fs. Solid, dependable, boring. Well established (and supported by omv) stuff that works.
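    For anyone picturing that stack, a rough sketch of how it could be laid out; device, VG and LV names below are placeholders:

        # Hypothetical 4-disk mdadm RAID 10 with LVM on top and ext4 as the filesystem
        mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        pvcreate /dev/md0
        vgcreate vg_data /dev/md0
        lvcreate -n lv_share -L 2T vg_data      # leave free space in the VG for growth/snapshots
        mkfs.ext4 /dev/vg_data/lv_share
        # Growing later is a one-liner: lvextend -r -L +500G /dev/vg_data/lv_share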

  • I've always liked (and still do) mdadm raid (depending on your disk availability, but I'm running raid 10 - 4 drives...two would be a simple mirror), with lvm on top (to allow create/grow/shrink/snapshot), and ext4 as the fs. Solid, dependable, boring. Well established (and supported by omv) stuff that works.

    I personally use an mdadm raid 5 and an mdadm raid 1, both with an xfs file system. I have no need for create/grow/shrink of volumes, so I don't use lvm, and of course xfs does not support shrink anyway, but it is a faster filesystem than ext4 for larger files, and it supports parallel I/O, so it's better for multiple users/services accessing the same filesystem.


    I agree that simple is usually better, but zfs is a compelling option too, since it is a combined filesystem and raid in one solution. I have used it before in test setups of FreeNAS and TrueNAS. Even though others are using it, I wish it had native support in the standard Linux kernel instead of having to use the proxmox kernel. I would probably feel more comfortable with it then.

    I agree that xfs is faster for bigger files. My general use case tends to be lots of little files, and I find ext4 a better fit for that. When I last tried zfs, the memory overhead was onerous to get the benefits of it, and it felt a bit clunky to use... Although sitting at the bottom of the rack next to me is an old ProLiant ml50 with a zfs array in it. It's not been turned on for a few years. I'm currently using it as a shelf for the microservers. It's too loud to sit next to.


    I don't use a lot of media files...I muck about with various services that require lots of little config and data/db files....and containers...so lvm's ability to create and remove logical volumes and shrink/grow where necessary is useful for me. Also document management and creation...so ext4 beats xfs for me.


    Your comments are entirely valid though...and herein lies the problem. There isn't really a one size fits all solution. What I do fits my use case...but other people are not me....

    • Official Post

    2) MergerFS with SnapRAID is what I have been using for the past years, and while it works well, this is also implemented via external solutions. Also, while I am quite happy that I haven't had to restore any data via SnapRAID, I am not sure what my success rate would be (and it also has to be done via the command line with no GUI support).

    Have you checked the tools for both the MergerFS and SnapRAID plugins lately? You'll find that most of the functions that may be required for restorations, drive swaps, etc., are in the GUI. Docs -> MergerFS -> SnapRAID.
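    For anyone who does end up on the command line anyway, the underlying SnapRAID commands are short; a sketch of a typical check and restore, where the data-disk name "d1" is a placeholder from snapraid.conf:

        # See what SnapRAID thinks has changed or is out of sync
        snapraid status
        snapraid diff
        # Restore the contents of a replaced or failed data disk
        snapraid fix -d d1
        # Or restore only files that have gone missing anywhere in the array
        snapraid fix -m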

    • Official Post

    this is also implemented via external solutions.

    And yet it doesn't work any differently than native OMV filesystems. And remember that mergerfs only pools filesystems that are natively supported by OMV; it is not a filesystem itself. Snapraid is not a filesystem either, just a form of redundancy.

    omv 7.4.2-2 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.14 | compose 7.2.1 | k8s 7.2.0-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.8


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • molnart The answer to your question is that it's horses for courses. While making a choice, don't lose sight of the fact that the primary reason for RAID is uptime and that it's no substitute for proper backups. It's always going to be a balance between capacity, performance and redundancy level.


    You know enough about mergerfs + SnapRAID to understand its pros & cons, so that leaves MD RAID, BTRFS and ZFS.


    Plenty of people have stuck with MD RAID for better or worse, but the number of forum posts re: failed/inactive arrays shows people did/do not understand how to deal with failures and other admin tasks.


    The inclusion of BTRFS support, first in OMV6 and then OMV7, is meant to offer something better. In many ways it does, but there are gaps compared to MD RAID. There is no detailed info available via the WebUI for BTRFS filesystems spanning multiple devices. BTRFS has no built-in monitor daemon to alert the user via notifications of failed devices. OMV only offers details about device stats; there is no additional functionality to monitor the logs in real time for BTRFS problems. As BTRFS tends to go read-only when problems occur, you would get a generic email when it fails to mount because one or more devices are missing on a reboot. You need to resort to the CLI to deal with "missing" devices, BTRFS balance, etc.
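    To make that concrete, these are the kind of btrfs commands you end up running by hand; the mount point and device names below are placeholders:

        # Per-device error counters (OMV can show these, but nothing alerts on them)
        btrfs device stats /srv/pool
        # Mount a multi-device filesystem that has a missing member
        mount -o degraded /dev/sdb /srv/pool
        # Replace the missing device (by its devid) and rebalance afterwards
        btrfs replace start 2 /dev/sdd /srv/pool
        btrfs balance start /srv/pool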


    Even if you put the 50% storage efficiency of BTRFS RAID 1/10 to one side, it's hard to find a valid argument for using BTRFS RAID with more than 3 or 4 drives on an array that can only sustain one device failure, does not guarantee to mount read/write, and so cannot provide the uptime associated with other RAID implementations that can mount in degraded mode.


    Lastly, if you say uptime isn't really the primary concern of home users, that it is really about combining a number of drives to make a pool, and that BTRFS makes it really easy to add disks to an existing pool, then even here you can be caught out. For example, add an 8TB drive to a BTRFS RAID1 with 2 x 2TB drives and you only get to use 4TB of that 8TB drive, because every chunk must have a copy on a second device and the other drives only total 4TB.


    As you are contemplating using ZFS, you must already be aware of its advantages over BTRFS and how it can be used in OMV. It's less tightly integrated into OMV, but automated scheduled scrubs are easily covered and automated snapshot admin can be done via various 3rd-party scripts/programs. Most CLI ZFS commands are straightforward and relatively intuitive, such as replacing a failed drive. Unlike BTRFS, you have a "rollback" command. More importantly, a degraded ZFS pool will mount read/write and so maintain uptime. ZFS has a comprehensive monitoring system – ZED – so you get prompt notification of problems.
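    As a quick illustration of the rollback point; dataset and snapshot names are placeholders:

        # List the snapshots of a dataset, then roll it back to one of them
        zfs list -t snapshot -o name,creation tank/data
        zfs rollback tank/data@daily-2024-05-01     # add -r if newer snapshots exist (they will be destroyed)
        # ZED's email/notification settings live in /etc/zfs/zed.d/zed.rc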


    To answer BernH's point, it's not obligatory to use a pve kernel with ZFS. The standard Debian kernel works well enough with the zfs-dkms package. Debian kernels do not change frequently, so the DKMS build times are not too much of a problem. You are, however, stuck with zfs 2.1.11 unless you choose to install the Debian backports kernel and associated packages, which will currently give you zfs 2.2.3, on par with a pve kernel >= 6.5.
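    On a stock Debian 12 based install that route looks roughly like this, assuming the contrib and backports repositories are already enabled in your apt sources:

        # ZFS 2.1.x built against the standard kernel via DKMS
        apt install linux-headers-amd64 zfs-dkms zfsutils-linux
        # Or ZFS 2.2.x by taking the kernel and ZFS packages from bookworm-backports
        apt install -t bookworm-backports linux-image-amd64 linux-headers-amd64 zfs-dkms zfsutils-linux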


    tetricky mentioned of ZFS that "the memory overhead was onerous to get the benefits of it". Without knowing the details it's hard to say anything precise here about a comparison with ext4. ZFS was not designed with speed as its top priority, its need for memory is often overstated, and memory is not the only factor that determines performance. For example, a workload that's heavy on random IOPS is not a great match for a raidz1/2 pool. A workload that generates a lot of sync writes on a pool made up of HDDs may not perform well without a separate log device, even if mirrored vdevs are used.
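    For the sync-write case, the usual remedy is adding a small, power-loss-protected SSD (ideally a mirrored pair) as a separate log device; roughly like this, with placeholder device names:

        # Add a mirrored SLOG to an existing HDD pool
        zpool add tank log mirror /dev/disk/by-id/nvme-slog-a /dev/disk/by-id/nvme-slog-b
        # Only synchronous writes benefit, so check the actual write pattern first
        zpool iostat -v tank 5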


    molnart You didn't say how many disks, or what size, you intend to use, but for a stripe of raidz1 it must be at least six. If you're doing mostly sequential read/write, I would have thought a single RAIDZ2 would be a safer bet.
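    For six disks, the two layouts being compared are created along these lines; pool and device names are placeholders (use /dev/disk/by-id paths in practice):

        # Two striped 3-disk raidz1 vdevs (the TrueNAS-style layout described above)
        zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf
        # Versus a single 6-disk raidz2 vdev, which survives any two drive failures
        zpool create tank raidz2 sda sdb sdc sdd sde sdf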


    I could go on a lot about OMV versus TrueNAS Scale, but suffice to say I'd still pick CORE over SCALE any day if ZFS storage were the main use of the NAS. But for home use OMV wins over SCALE's closed appliance model, offering a sensible implementation of docker-compose and VM/LXC with overall a much greater flexibility of use.

  • Thanks for the clarification on zfs in the standard kernel. I was aware that the previous kernels used in the last couple of Debian versions had rudimentary support and required the proxmox kernel for better support, but I was not aware it had improved a bit in the standard kernels. I should have been a little clearer in my statement, but regardless, I just learned something new too.

    • Official Post

    1. ZFS, while it works in OMV, it's not supported out of the box
    2. the plugin offers just the bare minimum of options
    3. when replacing a drive,
    4. scheduling scrubs
    5. snapshots and restoring data from them, basically I have to google the zfs commands and do it from the command line.

    1. Whether ZFS is available by default or added as a plugin is inconsequential. The only difference is installing it. With the Proxmox kernel, ZFS is well supported by the kernel and trouble-free.


    2. Not true. It's a matter of knowing where to find and edit various properties. The following is an example of the numerous editable properties of an individual filesystem. However, setting ZFS attributes should be a one-time thing: these selections should be decided before implementing a pool and creating child filesystems. Editing ZFS properties after a filesystem has been created and populated with data creates folders/files with mixed attributes.
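    As a CLI illustration of the same idea; the dataset name is a placeholder:

        # Show every property of one filesystem, or just a selection of them
        zfs get all tank/media
        zfs get compression,atime,recordsize,xattr tank/media
        # Set properties before the filesystem is populated with data
        zfs set compression=lz4 tank/media
        zfs set atime=off tank/media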


    3. Over the years (at least 8 years), I've replaced two drives in my primary server's Zmirror. Replacing a drive is an area where I'd rather be on the command line.
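    For reference, the replacement itself is only a couple of commands; pool and device names are placeholders:

        # Identify the faulted disk, then swap in the new one and watch the resilver
        zpool status tank
        zpool replace tank /dev/disk/by-id/old-disk /dev/disk/by-id/new-disk
        zpool status -v tank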


    4. One scrub every two weeks is scheduled when the plugin is installed. For most users, a two-week scrub interval is fine.


    5. This -> doc will guide you through how to set up and configure zfs-auto-snapshot. The zfs-auto-snapshot package will automate snapshots and purge unneeded snapshots on a configurable schedule. Thereafter, it's on auto-pilot. The document also explains snapshot interval considerations and how to "unhide" snapshots, enabling easy restorations from a network client.
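    In practice the per-dataset knobs look something like this; dataset names are placeholders and the linked doc covers the details:

        # Exclude a dataset from automatic snapshots entirely
        zfs set com.sun:auto-snapshot=false tank/scratch
        # Intervals and retention (--keep=N) are set in the cron entries the package installs
        grep -rs zfs-auto-snapshot /etc/cron.d /etc/cron.hourly /etc/cron.daily /etc/cron.weekly /etc/cron.monthly
        # Make the hidden .zfs/snapshot directory visible so clients can copy files back themselves
        zfs set snapdir=visible tank/media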

  • I don't have the answer, but there is a question often overlooked.


    What happens if the machine around the drives dies...not the drives themselves?


    I know that with mdadm (software raid) it's relatively easy to rebuild the array on a completely separate machine. I have had to do that... I have also been privy to a situation where a proprietary raid controller card has died and taken all the data with it (settings stored on the card itself, and not recoverable from even an identical card).

    • Official Post

    What happens if the machine around the drives dies...not the drives themselves?

    In that case you just need to import the pool on the new machine, which you can do from the GUI or from the command line. If you did not export the pool first, you must force the import; that can also be done easily from either the GUI or the CLI.
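    On the CLI that amounts to something like this; the pool name is a placeholder:

        # List the pools the new machine can see, then import the one you want
        zpool import
        zpool import tank
        # If the pool was never exported on the old machine, force the import
        zpool import -f tank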

    • Official Post

    a situation where a proprietary raid controller card has died and taken all the data with it (settings stored on the card itself, and not recoverable from even an identical card).

    What card was this? Most (all?) LSI-based cards do not do this. I have replaced quite a few LSI-based cards and never lost any data. The new card just recognized the array.


    molnart You didn't say how many disks, or what size, you intend to use, but for a stripe of raidz1 it must be at least six. If you're doing mostly sequential read/write, I would have thought a single RAIDZ2 would be a safer bet.

    Yep, I am using 6 drives. The reason I went for striped RAIDZ1 instead of RAIDZ2 is the higher IOPS for a potential resilver. I have got some used SAS drives and I figured having a faster resilver is safer than risking another drive failing during a long resilver. And also, having better IOPS in general does not hurt.

    But for home use OMV wins over SCALE's closed appliance model, offering a sensible implementation of docker-compose and VM/LXC with overall a much greater flexibility of use.

    Exactly. Although I like some of the features of TrueNAS, OMV is where my heart is.

    5. This -> doc will guide you through how to set up and configure zfs-auto-snapshot.

    Thanks, that is a valuable source indeed.



    One thing I do not understand is what volumes and filesystems are in OMV's ZFS implementation. Which is the equivalent of TrueNAS' dataset? I am already running a simple ZFS pool with 2 mirrored SSDs for my Docker apps, where I have created two folders, one for the application data and one for the images. Is there a way to convert those to "datasets" (or their OMV equivalent) so I can snapshot them separately?


    • Official Post

    One thing I do not understand is what volumes and filesystems are in OMV's ZFS implementation. Which is the equivalent of TrueNAS' dataset? I am already running a simple ZFS pool with 2 mirrored SSDs for my Docker apps, where I have created two folders, one for the application data and one for the images. Is there a way to convert those to "datasets" (or their OMV equivalent) so I can snapshot them separately?

    It sounds like once you created a ZFS pool, you're using it by creating Linux folders at the root of the pool. While that can be done, you're missing out on a lot of functionality.

    Once a ZFS pool is created, in the OMV GUI you would highlight the pool, click on the add (+) button and select "add filesystem" from the dropdown. It's called a "filesystem" because the created dataset has the characteristics of a formatted partition. While filesystems can inherit ZFS properties from the parent pool, a filesystem can have its own set of ZFS properties and, as you suggested, they can be snapshotted separately from the pool, with different snapshot intervals and different retention periods.
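    As for converting the existing folders: a plain folder can't be turned into a dataset in place, so one common approach is to create a new dataset, copy the data over, and swap the names. A sketch with placeholder names (stop the containers using the folder first, and have a backup):

        # Create a dataset alongside the existing folder, copy the data, then swap names
        zfs create ssdpool/appdata-new
        rsync -a /ssdpool/appdata/ /ssdpool/appdata-new/
        # After verifying the copy:
        rm -rf /ssdpool/appdata
        zfs rename ssdpool/appdata-new ssdpool/appdata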
