Choosing a filesystem

  • Hello everyone,


    I'm on OMV6 with 2x4TB + 2x14TB HDDs coming from an old Synology SHR btrfs build that worked directly once plugged into OMV.


    I would now like to upgrade to OMV7 with a fresh install, and I'll use the opportunity to move to 4x14TB.

    I would like to change the filesystem to get something cleaner.

    My use of OMV is media storage + Docker for some apps like Jellyfin (sometimes with 5-10 users at the same time), Calibre-Web, a Minecraft server, etc.


    I will install OMV on a USB/SD card, have a 250GB SSD for Docker, and the 4 disks for data.

    I have a backup of my data and it is not crucial.

    My goals are space efficiency and data availability (if a disk fails, ideally the data would still be accessible, or at worst be offline for 24h max). A rough usable-capacity comparison is sketched after the list below.


    The choices I see for the filesystem are:

    - EXT4 with mergerfs and SnapRAID

    => Easy and simple, but if I understand correctly, when a disk fails the data will be offline during the fix command?

    Are "rebuild" time as long as raid/zraid ?


    - BTRFS with raid 5.

    => Best integrated into OMV, data available during replacement (even if it takes a long time to rebuild).

    I know about the write hole; I have a UPS, and since my data is not crucial, maybe I could take my chances?


    - ZFS with RAIDZ1

    => Could be the answer, but isn't that overkill for my needs?

    What downsides does ZFS have, apart from the integration needing a plugin (like mergerfs and SnapRAID, btw) and pools not being expandable (not a problem for me, I won't change the HDD capacity for a while)?


    - Some hybrid of BTRFS + SnapRAID + mergerfs

    => I could have data availability during disk replacement, if I got it right. But it would maybe be too "messy" an install (e.g. if I replace a disk, it wouldn't appear in the OMV GUI because I'd have to do it in the CLI).
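
    Rough usable-capacity math for the options above (a sketch in Python; assumes marketed TB, one disk's worth of parity, and ignores filesystem overhead and TB/TiB differences):

        # Usable-capacity sketch for 4x14TB under the layouts discussed above.
        # Assumptions: marketed TB, single-parity layouts, no filesystem overhead.
        disks, size_tb = 4, 14
        raw_tb = disks * size_tb

        layouts = {
            "mergerfs + SnapRAID (1 parity disk)": (disks - 1) * size_tb,
            "BTRFS RAID5 / ZFS RAIDZ1 (1 disk of parity)": (disks - 1) * size_tb,
            "BTRFS RAID1 / ZFS mirrors (50% efficiency)": raw_tb / 2,
        }

        for name, usable in layouts.items():
            print(f"{name}: ~{usable:.0f} TB usable of {raw_tb} TB raw")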



    Thanks in advance, guys!

  • macom

    Approved the thread.
  • Btrfs with raid 5 is not a good choice. Its RAID 5 is not officially supported or stable.


    ZFS is good but does have higher RAM requirements (official recommendations for ZFS usually state ECC RAM and 1GB per TB of storage, if I recall correctly). It does have an optional GUI plugin. For better support it is also recommended to install the kernel plugin and the Proxmox 6.2 or 6.5 kernel, as that makes a newer version of ZFS available.


    The other option is to use an mdadm RAID and your filesystem of choice. EXT4 is fine, but I personally prefer XFS as it is faster with larger files and allows for parallel I/O; the drawback is that it does not support shrinking the filesystem, although that is rarely an issue since a shrink is seldom required.

  • BTRFS is great in RAID1, as it is stable and can easily be expanded later if you want to use bigger disks.

    ZFS in RAID5 is super stable and easy to recover. But in day-to-day running it has a lot of overhead, so if performance is key it's not the first choice, although you can improve speed a bit by adding an SSD cache.

    EXT4/XFS + mdadm is stable in all RAID configurations but doesn't have the newer filesystem features like snapshots, scrubbing and CoW to prevent bit rot (as BTRFS and ZFS do).

    So, like everything in life, it's a trade-off.


    • Official post

    ZFS is good but does have higher RAM requirements (official recommendations for ZFS usually state ECC RAM and 1GB per TB of storage, if I recall correctly)

    That's an urban legend and a ZFS "marketing" mistake :)

    ECC is recommended in any system. The difference is that ZFS says it and the others don't say it.

    The high RAM demands only apply if you use deduplication, which the vast majority of users do not. You can configure ZFS with 2GB of RAM and it will work without problems.

    If you read carefully you will see that none of this is really a requirement; they are just recommendations that others do not make, but could make with the same arguments. https://openzfs.github.io/open…tml#hardware-requirements

    My recommendation for a RAID5-style setup will always be ZFS, on any system, even with 2GB of non-ECC RAM.
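
    If you want to see what the ARC is actually doing (and what its ceiling is) on a ZFS-on-Linux box, a minimal sketch, assuming the standard kstat paths OpenZFS exposes on Linux:

        # Minimal sketch: report the current ZFS ARC size and its configured ceiling.
        # Assumes ZFS on Linux, which exposes kstat counters under /proc/spl.
        def read_arcstats(path="/proc/spl/kstat/zfs/arcstats"):
            stats = {}
            with open(path) as f:
                for line in f:
                    parts = line.split()
                    # data lines look like: "size  4  1234567890"
                    if len(parts) == 3 and parts[2].isdigit():
                        stats[parts[0]] = int(parts[2])
            return stats

        arc = read_arcstats()
        gib = 1024 ** 3
        print(f"ARC size:    {arc['size'] / gib:.2f} GiB")
        # c_max is the ARC ceiling; it can be lowered via the zfs_arc_max module parameter.
        print(f"ARC ceiling: {arc['c_max'] / gib:.2f} GiB")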


  • ZFS is good but does have higher RAM requirements (official recommendations for ZFS usually state ECC RAM and 1GB per TB of storage, if I recall correctly).



    This is another urban myth re: ZFS & RAM. It would mean someone with a 30TB to 40TB pool would need a minimum of 32-64 GB of ECC RAM, depending on whether you stuck with Linux's default arc_max of 50% of host memory or not. This is just not true.


    See here for the current recommendation: https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#hardware-requirements.


    ZFS in RAID5 is super stable and easy to recover. But in day-to-day running it has a lot of overhead, so if performance is key it's not the first choice, although you can improve speed a bit by adding an SSD cache.



    The ZFS equivalent of RAID5 is raidz1. Like BTRFS, ZFS is a filesystem and volume manager combined, so naturally there are "overheads" compared to, say, EXT4/XFS alone, just as there are in BTRFS. If an HDD-based ZFS pool is used for bulk data that is mostly accessed sequentially, performance will be more than adequate to saturate a 1 GbE connection for network shares. For local read/write speeds, see these example benchmarks: https://calomel.org/zfs_raid_speed_capacity.html I don't know what the BTRFS equivalent would be, but AFAIK BTRFS does not have optimised read/write paths for filesystems that span multiple devices. In short, I don't see ZFS performance being a problem for the intended use.
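
    To put the 1 GbE point in rough numbers (a back-of-the-envelope sketch; the per-disk rate is an assumed ballpark for a large HDD, not a measurement):

        # Back-of-the-envelope: can a 4-disk RAIDZ1 of HDDs saturate 1 GbE?
        link_mb_s = 1000 / 8            # ~125 MB/s theoretical ceiling of 1 GbE
        hdd_seq_mb_s = 180              # assumed sequential rate of one large HDD
        data_disks = 3                  # a 4-disk RAIDZ1 stripe has 3 data disks

        pool_seq_mb_s = hdd_seq_mb_s * data_disks
        print(f"1 GbE ceiling:            ~{link_mb_s:.0f} MB/s")
        print(f"Pool sequential estimate: ~{pool_seq_mb_s} MB/s")
        # -> for sequential media streaming, the network is the bottleneck, not the pool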


    Petitgnoll6 I've made little use of mergerfs + SnapRAID, so will let others sing its praises. Suffice to say it's well suited to storing bulk media data that is written once and read many times, and for which there's no need for real-time RAID.


    Sadly, BTRFS RAID5 is not production ready and you have to wonder if it ever will be. Even if the 50% efficiency of BTRFS RAID1 is acceptable, its tendency to turn read-only when there are problems can mean a degraded multi-device BTRFS filesystem will not mount read/write. This may not match your criterion of "data availability". In contrast, a degraded ZFS pool will happily mount read/write and keep chugging until you replace the faulted device.


    Personally, I don't see why ZFS is "overkill" for you. Possible downsides are the administration of zfs scrub and auto snapshots, which needs some user input compared to what OMV provides for BTRFS. Also, you need to manually configure the options for "shadow copies" with ZFS SMB shares. Not every aspect of ZFS is covered by the OMV plugin; for example, you'd need to use the CLI to offline and replace a disk, but that's also true for BTRFS. Unlike BTRFS, ZFS has a proper "rollback" command. Unlike BTRFS, ZFS has a very effective monitoring daemon, ZED, so you'll get timely and meaningful emails about the state of your pool via OMV's notification system.
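
    To give a feel for how little admin the scrub side needs, something like the following could be dropped into cron (a sketch; "tank" is a placeholder pool name, and the zpool commands used are the standard OpenZFS ones):

        # Cron-friendly scrub helper (sketch). "tank" is a placeholder pool name.
        import subprocess
        import sys

        POOL = "tank"  # placeholder; replace with your pool name

        # Start a scrub; the command returns immediately and the scrub runs in the background.
        subprocess.run(["zpool", "scrub", POOL], check=True)

        # `zpool status -x` prints "all pools are healthy" when nothing is wrong.
        health = subprocess.run(["zpool", "status", "-x"],
                                capture_output=True, text=True, check=True)
        print(health.stdout.strip())

        # Exit non-zero if anything looks off, so cron (or OMV notifications) flags it.
        if "all pools are healthy" not in health.stdout:
            sys.exit(1)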

  • My recommendation for a RAID5-style setup will always be ZFS, on any system, even with 2GB of non-ECC RAM.


    Not raidz1, but ZFS with 2GB of non-ECC RAM:


    • Official post

    - EXT4 with mergerfs and SnapRAID

    => Easy and simple, but if I understand correctly, when a disk fails the data will be offline during the fix command?

    Are "rebuild" time as long as raid/zraid ?

    The good drives and the data on them won't be offline, but you should take them offline anyway. Since the file data on the good drives is part of the "fix" command's parity calculations for recreating the failed disk, you wouldn't want users altering those files during the fix operation. That might result in unrecoverable errors.

    Rebuild times depend on many things. In the case of a SnapRAID rebuild, the largest factor is the number and size of data files to be recreated.
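
    As a rough illustration of how the amount of file data drives recovery time (a sketch; both numbers are assumptions, not benchmarks):

        # Rough estimate of how long a SnapRAID fix/rebuild of one disk might take.
        data_to_recreate_tb = 10   # assumed file data that lived on the failed disk
        effective_mb_s = 120       # assumed sustained read-parity-write throughput

        hours = data_to_recreate_tb * 1_000_000 / effective_mb_s / 3600
        print(f"~{hours:.0f} hours to recreate {data_to_recreate_tb} TB at {effective_mb_s} MB/s")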


    - BTRFS with raid 5.

    => Best integrated into OMV, data available during replacement (even if it takes a long time to rebuild).

    I know about the write hole; I have a UPS, and since my data is not crucial, maybe I could take my chances?

    I wouldn't do RAID5 in BTRFS, but that's your call. While it was years ago, and I'm sure things have improved in the meantime, I had a couple of experiences with BTRFS and its utilities that resulted in the loss of the entire filesystem.

    - ZFS with RAIDZ1

    => Could be the answer, but isn't that overkill for my needs?

    What downsides does ZFS have, apart from the integration needing a plugin (like mergerfs and SnapRAID, btw) and pools not being expandable (not a problem for me, I won't change the HDD capacity for a while)?

    While I have 1 server configured with RAIDZ1, I've been using ZFS in Zmirrors, for data integrity, for at least 8 years. I've replaced two drives, in a Zmirror, during that time period without a problem. What I appreciate most is that ZFS gives indicators that a drive is going bad before SMART begins to give warnings. That's far better than taking unrecoverable errors as a drive actually begins to die.

    The plugin, in my opinion, is not even a consideration; that's more a matter of licensing than anything else. ZFS integration in OMV is very good. The plugin provides 95% of the ZFS features most users will need, in the GUI, along with displaying captured snapshots, etc. If something special is needed, there's always the CLI.

    This -> document will guide you through the installation of zfs-auto-snapshot for preconfigured, fully automated, self-rotating and self-purging snapshots. The document even covers "unhiding" past snapshots for simple file and folder recoveries using a network client.

    - Some hybrid of BTRFS + SnapRAID + mergerfs

    => I could have data availability during disk replacement, if I got it right. But it would maybe be too "messy" an install (e.g. if I replace a disk, it wouldn't appear in the OMV GUI because I'd have to do it in the CLI).

    While SnapRAID will work with BTRFS, a simpler filesystem like EXT4 or XFS would be better. And while mergerfs does a great job of aggregating disks, understanding the effects of its storage policies is key to understanding how it works and what to expect.

    SnapRAID and mergerfs work well together, but throwing a complex filesystem into the mix seems to be asking for unforeseen issues.
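
    To illustrate what a create policy decides, here is a toy sketch of the idea behind a "most free space" policy like mergerfs' mfs (a conceptual illustration with made-up branch paths, not how mergerfs itself is implemented):

        # Toy illustration of a "most free space" (mfs) style create policy:
        # a new file lands on whichever branch currently has the most free space.
        free_tb = {                      # made-up branch paths and free space
            "/srv/dev-disk-1": 9.5,
            "/srv/dev-disk-2": 2.1,
            "/srv/dev-disk-3": 6.0,
        }

        def pick_branch_mfs(branches):
            return max(branches, key=branches.get)

        print(f"New file would be created on: {pick_branch_mfs(free_tb)}")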

    ____________________________________________________________________________________________________________

    ZFS is good but does have higher RAM requirements (official recommendations for ZFS usually state ECC RAM and 1GB per TB of storage, if I recall correctly).

    I think the 1-to-1 memory recommendation is mostly TrueNAS / FreeNAS propaganda. With dedup off (I've never used dedup in any case) I've been running a 4TB mirror with 4GB of RAM, on an Atom-powered mobo, with zero issues. While I realize that meets their 1GB-per-1TB guideline, my little Atom backup server is not utilizing anything near the available RAM. Even with the page cache included, it's typically using around 1GB, maybe a bit more.

    While ZFS will use a LOT of RAM during copy operations, that's simply good utilization of the resource. ZFS gives RAM back if other apps need it.

  • I wouldn't do RAID5 in BTRFS, but that's your call.

    I still wonder about these statements. Didn't we point out multiple times that BTRFS RAID 5 is as (un)stable as any RAID 5? As far as I know, every RAID 5 has a potential write-hole issue in case of power loss. The BTRFS devs are just the only ones who flag their RAID 5 as unstable because of it.

    I still wonder about these statements. Didn't we point out multiple times that BTRFS RAID 5 is as (un)stable as any RAID 5? As far as I know, every RAID 5 has a potential write-hole issue in case of power loss. The BTRFS devs are just the only ones who flag their RAID 5 as unstable because of it.

    If you wonder about these statements then perhaps this will convince you: https://lore.kernel.org/linux-…4.GX10769@hungrycats.org/


    AFAIK, nothing much has changed since that was written in 2020. BTRFS RAID5 is not marked unstable simply because of a potential write hole.

    • Official post

    I still wonder about these statements. Didn't we point out multiple times that BTRFS RAID 5 is as (un)stable as any RAID 5? As far as I know, every RAID 5 has a potential write-hole issue in case of power loss. The BTRFS devs are just the only ones who flag their RAID 5 as unstable because of it.

    In my case (several years ago) the instability had nothing to do with RAID5. I was working with a plain, single-drive BTRFS volume. I was testing BTRFS for a mobile application where power disconnects would not be unusual. In theory, power losses should have no effect on a CoW volume, but that was not the case with BTRFS.

    After a sudden power loss (I forget what the exact error was; something like an operation sequence number that the filesystem found to be out of order), I delved into the BTRFS utilities and errata to "fix" the issue. On the first occasion, I managed to "patch" the filesystem and bring it out of read-only. On two other occasions, I couldn't recover it and had to move to more drastic measures where the filesystem was lost.

    In recent times, I'm still using the same drive (it's a WD USB external) with BTRFS in a backup role. Lately I haven't had any issues, but after dealing with those earlier events my faith in BTRFS has been damaged. While the BTRFS devs may have fixed the issues responsible, based on my own experience I couldn't recommend BTRFS as a primary data store. (That's just my opinion.)

    • Official post

    If you wonder about these statements then perhaps this will convince you: https://lore.kernel.org/linux-…4.GX10769@hungrycats.org/

    Wow. That list is extensive, and the issues go far beyond what they were -> advertising several years ago. (There was zero progress, for years at a time, on the problems they would admit to.) While I observed the effects of a couple of those behaviors, I didn't attempt to get to the bottom of why it was happening. My thought was: if there are obvious issues out in broad daylight with a simple volume, there would almost have to be more that are hidden, waiting to be discovered. (I simply moved on to an FS that could deal with abrupt power loss.)

    Some years ago, there was a forum moderator who was pushing BTRFS on Linux newbies, largely because it was integrated into the kernel. I never understood his reasoning or the "unfounded trust". Since most newbies don't have backups, I saw recommending BTRFS as a potential disaster from a forum support perspective. Thankfully, it wasn't widely adopted.
    ____________________________________________________________________________

    For those who may be interested in a plain language explanation of the status of BTRFS, as of a few years ago, this -> article might be an interesting read.

  • If you wonder about these statements then perhaps this will convince you: https://lore.kernel.org/linux-…4.GX10769@hungrycats.org/


    AFAIK, nothing much has changed since that was written in 2020. BTRFS RAID5 is not marked unstable simply because of a potential write hole.

    This is my first time seeing an actual list of issues with btrfs, and it makes me very glad I didn't decide to use it. The self-repair features make it attractive, but what good are they when bugs keep them from working correctly?


    I will gladly stick to what I know works. As I said above, ZFS or mdadm + XFS are my comfort level.


    I have personally been using mdadm + XFS for about 12 to 15 years without any filesystem problems, and I used ZFS in some test scenarios back then, but went with mdadm + XFS because my home system was not capable enough at the time for the heavier RAM requirements of ZFS.


    Both of those filesystems have long pedigrees. XFS was created by Silicon Graphics back in the early 90s (I used to sell SGI back then) and ZFS came from Sun Microsystems in the early 2000s. Both platforms were high-end commercial servers and workstations, with a lot of expertise and testing put into developing their products, unlike btrfs which, from what I understand, is primarily the product of one guy without a large company backing the development.

  • This is my first time seeing an actual list of issues with btrfs, and it makes me very glad I didn't decide to use it. The self-repair features make it attractive, but what good are they when bugs keep them from working correctly?


    In fairness to BTRFS, its raid1 and raid10 profiles have been stable for quite a while. Another informative post comparing mdadm, BTRFS and ZFS failure modes can be found here: https://unixdigest.com/article…s-btrfs-and-mdadm-dm.html

  • Thanks for all the answers, guys!


    My system will have 4x14TB for data and 12GB of ECC RAM with a Xeon 1265v2 (MicroServer Gen8).

    I read that the RAM requirement for ZFS is more of a "legend" than anything else (if I don't use dedup). But at the moment I'm left with only 2-4GB of free RAM most of the time, for example when the Minecraft server is up.

    Would that be a problem with ZFS?

    • Official post

    It is not a problem. If the system needs RAM, ZFS will return RAM to the system. ZFS only occupies RAM that would otherwise sit idle; if another process needs it, it is given back. https://www.linuxatemyram.com/
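
    A quick way to see the "free" vs "available" distinction that page describes, assuming a standard Linux /proc/meminfo:

        # "Free" RAM looks small because cache counts as used, but much of it is
        # reclaimable on demand; MemAvailable is the kernel's estimate of that.
        def meminfo():
            values = {}
            with open("/proc/meminfo") as f:
                for line in f:
                    key, rest = line.split(":", 1)
                    values[key] = int(rest.split()[0])  # values are in kB
            return values

        m = meminfo()
        print(f"MemFree:      {m['MemFree'] / 1024 / 1024:.1f} GiB")
        print(f"MemAvailable: {m['MemAvailable'] / 1024 / 1024:.1f} GiB")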
