Evaluation of File System Performance on OMV

  • Introduction

    I wanted to completely reinstall all my NAS systems using OMV. While doing so, I decided it would be a good opportunity to rethink the file systems I use, my backup strategy and how I could make the best possible use of my existing HDDs.


    I did some extensive testing, most of it in a NAS with 6 HDDs and with most of the file systems available in OMV: I tested Ext4, XFS, JFS and BTRFS. I did not do any tests with ZFS.


    I tested different types of data, ranging from very small files to multi-gigabyte files. I also explicitly tested btrfs send/receive, since I had realized that it has its own I/O usage profile and is not covered well enough by just copying some files.


    Since I have HDDs of different sizes, I was interested in building a Raid-5 with those HDDs. BTRFS is capable of doing so, but (as of this writing in summer 2024) this is not yet considered stable for production use. Therefore I was also looking for alternatives. I found something called Sliced Hybrid Raid (SHR). I also made some modifications to that concept, mainly to improve performance.


    To cut a long story short (tl;dr)

    • Ext4 and JFS are slower than XFS and BTRFS.
    • As long as you only deal with small files that fit into the Linux cache (in my case up to a size of 2-3 GB), the underlying technology doesn’t really matter. They are all similarly fast (due to the file cache).
    • MergerFS has a visible impact on performance.
    • For multi-gigabyte files you will notice the different redundancy technologies. Native raids are all similarly fast, no matter whether they are built with mdadm, LVM or BTRFS. In contrast, SHR and the Sliced solutions show a significant drop in performance (up to 10 times slower in my case). BTRFS Raid-0 on top of Slices was the fastest of them (only a factor of 2 slower).
    • Used with a big enough cache on SSD (lvmcache or bcache), SHR and the Sliced solutions are all similarly fast again. I’d recommend bcache applied to the slices for this scenario.


    I have attached to this post the raw data as a CSV file (the .txt file) and an Excel file (the .zip file) with the same data and the visualizations I used for this guide. I also wanted to upload a PDF version of this guide, but the PDF file is slightly bigger than 1 MB and I wasn't allowed to upload it.

  • The long story

    Test scenario, test data, etc.

    To understand all the details of my tests, I need to at least summarize what a SHR is about: A Sliced Hybrid Raid can create a Raid-5 with HDDs of different sizes without wasting space. This is done by creating equally sized partitions across those HDDs. The first partition has the size of the smallest disk and is repeated on every other disk. The second partition is the remaining space of the second smallest HDD; this partition is also repeated on the other disks (but not on the smallest one), and so on, until all space on the HDDs is used. All partitions at the first position (size of the smallest HDD) are then bundled into a Raid-5 using mdadm – this is called a “Slice”. The same is done for all other partition positions, so you get as many Slices as you have different HDD sizes. In my tests I used 2 HDDs of 3 TB, 2 HDDs of 5 TB and 2 HDDs of 10 TB, resulting in 3 Slices. These Slices are then used as Physical Volumes in LVM: all three Slices / Physical Volumes build one Volume Group, which is completely used as one Logical Volume. This is the final SHR. In short: a SHR consists of several Slices (mdadm) with an additional layer using LVM.


    pasted-from-clipboard.png


    I've described this in detail in this thread: link
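
    To make the layering more concrete, here is a minimal sketch of how such an SHR could be assembled on my six disks (device names, md numbers and the file system at the end are only placeholders). Note that the last slice only spans two disks, so it ends up as a mirror:

      # Slice 1: first partition (size of the smallest disk) on all 6 HDDs
      mdadm --create /dev/md10 --level=5 --raid-devices=6 /dev/sd[abcdef]1
      # Slice 2: second partition on the 4 larger HDDs
      mdadm --create /dev/md11 --level=5 --raid-devices=4 /dev/sd[cdef]2
      # Slice 3: third partition on the 2 largest HDDs (only two devices left, so a mirror)
      mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sd[ef]3

      # Combine the slices with LVM: each slice becomes a Physical Volume, all of them
      # form one Volume Group, which is used as one big Logical Volume
      pvcreate /dev/md10 /dev/md11 /dev/md12
      vgcreate vg_shr /dev/md10 /dev/md11 /dev/md12
      lvcreate -l 100%FREE -n lv_shr vg_shr
      mkfs.xfs /dev/vg_shr/lv_shr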


    For the mentioned modifications of that concept, I basically kept the slices but replaced LVM with other techniques (BTRFS and MergerFS). I also added some caching using lvmcache or bcache.


    For the tests, I differentiated between the file system itself (i.e., Ext4, XFS, JFS, BTRFS) and the redundancy level (i.e., single disk, Raid-0, Raid-1, Raid-5 and SHR). In the case of BTRFS I created all the raid levels once with native BTRFS mechanisms and once with mdadm (Linux software raid), formatting the resulting device (/dev/md*) with BTRFS using the single profile. For BTRFS I also modified the SHR: since BTRFS does its own volume management, there is no need for LVM. BTRFS can simply use the Slices on its own, either with a native BTRFS Raid-0 as additional layer or with the “single” profile as additional layer. Just for completeness I also included MergerFS as an additional layer on top of the 3 Slices.
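
    As an illustration of the two BTRFS variants, a minimal sketch with three example disks might look like this:

      # Variant 1: native BTRFS Raid-5 (raid5 for data; raid1 metadata is a common choice)
      mkfs.btrfs -d raid5 -m raid1 /dev/sda /dev/sdb /dev/sdc

      # Variant 2: mdadm Raid-5, then BTRFS with the "single" profile on the resulting md device
      mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
      mkfs.btrfs -d single /dev/md0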


    Tests were done on a Qnap TS-853A (Quad-core Intel Celeron N3150 1.6GHz with 16 GB RAM).



    Disks used:

    • 2x 10 TB HDD (WDC WD100EFAX)
    • 2x 5 TB HDD (WDC WD50EFRX)
    • 2x 3 TB HDD (WDC WD30EFRX)
    • 1x 500 GB SSD (WDS500G2B0A)


    Drive configurations I tested:

    • Plain FS directly on the SSD
      (as a reference for max. possible data rate on my system)
    • Plain FS directly on the HD
      (as a reference for data rate with HDDs on my system)
    • Raid-0, Raid-1, Raid-5
      (for Ext4 the Raid(s) were created by mdadm,
      for BTRFS the Raid(s) were created natively by BTRFS
      and also with mdadm, then using BTRFS (single) as FS on top of it)
    • SHR (mdadm + LVM)
    • 3 slices + BTRFS
    • 3 slices + MergerFS
    • cached SHR (mdadm + LVM + lvmcache)
    • cached SHR (mdadm + LVM + bcache)
    • cached SHR (mdadm + bcache + LVM)
    • cached Slices (mdadm + bcache + BTRFS or MergerFS)


    File systems used:

    • Ext4
      (for some test cases I stopped using Ext4, simply because it literally takes a day to format 26 TB with Ext4)
    • XFS
    • JFS
    • BTRFS


    As test data I used:

    • 500 MB of small files; most of them less than 2 MB (typical source code; compressible)
    • 500 MB of small files; each less than 2 MB (jpg photos; not compressible)
    • A single file of 500 MB (not compressible)
    • A single file of 1 GB (not compressible)
    • A single file of 2 GB (not compressible)
    • A single file of 8 GB (not compressible)
    • A single file of 12 GB
    • On BTRFS I also created a set of BTRFS snapshots using btrfs send/receive
      (about 40 GB in size)


    For write tests, those files were on an SSD and were copied to the target file system during the test. For read tests, the files were on the file system under test and were copied to the SSD (assuming the SSD won’t be the limiting factor during the performance measurement). I used rsync for the copy operation, because it is known to be fast and it gives nice statistics, including data throughput.
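
    A single write test run then boils down to something like this (paths are hypothetical); the --stats summary of rsync reports the transferred bytes and the resulting throughput:

      # Write test: copy the test data from the SSD to the file system under test
      rsync -a --stats /mnt/ssd/testdata/ /mnt/test/run1/
      # Read test: the same in the other direction
      rsync -a --stats /mnt/test/testdata/ /mnt/ssd/run1/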


    Tests on a single HDD were done on the respective device. Tests involving Raid-0 / Raid-1 were done on the two 10 TB disks; all other tests (Raid-5, SHR, Slices) were done using all 6 HDDs.


    Test data was copied at least 7 times, and each copy job was measured. The graphics show the median as well as the max and min values. See the following picture for the reference values measured with a 500 MB file on the single disks. The dark blue point is the median of the measurements; the upper and lower ends of the light blue bar indicate the variation in throughput during those measurements.


    pasted-from-clipboard.png


    Here you can easily see that BTRFS and XFS are faster than Ext4 and JFS (in the case of a 500 MB file). You also see that all disks have very similar throughput, one reason being the Linux file cache.


    Single disk scenarios

    Let’s look at the results for the other test data.


    500 MB of small, compressible files:

    pasted-from-clipboard.png

    Similar result: the disks have similar data rates, and Ext4 & JFS are slower than BTRFS & XFS. On those small files, JFS is even slightly worse than Ext4.


    500 MB of small, UNcompressible files:

    pasted-from-clipboard.png

    Again, similar result.


    1 GB file:

    pasted-from-clipboard.png

    Again a similar result, but now, for 1 GB, JFS is slightly better than Ext4. We also see that my 5 TB disks seem to be slower than the others.

    8 GB file:

    pasted-from-clipboard.png

    Now this file size can no longer be handled by the file cache. Hence, we see some differences between the disks. But the performance of Ext4 and JFS is still lower than that of the other two file systems. We also see that in this case my 3 TB disks seem to be the slower ones.


    12 GB file:

    pasted-from-clipboard.png

    A similar result to the 8 GB file. We also see that at some point SSDs are not as good with large continuous files as spinning disks.




  • Now let’s see how the different data files compare on one and the same file system.


    500 MB – XFS – single disk
    pasted-from-clipboard.png


    500 MB – JFS – single disk
    pasted-from-clipboard.png


    500 MB – Ext4 – single disk
    pasted-from-clipboard.png


    500 MB – BTRFS – single disk
    pasted-from-clipboard.png

    Looking at the last few diagrams, we can see that, despite the different absolute data rates, the relation between the different data files is very similar: small files need more time, medium sized files are faster, and really big multi-gigabyte files (including btrfs send/receive, as we can see in this figure) are slower again. This is true in the same way for all 4 file systems.


    BTRFS and compression on single disk

    Since I was doing performance measurements anyhow, I got curious about the compression feature of BTRFS. I tested “compress=no” (i.e., without any compression), “compress=zstd:4” and “compress-force=zstd:4”. The reason why I chose zstd:4 instead of the default zstd:3 is that I came across some statistics where compression rates were evaluated: going from 3 to 4 gave a slightly better compression rate, but above 4 the compression rate did not improve any further, while the time needed for compression increased.
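
    The three variants correspond to the following mount options (device and mount point are just examples):

      mount -o compress=no           /dev/sdX /srv/btrfs   # no compression at all
      mount -o compress=zstd:4       /dev/sdX /srv/btrfs   # compress, but skip data that doesn't look compressible
      mount -o compress-force=zstd:4 /dev/sdX /srv/btrfs   # always run the data through zstd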


    Single disk - Small files:
    pasted-from-clipboard.png

    For small compressible files the compression of the BTRFS file system does not seem to have any significant impact (in terms of data rate). Surprisingly, the data rate even increases slightly for small UNcompressible files (old .jpg photos, each of them smaller than 2 MB). Probably that’s because BTRFS detects that the files are not compressible and then doesn’t even try.


    Single disk - Medium sized files
    pasted-from-clipboard.png

    No measurable impact.


    Single disk - Large files
    pasted-from-clipboard.png

    Again, we see the influence of the Linux file cache. For a multi-gigabyte file there is a measurable drop in performance, and for very large files a significant one – especially for “compress-force”, where performance dropped to about half of the uncompressed data rate. The same holds true for btrfs send/receive.


  • Comparing redundancy levels (incl. SHR and Slices).

    500 MB file on Ext4

    pasted-from-clipboard.png

    Very similar throughput for almost all technologies. Only MergerFS on top of the 3 Slices shows a visible degradation in performance.


    500 MB file on JFS

    pasted-from-clipboard.png

    Very similar for JFS.


    500 MB file on XFS

    pasted-from-clipboard.png

    Again, very similar. In this case I also added one test where I created a Raid-5 with LVM instead of mdadm. There is no significant difference between the performance of a Raid-5 created with LVM and one created with mdadm.


    500 MB file on BTRFS

    pasted-from-clipboard.png

    This time no MergerFS is involved, since BTRFS can do this on its own. We also see no significant difference between native BTRFS raids and BTRFS on top of mdadm.


    1 GB file on XFS

    pasted-from-clipboard.png

    1 GB file on JFS

    pasted-from-clipboard.png

    1 GB file on Ext4

    pasted-from-clipboard.png

    1 GB file on BTRFS

    pasted-from-clipboard.png

    8 GB on XFS

    pasted-from-clipboard.png

    The Linux file cache can’t deal with the 8 GB file any longer. Now we clearly see that the SHR as well as the Slices with MergerFS show a significant degradation in performance. It’s the same for the 12 GB file on XFS:


    12 GB on XFS

    pasted-from-clipboard.png


  • 12 GB on JFS

    pasted-from-clipboard.png


    12 GB on Ext4

    pasted-from-clipboard.png


    12 GB on BTRFS

    pasted-from-clipboard.png


    BTRFS is also very similar. But this time there is no MergerFS, and there are two new variants: 3 Slices with, on top of them, either BTRFS Raid-0 or BTRFS with the “single” profile. And while BTRFS single still shows weak performance, BTRFS Raid-0 is somewhere in between, with much better performance than the SHR.



    Looking at the snapshot / backup mechanism of BTRFS (btrfs send/receive), we see a similar but slightly worse picture:

    pasted-from-clipboard.png


    Interestingly, the performance of Raid-5 is better in this case than the performance of Raid-1 (no matter whether native BTRFS or on top of mdadm).



    Redundancy levels and BTRFS compression

    Now let’s see, what the compression of BTRFS looks like on the different redundancy technologies.


    This figure shows the 500 MB file for a single disk, Raid-0, Raid-1 and Raid-5 (all raids both native and on top of mdadm), in each case without compression, with compression and with compress-force. We can clearly see that for small files and standard raids there is no significant difference. (That’s also the reason why I didn’t care about the legend not being readable – it just doesn’t matter.)


    500 MB & compression

    pasted-from-clipboard.png

    This is true for all file sizes up to 1 GB – again, the Linux file cache strikes. Differences start to show with the 8 GB file, but they really become visible with the 12 GB file.


    12 GB & compression

    pasted-from-clipboard.png

    Here we see a nice pattern where one variant is significantly slower than the others. Let’s look at one example in detail:


    12 GB file & compression on Raid-5

    pasted-from-clipboard.png

    As seen before, compress-force only delivers about half the data rate of the uncompressed run. Normal compression (compress=zstd:4) shows a small impact. For btrfs send/receive it looks similar, but the performance impact of “compress=zstd:4” is bigger than in the previous figure:


    Btrfs send/receive & compression on Raid-5

    pasted-from-clipboard.png

    Now let’s look at the SHR and the Slices:


    12 GB & compression on SHR and Slices

    pasted-from-clipboard.png

    For BTRFS Raid-0 on top of the Slices we see an almost familiar picture: uncompressed data is fastest. Surprisingly, for the other two configurations (plain SHR and BTRFS “single” on top of Slices) compress-force is the fastest variant. I have no real explanation for that, but I assume it has to do with timing; I am quite sure it is not because the data is compressed so much that the gain would justify this – otherwise the effect would show up all the time and not just here.


    For btrfs send/receive we see almost the same picture:


    Btrfs send/receive & compression on SHR and Slices

    pasted-from-clipboard.png


  • Some intermediate thoughts on the results so far

    The reason why I did all this was to find a configuration where I could use all of my disks in one big raid, gaining as much space as possible. This would be easy with BTRFS, since BTRFS can indeed create a Raid-5 with disks of different sizes. The only drawback is that the developers don’t consider BTRFS Raid-5 ready for production use. Nevertheless, BTRFS Raid-5 would be among the fastest of all those solutions. If you inform yourself about the known problems, know what you are doing and have a UPS in place, this is a possible way to go.


    SHR and the variants based on Slices are able to create a Raid-5 pool, but performance drops significantly. And trust me, you don’t want to have your backup transferred to a device with only 10% of the data rate. I tried – it sucked. (That’s where this performance evaluation started…)


    Therefore I did some more tests, mainly with the purpose of improving the performance of the SHR or the variants with the Slices.


    One possibility was to adjust the alignment of the internal data structures (chunk size, block size, stripes, physical extents, etc.). I tried this, but with no measurable impact: diagrams for SHR / Slices with and without alignment all look the same. Sure, it’s possible that I made some mistake with the alignment, since this is advanced stuff, even for me. But I rather think it simply doesn’t matter for my home use scenarios. As far as I can judge, those alignment considerations are only of concern for highly specialized, professional scenarios where you want to squeeze out even the slightest increase in performance. Therefore, I’m not going into details here.


    The other possibility was the inclusion of a cache. So I created a 330 GB partition on my SSD and did some experiments / measurements with SHR / Slices combined with lvmcache or bcache. I only used lvmcache together with LVM (i.e., in the original SHR, since only that involves LVM); bcache I tried with both (SHR & Slices). Both caches were only used in configurations with a write cache. For bcache this is “writeback”; for LVM there are two flavours, “writeback” and “writecache”. “writeback” is what is normally used, because it also comes with a read cache, whereas “writecache” is a pure write cache without any read cache at all.
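
    For the bcache-on-slices variant (the one I ended up recommending), the setup roughly looks like this. Device names and the cache set UUID are placeholders, and keep in mind that make-bcache reformats the backing devices, so this has to be done before any data is on the slices:

      # SSD partition as cache device, the three mdadm slices as backing devices
      make-bcache -C /dev/sdg1
      make-bcache -B /dev/md10 /dev/md11 /dev/md12

      # Attach each bcacheN to the cache set and switch it to writeback mode
      echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
      echo writeback > /sys/block/bcache0/bcache/cache_mode
      # repeat for bcache1 / bcache2, then build BTRFS or MergerFS on top of /dev/bcache*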


    SHR & Slices combined with caches

    Since I only cached SHR and the variants involving Slices, the performance of the cached file systems will be compared only to Raid-5 configurations.


    500 MB small, compressible files on BTRFS

    pasted-from-clipboard.png


    500 MB small, UNcompressible files on BTRFS

    pasted-from-clipboard.png


    500 MB file on BTRFS

    pasted-from-clipboard.png

    1 GB file on BTRFS

    pasted-from-clipboard.png


    As we can see, for the smaller files there are some differences visible, but they are not really significant.


    Now let’s look at the bigger files.


    8 GB file on BTRFS

    pasted-from-clipboard.png


    12 GB File on BTRFS

    pasted-from-clipboard.png


    Btrfs send/receive

    pasted-from-clipboard.png

    With the bigger files and also with btrfs send/receive we clearly see that the cache significantly improves the data rate. It’s not at the level of a native Raid-5, but it’s clearly better than SHR or Slices without any cache. And we need to take into account that my SSD has shown a data rate of about 130 MB/s. That means the data rate we currently see is about all my SSD can deliver.


    12 GB on XFS

    pasted-from-clipboard.png


    12 GB on JFS

    pasted-from-clipboard.png

    For the rather slow file system JFS (at least compared to XFS & BTRFS), the cached devices are as fast as the native raids.


    Let’s look at compression with cache:


    Cache and BTRFS compression

    pasted-from-clipboard.png

    bcache and lvm writeback look very similar; only compress-force is slightly slower. That’s different for lvm writecache: here we see an impact on performance for both compressed variants.

  • Conclusion

    BTRFS is among the best / fastest file systems, no matter which scenario was used. Compression and the ability to create raids from differently sized HDDs would make it even more interesting, were it not for the fact that Raid-5 is not considered mature enough for production use. Snapshots and the ability to convert snapshots into real backups using send/receive come on top of that.


    Among the more traditional file systems, XFS is faster than JFS and Ext4 in almost every scenario and is therefore also a clear recommendation. JFS and Ext4 both have a similar data rate but are visibly slower than XFS and BTRFS. Ext4 has yet another annoying aspect: it needs much longer for formatting than the other file systems. Formatting my 26 TB test file system with Ext4 took about a day (no kidding!), whereas all the other file systems (JFS, XFS and BTRFS) only needed seconds or maybe one or two minutes.


    All native raids (no matter whether created with mdadm, LVM or BTRFS) also have similar speed. However, mdadm has a downside similar to Ext4: it needs ages to build the raids. With BTRFS or LVM the raid devices are built within seconds or maybe a few minutes.


    If you want to combine disks of different sizes, you can bite the bullet and use BTRFS Raid-5, even though it is not yet considered mature enough for production use. The flaws are well documented, and if you know what you are doing, have a UPS in place and create daily backups (and I mean backups, not snapshots), it is worth a try.


    If you don’t feel comfortable with that and still want to create a raid out of differently sized HDDs, you can either try BTRFS Raid-1 (but you will lose some space compared to Raid-5) or try the Slices with BTRFS on top of them. See if the performance satisfies your needs; if it doesn’t, install bcache and add a cache on top of the Slices. Be aware that bcache works on block device level. That means you can’t add bcache to an existing device with data on it – you will need to reformat.


    You can even think about combinations: e.g., base the file system for your daily work on Slices, since there you will deal with all sorts of files, but mostly small ones (i.e., smaller than 2 GB), and the Linux file cache will provide the speed (if that’s not enough, you can even add an SSD based cache). The file system for your backups could then be BTRFS Raid-5, since there you need the speed for writing large files. And maybe another (external?) backup using BTRFS Raid-1 (which IS mature enough for production use), thus spreading the risk of BTRFS Raid-5 failing.


  • Update 1

    Chunk Size and Consistency Policy

    The reason for the SHR and all the other Slice-based solutions showing bad performance without a cache is the combination of the parameter “--consistency-policy=ppl” with the chunk size.


    The following measurements / figures were created on a different machine (TerraMaster T6-423) with exactly the same disks as above, but with less RAM (only 4 GB). With this amount of RAM the effects are much more visible in the diagrams.


    Let’s have a look at the chunk size of mdadm. The default size (as of this writing) is 512K. You can set it using e.g. “--chunk=4K”. The following measurements were done on a Raid-5 with XFS as the file system. The raid itself was created with chunk sizes from 4K to 4096K, first without and then with the consistency policy.
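
    For reference, such a raid can be created roughly like this (device names are examples); chunk size and consistency policy are both parameters of mdadm --create:

      # Raid-5 over the six disks with 1024K chunks and the PPL consistency policy
      mdadm --create /dev/md0 --level=5 --raid-devices=6 \
            --chunk=1024K --consistency-policy=ppl /dev/sd[abcdef]1
      mkfs.xfs /dev/md0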


    500MB file; no ppl; different chunk size

    pasted-from-clipboard.png


    For the 500MB file the chunk size has no visible effect.


    12GB file; no ppl; different chunk sizes

    pasted-from-clipboard.png


    The same holds for the 12 GB file, and it looks much the same for almost all tested operations. It only changes a little bit for the 8 GB file.


    8GB file; no ppl; different chunk sizes

    pasted-from-clipboard.png


    Here we see a small increase with a chunk size of 1024K. The funny thing is, this already gives a bit of an outlook on the chunk sizes combined with the consistency policy. But generally speaking, we see that the chunk size doesn’t really matter when used without ppl.


    Now let’s see how that changes if we bring in “--consistency-policy=ppl”:


    12GB file; with ppl; different chunk sizes


    pasted-from-clipboard.png


    Now this is clearly an eye-opener! For every chunk size below 1024K we see really bad performance, below 20 MB/s (and sadly this includes the default chunk size of 512K). Changing the chunk size to 1024K results in a performance of almost 300 MB/s. This looks similar for the other file operations, but this figure shows it most clearly. So it looks like the optimal chunk size for this setup is 1024K.


    IMPORTANT:


    If you want to use “--consistency-policy=ppl” to cover the write hole, you should do your own performance tests with your own setup to find the optimal chunk size for your scenario.


    With that knowledge I did some more measurements, but this time I concentrated on the SHR and the Slice-based solutions. I only included a few others (e.g., single disk or NVMe) as a reference to compare the outcomes of the measurements against. The file system is BTRFS in most (if not all) cases.


    Reference

    Single disk NVME

    pasted-from-clipboard.png


    Performance somewhere between 200 MB/s and 300 MB/s


    Single disk HDD

    pasted-from-clipboard.png


    Performance somewhere between 150 MB/s and 200 MB/s


    Raid-1 with mdadm (and BTRFS on top)

    pasted-from-clipboard.png


    Performance somewhere between 150 MB/s and 200 MB/s


    Raid-1 native with BTRFS

    pasted-from-clipboard.png


    Performance somewhere between 150 MB/s and 200 MB/s


    Raid-5 with mdadm (and BTRFS on top)

    pasted-from-clipboard.png


    Performance somewhere between 200 MB/s and 600 MB/s


    Raid-5 native with BTRFS
    pasted-from-clipboard.png

    Performance somewhere between 150 MB/s and 600 MB/s


    This is what I personally consider the reference, since this is what I would use if the developers considered it stable.


  • Just out of curiosity I also created a Raid-5 with LVM (yes, you can create a Raid-5 natively using only LVM). This is what it looks like:


    Raid-5 with lvm (and BTRFS on top)

    pasted-from-clipboard.png


    It becomes clear that this is probably not the best idea, since performance is way behind mdadm.
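
    For the record, this is roughly how such a native LVM Raid-5 can be created (names and size are only examples):

      # Volume group over the six disks, then a raid5 LV with 5 data stripes + parity
      vgcreate vg_test /dev/sd[abcdef]
      lvcreate --type raid5 -i 5 -L 10T -n lv_r5 vg_test
      mkfs.btrfs /dev/vg_test/lv_r5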


    SHR and other Hybrid Raids (based on slices)

    Now let’s look at the SHR. This is what corresponds best to my setup before (i.e. a chunk size of 512k and including the consistency policy):


    SHR – chunk size 512k, consistency-policy=ppl

    pasted-from-clipboard.png


    Bad performance all over the place.


    If you don’t care about the write hole (maybe because you have a UPS in place) and you create the raids without the consistency policy, it looks like this:


    SHR – chunk size 512k, no ppl

    pasted-from-clipboard.png


    Now this is a completely different picture. Performance is somewhere between 300 MB/s and 600 MB/s, which is better than native Raid-5 with BTRFS and similar to BTRFS on top of a Raid-5 created with mdadm.


    So, what if we need the consistency policy and use a chunk size of 1024k?


    SHR – chunk size 1024k, consistency-policy=ppl

    pasted-from-clipboard.png


    This results in a performance somewhere between 150 MB/s and 300 MB/s, which is not as good as the variant before, but still better than the 20 MB/s with the default chunk size. Looking at the reference diagrams, this is comparable to BTRFS on a single HDD, which isn’t that bad. Depending on the technology, your network will be the bottleneck (1 Gbit/s equals about 125 MB/s).


    Nevertheless, as seen before, you could still combine it with a cache.


    What happens if we apply this chunk size to the SHR without ppl? Will there maybe be a performance gain?


    SHR – chunk size 1024k, no ppl

    pasted-from-clipboard.png


    The answer is: no. There are some differences, but they are too small to be visible here. The same is true if you change the chunk size to 64k. Without ppl, no difference is visible.


    This shows that a SHR – if set up properly – is at least as performant as a single disk. If used without the consistency policy, its performance can match a classical Raid-5 created with mdadm.


    While this is what I was looking for, I still wanted to know if it could get any better by modifying the concept of the SHR: I kept the slices created with mdadm, but instead of LVM I used BTRFS (either with the single profile or as Raid-0) to put the slices together.
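
    With the mdadm slices already in place, this modification is just a different mkfs call (device names are again placeholders):

      # BTRFS directly across the three slices, no LVM layer
      mkfs.btrfs -d single -m raid1 /dev/md10 /dev/md11 /dev/md12   # "single": slices are concatenated
      # or
      mkfs.btrfs -d raid0 -m raid1 /dev/md10 /dev/md11 /dev/md12    # Raid-0: data striped across the slices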


    Slices & BTRFS (single profile)

    pasted-from-clipboard.png


    Again, the answer is: No.


    Slices & BTRFS Raid-0

    pasted-from-clipboard.png


    If used with Raid-0 it gets even worse.


    Slices with ppl and chunk size 1024 & BTRFS (single profile)

    pasted-from-clipboard.png


    This setup has a slightly better performance for small compressible files (compared to the SHR with ppl and chunk size 1024), while having a slightly worse performance for the larger files.


    Slices with ppl and a chunk size of 1024k combined with BTRFS Raid-0 again gives a worse performance than with the single profile.


    Slices with ppl and chunk size 1024 & BTRFS Raid-0

    pasted-from-clipboard.png


    The next modification was to use LVM for the slices, mainly because LVM has the “integrity” feature, which could make up for the missing bit rot correction of BTRFS in the SHR configuration.
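
    A rough sketch of one possible way to build such a slice with LVM instead of mdadm (names and sizes are only illustrative; --raidintegrity adds a dm-integrity layer to every raid leg):

      # First slice: a raid5 LV with integrity over the first partitions of all six disks
      vgcreate vg_slice1 /dev/sd[abcdef]1
      lvcreate --type raid5 -i 5 --raidintegrity y -L 2T -n r5 vg_slice1
      # build the other slices the same way, then BTRFS (single profile) across the LVs
      mkfs.btrfs -d single /dev/vg_slice1/r5 /dev/vg_slice2/r5 /dev/vg_slice3/r5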


    Slices with lvm and activated integrity & BTRFS (single profile)

    pasted-from-clipboard.png


    Performance is constantly below 100 MB/s. As long as you have a recent backup around, I would rather choose the better performance of the SHR. BTRFS will still detect bit rot and inform you about it, so it will not go unnoticed. You just need to replace the affected file with a (sane) copy from the backup.


  • Summary

    The following two graphics show the aforementioned technologies in comparison for one file size each. They are representative; the other file operations show similar values.


    500 MB small, compressible file

    pasted-from-clipboard.png


    12 GB file

    pasted-from-clipboard.png


    If you have a UPS and current backups, you can go with the fastest alternative(s) here. SHR is as fast as you can expect it to be. Only if you want to include additional measures, such as a consistency policy against the write hole or integrity against bit rot, do you have to “pay” for the increase in security with a degradation in performance.

