JBOD vs MergerFS?

  • Hello guys,


    Just a quick question: what is the difference between JBOD "RAID" and using the mergerfs plugin that mounts on top of the filesystems?


    I mean, I see that for things to work on OMV, I first need to decide if I want to use RAID (in this case I can select JBOD), and then select the filesystem (I was looking at BTRFS). So what's the point of adding MergerFS? Isn't the JBOD "RAID" supposed to do the same job?


    I guess MergerFS is somewhat more customizable, but apart from that, is there any other difference?


    Kind regards

  • What you mean by "JBOD (Just a Bunch of Disks) raid", is probably better described as "software RAID".
    RAID controllers combine disks into what looks, logically, like one disk to the host. The controller presents a single disk "RAID Array". Software RAID combines disks that are connected to the PC, using software, to do the same thing. The difference is, a RAID array created on a hardware RAID controller is, usually, permanently married to that controller or the same class of controllers. Software RAID arrays are more portable.


    Regarding MergerFS:
    RAID combines several disks so that they appear to be one big disk. The downside is that if too many disks are lost, everything is lost (with RAID5, losing two or more disks destroys the array). MergerFS does something similar: it combines two or more disks under a common mount point so that they appear to be one large drive. The difference is that if one or more disks are lost with MergerFS, the remaining disks still contain usable data. If SNAPRAID is added to MergerFS, you get capabilities similar to a RAID array without that downside. Some features of MergerFS/SNAPRAID are actually better than traditional RAID. (There are pros and cons to either approach.)


    BTRFS is another consideration altogether, with RAID-like capabilities. BTRFS is one of the two COW (copy-on-write) filesystems that are common on Linux. (ZFS is the other.) Any COW filesystem comes with complexities that should be understood before using it.


    If you're a Linux beginner, I'd focus more on backup. That's much more important than worrying about the complexities of combining drives. With solid backup in hand, you can learn about and try out more complex filesystems and storage approaches without fear. If something goes wrong, that's what backup is for.

  • Hello,


    Thanks for the detailed explanation. I come from FreeNAS, where I used ZFS, so I know more or less how COW filesystems work and what their features are: dedup, snapshots, checksums, etc.


    The only thing I was asking about was one of the RAID options I saw in OMV, the JBOD option. I guess it is the same as what mergerfs does, i.e. like "RAID0" but without splitting a single file across all disks.


    So that's why I was asking about the pros and cons of using mergerfs versus the JBOD OMV RAID option.


    Kind regards

  • Just to point out what I am trying to achieve.


    I just want to join 4 disks that can work independently, but with checksum protection, so I can scrub the disks and locate any problems with my data.


    That way, if I lose one disk, only the data on that disk is lost, which is not a problem for me, as I will restore it from backups. And if something flips a bit on a disk, I will do the same: restore the corrupted file from my backup.

  • It will be just to join 4 disks, that can work independently, but with the checksum protection option, to scrub the disk and locate any problem with my data.

    You can do what you're describing above with MergerFS which logically combines disks. Adding SNAPRAID will give you checksum based bitrot protection at the cost of one disk as the parity drive. (The parity disk must be the largest.) With mergerfs, all that's needed is an understanding of the storage policy. SNAPRAID, on the other hand, requires a bit of reading to understand how it works.
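
    To make that concrete, here is a minimal sketch of what a MergerFS + SNAPRAID setup might look like. All paths and disk labels below are made-up examples, not taken from this thread; the config keywords themselves (parity, content, data) and the sync/scrub commands are standard SNAPRAID.

```shell
# Minimal /etc/snapraid.conf sketch (paths and labels are hypothetical):
#   parity  /srv/dev-disk-by-label-parity/snapraid.parity   # on the largest disk
#   content /srv/dev-disk-by-label-disk1/snapraid.content   # checksum database,
#   content /srv/dev-disk-by-label-disk2/snapraid.content   # kept on 2+ disks
#   data d1 /srv/dev-disk-by-label-disk1/
#   data d2 /srv/dev-disk-by-label-disk2/

# Typical workflow: update parity after changes, then periodically verify
snapraid sync
snapraid scrub -p 10   # check ~10% of the array per run
```

    Note that SNAPRAID protection is snapshot-style: files changed since the last sync are not yet covered.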


    You could do the same with BTRFS in a RAID5 equivalent, but BTRFS still has RAID5/6 issues. In either case, a bit of reading should be done to understand the implementation.
    I've been testing BTRFS for bitrot correction performance. While my results are far from conclusive, I have some concerns. I plan to refine the test procedure and look a bit closer, over the winter months.

  • JBOD is the default. Individual drives. sda, sdb, sdc and so on.


    For instance an external USB enclosure for multiple drives might have a JBOD setting. Then the individual drives are available as individual drives. (Using one cable.)


    By default you have separate drives. And can use them separately. Just a Bunch Of Drives. Call it JBOD if you want to.


    Or you can use the separate drives together for software RAID. Pick a RAID level. Any RAID level.


    Or you can use the separate drives together for mergerfs or Snapraid.

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

  • You can do what you're describing above with MergerFS which logically combines disks. Adding SNAPRAID will give you checksum based bitrot protection at the cost of one disk as the parity drive. (The parity disk must be the largest.) With mergerfs, all that's needed is an understanding of the storage policy. SNAPRAID, on the other hand, requires a bit of reading to understand how it works.
    You could do the same with BTRFS in a RAID5 equivalent, but BTRFS still has RAID5/6 issues. In either case, a bit of reading should be done to understand the implementation.
    I've been testing BTRFS for bitrot correction performance. While my results are far from conclusive, I have some concerns. I plan to refine the test procedure and look a bit closer, over the winter months.


    Well, I am not able to use SnapRAID, as I do not want to use a parity drive; as I wrote before, that's what I have the backup for. So I need a filesystem with bitrot detection that will help me identify whether something gets corrupted or not.
    At the same time, I want my data load-balanced across the disks as if they were individual (NOT RAID0). So I found the option to create individual BTRFS filesystems and then join them with mergerfs. Any concerns about this?
    But what I still do not know is the difference between mergerfs and the JBOD RAID option, as they seem to be pretty much the same.



    I checked the JBOD RAID option, and it seems to join disks into one like mergerfs does, so I still don't really see the difference between the two. If I am not wrong, if I break up and split the disks, the data will still remain there and can be accessed individually, right?

  • There is no such thing as "JBOD RAID". That term is nonsense. JBOD is drives NOT combined. RAID is drives combined. Mergerfs is something in between.


    Using mergerfs, the filesystems on the drives are combined but still exist as separate filesystems. That means a file is stored either on one drive or the other. If one drive fails, the files on the other drive are still there and can be retrieved as normal.


    Using raid0, the drives are combined and one filesystem is created spanning both drives. If one drive fails, the whole filesystem fails. Both drives. Poof! Gone! See you never again...


    Performance for a raid0 is most likely significantly higher than for two drives in mergerfs.


    Using mergerfs you can use various "policies" to decide on which drive a file ends up. If you have a media library with files in subfolders split up by starting letter, you can say that files starting with a, b or c should be on this drive, and files starting with d, e, f or g should be on that drive. Or you can have a policy that says new files should be stored on this drive first, then on that one. Or that new files should be stored on the drive with the most free space.
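
    As a sketch (the branch paths here are made up), the create policy is just a mount option:

```shell
# Pool two branches; "category.create" picks the drive for NEW files:
#   ff  = first found  (fill the first branch, then the next)
#   mfs = most free space (load-balance new files across drives)
mergerfs /srv/disk1:/srv/disk2 /srv/pool -o defaults,allow_other,category.create=mfs
# Existing files are always read from whichever branch actually holds them.
```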


  • You could have raid0 with BTRFS. Then you get spanning drives and checksums, allowing bitrot detection. You don't get bitrot correction; that you will have to provide manually. Bitrot detection combined with bitrot correction equals bitrot protection.


    You also need to learn how to use BTRFS. Good luck with that...
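
    As a rough sketch (device names and mount point are hypothetical), such a setup could be created like this:

```shell
# Two-disk BTRFS volume: data striped across both (raid0), metadata mirrored (raid1)
mkfs.btrfs -d raid0 -m raid1 /dev/sdb /dev/sdc
mkdir -p /srv/btrfs-pool
mount /dev/sdb /srv/btrfs-pool
# Checksums are always on; with raid0 data, a scrub can DETECT errors
# but has no second copy to repair from (detection, not correction).
```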


  • Let me check and I will let you know. I thought JBOD also joined disks together. I might have gotten it wrong.



    You could have raid0 with BTRFS. Then you get spanning drives and checksums allowing bitrot detection. You don't get bitrot correction. That you will have to provide manually. Bitrot detection combined with bitrot correction equals bitrot protection.


    You also need to learn how to use BTRFS. Good luck with that...

    No, I do not want to use RAID0, because that would spin up all my disks whenever I want to read data. And if I lose one drive, I would lose all the data too. I prefer individual BTRFS filesystems, joining the disks with mergerfs.


    About using BTRFS, I will just check how to run scrubs and other basics, not much more, since OMV lets me format and mount BTRFS, and mergerfs has the plugin to join the filesystems. So apart from running scrubs on the CLI, I cannot think of anything else right now.
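
    For anyone following along, running a scrub from the CLI looks like this (the mount point is a made-up example):

```shell
# Start a scrub in the background, then check on it later
btrfs scrub start /srv/btrfs-pool
btrfs scrub status /srv/btrfs-pool

# Or run it in the foreground and wait for the summary (-B)
btrfs scrub start -B /srv/btrfs-pool
```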

  • You could use a disk volume manager like LVM to "span" several JBOD/independent drives. It is similar to raid0, but more efficient in utilizing the storage. Still I would expect all the drives to spin up.
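
    A sketch of such a spanned (linear) LVM volume, with hypothetical device names; note that LVM by itself adds no checksumming, so this alone gives no bitrot detection:

```shell
# Linear (spanned) volume across two disks; no striping, no checksums
pvcreate /dev/sdb /dev/sdc
vgcreate pool /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n data pool
mkfs.ext4 /dev/pool/data
mount /dev/pool/data /srv/data
```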


  • You could use a disk volume manager like LVM to "span" several JBOD/independent drives. It is similar to raid0, but more efficient in utilizing the storage. Still I would expect all the drives to spin up.

    But does that give me any bitrot protection?


    Why not the way I was suggesting? A JBOD of individual BTRFS filesystems, then mergerfs?

  • The way you suggest will NOT give bitrot protection. It might provide bitrot detection. Try not to mix up detection, correction and protection; it is confusing.


    Try your suggestion. I haven't tried it, so I don't know if it will work, or even if it is possible.


    ZFS with redundancy or BTRFS with redundancy gives real time bitrot protection. Snapraid with redundancy gives bitrot protection, but not real time.


    None of your suggestions gives bitrot protection. Checksums give only bitrot detection. You have to provide the correction manually.


    Protection=Detection+Correction.


  • Well yes, sorry, I am only looking for bitrot detection. As I explained earlier, I am not expecting the filesystem itself to recover from corruption or a lost disk; I will do that with my backup, which is what it is for.

  • Well yes, sorry, I am only searching for bitrot detection, as I explained earlier, I am not expecting to recover my corruptions or lost of disks by the FS itself, I will do it with my backup, which is for what it is.

    In past experimentation with BTRFS, I expected to do what you're suggesting: detect bit-rot and replace the corrupted file from backup. The problem I had was that there didn't seem to be any clear way to associate a detected bit error with the name of the affected file. I spent a bit of time looking for a process or utility that could do it, and found nothing.


    So, if you find a way to do that (associate a detected bit error with a file name), I'm really interested. Please post that information.

  • In past experimentation with BTRFS, I expected to do what you're suggesting; detect bit-rot and replace the corrupted file from back up. The problem I had was, there didn't seem to be any clear way to associate a detected bit error with the name of the affected file. And I spent a bit of time looking around for a process or a utility that could do it - I found nothing.
    So, if you find a way to do that (associate a detected bit error to a file name) I'm real interested. Please post that information.

    I think there was an option to see which files were affected, but I don't remember now. Once I have the information, I will post it here, no problem.


    But it seems that's how UNRAID works, I mean, using BTRFS.


    Kind regards

  • Ok, more problems.


    The current default config for mergerfs seems to be crap. When I copy a file through CIFS/SMB to my mergerfs folder, I get the following speeds (see attachment).


    But if I copy the same file directly to one of my two disks, I do not see any write spikes.


    Do you know how to optimize this? I am currently using the mergerfs plugin:


    root@openmediavault:~# mergerfs --version
    mergerfs version: 2.28.2
    FUSE library version: 2.9.7-mergerfs_2.28.0
    fusermount version: 2.9.7
    using FUSE kernel interface version 7.29


    Kind regards

  • Have you read the mergerfs documentation yet?



    https://github.com/trapexit/mergerfs

    Lots of info there. But how much of it can be configured via the Union File Systems plugin is unknown to me. You might have to configure mergerfs by hand (I do) if the plugin doesn't allow you to input the options you wish to use.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 16GB ECC RAM.

  • I started reading some documentation about direct_io and decided to give it a try. Write speed shouldn't have been affected with it disabled, but changing it seems to have solved the problem :D
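
    For anyone hitting the same write-speed spikes, the relevant mount option looks like this (branch paths are hypothetical; whether direct_io helps depends on version and workload, and in newer mergerfs releases the page-cache behaviour is controlled by cache.files instead):

```shell
# mergerfs 2.28.x: direct_io bypasses the page cache for I/O through the pool
mergerfs /srv/disk1:/srv/disk2 /srv/pool -o defaults,allow_other,direct_io
# Newer releases express roughly the same thing as:
#   -o cache.files=off
```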

  • In past experimentation with BTRFS, I expected to do what you're suggesting; detect bit-rot and replace the corrupted file from back up. The problem I had was, there didn't seem to be any clear way to associate a detected bit error with the name of the affected file. And I spent a bit of time looking around for a process or a utility that could do it - I found nothing.
    So, if you find a way to do that (associate a detected bit error to a file name) I'm real interested. Please post that information.

    Since you told me this, I decided to search and check where I had read that.


    At first, I remembered seeing the path of the corrupted file mentioned on the UNRAID forums, but I also searched and found the following:


    http://prntscr.com/qjldxi
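
    For reference, on reasonably recent kernels BTRFS logs data checksum errors found by a scrub to the kernel log, and those messages usually include the resolved path of the affected file (the exact wording varies by kernel version, and the mount point below is a made-up example):

```shell
# After a scrub, look for checksum errors in the kernel log
dmesg | grep -i 'checksum error'
# Data-extent errors typically look something like:
#   BTRFS warning (device sdb): checksum error at logical ... on dev /dev/sdb,
#   root 5, inode 257, offset 0, length 4096, links 1 (path: some/file.bin)
```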
