Help with FS choice (EXT4 - BTRFS - ZFS)

  • Hello! I have a question and would appreciate opinions from people who can help.

    I have OMV 7 installed and it has been playing nice. My current storage is four 500 GB HDDs with the ext4 filesystem. These are old desktop- and laptop-size drives, but in good SMART state. (The motherboard has 4 SATA ports.)

    I created two RAID 1 arrays using mdadm (md0, md1).

    The motherboard also holds a PCI card with another 4 SATA ports, so I can add up to 4 more drives. The main system is installed on a USB drive.

    Free space on md0 and md1 is running out, but I have a couple more disks from old laptops and PCs, in good SMART state, and I want to use them in my OMV installation.

    So I want a solution that lets me add more drives to OMV, of course with different capacities (300/500/1000/1500 GB), while keeping some RAID 1-style safety.


    Googling for a solution, I found that "fresh" filesystems like BTRFS and ZFS can add disks to a storage pool to increase overall capacity, and that OMV 7 has BTRFS filesystem integration. I also found that ZFS support requires the Proxmox kernel and the ZFS plugin. I noted that in ZFS you cannot just add a drive to an existing vdev for now (this feature is coming in the future), but you can add a new vdev to the pool, and adding a new vdev requires a minimum of 2 HDDs to keep RAID 1 safety.
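    For reference, this is roughly how a ZFS pool grows today: by adding a whole new mirror vdev. A minimal sketch (pool name tank and device names are placeholders):

    Code
     # Add a second mirror vdev: pool capacity grows by the new mirror's size,
     # but each vdev needs its own pair of disks for redundancy
     zpool add tank mirror /dev/sdc /dev/sdd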


    After some thinking I decided to switch my MD RAID 1 arrays to BTRFS raid1 (which keeps two copies) or maybe raid1c3, depending on free space after adding disks.

    Below are the steps I want to follow (a rough command sketch comes after the list):

    1. Add 2x 500 GB drives, format them and create a BTRFS filesystem with the single data profile.

    2. Copy data from the MD arrays to these disks.

    3. Switch all shared folders to the new drives.

    4. If everything works fine -> drop the MD arrays, wipe the disks and add them to the BTRFS mount point.

    5. Convert to the chosen RAID profile, rebalancing the data across the drives.
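    A minimal sketch of those steps on the command line, assuming sdc/sdd are the new drives, sda/sdb the old md0 members, and /srv/pool the mount point (all placeholders; switching shared folders is done in the OMV UI):

    Code
     # 1. Create BTRFS across both new drives: single data, raid1 metadata
     mkfs.btrfs -L pool -d single -m raid1 /dev/sdc /dev/sdd
     mount /dev/sdc /srv/pool
     # 2. Copy data, preserving attributes and hard links
     rsync -aHAX /srv/md0/ /srv/pool/
     # 4. Stop the old array, wipe its members, add them to the pool
     mdadm --stop /dev/md0
     wipefs -a /dev/sda /dev/sdb
     btrfs device add /dev/sda /dev/sdb /srv/pool
     # 5. Convert data and metadata to the raid1 profile
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/pool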


    So, is it OK to switch to BTRFS? Are these steps enough, or am I missing something?


    Some info about the PC running OMV 7:

    Pentium(R) Dual-Core CPU E5700 @ 3.00GHz

    8 GB DDR4

    It is used to back up my Time Machine and as one off-site backup location for our company's production databases, made with restic (another copy lives in the cloud), and to run small Docker containers for myself like netdata and a git runner.

    It runs SMB, S3 via a plugin, Caddy via Docker and so on. So the main function is backing up files.

    One RAID is used for MinIO S3 data files; this is where we keep backups of the company server OS.

    The second RAID is used for restic backups of the company databases.


    Feel free to ask questions and give your opinion on my migration plan :)

  • Never forget that RAID of any form is not a backup. While you have some data backed up in the cloud, is it all of it?


    Be clear that BTRFS RAID1 is nothing like traditional MD RAID1 mirrors. The BTRFS RAID1 allocation profile used on N devices guarantees that two copies of data (& metadata) will be saved on two out of the N devices. Space efficiency is 50%. Use this BTRFS disk usage calculator to see how disks of unequal size combine: https://carfax.org.uk/btrfs-usage/
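    As a rough worked example with the sizes you mentioned (300 + 500 + 1000 + 1500 GB = 3300 GB raw in one raid1 array): usable space is min(total / 2, total - largest disk) = min(1650, 1800) = 1650 GB. On a live filesystem the actual allocation can be checked with (mount point is a placeholder):

    Code
     # Shows raw size, allocation per profile and estimated free space
     btrfs filesystem usage /srv/pool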


    This means it can only reliably survive one faulty or missing device. Unlike MD RAID, a BTRFS array may only mount read-only when degraded.
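    To illustrate: a BTRFS filesystem with a missing device refuses a normal mount and must be mounted explicitly with the degraded option (device name and mount point are placeholders):

    Code
     # Mount with one device missing; writes may be refused depending on
     # how chunks were allocated and on the kernel version
     mount -o degraded /dev/sdc /srv/pool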


    You are proposing to move from 4 HDDs in 2 x mirrors to 6 HDDs in a BTRFS RAID1 array. Only you can judge whether that level of redundancy meets your criterion of “raid-1 safety”. If you expect a degraded array to always mount read/write, then BTRFS may fail this criterion.


    In your plan there are two stages which place your data at risk. At step 4, dropping the MD RAIDs means you have no redundancy and no protection from admin error. If step 4 is successful, in step 5 you will need to convert profiles, which is a slow process during which you will still be operating with no redundancy and with reduced access to your data.
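    For what it's worth, the step-5 conversion is a balance and can be watched or paused while it runs (mount point is a placeholder):

    Code
     # Progress of the running profile conversion
     btrfs balance status /srv/pool
     # Pause/resume if the box is needed for other I/O
     btrfs balance pause /srv/pool
     btrfs balance resume /srv/pool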


    If you’ve never used BTRFS before, I’d suggest finding a third additional drive and carrying out a dry run: convert BTRFS single to the BTRFS RAID1 profile and learn how to deal with disk failures in BTRFS.
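    A possible dry run of the failure case, sketched with placeholder device names and mount point: pull one drive from the test array, then replace it and verify the profile.

    Code
     # Find the devid of the missing device
     btrfs filesystem show /srv/pool
     # Replace missing devid 2 with a fresh disk (-B keeps it in the foreground)
     btrfs replace start -B 2 /dev/sdy /srv/pool
     # Verify all chunks are back on the raid1 profile
     btrfs filesystem usage /srv/pool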


    You can use drives of unequal size in MD RAID, see here for allocation examples: https://www.seagate.com/gb/en/…s-drives/raid-calculator/


    ZFS doesn’t meet your criteria of easy expansion.

  • Never forget that RAID of any form is not a backup. While you have some data backed up in the cloud, is it all of it?

    Yes, there is a second copy of the backup in the cloud. OMV is used to keep a copy of the data close at hand, for fast restore if needed, and for playing around with OMV and Docker.

    This means it can only reliably survive one faulty or missing device. Unlike MD RAID, a BTRFS array may only mount read-only when degraded.

    This is not a problem, because of the cloud copy, and read-only data is still available. And I have a bunch of old disks with good SMART status in case one of the disks destroys itself; I can quickly install a fresh drive and carry on.

    In your plan there are two stages which place your data at risk. At step 4, dropping the MD RAIDs means you have no redundancy and no protection from admin error. If step 4 is successful, in step 5 you will need to convert profiles, which is a slow process during which you will still be operating with no redundancy and with reduced access to your data.

    Yes, I understand that. But if the worst happens it is not a big pain, because of the cloud copy. And, I didn’t say it before, but the backup data lives in 3 places: on the server itself on a second drive with RAID 1, on OMV, and in the cloud. So if OMV is out of the game, nothing critical happens.

    You are proposing to move from 4 HDDs in 2 x mirrors to 6 HDDs in a BTRFS RAID1 array. Only you can judge whether that level of redundancy meets your criterion of “raid-1 safety”.

    As far as I understand from googling, I can use BTRFS RAID profiles like raid1, or go raid1c3/4 if free space gives me the chance; with raid1c3 I could have 2 of 3 disks offline without data loss (but read-only).

    If you’ve never used BTRFS before, I’d suggest finding a third additional drive and carrying out a dry run: convert BTRFS single to the BTRFS RAID1 profile and learn how to deal with disk failures in BTRFS.


    You can use drives of unequal size in MD RAID, see here for allocation examples: https://www.seagate.com/gb/en/…s-drives/raid-calculator/

    Yep, I can install a bunch of disks just for testing and play with them to understand what goes on when a disk is offline.


    For mdadm RAID with disks of unequal size there is a problem: the usable data capacity equals the smallest disk present in the array (e.g. a 500 GB + 1500 GB mirror yields only 500 GB). That wastes capacity, which isn’t my target.


    As it stands, I have 2 RAID 1 mirrors and can’t just add one drive to grow space without rebuilding the full arrays, because a mirror RAID is about duplicating data across drives, not adding size. Moving to RAID 10 would mean fully rebuilding the data layout on the drives, maybe wasting some space if the drive sizes differ, and that is not what I want.

    Your opinion about ZFS was clear, thank you.


    And what about Docker? Can you say whether it works well with BTRFS? From googling I see there are some problems with it on ZFS.

  • As far as I understand from googling, I can use BTRFS RAID profiles like raid1, or go raid1c3/4 if free space gives me the chance; with raid1c3 I could have 2 of 3 disks offline without data loss (but read-only).

    raid1c3/4 will reduce your capacity, as those are 33.3% and 25% space efficiency respectively.



    And what about Docker? Can you say whether it works well with BTRFS? From googling I see there are some problems with it on ZFS.


    The question of using docker with BTRFS is best asked in a separate thread to get feedback re: issues and performance.


    The OMV "docker compose" plugin allows the choice of where docker is installed. By default that's on the OMV system drive ( your usb drive) which is formatted EXT4 and will use docker's default overlay2 storage drive for docker images and volumes, etc. OMV users who choose to use a dedicated device, typically SDD or NVME, for docker often stick to EXT4 combined with overlay2.


    It's possible when using a BTRFS filesystem to switch docker from using the overlay2 storage-driver to a btrfs driver. The pros & cons are outlined here: https://docs.docker.com/storage/storagedriver/btrfs-driver/
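    A minimal sketch of that switch, assuming /var/lib/docker already lives on a BTRFS filesystem (note that existing images are not migrated between storage drivers):

    Code
     systemctl stop docker
     # Merge with any existing keys such as data-root
     cat >/etc/docker/daemon.json <<'EOF'
     { "storage-driver": "btrfs" }
     EOF
     systemctl start docker
     # Confirm the active driver
     docker info --format '{{.Driver}}'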


    Another question is how BTRFS performs with containers making use of databases, compared to say EXT4.


    The performance of ZFS with docker's overlay2 fs is much improved in the latest openzfs 2.2 versions.

  • All my drives are on BTRFS (except the OS USB stick).


    Docker, media and appdata have been running without any issues, since BTRFS is well supported by Docker.

    qBit downloads are on RAID0 BTRFS (done on a whim) and Nextcloud data is on RAID1 BTRFS.


    No issues whatsoever.

  • Good to know, but is docker's /var/lib/docker/ running on an SSD/NVMe, and did you stick with the overlay2 fs?

  • Good to know, but is docker's /var/lib/docker/ running on an SSD/NVMe, and did you stick with the overlay2 fs?

    Yes, it's on overlay2:

    Code
     Storage Driver: overlay2
      Backing Filesystem: btrfs
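
    For anyone wanting to check the same on their own box, that snippet is the relevant part of the docker info output:

    Code
     docker info | grep -A 1 'Storage Driver'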



  • That sounds good. I played with test drives on BTRFS and found nothing bad with it. I’ll try to migrate next week and see how it behaves in production.
