Any roadblocks I should be aware of prior to trying to partition the drive(s) & set up a btrfs on separate partition(s) of individual drives?

  • I have a lot of low priority data I don't mind losing locally, as it can be recovered from backup if needed. It takes up about 70% of my current RAID, representing about 47% of raw storage, with about 23% of raw storage lost to the redundancy (RAID) covering it. It's also low bitrate data, meaning any performance benefits from the RAID go unused.


    In addition to that I have another 20% (13% raw) of high priority data, which changes on a regular basis and which I would very much like to keep under some sort of RAID protection.


    Between the two, I'm using up about 90% of my RAID and, in lieu of upgrading, would like to consider an alternative. My question is whether anyone has tried something like this yet:


    I would like to destroy the current RAID and reclaim 100% of raw data space.

    I would then like to partition each of my three drives to 75% regular partition & 25% to be used with btrfs.

    The 75% partitions from each of the drives would be united via unionfs to store the 47% of low priority data I can recover from offsite backup if needed. Should I lose a drive, I'll be faced with recovering only the roughly one third of that data that was stored on the failed drive, easily done.

    Can the three 25% btrfs partitions be combined to provide some form of RAID protection? I know I can set up btrfs to use whole drives via the GUI & combine them to provide RAID protection, but can a similar thing be set up via the CLI, and would the OMV GUI handle such a setup?

    :/


    I'll be happy to clarify anything if I'm unclear, but from the looks of it, using this method I could reclaim about 23% of raw storage by simply not having to provide redundancy on low priority data.
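
    For anyone following the percentages, the back-of-the-envelope arithmetic goes roughly like this (assuming three equal drives of size D in RAID5):

    Code
    # Rough arithmetic, assuming 3 equal drives of size D in RAID5:
    #   raw capacity              = 3D
    #   usable RAID5 capacity     = 2D   (one drive's worth lost to parity)
    #   low priority data         = 70% of 2D = 1.4D  ~ 47% of raw
    #   parity covering that data = 1.4D / 2  = 0.7D  ~ 23% of raw
    # Dropping redundancy for the low priority data reclaims roughly that 23% of raw space.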

  • Did you search this and more generic Linux forums for btrfs already?

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

  • Code
    # Replace /dev/sdx1 etc. with your drive and partition numbers:
    
    mkfs.btrfs -L LABEL -m raid1 -d raid1 /dev/sda1 /dev/sdb1 /dev/sdc1
    
    You will have total storage space of 1.5 times a single partition size, assuming each partition is the same size.
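
    If you want to sanity check the result after mounting it, something along these lines should show the usable space and the per-device breakdown (the label and mount point are just examples):

    Code
    # Example checks after creating and mounting the filesystem (label/mount point are examples):
    btrfs filesystem show LABEL
    btrfs filesystem usage /srv/dev-disk-by-label-LABEL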
  • Yes, I'm comfortable that the idea is possible and realize that I need the command line to do it, since the GUI does not support such a config.

    What I was primarily curious about is whether anyone has attempted something like this and whether the OMV GUI (web page) worked well with a similar config.


    Thanks, doscott, for the command example.

  • What I was primarily curious about is whether anyone has attempted something like this and whether the OMV GUI (web page) worked well with a similar config.

    As long as you include a label the btrfs drive will show up in the filesystems window (may require a reboot before showing up) and in any pull downs for creating shares. It will not show up in the raid management window.


    btrfs can be used on a drive without partitions, or in partitions on a drive, or both at the same time. I haven't used this configuration with multiple partitions on a single drive, but I have had OMV configured with btrfs spanning 3 drives without partitions and 1 drive with a single partition. This was a result of a typo when adding the 4th drive to a 3 drive array but there were no issues.

  • Thanks, doscott. I'm going to give it a shot and report any issues. There will be three disks, each with 2 partitions. Three of those partitions (one from each drive) will be united via btrfs; the remaining three will simply be combined via unionfs. From what I see, most of the configuration will have to take place via the CLI, but at least I'm getting more comfortable that things will not blow up in the web GUI.


    crashtest, guilty as charged. I do wish to avoid buying a drive, but I do so with a couple of concerns in mind.

    Primarily, my enclosure only has space for three drives where they can be fan cooled. A fourth drive would have to be jerry-rigged somewhere without good access to ventilation.

    Adding a 4th drive would increase available space by 30% (from 10% to 40% free), but doing so would also increase the chance of a drive failure by about 33% (4 drives vs. 3).


    I know this is only possible because I'm willing to outright lose a portion of the local copy of the data; that's due to the infrequent changes to said data and the availability of a remote backup, which would prevent a permanent loss. At least in my case, I'll be happy to try this in order to squeeze another year out of the drives I have. The long-term solution looks to be replacing all three drives with higher capacity versions, but that's out of my price range in 2020.

    • Official Post

    crashtest, guilty as charged. I do wish to avoid buying a drive, but I do so with a couple of concerns in mind.

    Primarily, my enclosure only has space for three drives where they can be fan cooled. A fourth drive would have to be jerry-rigged somewhere without good access to ventilation.

    Adding a 4th drive would increase available space by 30% (from 10% to 40% free), but doing so would also increase the chance of a drive failure by about 33% (4 drives vs. 3).

    It sounds, to me, like you might be a candidate for SNAPRAID and, maybe, mergerfs (called the unionfs plugin). With SNAPRAID, the parity drive could be an external drive. The only condition for a SNAPRAID parity drive is that it's as large as, or larger than, the largest disk being protected.


    That would allow you to reclaim all of the drive real estate on the disks you already have, provide for the recovery of a failed drive AND give you bit-rot protection. (And since it's an external, you could even shut the parity drive off between sync operations.)
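
    To give an idea of the shape of it, a minimal snapraid.conf could look something like the sketch below. The mount points are placeholders for however your data disks and the external parity drive end up mounted, and the OMV snapraid plugin generates the real file for you.

    Code
    # Minimal sketch of /etc/snapraid.conf (all paths are placeholders):
    parity /srv/dev-disk-by-uuid-EXTERNAL/snapraid.parity

    content /var/snapraid.content
    content /srv/dev-disk-by-uuid-DISK1/snapraid.content
    content /srv/dev-disk-by-uuid-DISK2/snapraid.content

    data d1 /srv/dev-disk-by-uuid-DISK1/
    data d2 /srv/dev-disk-by-uuid-DISK2/
    data d3 /srv/dev-disk-by-uuid-DISK3/

    exclude /lost+found/

    After that it's a matter of running "snapraid sync" after changes and an occasional "snapraid scrub".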

    On drive heat, there's no real concern until their operating temp goes beyond 40°C (according to a Google study).
    ___________________________________________

    But I think the 800 lb gorilla in the room is "backup". Realize that the drives you have in an array are "aging", and a raid resilvering operation is a torture test which can, easily, push a second "geriatric" drive into failure. What I'm getting at is: if you have data you want to keep, you need to back it up, as in a 2nd independent copy of what you want to keep. Otherwise, it's just a matter of time.

  • crashtest, thanks for the idea! I've considered snapraid, but have not thought about using an external drive, good thinking.

    I'm still interested in transitioning the high priority data to btrfs since, as far as I understand it, it would provide better support for user recovery (shadow copies).
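
    A minimal sketch of what that recovery could be built on, assuming the btrfs volume is mounted by label and the shared data lives in its own subvolume (the label, paths and names below are placeholders):

    Code
    # Create a directory to hold snapshots, then take a read-only snapshot of the data subvolume:
    mkdir -p /srv/dev-disk-by-label-BTRFS/.snapshots
    btrfs subvolume snapshot -r /srv/dev-disk-by-label-BTRFS/data /srv/dev-disk-by-label-BTRFS/.snapshots/data-YYYY-MM-DD
    # List or remove snapshots later:
    btrfs subvolume list /srv/dev-disk-by-label-BTRFS
    btrfs subvolume delete /srv/dev-disk-by-label-BTRFS/.snapshots/data-YYYY-MM-DD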

    • Official Post

    If you're using a drive housing or external case of some kind, consider:


    mdadm (software) raid, BTRFS, ZFS and other RAID implementations rely on roughly equal bandwidth to the member drives in the array. SATA and SAS can provide it; they're designed for it. USB is not really capable of providing equal bandwidth, so any kind of RAID over USB is one of those things that may work or may not. In any case, it's not reliable.

    SNAPRAID + the UnionFS plugin, with drives formatted to EXT4, does not depend on equal bandwidth. They're more tolerant and more in line with USB-connected drives.


    Not a sermon - just a thought. :)

  • Well, after a lengthy backup session & tinkering over the long holiday weekend, the transition is now complete. Everything seems to be up and running, though I've not yet had a chance to explore the benefits of btrfs vs a standard raid 5. Here are some of the commands which came in handy during this process. To avoid any issues with OMV, I recommend you remove the file system from OMV first and wipe the disk, as if you were going to use the WebUI to make a new raid / fs.


    Bash
    # Create a new GPT partition table (this wipes the existing partition layout)
    parted /dev/sdx mklabel gpt
    # First partition, 0-40% of the disk, to be used for btrfs later
    parted -a optimal /dev/sdx mkpart logical 0% 40%
    # Second partition, 40-100% of the disk, formatted as ext4 below
    parted -a optimal /dev/sdx mkpart logical ext4 40% 100%
    mkfs.ext4 -L xyz /dev/sdx2


    Replace "/dev/sdx" with device path as seen on the Disks tab. I've used 40/60 split for my setup, in this case using the slower 60% partition as a standard ext4 partition. Replace "xyz" with the label you wish to see within the OMV, unfortunately you will not see the partition unless you first format it via CLI, but after that you'll be able to mount it using WebUI. Sitenote: this disks will mount using uuid instead of their label, path will be /srv/dev-disk-by-uuid-####/, not sure how to force it to mount using the label, but we'll be using unionfs anyways, so not a big deal...


    I will not go into details on how to use the union filesystems plugin (I'm sure it's discussed at length elsewhere within the forum), but the partitions will show up as if they were full drives, so there is no need to go folder by folder (mergerfsfolders is not needed in this case). In my case, my only deviation from the defaults was the use of the "Most free space" policy and the addition of ",cache.statfs=1800" to cache the "free space" calculation for about half an hour. I do so to prevent frequent disk switching when writing multiple files to the drives when they are about evenly full.
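
    For reference, the resulting pool boils down to something like the fstab-style line below. Treat it as a sketch: the paths are placeholders, the plugin writes the real entry itself, and "category.create=mfs" is the mergerfs option behind the "Most free space" policy.

    Bash
    # Roughly what the unionfs/mergerfs pool amounts to (placeholder paths; OMV manages the real entry):
    /srv/dev-disk-by-uuid-AAAA:/srv/dev-disk-by-uuid-BBBB:/srv/dev-disk-by-uuid-CCCC /srv/pool fuse.mergerfs defaults,allow_other,category.create=mfs,cache.statfs=1800 0 0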


    Part two


    Using doscott's example above as a good starting point, the btrfs raid can then be created using this command:


    Bash
    # raid5 for both metadata (-m) and data (-d), spanning the first partition of each of the three drives
    mkfs.btrfs -L raid5 -m raid5 -d raid5 /dev/sdx1 /dev/sdy1 /dev/sdz1

    I've labeled mine as "raid5", but you may be using something different. If using two drives instead of three, use raid1 (mirroring); otherwise, replace "/dev/sd?#" with the device paths & partition numbers you're going to use. In my case I've used the faster 1st partition.


    After the filesystem is created, it can be found & mounted via the Filesystems tab within the WebUI. It will actually mount as "/srv/dev-disk-by-label-????", so the label is useful beyond the WebUI.


    Attached is the end result. The restore is still in progress, so the usage data is off, but everything seems to be working well :)

  • One thing to keep in mind about btrfs RAID5: you can have an unrecoverable failure if two things happen at the exact same time, a power failure and a disk failure. Probably nothing to worry about, especially if you use a UPS.


    Useful tools for maintenance can be found at:

    https://github.com/kdave/btrfsmaintenance


    From a root login I did a

    git clone https://github.com/kdave/btrfsmaintenance.git

    and then, from inside the cloned btrfsmaintenance directory, the following will install it:

    ./dist-install.sh

    Then edit

    /etc/default/btrfsmaintenance

    and change whatever settings you want. I stuck with the defaults but set the balance and scrub mountpoints to "auto".
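
    For reference, the lines I changed look roughly like this (variable names as shipped in the package's default config, so double-check your copy; "auto" means every mounted btrfs filesystem):

    Code
    # Excerpt from /etc/default/btrfsmaintenance (only the lines changed from the defaults):
    BTRFS_BALANCE_MOUNTPOINTS="auto"
    BTRFS_SCRUB_MOUNTPOINTS="auto"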


    Then run:

    btrfsmaintenance-refresh-cron.sh

    to schedule the tasks with cron (you can use systemd instead, but cron is simpler).


    This is a good btrfs cheat sheet:

    https://blog.programster.org/btrfs-cheatsheet


    One of the neatest things with btrfs is that if you ever run short of space on that raid5 setup, you can convert it on the fly to any other raid profile, including to no raid at all.
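
    For example, converting the raid5 profiles to raid1 in place is a single (long-running) balance; the mount point below assumes the label used earlier in this thread:

    Code
    # Convert data (-d) and metadata (-m) profiles in place; the filesystem stays mounted and usable:
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/dev-disk-by-label-raid5
    # Check progress from another shell:
    btrfs balance status /srv/dev-disk-by-label-raid5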


    A heads-up on something that took me a while to figure out after a drive failure: in order to mount a btrfs raid in a degraded state from the command line, you need to remove it from fstab, or it will remount itself in a read-only state.
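
    In practice that means commenting out or removing the fstab entry, then mounting by hand along these lines (device and mount point are examples):

    Code
    # Mount a btrfs raid with a failed/missing member (device and mount point are examples):
    mount -o degraded /dev/sdx1 /srv/dev-disk-by-label-raid5
    # Then remove the missing device (or use 'btrfs replace' with a new disk):
    btrfs device remove missing /srv/dev-disk-by-label-raid5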

  • A couple of other things with btrfs I find useful. I run two daily cron jobs as root:


    Code
    /usr/bin/btrfs fi show


    which gives a mailout of:


    Label: 'BTRFS1' uuid: 3c116019-c3d4-46f4-856c-cd624761c77e

    Total devices 4 FS bytes used 3.39TiB

    devid 2 size 1.82TiB used 1.14TiB path /dev/sdc1

    devid 3 size 1.82TiB used 1.14TiB path /dev/sdd1

    devid 4 size 1.82TiB used 1.14TiB path /dev/sdf1

    devid 5 size 5.46TiB used 3.42TiB path /dev/sde


    and


    Code
    /usr/bin/btrfs device stats /srv/dev-disk-by-label-BTRFS1


    which gives a mailout of:


    [/dev/sdc1].write_io_errs 0

    [/dev/sdc1].read_io_errs 0

    [/dev/sdc1].flush_io_errs 0

    [/dev/sdc1].corruption_errs 0

    [/dev/sdc1].generation_errs 0

    [/dev/sdd1].write_io_errs 0

    [/dev/sdd1].read_io_errs 0

    [/dev/sdd1].flush_io_errs 0

    [/dev/sdd1].corruption_errs 0

    [/dev/sdd1].generation_errs 0

    [/dev/sdf1].write_io_errs 0

    [/dev/sdf1].read_io_errs 0

    [/dev/sdf1].flush_io_errs 0

    [/dev/sdf1].corruption_errs 0

    [/dev/sdf1].generation_errs 0

    [/dev/sde].write_io_errs 0

    [/dev/sde].read_io_errs 0

    [/dev/sde].flush_io_errs 0

    [/dev/sde].corruption_errs 0

    [/dev/sde].generation_errs 0


    My setup consists of a raid1 array of 4 drives, 3 x 2TB and 1 X 6TB.
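
    In case it helps anyone, the two daily jobs above can be scheduled with entries along these lines (a sketch; it assumes mail delivery for root is already working so the output gets mailed out):

    Code
    # Hypothetical /etc/cron.d/btrfs-report (requires working mail delivery for root):
    0 6 * * * root /usr/bin/btrfs fi show
    5 6 * * * root /usr/bin/btrfs device stats /srv/dev-disk-by-label-BTRFS1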

  • Interesting, why did you go with Raid 1 vs 5? Is there a performance benefit or is it simply better at hitting fewer drives during write?


    I admit I've not considered raid 1 for my setup, but it does seem to operate differently compared to regular raid 1. Instead of mirroring everything to every drive, it operates under "one copy of what's important exists on two of the drives in the array no matter how many drives there may be in it" (source).


    Capacity lost to redundancy will be greater (a full second copy of the data vs. roughly one drive's worth of parity, i.e. 1/n of raw capacity with raid5 on n drives), but rebuilding a failed array should be significantly faster & safer.

  • Interesting, why did you go with Raid 1 vs 5? Is there a performance benefit or is it simply better at hitting fewer drives during write?

    Actually I started with 4 x 2TB in raid5, and then lost a drive. I opted to add a 6TB drive, and raid1 and raid5 in this configuration both provide 6TB of storage, so I went with raid1. I wasn't worried about performance, but I like the simplicity of raid1. After the next drive failure I will get another 6TB drive and then I will choose either raid1 (8TB capacity) or raid5 (10TB capacity).
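
    The rough arithmetic, for anyone curious (btrfs raid1 keeps two copies of every chunk, while raid5 loses roughly one device's worth to parity):

    Code
    # 3 x 2TB + 1 x 6TB (current):
    #   raid1: the 6TB drive can hold at most as much as the other three combined,
    #          so usable ~ 2 + 2 + 2 = 6TB
    #   raid5: usable ~ total - largest = 12 - 6 = 6TB
    # 2 x 2TB + 2 x 6TB (after the next swap):
    #   raid1: usable ~ total / 2 = 16 / 2 = 8TB
    #   raid5: usable ~ 10TB (full-width stripes until the 2TB drives fill, then the two 6TB drives pair up)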


    My NAS is a relatively old Thecus N5550 that failed a flash a few months ago. I couldn't get it to post so I replaced it with a QNAP. It turned out that it would post, but the HDMI port would not display anything; the VGA port did. I put OMV on it, and overall I like it much better than the QNAP.

    • Official Post

    My NAS is a relatively old Thecus N5550 that failed a flash a few months ago.

    Do you still have it? Since it uses an Atom CPU, it's a BIOS box. Maybe it will boot on a USB Thumbdrive.
