Posts by omv@joy

    Interesting, why did you go with RAID 1 vs 5? Is there a performance benefit, or is it simply better at hitting fewer drives during writes?


    I admit I've not considered btrfs raid1 for my setup, but it does seem to operate differently compared to regular RAID 1. Instead of mirroring everything to every drive, it operates under "one copy of what's important exists on two of the drives in the array no matter how many drives there may be in it" (source).


    The redundancy overhead will be greater (100% of the data stored vs 100%/n where n = # of drives), but rebuilding a failed array will be significantly faster & safer.
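
    For what it's worth, once a btrfs volume exists you can see exactly how much space each profile eats with the usage subcommand (the mount path here is just a placeholder for wherever yours ends up):

    Bash
    # shows data/metadata allocation and redundancy overhead per profile
    btrfs filesystem usage /srv/dev-disk-by-label-raid5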

    Well, after a lengthy backup session & tinkering over the long holiday weekend, the transition is now complete. Everything seems to be up and running, though I've not yet had a chance to tinker around with the benefits of btrfs vs a standard RAID 5. Here are some of the commands which came in handy during this process. To avoid any issues with OMV, I recommend you first remove the file system from OMV and wipe the disk, just as you would if you were going to use the WebUI to make a new raid / fs.
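
    If you'd rather do the wipe from the CLI, wipefs should accomplish the same thing (just a sketch, and destructive, so double-check the device path first):

    Bash
    # removes all filesystem / raid signatures from the disk (destructive!)
    wipefs -a /dev/sdx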


    Bash
    parted /dev/sdx mklabel gpt
    parted -a optimal /dev/sdx mkpart logical 0% 40%
    parted -a optimal /dev/sdx mkpart logical ext4 40% 100%
    mkfs.ext4 -L xyz /dev/sdx2


    Replace "/dev/sdx" with device path as seen on the Disks tab. I've used 40/60 split for my setup, in this case using the slower 60% partition as a standard ext4 partition. Replace "xyz" with the label you wish to see within the OMV, unfortunately you will not see the partition unless you first format it via CLI, but after that you'll be able to mount it using WebUI. Sitenote: this disks will mount using uuid instead of their label, path will be /srv/dev-disk-by-uuid-####/, not sure how to force it to mount using the label, but we'll be using unionfs anyways, so not a big deal...


    I will not go into details on how to use the union filesystems plugin (I'm sure it's discussed at length elsewhere within the forum), but the partitions will show up as if they were full drives, so there's no need to go folder by folder (mergerfsfolders is not needed in this case). In my case, my only deviation from the defaults was the use of the "Most free space" policy and the addition of ",cache.statfs=1800" to cache the "free space" calculation for about half an hour. I do so to prevent frequent disk switching when writing multiple files to drives that are about evenly full.
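
    For reference, the plugin uses mergerfs under the hood, and "Most free space" should correspond to its mfs create policy, so the resulting mount ends up roughly like the following (branch paths and mount point are just placeholders, the plugin generates the real entry for you):

    Bash
    # mfs = "most free space" create policy; statfs results cached for 1800 s
    mergerfs -o category.create=mfs,cache.statfs=1800 \
      /srv/dev-disk-by-uuid-AAAA:/srv/dev-disk-by-uuid-BBBB:/srv/dev-disk-by-uuid-CCCC /srv/pool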


    Part two


    As per a good starting point from doscott, the btrfs raid can then be created using this command:


    Bash
    mkfs.btrfs -L raid5 -m raid5 -d raid5 /dev/sdx1 /dev/sdy1 /dev/sdz1

    I've labeled mine as "raid5", but you may be using something different. If using two drives instead of three, use raid1 (duplication); otherwise replace "/dev/sd?#" with the paths & partition numbers you are going to use, in my case the faster 1st partitions.
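
    For the two-drive case, the same command would look something like this (label and device paths are again placeholders):

    Bash
    # two drives: raid1 keeps one extra copy of both data and metadata
    mkfs.btrfs -L raid1 -m raid1 -d raid1 /dev/sdx1 /dev/sdy1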


    After the filesystem is created, it can be found & mounted via the Filesystems tab within the WebUI. It will actually mount as "/srv/dev-disk-by-label-????", so the label is useful beyond the WebUI as well.
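
    To double-check things from the CLI once it's mounted, these will list the member devices and the allocation per raid profile (path assumes the "raid5" label from above):

    Bash
    # list btrfs filesystems and their member devices
    btrfs filesystem show
    # show allocation per profile on the new array
    btrfs filesystem df /srv/dev-disk-by-label-raid5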


    Attached is the end result. The restore is still in progress, so the usage data is off, but everything is seemingly working well :)

    Thanks doscott. I'm going to give it a shot and report any issues. There will be three disks, each with 2 partitions. Three of those partitions (one from each drive) will be united via btrfs, and the remaining three will simply be combined via unionfs. From what I see, most of the configuration will have to take place via the CLI, but at least I'm getting more comfortable that things will not blow up in the web GUI.


    crashtest, guilty as charged. I do wish to avoid buying a drive, and there are a couple of concerns behind that.

    The primary one is that my enclosure only has space for three drives where they will be fan cooled. A fourth drive would have to be jerry-rigged somewhere with no good access to ventilation.

    While adding a 4th drive would increase the available free space from about 10% to about 40%, doing so would also increase the chance of a drive failure by about 33% (4/3).


    I know this is only possible because I'm willing to outright lose a portion of the local copy of the data. That's due to infrequent changes to said data and the availability of a remote backup which would prevent a permanent loss; at least in my case I'll be happy to try this in order to squeeze another year out of the drives I have. The long-term solution looks to be replacing all three drives with higher capacity versions, but that's out of my price range in 2020.

    I have a lot of low priority data I don't mind losing locally, as it can be recovered from backup if needed. It takes up about 70% of my current RAID, which works out to about 47% of raw storage (70% of the usable two-thirds of the three drives), plus about 23% of raw storage lost to redundancy (the matching parity third). It's also low bitrate data, meaning any performance benefits from the RAID are unused.


    In addition to that I have another 20% (13% raw) of high priority data, which changes on a regular basis and which I would very much like to keep on some sort of RAID-protected storage.


    Between the two, I'm using up about 90% of my RAID, and in lieu of upgrading I would like to consider an alternative. My question is whether anyone has tried this yet:


    I would like to destroy the current RAID and reclaim 100% of the raw storage space.

    I would then like to partition each of my three drives into a 75% regular partition & a 25% partition to be used with btrfs.

    The 75% partitions from each of the drives would be united via unionfs to store the 47% of low priority data I can recover from offsite backup if needed. Should I lose a drive, I'll be faced with recovering about 33% of whatever data was stored there, easily done.

    Can the three 25% btrfs partitions be combined to provide some form of RAID protection? I know I can set up btrfs to use up the whole drive via the GUI & combine drives to provide RAID protection, but can a similar thing be set up via the CLI, and would the OMV GUI handle such a setup?

    :/


    I'll be happy to clarify anything if I'm unclear, but from the looks of it, using this method I could reclaim about 23% of raw storage by simply not having to provide redundancy on low priority data.

    Ok, found my own answer:
    Add "--verbose" within extra options of the tftp service (disable / enable service or reboot to apply), than logs will show up within Syslog drop-down of the System Logs tab.
    To see realtime logs via ssh, you can use this command:

    Bash
    tail -f /var/log/syslog | grep tftp
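
    If your system logs to the systemd journal instead of (or in addition to) /var/log/syslog, the same filtering should work there as well:

    Bash
    # follow the journal and keep only tftp-related lines
    journalctl -f | grep -i tftp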

    Hope it saves a couple of hours for someone else who may need this info. The logs do not show transfer results, just which file was requested; no clue how to show failed requests (where files were unavailable or a transfer did not complete).

    Hello all, I'm trying to set up a local PBX system and need a tftp server for phone configuration. I could really use some logs from the TFTP plugin / service to see which files the phones are asking for, but can't find any info on where such logs can be found.


    I don't mind accessing them via ssh if need be. Any idea what kind of extra options I could specify to create a logfile somewhere?


    Thanks,
    Gene