Posts by macester

    Sorry for the late reply, I've been on a computer/phone-free vacation (wife loved it, me... hmm =P)



    Regarding the "mkdir mess": it doesn't actually mess anything up at all, it's just that you won't be able to take snapshots.



    The btrfs "/media/<UUID>/" is a so-called subvolume (or "The Subvolume"), so you can always take a snapshot of it; for shares you can go either the mkdir way or the subvolume way. (I mainly use snapshots on my documents share, for revisions, and on my VM-machine share.)
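
    For example, the top level can be snapshotted like any other subvolume. Just an illustration, the snapshot name here is my own placeholder (note that nested subvolumes are not included in a snapshot):

    Code
    btrfs subvolume snapshot /media/<UUID> /media/<UUID>/@toplevel-snapshot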



    Nope, that's the way to mount it.



    Step 7
    Seems alright. The thing I wrote about "won't release space" was actually the Debian btrfs-tools being buggy.
    I replicated it in a VM: I created shares on OMV with the old btrfs-tools and didn't get any free space back when deleting stuff; then I mounted the disks in a new OMV VM with the btrfs-tools from backports and the free space was shown correctly.



    Step 8
    All good, though you don't really need the "@"; it's just the btrfs convention for denoting a subvolume, don't ask me why =P



    I just upgraded from kernel 3.19.0 to 3.19.2 so I can run raid-5 "trouble free". I've really been abusing it on my test machine, killing disks and replacing them, growing the raid, reducing it, etc., and it seems to work great, so I just converted my main OMV to raid-5. (Raid-1 performance is great with btrfs, but GOD, raid-5 performance is great: benchmarked against my old ext4 where read/write was about 280/65 MB/s, with btrfs I get 310/280 MB/s. This is a dd test with sync enabled.)
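
    For reference, the dd test I mean looks something like this (file and block sizes are just what I happened to pick, not the exact commands from my bench):

    Code
    # write test, sync included via fdatasync
    dd if=/dev/zero of=/media/<UUID>/testfile bs=1M count=4096 conv=fdatasync
    # drop the page cache, then read test
    echo 3 > /proc/sys/vm/drop_caches
    dd if=/media/<UUID>/testfile of=/dev/null bs=1M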



    The only thing that has really gotten worse after moving to raid-5 is the time it takes to scrub: scrubbing 3 TB of data with raid-1 took about 8 hours.
    With raid-5 it took about 28 hours, though this was right after the conversion and still on kernel 3.19.0. (Hope things will get better.)



    The kernel I use is vanilla mainline, compiled with the standard Debian/OMV kernel config plus the df patches (to show correct space in the df command).
    Though I think I'm going to skip the df patches for the next kernel, since "btrfs fi usage /mountpoint/" (available in btrfs-tools 3.19) gives the same data.
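
    For example:

    Code
    btrfs fi usage /media/<UUID>/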



    As for btrfs-tools, I tried to compile 3.18-3.19 (works great on Ubuntu), but it's broken on Debian: the library locations get messed up (I could move them manually, but eh, it was an ln -s hell); a bug report has been posted. 3.17 compiles great. (But I noticed it's already in the testing repository.)



    btrfs-tools from testing works great, and I really don't see the harm in using them since they're just compiled from source like the old ones:



    Code
    wget http://ftp.se.debian.org/debian/pool/main/b/btrfs-tools/btrfs-tools_3.17-1.1_amd64.deb && dpkg -i btrfs-tools_3.17-1.1_amd64.deb



    As for scrubbing, I use a simple cron entry in the OMV interface:



    Code
    btrfs scrub start /media/<UUID>



    I run it every three weeks.
    I sent a mail to the btrfs mailing list to ask how often scrubbing is recommended; the answer was about every two weeks for desktop drives and about every four weeks for NAS-grade drives.



    Also a note: since df can't calculate the drive space correctly, it will look a bit weird in the OMV interface.
    For now there really isn't much to do about it, since this isn't an OMV thing but rather a Linux thing.



    So for now I would start with http://carfax.org.uk/btrfs-usage/index.html to calculate your space.
    Then use "df -ha" to see how much is used, and after that "btrfs fi show" to see how much space your drives really have.
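
    Put together, something like:

    Code
    df -ha /media/<UUID>/
    btrfs fi show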



    I use a cron job within OMV to get a weekly status mail so I can keep an eye on my space:



    Code
    btrfs fi show && btrfs fi df /media/<UUID>/


    As for getting the status of rebalancing and scrubbing, I just fire up a PuTTY session with:


    Code
    watch -d -n 30 "btrfs balance status /media/<UUID>/; btrfs scrub status /media/<UUID>/; btrfs fi df /media/<UUID>/"


    //Regards mace



    If you want the latest compiled kernel for btrfs support, or instructions to compile it yourself, message me. (I guess this is out of scope for a guide that's simply about getting btrfs working.)

    As a KVM host:
    WebVirtMgr with media plugin for integration with the UI.
    (Plex, Sonarr, NZBGet, webserver and much more on the VMs)
    btrfs raid-5 for storage (kernel 3.19.2 with df patches)
    Samba, NFS
    Remote shares
    Rsync (backup)
    UPS (NUT)

    Step 5:


    You should build btrfs-tools or install it from wheezy-backports (the current one there is from April 2014); the one in the wheezy repos is very old and is missing a lot of features.



    Install btrfs-tools from backports:

    Code
    echo 'deb http://http.debian.net/debian wheezy-backports main' > /etc/apt/sources.list.d/wheezy-backports.list


    Code
    apt-get update


    Code
    apt-get -t wheezy-backports install btrfs-tools




    Step 7:


    You don't want to create shares this way; this will make a "mkdir" folder, so you won't be able to take snapshots etc. (Another reason I've noticed: when you remove files from a share created this way, btrfs won't "release" the free space on the drive, so you'll have to run a rebalance, see the example below.)
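
    A rebalance for that case looks something like this (the usage filter is optional; it just limits the balance to mostly-empty chunks so it finishes faster):

    Code
    btrfs balance start -dusage=50 /media/<UUID>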


    SSH into OMV and create subvolumes to be able to take advantage of snapshots etc.



    create a subvolume aka "shared folder":

    Code
    btrfs subvolume create /media/<UUID>/@newsubvolume


    then import it under shared folders instead.


    If you want to be able to take snapshots of @newsubvolume, then:



    create a subvolume to hold the snapshots (a subvolume, so it's excluded from the snapshots themselves):

    Code
    btrfs subvolume create /media/<UUID>/@newsubvolume/.snapshots


    create snapshot:

    Code
    btrfs subvolume snapshot /media/<UUID>/@newsubvolume /media/<UUID>/@newsubvolume/.snapshots
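
    If you snapshot on a schedule, you can also make them read-only and date-stamped; a variant I'd use (the naming is just my choice):

    Code
    btrfs subvolume snapshot -r /media/<UUID>/@newsubvolume /media/<UUID>/@newsubvolume/.snapshots/$(date +%F)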


    to recover (roll the subvolume back to the snapshot):

    Code
    mv /media/<UUID>/@newsubvolume /media/<UUID>/@newsubvolume.old
    mv /media/<UUID>/@newsubvolume.old/.snapshots/@newsubvolume /media/<UUID>/@newsubvolume



    Maybe run a scrub once a week or so (it should be self-healing?). I've been running it every three days for two months and haven't seen an error corrected yet.
    (About 6 TB of data has moved back and forth on the disks in that time.)



    I've been running my test OMV (evaluating OMV as a KVM host with btrfs) for a while now to see if I'm going to swap out my regular server;
    so far it's been running great.



    I'm running a three-disk array in raid-1. I moved the disks back and forth to my Ubuntu 15.04 test box a few times to try to break it, converted the array to raid-5 on the Ubuntu machine, added a fourth disk, converted it to raid-10, then back to raid-1, and so on... Works like a charm!
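
    The conversions are just balance runs; roughly (the device name is a placeholder):

    Code
    # add the fourth disk, then convert data and metadata profiles
    btrfs device add /dev/sdX /media/<UUID>
    btrfs balance start -dconvert=raid10 -mconvert=raid10 /media/<UUID>
    # and back again
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /media/<UUID>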




    //Regards Mace