btrfs install

    • OMV 1.0
    • btrfs install

      I couldn't find instructions for installing OMV with btrfs, if they exist please post a link.

      I have a simple system with 3 disks (the 2 data disks will be mirrored, RAID1):

      /dev/sda Data disk 1, 4TB HD
      /dev/sdb Data disk 2, 4TB HD
      /dev/sdc Boot disk, 64GB SSD


      1 - Install OMV. Install from image (CD, USB, whatever) to boot disk (SSD for me). Reboot, make sure OMV starts OK from your boot disk.

      2 - Install OMV-extras plugin. You need this to get a newer linux kernel (3.16) that supports btrfs. Follow this guide -…install-omv-extras-plugin

3 - Select the 3.16 kernel. Log into the web interface, go to System, select the Kernel tab, click Install Backports 3.16 kernel, Save and Apply, reboot, then go back to the Kernel tab and make sure "Debian GNU/Linux, with Linux 3.16..." is selected.

4 - Get terminal access. Some steps below use the command line, so you will need a terminal. You can use the local terminal if you have a keyboard and monitor connected to your OMV box. I prefer to use a remote terminal via SSH (Secure SHell); here's how: in the OMV web interface select Services, SSH, and check Enable. Then use an SSH client (like PuTTY) to connect to your OMV. You will need to know the hostname or IP address (System, Network, Interfaces). Connect and log in as root.

      5 - Install btrfs-tools. Even though btrfs is part of the 3.16 kernel, you need the utility programs in btrfs-tools to, for example, create a btrfs filesystem. From the command line type the following to get the newer backport btrfs-tools:
      echo 'deb wheezy-backports main' > /etc/apt/sources.list.d/wheezy-backports.list
      apt-get update
      apt-get -t wheezy-backports install btrfs-tools

6 - Create the btrfs filesystem. A default filesystem with mirrored (RAID1) metadata but non-mirrored data is created with this command:
      mkfs.btrfs /dev/sda /dev/sdb
      I wanted my data to also be mirrored so I did this:
      mkfs.btrfs -d raid1 /dev/sda /dev/sdb
      For more information, see this beginners guide to btrfs -

      7 - Mount the filesystem. In the OMV web interface go to: Storage, File Systems, select any of the data disks (for my system, /dev/sda or /dev/sdb) then click the mount button. The disks should now show as btrfs in the "File system" column.

8 - Create subvolumes. This is optional but recommended, since creating shares on the btrfs root will cause problems, like preventing snapshots (see macester's post below). Type df -h in the terminal and look for a line like this:
      /dev/sda 7814037168 1344 7811881600 1% /media/eee51012-49d2-423f-8f79-80a8594130ec

      NOTE: the long string after /media/ is the UUID. If you copy and paste from this post, be sure to replace this example UUID with the one for your system.
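To save retyping that long path, you can capture it in a shell variable once and reuse it in the commands below. This is just a small convenience sketch; the UUID is the example from this post, so substitute your own:

```shell
# Store the mount point once so later btrfs commands can use "$MNT".
# The UUID below is the example from this post - replace it with the
# one `df -h` shows for your system.
UUID=eee51012-49d2-423f-8f79-80a8594130ec
MNT=/media/$UUID
echo "$MNT"
```

The subvolume commands below can then be written as, for example, btrfs subvolume create "$MNT"/@Backups.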

To create a subvolume for backups, type this:
      btrfs subvolume create /media/eee51012-49d2-423f-8f79-80a8594130ec/@Backups

      How about another for data:
      btrfs subvolume create /media/eee51012-49d2-423f-8f79-80a8594130ec/@Data

      To verify they were created, type ls -l /media/eee51012-49d2-423f-8f79-80a8594130ec and you should see something like this:
      drwxr-xr-x 1 root root 0 Mar 11 23:43 @Backups
      drwxr-xr-x 1 root root 18 Mar 11 23:57 @Data
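As a cross-check, btrfs has its own listing command that shows only real subvolumes (plain directories made with mkdir will not appear in it). The sketch below only prints the command, using the example UUID from above; run it without the echo on your system:

```shell
# Prints the verification command (illustration only; drop the echo to run it).
# Real subvolumes are listed with an ID and path, e.g. "ID 257 ... path @Data".
echo btrfs subvolume list /media/eee51012-49d2-423f-8f79-80a8594130ec
```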

      9 - Create shares. In the OMV web interface go to: Access Rights Management, Shared Folders, click on the Add button. To create a share for pictures, for Name enter Pictures, for Volume select /dev/sda (or whatever your btrfs volume is), for Path click the folder button and select @Data then click OK, now add the Pictures folder so the full path is /@Data/Pictures, select the Permissions you want, then click Save.

- Create snapshots of subvolumes
- Create a Scheduled Job (System, Scheduled Jobs) to run scrub occasionally to check, and attempt to correct, any errors in the btrfs filesystem
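Those two leftover items could be combined into one small script for a Scheduled Job. The sketch below only prints the commands it would run (remove the echo on a real system); it assumes the @Data subvolume from step 8 and a .snapshots directory inside it, as suggested in macester's post below:

```shell
#!/bin/sh
# Sketch of a daily snapshot plus scrub job for System, Scheduled Jobs.
# Prints the commands instead of running them; drop the `echo` for real use.
MNT=/media/eee51012-49d2-423f-8f79-80a8594130ec   # replace with your UUID
STAMP=$(date +%Y-%m-%d)                           # date-stamped snapshot name
# -r makes the snapshot read-only, which is safer for backups.
echo btrfs subvolume snapshot -r "$MNT/@Data" "$MNT/@Data/.snapshots/@Data-$STAMP"
echo btrfs scrub start "$MNT"
```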

      Corrections and improvements welcome.

      The post was edited 15 times, last by KanyonKris ().

    • 5,

You should build btrfs-tools yourself or install it from wheezy-backports (the current backport is from April 2014); the one in the wheezy repos is very old, with a lot of unsupported features.

Install btrfs-tools from backports:

echo 'deb wheezy-backports main' > /etc/apt/sources.list.d/wheezy-backports.list
apt-get update
apt-get -t wheezy-backports install btrfs-tools


You don't want to create shares this way; it makes a plain "mkdir" folder, so you won't be able to take snapshots etc. (Another issue I've noticed: when you remove files from such a share, btrfs won't "release" the free space on the drive, so you have to run a rebalance.)

SSH to OMV and create subvolumes so you can take advantage of snapshots etc.

Create a subvolume, aka a "shared folder":

btrfs subvolume create /media/<UUID>/@newsubvolume

Then import it under Shared Folders instead.

If you want to be able to take snapshots of @newsubvolume, then:

create a folder for snapshots:

btrfs subvolume create /media/<UUID>/@newsubvolume/.snapshots

create a snapshot:

btrfs subvolume snapshot /media/<UUID>/@newsubvolume /media/<UUID>/@newsubvolume/.snapshots

      to recover:

mv /media/<UUID>/@newsubvolume/.snapshots /media/<UUID>/@newsubvolume

Maybe run scrub once a week or so (it should be self-healing?). I've been running it every three days for 2 months and haven't seen an error corrected yet.
(About 6 TB of data has moved back and forth on the disks in this time.)

I've been running my test OMV (evaluating OMV as a KVM host with btrfs) for a while now to see if I'm going to swap out my regular server; so far it's been running great.

I'm running a three-disk array in RAID1. I moved the disks back and forth between my Ubuntu 15.04 test box a few times to try and break it, converted it to RAID5 on the Ubuntu machine, added a fourth disk, converted it to RAID10, then back to RAID1, and so on... Works like a charm!
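For reference, profile conversions like the ones described here are done with btrfs balance and its convert filters. A hedged sketch (the commands are printed rather than executed, and /mnt/pool is a placeholder mount point):

```shell
# Convert an existing array's data (-d) and metadata (-m) profiles in place.
# Illustration only: each command is printed, not run; /mnt/pool is a
# placeholder for your real mount point.
MNT=/mnt/pool
echo btrfs balance start -dconvert=raid5 -mconvert=raid5 "$MNT"    # RAID1 -> RAID5
echo btrfs balance start -dconvert=raid10 -mconvert=raid10 "$MNT"  # RAID5 -> RAID10 (needs 4 disks)
```

The balance rewrites every chunk, so a conversion can take hours on a large array; btrfs balance status shows progress.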

      //Regards Mace
    • Mace,

      Thanks for the tips and info. I did what you suggested tonight.

      The backport btrfs-tools went fine, and yes they were much newer (v3.14.1 instead of 0.19).

      I blundered around a bit with subvolumes. I used OMV to mount the btrfs filesystem since that was the only way I was able to create the /media/<UUID> mount. I hope that didn't mess it up and create that "mkdir" mess. Is it necessary to mount from the terminal? If you wouldn't mind, please look over steps 7 and 8.
Sorry for the late reply, I've been on a computer/phone-free vacation (wife loved it, me... hmm =P)

Regarding the "mkdir mess": it doesn't mess anything up at all, it's just that you won't be able to take snapshots.

The btrfs "/media/<UUID>/" is itself a subvolume ("the" subvolume), so you could always take a snapshot of it; for shares you could go either the mkdir way or the subvolume way. (I mainly just use snapshots on my documents share, for revisions, and on my VM-machine share.)

      Nope that's the way to mount it.

Step 7
Seems alright; the thing I wrote about "won't release space" was actually the Debian btrfs-tools being buggy.
I replicated it in a VM: I created shares on OMV with the old btrfs-tools and didn't get any free space back when deleting stuff; then I mounted the disks in a new OMV VM with the btrfs-tools from backports and the free space was shown correctly.

Step 8
All good, though you don't really need the "@"; it's just the btrfs convention for naming a subvolume, don't ask me why =P

I just upgraded from kernel 3.19.0 to 3.19.2 so I can run RAID5 "trouble free". I've really been abusing it on my test machine, killing disks and replacing them, growing the array, shrinking it, etc., and it seems to work great, so I just converted my main OMV to RAID5. (RAID1 performance is great with btrfs, but GOD, RAID5 performance is great: benchmarked against my old ext4 setup, read/write was about 280/65 MB/s; with btrfs I get 310/280 MB/s. This is a dd test with sync enabled.)

The only thing that really got worse when moving to RAID5 is the time it takes to scrub: scrubbing 3 TB of data with RAID1 took about 8 hours, while with RAID5 it took about 28 hours, though this was just after the conversion and with kernel 3.19.0 (hope things will get better).

The kernel I use is vanilla mainline, compiled with the standard Debian/OMV kernel config plus df patches (to show correct space in the df command).
Though I think I'm going to skip the df patches for the next kernel, since "btrfs fi usage /mountpoint/" gives the same data and is available in btrfs-tools 3.19.

As for btrfs-tools, I tried to compile 3.18-3.19 (works great on Ubuntu), but it's broken on Debian: the library locations get messed up (I could move them manually, but eh, it was an ln -s hell); a bug report is posted. 3.17 compiles great. (And I noticed it's already in the testing repository.)

Btrfs-tools from testing works great, and I really don't see the harm in using them, since they're just compiled from the same source as the old ones:

wget && dpkg -i btrfs-tools_3.17-1.1_amd64.deb

As for scrubbing, I use a simple cron entry in the OMV interface:

btrfs scrub start /media/<UUID>

I run it every three weeks.
I sent a mail to the btrfs mailing list to ask how often scrubbing is recommended; the answer was every two weeks for desktop drives and about every four weeks for NAS-grade drives.

Also note that since df can't calculate btrfs drive space, it will look a bit weird in the OMV interface; for now there really isn't much to do about it, since this isn't an OMV thing but rather a Linux thing.

So for now I would start by calculating your space yourself: run "df -ha" and look at how much is used, then "btrfs fi show" to see how much space your drives really have.

I use a cron job within OMV to get a weekly status mail so I can keep an eye on my space, with:

btrfs fi show && btrfs fi df /media/<UUID>/

As for getting the status of rebalancing and scrubbing, I just fire up a PuTTY session with:

watch -d -n 30 "btrfs balance status /media/<UUID>/; btrfs scrub status /media/<UUID>/; btrfs fi df /media/<UUID>/"

//Regards Mace

If you want the latest compiled kernel for btrfs support, or instructions to compile it yourself, message me. (Guess this is out of scope for a guide simply to get btrfs working.)

      The post was edited 1 time, last by macester ().

Could you then post some instructions on how to compile the kernel for OMV/Debian? I'd love to update to 3.19.0 / 3.19.5 for better (or even some) RAID5 support... I successfully migrated from an old XPEnology disk setup with md and LVM to a btrfs RAID1 (due to the lacking scrub etc. support in the 3.16 backport kernel), which is working fine, but I'm going to replace the oldest HDDs on my setup pretty soon... time to migrate to RAID5 again, if I get the right kernel and maybe an updated version of the tools too :). Or are there any guides on the forum for the kernel stuff? I'm not a real expert on kernel config during a build...
    • From what I have read, btrfs raid5/6 is still not ready for production even in the newer kernels. I would wait for OMV based on Jessie. It should have a backports 4.x kernel that btrfs raid5/6 may be ready in.
I guess you're right... it would be a great thing if we get to Jessie during this year's course of development... Maybe I'll just get the HDDs and convert to a RAID10 instead, which seems stable right now. I'm pretty impressed by btrfs at this point... did Volker already talk about making btrfs the default data fs once the parity features, aka raid56, become stable?

      The post was edited 1 time, last by stone49th ().

Thanks for this guide, I will give it a try on a test machine.

I know that the OMV community was more focused on ZFS support, to let FreeBSD NAS users move over to OMV; it seems that ZFS plugins are now released.

Is there any interest from someone in developing these functions inside a btrfs plugin for OMV?

      # btrfs subvolume create /media/<UUID>/@newsubvolume/.snapshots

      # btrfs subvolume snapshot /media/<UUID>/@newsubvolume /media/<UUID>/@newsubvolume/.snapshots

      # mv /media/<UUID>/@newsubvolume/.snapshots /media/<UUID>/@newsubvolume

so that everything is done through the web interface? Would this be something complicated to do?
Don't know whether it's a hard task or not, but I bet there is also some effort involved with mounting the volume correctly (I had to change the mount options in the config.xml by hand, etc.).

Also, don't forget about all the functionality for error reporting, restore, configuration, balancing, scrubbing, etc. Proper error reporting and stupid-user checks also need to be implemented... guess a very experienced plugin dev would be required here. Anyhow, if one reads the btrfs mailing list, it's really not that production ready, since raid56 and some other features have only just been released in the latest kernels. Another problem is that the interface to the btrfs tools changes quite often from release to release, so the effort of maintaining and testing a filesystem plugin that does not kill your data or fs would be too high for just another side project of a side project. You have to monitor the mailing list, keep track of the bugs, and avoid them. I don't know... I personally wouldn't touch it just now. Btrfs is on a good way, but it's still evolving too fast for a rock-solid integration, in my opinion.

      Could be the case that I'm completely wrong on this matter, just my thoughts ^^