Help mounting a BTRFS RAID on OMV6

  • Hello, I created a BTRFS RAID5 in OMV5 a year ago with:

    mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    then mounted it in the OMV web GUI and have been using it for the last year.

    Now I have done a fresh install of OMV6 and need to mount the RAID, but OMV6 does not mount it the same way OMV5 did.

    So my question is: what steps do I need to take to mount it in the OMV web GUI WITHOUT losing my data?

    I have no problem using the shell.

    I can select BTRFS in OMV6, but it only shows the disk /dev/sdb, not the 4 disks used in the RAID, like OMV5 showed.

    Once mounted, the size is right, but only one disk is referenced:

    And the other 3 disks are shown as ready to be formatted (that is not what I want):

  • I too, have a BTRFS RAID (although RAID1) made on OMV5.

    Curious to see what comes from this topic...

    What comes to mind is: when you make the RAID, the OS is aware that your RAID exists and "knows" which drives belong to it.

    That info is then passed to OMV when you run the script to install it.

    But where is that "info" stored?

    And how does OMV "find" it?

    Some more expert inputs are required here... :D

  • ZFS has a procedure for exporting a pool and then importing it into another system. Does BTRFS not have a similar procedure?

    OMV 5, Intel core i3 3225, 8GB RAM, PendriveUSB system, ZFS RaidZ 5xWD Red 4TB, 1x120GB SSD Docker

    I DO NOT SPEAK ENGLISH. I translate with google, sorry if sometimes you don't understand me well:)

    A backup is like a belt: you will only miss it when your pants fall down. Do you want to end up in your underpants?

  • Code
    # >>> [openmediavault]
    /dev/disk/by-label/BTRFS1               /srv/dev-disk-by-label-BTRFS1   btrfs   defaults,nofail,compress=zstd       0 2

    This is from the OMV5 /etc/fstab.

    What if you add a similar line, with your parameters, before the

    # >>> [openmediavault]

    marker? This will not solve your OMV6 issue, but it should get you a working setup. With btrfs multi-disk setups you only need to reference one drive of the set (and while I have always used the lowest letter, it is supposed to work with any of them).
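A minimal sketch of that suggestion, assuming the label DATA and the OMV-style mount point mentioned later in this thread (both assumptions; adjust to your own setup). It only prints the candidate fstab line so you can review it before pasting it into /etc/fstab by hand:

```shell
#!/bin/sh
# Build a candidate /etc/fstab line for an existing BTRFS RAID.
# Assumptions: the filesystem label is "DATA" and the mount point follows
# the OMV naming scheme /srv/dev-disk-by-label-<LABEL>; the mount options
# are copied from the OMV5 example quoted above.
LABEL="DATA"
MNT="/srv/dev-disk-by-label-${LABEL}"

# Print only; review the line, then add it to /etc/fstab yourself,
# BEFORE the "# >>> [openmediavault]" marker.
printf '/dev/disk/by-label/%s\t%s\tbtrfs\tdefaults,nofail,compress=zstd\t0 2\n' \
    "$LABEL" "$MNT"
```

After creating the mount point directory and adding the line, `mount "$MNT"` should bring up the whole multi-device filesystem, since referencing one member is enough.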

  • As you can see in my first post, /dev/sdb is recognized as a BTRFS device and shows the correct free & allocated space (6.37 TB free for a 3 TB disk), so really my problem is to inform the web GUI that /dev/sdc to /dev/sde are part of this BTRFS RAID.

    I googled a bit and found how to retrieve BTRFS info from a shell: …-manage-btrfs-operations/

    root@CNAS:~# btrfs filesystem show
    Label: 'DATA' uuid: ce4cd574-bea2-440e-a294-ba47ba521af9
    Total devices 4 FS bytes used 1.83TiB
    devid 1 size 2.73TiB used 702.35GiB path /dev/sdb
    devid 2 size 2.73TiB used 702.35GiB path /dev/sdc
    devid 3 size 2.73TiB used 702.35GiB path /dev/sdd
    devid 4 size 2.73TiB used 702.35GiB path /dev/sde

    but my question is still valid and needs a developer's help.

    How do I properly mount the DATA filesystem so that the web GUI shows the BTRFS RAID correctly?
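The `btrfs filesystem show` output above already names every member of the set. A small sketch that extracts them (the sample here is the output quoted above, fed in as a here-document; on the real box you would pipe the live command instead, e.g. `btrfs filesystem show | awk '$1 == "devid" {print $NF}'`):

```shell
#!/bin/sh
# List the member devices of the RAID from "btrfs filesystem show" output.
# Every "devid ... path /dev/sdX" line names one member; mounting any
# single one of them brings up the whole filesystem.
members=$(awk '$1 == "devid" {print $NF}' <<'EOF'
Label: 'DATA' uuid: ce4cd574-bea2-440e-a294-ba47ba521af9
Total devices 4 FS bytes used 1.83TiB
devid 1 size 2.73TiB used 702.35GiB path /dev/sdb
devid 2 size 2.73TiB used 702.35GiB path /dev/sdc
devid 3 size 2.73TiB used 702.35GiB path /dev/sdd
devid 4 size 2.73TiB used 702.35GiB path /dev/sde
EOF
)
echo "$members"
```

From the shell, `mount /dev/disk/by-label/DATA /srv/dev-disk-by-label-DATA` mounts the whole set via any one member; whether the OMV6 GUI then recognizes it is exactly the open question here.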

  • Update 11/9/2021

    If I mount from the shell, the BTRFS filesystem is mounted, but it is not recognized by the OMV web GUI.

    I really need some advice on how to mount a BTRFS RAID previously created on OMV5 on a new OMV6 install.


    I think I see an error in the commands you're using to mount it in the shell!

    You were running mount /dev/sdb /dev/disk/by-label/DATA (mounting a /dev on another /dev ;) )

    To mount your RAID, the command is mount /dev/disk/by-label/DATA /srv/dev-disk-by-label-DATA or mount /dev/sdb /srv/dev-disk-by-label-DATA

    The first argument is the device you want to mount (here referenced by its label) and the second is the mount point for it.

    With the second variant you mount only the first device (sdb) and the kernel will recognize the rest.


    [EDIT] You said you created the RAID on OMV5???

    Maybe what I'm saying doesn't apply, since my RAID was made before installing OMV5.

    Everything was done on the CLI, and it was mounted before running the script to install OMV5.

    When the script ran, OMV5 detected that there was a btrfs RAID and simply added it to the available filesystems.


    One thing I see that is different from my setup is that my drives have a partition on them.

    On yours, it's only the bare disks!

    Before I created the RAID, I created a partition on each disk (sdb1 && sdc1) and then ran mkfs.btrfs.....

    I had no idea it was possible to create it without a partition on the drive!

    So, for me to mount the RAID, all I do on the CLI is: sudo mount /dev/disk/by-label/wolf1 /srv/dev-disk-by-label-wolf1

    Or, via the other disk, I can also mount it by running: sudo mount /dev/disk/by-label/wolf2 /srv/dev-disk-by-label-wolf1

    So, to try a few things, if you're willing and have a backup of the files in the RAID:

    On OMV5, what do you have in: cat /etc/fstab

    And what is the mount point of your RAID? (/srv/etc.....)

    Install tree (apt install tree — it will help a lot) and run:

    tree /dev/disk/by-*

    fdisk -l /dev/sdb (and all the other disks that belong to the RAID)
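The checks above can be collected in one small script (a sketch; the device names are the RAID members from this thread, and it should be run as root for fdisk to report anything useful):

```shell
#!/bin/sh
# Run the diagnostics suggested above in one go. The device list is an
# assumption taken from this thread; adjust it for another setup.
DISKS="/dev/sdb /dev/sdc /dev/sdd /dev/sde"

# Print each command as a header, then run it; skip tools that are
# not installed so the script keeps going on any box.
run() {
    echo "### $*"
    if command -v "$1" >/dev/null 2>&1; then
        "$@" 2>&1
    else
        echo "(skipped: $1 not installed)"
    fi
}

run cat /etc/fstab
run tree /dev/disk/by-label /dev/disk/by-id
for d in $DISKS; do
    run fdisk -l "$d"
done
```

Posting the full output of this in the thread would show both the fstab entries and how the kernel currently names the RAID members.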

  • ZFS has a procedure for exporting a pool and then importing it into another system. Does BTRFS not have a similar procedure?

    That is simply not needed. As soon as one disk is mounted, the kernel recognizes that the filesystem is a multi-device BTRFS RAID and pulls in the other member drives as well. As far as I can tell, this is a problem of OMV6 only.

    There was another thread about it where votdev mentioned some problems but did not go into detail. I think it was something about missing mount options when drives got mounted after a reboot? So perhaps it is about Saltstack? I don't know. However, things about BTRFS support in OMV6 are still quite unclear. Maybe votdev can clear things up.

    Edit: That's the other thread: Is there btrfs support planned for omv6?

  • My experience:

    Now I can select /dev/sdc, which is the device OMV6 detects as the BTRFS RAID, and it is mounted correctly.

    The size & free space are shown correctly, and if you choose to show the disks, all disks in the RAID are detected correctly:

    But I'm not sure the mount point shown is correct, because in OMV6 it is: /srv/dev-disk-by-id-ata-TOSHIBA_DT01ACA300_63NZ4Z9GS

    But in OMV5 it was: /dev/disk/by-label/DATA

    I suppose that is now deprecated.

    So my next steps are to recreate the shares, and to test copying, moving, and deleting files over SMB.

  • I also wonder if it's now possible to set custom mount options like noatime or compression, and have them persist after a reboot without any error?
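For reference, the kind of entry in question — a hypothetical /etc/fstab line combining the defaults seen earlier in this thread with the custom options mentioned above (whether OMV6 preserves such a hand edit across reboots is exactly what is being asked):

```
/dev/disk/by-label/DATA  /srv/dev-disk-by-label-DATA  btrfs  defaults,nofail,noatime,compress=zstd  0 2
```

After a reboot, `findmnt -no OPTIONS /srv/dev-disk-by-label-DATA` shows which options actually took effect.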

  • DATA appears as a device in shared folders:

    and copying / moving / deleting via shared folders from Win10 now works fine and as expected.
