[SOLVED] Help to mount a BTRFS RAID on OMV6

  • Hello, I created a BTRFS RAID in OMV5 a year ago with:

    Code
    mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    then mounted it in the OMV WebGUI and have used it for the past year.


    Now I have done a fresh install of OMV6 and need to mount the RAID, but OMV6 does not mount it the same way OMV5 did.



    So my question is: what steps do I need to take to mount it in the OMV WebGUI WITHOUT losing the data on it?



    I have no problem using the shell.




    I can select BTRFS in OMV, but it only shows disk /dev/sdb, not the 4 disks used in the RAID as OMV5 showed.





    Once mounted, the size is right, but it only references one disk:



    And the other 3 disks are shown as ready to be formatted (that is not what I want):


  • I too have a BTRFS RAID (although RAID1) made on OMV5.

    Curious to see what comes from this topic...



    What comes to mind is: when you make the RAID, the OS is aware that your RAID exists and "knows" which drives belong to it.

    That info is then passed to OMV when you run the script to install it.


    But where is that "info" stored?

    And how does OMV "find" it?


    Some more expert input is required here... :D

    • Official post

    ZFS has a procedure for exporting a pool and then importing it into another system. Does BTRFS not have a similar procedure?
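
    For reference, the ZFS procedure referred to here is roughly the following (the pool name tank is just a placeholder):

    Code
    zpool export tank    # on the old system: cleanly release the pool
    zpool import tank    # on the new system: rediscover and import it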

  • Code
    # >>> [openmediavault]
    /dev/disk/by-label/BTRFS1               /srv/dev-disk-by-label-BTRFS1   btrfs   defaults,nofail,compress=zstd       0 2

    This is from the OMV5 /etc/fstab.


    What if you add a similar line before the

    # >>> [openmediavault]

    marker, using your parameters? This will not solve your OMV6 issue, but it should get you a working setup. With btrfs multi-disk setups you only need to reference one drive of the set (and while I have always used the lowest letter, it is supposed to work with any of them). A sketch follows below.
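
    Something along these lines, using the label from this thread (DATA) and the same options as the OMV5 example above; the mount path follows the OMV naming scheme and is an assumption until you check your old fstab:

    Code
    /dev/disk/by-label/DATA    /srv/dev-disk-by-label-DATA    btrfs    defaults,nofail,compress=zstd    0 2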

  • As you can see in my first post, /dev/sdb is recognized as a BTRFS device and shows the correct free & allocated space (6.37 TB free on a 3 TB disk), so really my problem is how to inform the WebGUI that /dev/sdc to /dev/sde are part of this BTRFS RAID.


    I googled a bit and found how to retrieve BTRFS info from a shell: https://www.unixmen.com/instal…-manage-btrfs-operations/


    Code
    root@CNAS:~# btrfs filesystem show
    Label: 'DATA'  uuid: ce4cd574-bea2-440e-a294-ba47ba521af9
            Total devices 4 FS bytes used 1.83TiB
            devid    1 size 2.73TiB used 702.35GiB path /dev/sdb
            devid    2 size 2.73TiB used 702.35GiB path /dev/sdc
            devid    3 size 2.73TiB used 702.35GiB path /dev/sdd
            devid    4 size 2.73TiB used 702.35GiB path /dev/sde
    
    root@CNAS:~#

    But my question is still valid and needs a developer's help.


    How do I properly mount the DATA filesystem so that the BTRFS RAID shows correctly in the WebGUI?

  • Update 11/9/2021


    If I mount it from the shell, the BTRFS filesystem is mounted, but it is not recognized by the OMV WebGUI.


    I really need some advice on how to mount a BTRFS RAID previously created on OMV5 on a new OMV6 install.

  • [EDIT AGAIN]


    I think I see an error in the commands you're using to mount it from the shell!

    You were running mount /dev/sdb /dev/disk/by-label/DATA (mounting a /dev on another /dev ;) )


    To mount your RAID, the command is mount /dev/disk/by-label/DATA /srv/dev-disk-by-label-DATA or mount /dev/sdb /srv/dev-disk-by-label-DATA

    In both commands, the first argument is the device you want to mount and the second is the mount point for it.

    The first option mounts by the RAID label; the second mounts via only the first device (sdb), and the rest of the set will be recognized. See the sketch after this edit block.


    [/EDIT AGAIN]
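
    Put together, a minimal sketch of the shell steps (assuming the OMV-style mount point; create it first if it does not exist):

    Code
    mkdir -p /srv/dev-disk-by-label-DATA
    # mount by label ...
    mount /dev/disk/by-label/DATA /srv/dev-disk-by-label-DATA
    # ... or by any single member device; the kernel pulls in the rest of the set
    mount /dev/sdb /srv/dev-disk-by-label-DATA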


    [EDIT] You said you created the RAID on OMV5?

    Maybe what I'm saying doesn't apply, since my RAID was made prior to installing OMV5.

    It was all done on the CLI, and it was mounted before running the script to install OMV5.


    When the script ran, OMV5 detected that there was a btrfs RAID and just added it to the available filesystems.

    [/EDIT]



    One thing different from my setup is that I have a partition on the drives.

    On yours, it's the bare disks!


    Before I created the RAID, I created a partition on each disk (sdb1 and sdc1) and then I ran mkfs.btrfs on those partitions (sketched below).

    I had no idea that it was possible to create it without having a partition on the drive!


    So, for me to mount the RAID, all I do in the CLI is: sudo mount /dev/disk/by-label/wolf1 /srv/dev-disk-by-label-wolf1

    Or, using the other disk, I can also mount it by running: sudo mount /dev/disk/by-label/wolf2 /srv/dev-disk-by-label-wolf1
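
    For illustration, the partitioned variant described above would have been created roughly like this (the RAID1 profile, the sdb1/sdc1 partitions and the wolf1 label are from this post; the exact parted commands are my assumption):

    Code
    # one partition spanning each disk
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
    parted -s /dev/sdc mklabel gpt mkpart primary 0% 100%
    # btrfs RAID1 built on the partitions instead of the raw disks
    mkfs.btrfs -L wolf1 -m raid1 -d raid1 /dev/sdb1 /dev/sdc1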



    So, to try different things, if you're willing and have a backup of the files in the RAID:

    On OMV5, what do you have in cat /etc/fstab?

    And what is the mount point for your RAID? (/srv/etc.....)


    Install tree (apt install tree, it will help a lot) and run:

    tree /dev/disk/by-*

    fdisk -l /dev/sdb (and all the other disks that belong to the RAID); everything is collected in one block below.
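
    Collected in one block, the checks requested above (fdisk shown only for /dev/sdb; repeat it for every member of the RAID):

    Code
    cat /etc/fstab          # what OMV5 wrote for the RAID
    apt install tree
    tree /dev/disk/by-*     # every udev alias for each disk
    fdisk -l /dev/sdb       # partition table (or lack of one) on a member disk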

  • ZFS has a procedure for exporting a pool and then importing it into another system. Does BTRFS not have a similar procedure?


    That is simply not needed. As soon as one disk is mounted, the kernel recognizes that the filesystem is part of a BTRFS RAID spanning other drives and brings in the other drives as well. As far as I can tell, this is a problem of OMV6 only.
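
    If udev has not registered the member devices yet, the same discovery can be triggered by hand; a minimal sketch, reusing the mount point from earlier in the thread:

    Code
    btrfs device scan       # tell the kernel about all btrfs member devices
    mount /dev/disk/by-label/DATA /srv/dev-disk-by-label-DATA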


    There was another thread about this where votdev mentioned some problems but did not go into detail. I think it was something about missing mount options when drives got mounted after a reboot? So perhaps it is about saltstack? I don't know. However, things about BTRFS support in OMV6 are still quite unclear. Maybe votdev can clear things up.


    Edit: That's the other thread: Is there btrfs support planned for omv6?

  • My experience:

    Now I can select /dev/sdc, which is the device OMV6 detects as the BTRFS RAID, and it is mounted correctly.


    The size & free space are shown correctly, and if you choose to show the disks, all disks in the RAID are detected correctly:





    But I am not sure if the mount point shown is correct, because in OMV6 it is: /srv/dev-disk-by-id-ata-TOSHIBA_DT01ACA300_63NZ4Z9GS

    But in OMV5 it was: /dev/disk/by-label/DATA

    I suppose the by-label scheme is now deprecated; a quick check is below.
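
    Both names are just udev symlinks to the same block device, so nothing is lost either way (the by-id device path here is my reconstruction from the mount point above):

    Code
    ls -l /dev/disk/by-label/DATA /dev/disk/by-id/ata-TOSHIBA_DT01ACA300_63NZ4Z9GS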




    So my next steps are to recreate the shares, and to test copying, moving and deleting files over SMB.



  • I also wonder whether it's now possible to set custom mount options like noatime or compression and have them persist after a reboot without any error?
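
    For what it's worth, such options can at least be tested at runtime with a remount; whether OMV6 persists them in fstab across reboots is exactly the open question (mount point assumed from earlier posts):

    Code
    mount -o remount,noatime,compress=zstd /srv/dev-disk-by-label-DATA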

  • DATA appears as a device in shared folders:




    and copying / moving / deleting in the shared folders from Win10 now works fine and as expected.

  • New problem, on a new production NAS:

    This time I mounted 4x10TB disks as a BTRFS pool (not as RAID), and it works fine, as expected from my previous posts and tests.


    But the mounted BTRFS pool is not detected after a reboot.



    This is the screenshot from when I mounted it in the WebGUI; apparently all disks are detected fine and the filesystem is exposed, but if I reboot the NAS:



    the BTRFS filesystem is not detected anymore. Since I am at the first steps of configuring OMV6, I can unmount and remount, but the problem persists.


    The BTRFS filesystem is detected and mounted, but disappears if I reboot the NAS.



    I have 2 boot disks: the old disk with OMV5, which works fine,

    and this new OMV6 boot disk. So it is really not critical for me, because I can still use the OMV5 boot disk to put the NAS online again, but I want to solve this problem in OMV6.




    So: votdev, please tell me which files and steps you need in order to debug this.

    • Official post

    Are userland apps like blkid, fdisk or mount showing the BTRFS device? If not, then OMV cannot display it either. OMV is not doing any magic here; it only asks the userland apps for the information and shows it in the UI. If everything works until a reboot, there is IMO no problem with OMV; I guess there is a problem with the kernel or a bug in Debian somewhere. I'm not using BTRFS, so I am sorry that I cannot help you more.
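
    A comparison along those lines could look like this (a suggested checklist, run once while it works and once after the reboot):

    Code
    blkid | grep -i btrfs     # do the members still carry a btrfs signature?
    fdisk -l                  # are the raw disks visible at all?
    mount | grep btrfs        # is the filesystem currently mounted?
    btrfs filesystem show     # does btrfs-progs see every pool member?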

  • I can post the output of blkid, fdisk or mount while the BTRFS filesystem is mounted, and the same after a reboot, to compare them.


    Perhaps that can help to debug it.

  • First post: this is blkid & mount on OMV5, which works fine. Remember: 4 disks in a BTRFS pool (not RAID), sda to sde (sdb in the image is the boot disk), one more disk (sdf) to store the Docker configs and databases, and 15 dockers (I had to delete some overlay lines to keep the message within the size limit):






    more info:


  • Second post: the same info for OMV6.


    The first spoiler is once mounted and prior to the reboot;

    note that the disk order changed (with respect to the OMV5 order; I do not know why):






    You can see the BTRFS pool correctly mounted and usable.

  • Last post: this is OMV6 after a reboot; the BTRFS pool is not detected.








    I hope that can help.
