Automount with BTRFS failed

  • Hello users,


    I have a small (?) problem with OMV.


    I have a RAID5 array with eight 8 TB HDDs, built with OMV as /dev/md0, and it is working.


    On top of it I created a 50 TB BTRFS volume named Volume1, which I mounted in OMV without problems.


    Everything looked fine until I rebooted.


    Now I have to mount Volume1 manually in OMV after every reboot via /storage/filesystems/mount.
    After that, everything works fine until the next boot.



    Syslog shows:
    monit[744]: 'mountpoint_srv_dev-disk-by-label-Volume1' status failed (1) -- /srv/dev-disk-by-label-Volume1



    /etc/fstab includes:
    # >>> [openmediavault]
    /dev/disk/by-label/Volume1 /srv/dev-disk-by-label-Volume1 btrfs defaults,nofail 0 2
    # <<< [openmediavault]
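
    For reference, the same line is sometimes written with extra systemd mount options to make the mount wait for the underlying device. This is purely a sketch: the x-systemd.device-timeout option below is not something OMV generated here, and OMV may rewrite anything between its >>> / <<< markers:

    /dev/disk/by-label/Volume1 /srv/dev-disk-by-label-Volume1 btrfs defaults,nofail,x-systemd.device-timeout=30 0 2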



    There are empty directories /dev/disk/by-label/Volume1 & /srv/dev-disk-by-label-Volume1, but nothing else.


    /dev/md0 exists.



    I'm hoping someone can help.



    Greets, Primaerplan



    PS: Sorry for my poor English :whistling:

    • Official post

    /dev/md0 may exist, but is the array assembled and functioning? cat /proc/mdstat

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • It seems to be OK.


    After boot, before mounting the volume:


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active (auto-read-only) raid5 sdi[2] sda[7] sdd[1] sdg[0] sdh[3] sdf[4] sdc[6] sdb[5]
    54697266176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
    bitmap: 0/59 pages [0KB], 65536KB chunk


    unused devices: <none>



    After mounting it manually in OMV, the volume works fine.


    The only problem is that the volume has to be mounted manually after every boot.
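
    A quick way to see what actually happened at boot, before mounting anything by hand, would be something like this (just a sketch; the grep pattern is only an example):

    cat /proc/mdstat                        # is md0 assembled at all?
    findmnt /srv/dev-disk-by-label-Volume1  # did the fstab mount happen?
    journalctl -b | grep -iE 'md0|Volume1'  # boot-time messages about the array and the mount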

    • Official post

    active (auto-read-only)

    Here is the problem. It probably isn't assembled at the time of mounting, since it isn't functioning correctly.
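
    For what it's worth, an md array in 'auto-read-only' normally leaves that state on its own as soon as something writes to it (a read-write mount, for example), and it can also be switched manually. This is only a generic mdadm sketch, not a confirmed fix for this case:

    mdadm --readwrite /dev/md0   # clear the auto-read-only flag on the assembled array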


  • OK.


    I've now executed the command again after mounting the volume manually, and the output changed to:


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid5 sdi[2] sda[7] sdd[1] sdg[0] sdh[3] sdf[4] sdc[6] sdb[5]
    54697266176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
    bitmap: 0/59 pages [0KB], 65536KB chunk


    Everything works fine until the next reboot.


    How can I fix this problem permanently?

    • Official post

    How can I fix this problem permanently?

    That is a question I can't answer. I've tried for years to figure out why these arrays will assemble but fail on the next boot. It doesn't help that I don't have an array in that state myself.
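
    A commonly suggested approach on Debian-based systems (not verified for this particular setup) is to make sure the array definition is known to the initramfs, so that md0 is assembled early at boot:

    mdadm --detail --scan            # prints the ARRAY line for /dev/md0
    # merge that line into /etc/mdadm/mdadm.conf if it is missing, then
    update-initramfs -u              # rebuild the initramfs so the change takes effect at boot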


  • @ryecoaaron


    Thank you for trying to help me.



    @flmaxey:


    I've chosen BTRFS because of my hardware. I've tried ZFS, but it uses too much RAM and my motherboard can't use ECC RAM.


    BTRFS works really fine (after mounting manually :) )



    @tkaiser:


    I know, but this machine is only a second backup.
    It has to be cheap (most components were already available) :)


    OMV boots from a small, old SSD.

  • I've tried ZFS, but it uses too much RAM and my motherboard can't use ECC RAM


    ZFS neither uses too much RAM (after adjusting the settings for the use case) nor does it need ECC RAM. That's just an urban myth someone spread some time ago that has been copied and pasted for years for no reason (if a lack of ECC RAM were a catastrophe with ZFS, it would be just as bad with btrfs).
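
    To make 'adjusting settings' concrete: on Linux the ARC size can be capped with a module parameter. This is only an illustration; the 2 GiB value below is an arbitrary example, not a recommendation:

    echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
    update-initramfs -u   # only needed if the zfs module is loaded from the initramfs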


    But these were just general remarks, since ZFS won't solve your problem. You have an issue with mdraid, so switching from btrfs to ZFS will change nothing. And in case someone wants to convince you to use RAIDz instead, please be careful: on Linux there is still no 'sequential resilver' implemented, so once a disk dies, depending on how the data arrived on your RAIDz, a resilver can take ages (which is really bad with only single redundancy) and can be pretty much the definition of a 'pure random I/O' operation, which is something HDDs are terrible at.


    I was asking about the type of boot media since I've dealt with multiple situations involving worn-out USB pendrives and SD cards. They simply discarded every write attempt, so things were OK as long as everything was still in Linux's filesystem buffers, but gone after a reboot. A quick test for something like this would be 'touch /root/lala && sync && reboot' and then checking whether the file is still there.

  • Thanks for the information about ZFS.


    I've read many sites which all recommend 1 GB of ECC RAM per 1 TB of HDD, and that was much too much for my requirements.


    BTRFS works fine, that's not the problem :)


    What do you mean by the problems with the boot media?

  • I've read many sites which all recommend 1 GB of ECC RAM per 1 TB of HDD


    That's BS. You do not need ECC RAM to use ZFS (but of course, if you really love your data, you will spend the few additional bucks and get ECC RAM). 'Checksumming' filesystems like ZFS, btrfs or ReFS are even better on systems without ECC memory since, when you scrub regularly, you notice any data corruption that has happened. With ancient filesystems, data corruption can remain undetected until it's too late.


    If you love your data you use a checksumming filesystem and also ECC RAM. But the latter is not a requirement for the former.


    The '1 GB RAM per TB of storage' formula is also BS when applied to ZFS in general. You only need a fixed amount of RAM per unit of storage when you do deduplication with ZFS, since when the DDTs (dedup tables) do not fit into RAM, everything slows down a lot. But even that is not a problem when you use really fast SSDs for L2ARC (ARC on fast storage).


    Anyway: since you want some redundancy for whatever reason (RAID-5), you won't benefit from RAIDz (rebuild/resilver performance will be way lower compared to mdraid, at least on Linux; in Solaris, for example, this was fixed ages ago), and whether you use btrfs or ZFS on top of mdraid makes no difference.


    What do you mean by the problems with the boot media?

    Stuff that should be written to disk doesn't actually get committed to disk and is therefore gone after a reboot. As already suggested: it takes less time to try this out than to think about it. :)

  • OK, thanks for your explanation. It's very interesting.



    I've also tried 'touch /root/lala && sync && reboot'


    After reboot:
    root@openmediavault:~# ls
    lala
    root@openmediavault:~#



    So it seems to be working. :)
    Any other ideas?
