Automount with BTRFS failed


    • Automount with BTRFS failed

      Hello users,

      I have a little (?) problem with OMV.

      I have a RAID5 array built with OMV from eight 8 TB HDDs as /dev/md0, and it works.

      Then I created a 50 TB BTRFS volume named Volume1, which I mounted in OMV without problems.

      Everything looked fine until I rebooted.

      Now I have to mount Volume1 manually in OMV after every reboot, via Storage/File Systems/Mount.
      After that, everything works fine until the next boot.


      Syslog shows:
      monit[744]: 'mountpoint_srv_dev-disk-by-label-Volume1' status failed (1) -- /srv/dev-disk-by-label-Volume1


      /etc/fstab includes:
      # >>> [openmediavault]
      /dev/disk/by-label/Volume1 /srv/dev-disk-by-label-Volume1 btrfs defaults,nofail 0 2
      # <<< [openmediavault]


      There are empty directories at /dev/disk/by-label/Volume1 and /srv/dev-disk-by-label-Volume1, but nothing else.

      /dev/md0 exists.
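
      To illustrate, this is what I can check by hand after a reboot (the label and mount point are the ones from my fstab entry above):

      ls -l /dev/disk/by-label/                  # should list Volume1 as a symlink to ../../md0 once the filesystem on the array is detected
      mount /srv/dev-disk-by-label-Volume1       # uses the fstab entry above and shows the real error if it fails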


      I'm hoping someone can help.


      Greets, Primaerplan


      PS: Sorry for my poor English :whistling:
    • /dev/md0 may exist, but is the array assembled and functioning? cat /proc/mdstat
      omv 4.1.9 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.10
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • It seems to be OK.

      After boot, before mounting the volume:

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active (auto-read-only) raid5 sdi[2] sda[7] sdd[1] sdg[0] sdh[3] sdf[4] sdc[6] sdb[5]
      54697266176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

      unused devices: <none>


      After mounting it manually in OMV, the volume works fine.

      The only problem is that the volume has to be mounted manually after every boot.
    • Primaerplan wrote:

      active (auto-read-only)
      Here is the problem. The array probably isn't assembled at the time of mounting, since it isn't functioning correctly.
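
      For completeness, the generic mdadm things to look at (no promise this is the cause here, just the usual suspects) would be roughly:

      mdadm --readwrite /dev/md0        # take the array out of auto-read-only by hand
      mdadm --detail --scan             # print the ARRAY line for /dev/md0
      cat /etc/mdadm/mdadm.conf         # check that a matching ARRAY line is present (exactly once)
      update-initramfs -u               # rebuild the initramfs so the array can be assembled early at boot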
    • OK.

      I've now executed the command again after mounting the volume manually, and the output changed to:

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdi[2] sda[7] sdd[1] sdg[0] sdh[3] sdf[4] sdc[6] sdb[5]
      54697266176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

      Everything works fine until the next reboot.

      How can I fix this problem permanently?
    • Primaerplan wrote:

      How can I fix this problem permanently?
      That is a question I can't answer. I've tried for years to figure out why these arrays will assemble but fail on the next boot. It doesn't help that I don't have an array in that state myself.
    • Primaerplan wrote:


      How can I fix this problem permanently?
      Is there a reason why you must use BTRFS? Have you given thought to trying ZFS?
      Good backup takes the "drama" out of computing
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
      2nd Data Backup: OMV 3.0.99, R-PI 2B, 16GB boot, 4TB WD USB MyPassport - direct connect (no hub)
    • Primaerplan wrote:

      I've tried ZFS, but it uses too much RAM, and my MoBo isn't able to use ECC RAM.

      ZFS neither uses too much RAM (after adjusting the settings for the use case) nor does it need ECC RAM. That's just an urban myth someone spread some time ago which has been copied and pasted for years for no reason (if a lack of ECC RAM were a catastrophe with ZFS, it would be just as bad with btrfs).

      But those were just general remarks, since ZFS won't solve your problem: you have an issue with mdraid, so switching from btrfs to ZFS will change nothing. And in case someone wants to convince you to use RAIDz instead, please be careful. On Linux there's still no 'sequential resilver' implemented, so once a disk dies, a resilver can take ages depending on how the data arrived on your RAIDz (which is really bad with only single redundancy) and can turn into practically pure random I/O, which is something HDDs are terrible at.

      I was asking about the type of boot media since I've dealt with multiple situations involving worn-out USB pen drives and SD cards. They simply discarded every write attempt, so things were OK as long as everything was still in Linux's filesystem buffers, but gone after a reboot. A quick test for something like this would be 'touch /root/lala && sync && reboot' and then checking whether the file is still there.
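
      Spelled out (lala is just an arbitrary file name):

      touch /root/lala && sync && reboot
      ls -l /root/lala        # run after the reboot; if the file is gone, the boot media silently drops writes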
    • Primaerplan wrote:

      I've read many sites which all recommend 1GB ECC / 1TB HDD

      That's BS. You do not need ECC RAM to use ZFS (though of course, if you really love your data, you will spend the few extra bucks and get ECC RAM). 'Checksumming' filesystems like ZFS, btrfs or ReFS are even more valuable on systems without ECC memory, since by scrubbing regularly you notice data corruption that has happened. With ancient filesystems, data corruption can remain undetected until it's too late.
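
      With your btrfs volume, such a regular scrub would be something like this, run by hand or from a scheduled job:

      btrfs scrub start /srv/dev-disk-by-label-Volume1     # verify all checksums in the background
      btrfs scrub status /srv/dev-disk-by-label-Volume1    # see progress and whether errors were found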

      If you love your data, you use a checksumming filesystem and also ECC RAM. But the latter is not a requirement for the former.

      The '1 GB of RAM per 1 TB of storage' formula is also BS when applied to ZFS in general. You only need a certain amount of RAM per amount of storage when you use deduplication with ZFS, since everything slows down a lot once the DDTs (dedup tables) no longer fit into RAM. But even that is not a problem if you use really fast SSDs for L2ARC (ARC on fast storage).
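
      As an example of adjusting the settings for the use case: with ZFS on Linux you can simply cap the ARC via a module parameter (the 2 GiB below is only an illustration, pick whatever fits your box):

      echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf    # ARC limit in bytes, here 2 GiB
      update-initramfs -u    # only needed if the zfs module sits in the initramfs; reboot afterwards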

      Anyway: since you want some redundancy for whatever reason (RAID-5), you won't benefit from RAIDz (rebuild/resilver performance will be far lower compared to mdraid, at least on Linux; in Solaris, for example, this was fixed ages ago), and whether you use btrfs or ZFS on top of mdraid makes no difference.

      Primaerplan wrote:

      What do you mean by the problems with boot media?
      Stuff that should be written to disk not actually being committed to disk, and therefore gone after a reboot. As already suggested: it takes less time to just try it out than to think about it. :)