RAID filesystem not mounted after boot

  • We had some storms recently and I suffered a power loss. After rebooting my OMV machine I found that the ext4 filesystem on my RAID1 array was no longer being mounted under /srv/. I could still see contents in the /sharedfolders/ mounts, however.


    The mount button on the Filesystems page was disabled, I presumed at the time because the filesystem was still referenced by various shared folders, so I removed all of them. I was then able to mount the filesystem just fine, but the name of the mount in /srv/ changed: where it used to be dev-disk-by-id-md-name-vault-data0 (or something to that effect), it was now mounted at dev-disk-by-label-data0, and I could see all of the contents I expected. So I updated everything that relied on the original path and moved on.
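    For anyone following along, a quick check like this (the label data0 comes from the blkid output further down) should show which udev symlinks actually exist and which one the fstab entry uses:

    Code
    ls -l /dev/disk/by-label/ /dev/disk/by-id/ | grep -i data0   # symlinks the mount can be addressed by
    grep data0 /etc/fstab                                        # which of them OMV's fstab entry references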


    I attempted a reboot to see if everything was happy again, but the filesystem still was not mounted automatically. So I started to gather some information and came here to seek help...


    After a fresh boot...


    So far as I could figure, everything seemed to be in place: blkid recognized the disks and the filesystem, OMV created the fstab entry, and there is a mountpoint in the OMV config. But the mount is missing.
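    To illustrate what I mean by "in place", roughly these checks (device and label names taken from the outputs further down) all looked fine, yet nothing was mounted at the target:

    Code
    blkid /dev/md0                          # shows LABEL="data0" TYPE="ext4"
    grep data0 /etc/fstab                   # the OMV-generated entry is there
    findmnt /srv/dev-disk-by-label-data0    # prints nothing, i.e. not mounted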


    After clicking mount on the web console it mounted just fine...


    Code
    root@vault:~# mount
    sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
    proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
    udev on /dev type devtmpfs (rw,nosuid,relatime,size=4032996k,nr_inodes=1008249,mode=755)
    devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
    tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=816876k,mode=755)
    /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro)
    <SNIP>
    tmpfs on /tmp type tmpfs (rw,relatime)
    /dev/md0 on /srv/dev-disk-by-label-data0 type ext4 (rw,noexec,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)

    Searching further I examined systemctl after booting...


    The log entries seem to indicate that it succeeded, but it was not actually mounted.
    After mounting manually...


    Information from journalctl to follow...
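    In case anyone wants to reproduce the queries: I believe the systemd unit name for the mount can be derived with systemd-escape, and the relevant boot log pulled per unit, something like:

    Code
    systemd-escape -p --suffix=mount /srv/dev-disk-by-label-data0
    # -> srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount
    journalctl -b -u 'srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount'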

  • Here are several snippets from journalctl around stuff related to the filesystem...



    I see it started an fsck on the device. Skipping ahead slightly, it looks like that fsck finished with no problem. Then it starts another fsck and tries to mount at the same time? It says it mounted it... but quotaon fails, because it wasn't actually mounted (I assume).
    Skipping ahead some more, the log is then full of monit being unable to find the mountpoint, because the drive wasn't mounted.
    Then I finally mount the drive from the web console and it actually mounts this time. quotaon still fails, though. Maybe because I never actually set up any disk quotas?
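    (Side note: to confirm whether quotas were ever actually enabled, I think checks along these lines would do it; the aquota file names come from the mount options shown above:)

    Code
    quotaon -pa                                   # print the current quota state for configured filesystems
    ls -l /srv/dev-disk-by-label-data0/aquota.*   # the quota files referenced by usrjquota/grpjquota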


    I really appreciate any help I can get.

    • cat /proc/mdstat
      root@vault:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdb[0] sdc[1]
            3906887488 blocks super 1.2 [2/2] [UU]
            bitmap: 0/30 pages [0KB], 65536KB chunk
      unused devices: <none>
    • blkid
      root@vault:~# blkid
      /dev/sda1: UUID="abb0efc3-5e5c-430f-9e17-7fc42830fae7" TYPE="ext4" PARTUUID="9e7337ab-01"
      /dev/sda5: UUID="19569405-dfde-4258-b5cc-e787741461c4" TYPE="swap" PARTUUID="9e7337ab-05"
      /dev/sdb: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="9030e1b3-dddf-f4c7-c3c1-96d8b710e4ae" LABEL="vault:data0" TYPE="linux_raid_member"
      /dev/sdc: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="c464428a-a774-4a9e-1c55-535349600c44" LABEL="vault:data0" TYPE="linux_raid_member"
      /dev/md0: LABEL="data0" UUID="777b5f67-b0d2-448d-a744-9b4f9fb846fb" TYPE="ext4"
    • fdisk -l | grep "Disk "
      root@vault:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 14.8 GiB, 15837691904 bytes, 30932992 sectors
      Disk identifier: 0x9e7337ab
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/md0: 3.7 TiB, 4000652787712 bytes, 7813774976 sectors
    • cat /etc/mdadm/mdadm.conf
      root@vault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #


      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions


      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes


      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>


      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=vault:data0 UUID=afc63939:1834e73e:9bf78ef8:6f2436cc

    • mdadm --detail --scan --verbose
      root@vault:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=vault:data0 UUID=afc63939:1834e73e:9bf78ef8:6f2436cc
      devices=/dev/sdb,/dev/sdc
    • Post type of drives and quantity being used as well.
      2x ST4000VN008 Seagate IronWolf 4TB
    • Post what happened for the array to stop working? Reboot? Power loss?
      Power loss, but the array seems fine. It's just that the filesystem doesn't mount automatically; I can mount it manually OK.
  • Note: I changed the default mount options to remove noexec.
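    (If I remember right, on OMV 4.x the default mount options come from an environment variable override in /etc/default/openmediavault, and fstab is regenerated from the OMV config afterwards; the exact variable name may differ between versions:)

    Code
    grep FSTAB_MNTOPS /etc/default/openmediavault   # e.g. OMV_FSTAB_MNTOPS_EXT4=... without noexec
    omv-mkconf fstab                                # regenerate /etc/fstab from the OMV config (OMV 4.x)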

    • Official Post

    This is from your first systemctl:


    Code
    Active: inactive (dead) since Thu 2019-05-09 10:58:44 CDT; 1h 22min ago
    Where: /srv/dev-disk-by-label-data0
    What: /dev/disk/by-label/data0


    This is your second systemctl after you manually mounted:

    Code
    Active: active (mounted) since Thu 2019-05-09 12:23:17 CDT; 35s ago
    Where: /srv/dev-disk-by-label-data0
    What: /dev/md0

    The RAID is initially coming up inactive. To correct that, you would have to run from the CLI: mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]. That would get it back up and running properly. Can you run that now that you have manually mounted? I don't know.
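    Spelled out as a rough sequence (the unmount and stop steps are my assumption about doing it cleanly; adjust device names to your system):

    Code
    umount /srv/dev-disk-by-label-data0   # only if it is currently mounted
    mdadm --stop /dev/md0                 # stop the inactive array first
    mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]
    cat /proc/mdstat                      # confirm it comes back as active raid1 [UU]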


    I just went through everything again after I couldn't initially see anything wrong.

  • I have yet to try your suggestion. I have looked back over some things again after a fresh boot...


    Early in boot I see the kernel finding the drives and the RAID going active.

    And, in fact, /proc/mdstat shows the RAID as active.

    Code
    root@vault:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc[1] sdb[0]
          3906887488 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>

    Back to the boot log: the one thing that has struck me as strange is seeing it start an fsck on /dev/disk/by-label/data0 immediately before it says it mounted it, and the results of that fsck are never posted to the log... Additionally, it had already done a check earlier, which does return results.
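    For what it's worth, to see exactly which fsck units ran and what they logged, something like this should work (the instance name is just the escaped device path, assuming /dev/disk/by-label/data0):

    Code
    systemctl list-units --all 'systemd-fsck*'
    journalctl -b -u 'systemd-fsck@dev-disk-by\x2dlabel-data0.service'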


    I believe the RAID is properly assembled and active at the end of boot; it's just the mounting from fstab via the autogenerated systemd mount unit that fails. As an experiment, I tried disabling the fsck in fstab (set <pass> to 0) and it mounted fine after a reboot. So it really seems to be related to the fsck during boot and not to issues with the RAID array itself. Any ideas why the fsck is happening twice during boot? I'd much rather be able to leave it enabled...
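    For reference, the experiment was only a change to the last field of the OMV-generated fstab line, roughly like this (options abbreviated and partly illustrative, based on the mount output earlier; only the final pass number changed):

    Code
    # before: fsck at boot enabled (pass 2)
    /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 ext4 defaults,nofail,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 2
    # after: fsck at boot disabled (pass 0)
    /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 ext4 defaults,nofail,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0 0 0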

    • Official Post

    Any ideas why the fsck is happening twice during boot?

    No, sorry, and a net search reveals very little.


    Going back to your original post: this started after a power outage; the RAID was coming up inactive, but became active after you ran a mount command. The usual procedure to 'restart/mount' the RAID from an inactive state is to run what I posted. Looking at your last post, I have no idea how you would proceed.


    Is there any particular reason why you're using RAID rather than two drives and rsync?

  • Quote from geaves

    No, sorry, and a net search reveals very little.

    I haven't been able to find anything either. It seems it might be more of a Debian issue than an OMV issue.

    Is there any particular reason why you're using RAID rather than two drives and rsync?

    Seemed like the path of least resistance. I don't need to back this stuff up off-site, but RAID 1 at least provided some easy redundancy.

    • Official Post

    Seemed like the path of least resistance.

    Well, I stopped using RAID after running a RAID 5 for a number of years; I now use mergerfs and SnapRAID. But I also have two extra drives: one runs rsync and therefore backs up my media and data, and the second, a small laptop drive, is a standalone disk for Docker configs.
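    As a sketch of what that rsync job looks like (the paths here are just examples, not my actual mountpoints):

    Code
    # mirror media/data to the backup drive; --delete keeps the copy in sync with the source
    rsync -a --delete /srv/dev-disk-by-label-media/ /srv/dev-disk-by-label-backup/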
