RAID filesystem not mounted after boot

    • OMV 4.x

      Had some storms recently and suffered a power loss. After rebooting my OMV machine I found that the ext4 partition on my RAID1 array was no longer being mounted under /srv/. I could, however, still see contents in the /sharedfolders/ mounts.

      The mount button on the filesystems page was disabled, I presumed at the time, because the filesystem was being referenced by various shared folders. So I removed all of them. I was then able to mount the filesystem just fine, but the name of the mount in /srv/ changed: where it used to be dev-disk-by-id-md-name-vault-data0 (or something to that effect), it was now mounted at dev-disk-by-label-data0, and I could see all of the contents I expected. So I updated everything that relied on the original path and moved on.
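
      To be sure I caught every stale reference, something like a recursive grep over /etc would turn up any I missed (the old name below is approximated from memory):

      Source Code

      # search system config for leftovers of the old by-id mount name
      grep -r "dev-disk-by-id-md-name" /etc 2>/dev/null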

      I attempted a reboot to see if everything was happy again but the filesystem still was not mounted automatically. So I started to gather some information and come here to seek help...

      After a fresh boot...

      Source Code

      root@vault:~# blkid
      /dev/sda1: UUID="abb0efc3-5e5c-430f-9e17-7fc42830fae7" TYPE="ext4" PARTUUID="9e7337ab-01"
      /dev/sda5: UUID="19569405-dfde-4258-b5cc-e787741461c4" TYPE="swap" PARTUUID="9e7337ab-05"
      /dev/sdb: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="9030e1b3-dddf-f4c7-c3c1-96d8b710e4ae" LABEL="vault:data0" TYPE="linux_raid_member"
      /dev/sdc: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="c464428a-a774-4a9e-1c55-535349600c44" LABEL="vault:data0" TYPE="linux_raid_member"
      /dev/md0: LABEL="data0" UUID="777b5f67-b0d2-448d-a744-9b4f9fb846fb" TYPE="ext4"
      root@vault:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=abb0efc3-5e5c-430f-9e17-7fc42830fae7 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=19569405-dfde-4258-b5cc-e787741461c4 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]
      tmpfs /tmp tmpfs defaults 0 0
      root@vault:~# omv-confdbadm read --prettify conf.system.filesystem.mountpoint
      [
      {
      "dir": "/srv/dev-disk-by-label-data0",
      "freq": 0,
      "fsname": "/dev/disk/by-label/data0",
      "hidden": false,
      "opts": "defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
      "passno": 2,
      "type": "ext4",
      "uuid": "00de9482-c45a-429e-91ea-beb772a72436"
      }
      ]
      root@vault:~# mount
      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,nosuid,relatime,size=4032996k,nr_inodes=1008249,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=816876k,mode=755)
      /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro)
      securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
      tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
      tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
      tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
      cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
      pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
      cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
      cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
      cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
      cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
      cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
      cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
      cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
      cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
      cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
      cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
      systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=42,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12851)
      hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
      mqueue on /dev/mqueue type mqueue (rw,relatime)
      debugfs on /sys/kernel/debug type debugfs (rw,relatime)
      sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
      nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
      tmpfs on /tmp type tmpfs (rw,relatime)


      So far as I could figure, everything seemed to be in place: blkid recognized the disks and the filesystem, OMV created the fstab entry, and there is a mountpoint in the OMV config. But the mount is missing.
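
      One more thing worth double-checking: that udev actually created the by-label symlink the fstab entry points at. Something like this should show data0 resolving to md0:

      Source Code

      root@vault:~# ls -l /dev/disk/by-label/data0
      root@vault:~# udevadm info --query=symlink --name=/dev/md0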

      After clicking mount on the web console it mounted just fine...

      Source Code

      root@vault:~# mount
      sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
      proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
      udev on /dev type devtmpfs (rw,nosuid,relatime,size=4032996k,nr_inodes=1008249,mode=755)
      devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
      tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=816876k,mode=755)
      /dev/sda1 on / type ext4 (rw,relatime,errors=remount-ro)
      <SNIP>
      tmpfs on /tmp type tmpfs (rw,relatime)
      /dev/md0 on /srv/dev-disk-by-label-data0 type ext4 (rw,noexec,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
      Searching further, I examined systemctl after booting...

      Source Code

      root@vault:~# systemctl status "srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount"
      ● srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount - /srv/dev-disk-by-label-data0
      Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
      Active: inactive (dead) since Thu 2019-05-09 10:58:44 CDT; 1h 22min ago
      Where: /srv/dev-disk-by-label-data0
      What: /dev/disk/by-label/data0
      Docs: man:fstab(5)
      man:systemd-fstab-generator(8)
      Process: 489 ExecMount=/bin/mount /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 -t ext4 -o defaults,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl (code=exited, status=0/SUCCESS)
      CPU: 65ms
      May 09 10:58:44 vault systemd[1]: Mounting /srv/dev-disk-by-label-data0...
      May 09 10:58:44 vault systemd[1]: Mounted /srv/dev-disk-by-label-data0.
      The log entries seem to indicate that the mount succeeded, but it was not actually mounted.
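
      A quick way to cross-check systemd's claim against the kernel's actual mount table is findmnt, which prints nothing and exits non-zero when the path isn't mounted:

      Source Code

      root@vault:~# findmnt /srv/dev-disk-by-label-data0
      root@vault:~# echo $?
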
      After mounting manually...

      Source Code

      root@vault:~# systemctl status "srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount"
      ● srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount - /srv/dev-disk-by-label-data0
      Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
      Active: active (mounted) since Thu 2019-05-09 12:23:17 CDT; 35s ago
      Where: /srv/dev-disk-by-label-data0
      What: /dev/md0
      Docs: man:fstab(5)
      man:systemd-fstab-generator(8)
      Process: 489 ExecMount=/bin/mount /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 -t ext4 -o defaults,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 4915)
      Memory: 0B
      CPU: 0
      CGroup: /system.slice/srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount
      Information from journalctl to follow...
    • Here are several snippets from journalctl relating to the filesystem...


      Source Code

      May 09 13:04:40 vault kernel: input: HDA ATI SB Line Out Side as /devices/pci0000:00/0000:00:14.2/sound/card0/input16
      May 09 13:04:40 vault kernel: input: HDA ATI SB Front Headphone as /devices/pci0000:00/0000:00:14.2/sound/card0/input17
      May 09 13:04:40 vault systemd[1]: Found device /dev/disk/by-label/data0.
      May 09 13:04:40 vault systemd[1]: Started MD array monitor.
      May 09 13:04:40 vault systemd[1]: Starting File System Check on /dev/disk/by-label/data0...
      May 09 13:04:40 vault mdadm[474]: mdadm: No mail address or alert command - not monitoring.
      May 09 13:04:40 vault systemd[1]: Reached target Sound Card.
      May 09 13:04:40 vault kernel: kvm: Nested Virtualization enabled
      May 09 13:04:40 vault kernel: kvm: Nested Paging enabled
      May 09 13:04:40 vault systemd[1]: mdmonitor.service: Main process exited, code=exited, status=1/FAILURE
      May 09 13:04:40 vault systemd[1]: mdmonitor.service: Unit entered failed state.
      May 09 13:04:40 vault systemd[1]: mdmonitor.service: Failed with result 'exit-code'.
      May 09 13:04:40 vault systemd[1]: Started File System Check Daemon to report status.
      May 09 13:04:40 vault kernel: MCE: In-kernel MCE decoding enabled.
      May 09 13:04:40 vault kernel: [drm] radeon kernel modesetting enabled.
      May 09 13:04:40 vault systemd-fsck[476]: data0: clean, 647367/244187136 files, 752591614/976721872 blocks
      May 09 13:04:40 vault kernel: [drm] ring test on 5 succeeded in 2 usecs
      May 09 13:04:40 vault kernel: [drm] UVD initialized successfully.
      May 09 13:04:40 vault kernel: [drm] ib test on ring 0 succeeded in 0 usecs
      May 09 13:04:40 vault kernel: [drm] ib test on ring 3 succeeded in 0 usecs
      May 09 13:04:40 vault kernel: EDAC amd64: Node 0: DRAM ECC disabled.
      May 09 13:04:40 vault kernel: EDAC amd64: ECC disabled in the BIOS or no ECC capability, module will not load.
      Either enable ECC checking or force module loading by setting 'ecc_enable_override'.
      (Note that use of the override may cause unknown side effects.)
      May 09 13:04:40 vault kernel: random: crng init done
      May 09 13:04:40 vault kernel: random: 7 urandom warning(s) missed due to ratelimiting
      May 09 13:04:40 vault systemd[1]: Started File System Check on /dev/disk/by-label/data0.
      May 09 13:04:40 vault systemd[1]: Mounting /srv/dev-disk-by-label-data0...
      May 09 13:04:40 vault systemd[1]: Created slice system-arm.slice.
      May 09 13:04:40 vault kernel: EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl
      May 09 13:04:40 vault systemd[1]: Mounted /srv/dev-disk-by-label-data0.
      May 09 13:04:40 vault systemd[1]: Starting File System Quota Check...
      May 09 13:04:40 vault systemd[1]: Started File System Quota Check.
      May 09 13:04:40 vault systemd[1]: Starting Enable File System Quotas...
      May 09 13:04:40 vault quotaon[499]: quotaon: cannot find /srv/dev-disk-by-label-data0/aquota.group on /dev/md0 [/srv/dev-disk-by-label-data0]
      May 09 13:04:40 vault quotaon[499]: quotaon: cannot find /srv/dev-disk-by-label-data0/aquota.user on /dev/md0 [/srv/dev-disk-by-label-data0]
      May 09 13:04:40 vault systemd[1]: quotaon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
      May 09 13:04:41 vault systemd[1]: Failed to start Enable File System Quotas.
      May 09 13:04:41 vault systemd[1]: quotaon.service: Unit entered failed state.
      May 09 13:04:41 vault systemd[1]: quotaon.service: Failed with result 'exit-code'.
      May 09 13:04:41 vault systemd[1]: Reached target Local File Systems.
      May 09 13:04:41 vault systemd[1]: Starting Create Volatile Files and Directories...
      May 09 13:04:41 vault systemd[1]: Started ifup for eth0.
      May 09 13:15:50 vault monit[912]: 'mountpoint_srv_dev-disk-by-label-data0' status failed (1) -- /srv/dev-disk-by-label-data0 is not a mountpoint
      May 09 13:16:21 vault monit[912]: 'mountpoint_srv_dev-disk-by-label-data0' status failed (1) -- /srv/dev-disk-by-label-data0 is not a mountpoint
      May 09 13:16:51 vault monit[912]: 'mountpoint_srv_dev-disk-by-label-data0' status failed (1) -- /srv/dev-disk-by-label-data0 is not a mountpoint
      May 09 14:13:16 vault kernel: EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl
      May 09 14:13:16 vault systemd[1]: Starting Enable File System Quotas...
      May 09 14:13:16 vault quotaon[2733]: quotaon: cannot find /srv/dev-disk-by-label-data0/aquota.group on /dev/md0 [/srv/dev-disk-by-label-data0]
      May 09 14:13:16 vault quotaon[2733]: quotaon: cannot find /srv/dev-disk-by-label-data0/aquota.user on /dev/md0 [/srv/dev-disk-by-label-data0]
      May 09 14:13:16 vault systemd[1]: quotaon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
      May 09 14:13:16 vault systemd[1]: Failed to start Enable File System Quotas.
      May 09 14:13:16 vault systemd[1]: quotaon.service: Unit entered failed state.
      May 09 14:13:16 vault systemd[1]: quotaon.service: Failed with result 'exit-code'.
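      (I grabbed the above from the full journal; something like this would filter just the relevant units, using the unit names from earlier:)

      Source Code

      journalctl -b -u "srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount" -u quotaon.service
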
      I see it started an fsck on the device. Skipping ahead slightly, it looks like the fsck finished with no problem. Then it seems to start another fsck and mount at the same time? It says it mounted it... but quotaon fails, because it wasn't actually mounted (I assume).
      Skipping ahead some more, the log is full of monit being unable to find the mountpoint, because the drive wasn't mounted.
      Then I finally mounted the drive from the web console and it actually mounted this time. quotaon still fails, though. Maybe because I never actually set up any disk quotas?
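
      If I never set up quotas, the aquota.user/aquota.group files referenced by the usrjquota/grpjquota mount options wouldn't exist, which would match the quotaon errors. As I understand it, they could be created like this (untested on my end; assumes the quota tools are installed):

      Source Code

      # create missing quota files on the mounted filesystem, then enable quotas
      quotacheck -cugm /srv/dev-disk-by-label-data0
      quotaon /srv/dev-disk-by-label-data0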

      I really appreciate any help I can get.
      1. cat /proc/mdstat
        root@vault:~# cat /proc/mdstat
        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdb[0] sdc[1]
        3906887488 blocks super 1.2 [2/2] [UU]
        bitmap: 0/30 pages [0KB], 65536KB chunk
        unused devices: <none>
      2. blkid
        root@vault:~# blkid
        /dev/sda1: UUID="abb0efc3-5e5c-430f-9e17-7fc42830fae7" TYPE="ext4" PARTUUID="9e7337ab-01"
        /dev/sda5: UUID="19569405-dfde-4258-b5cc-e787741461c4" TYPE="swap" PARTUUID="9e7337ab-05"
        /dev/sdb: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="9030e1b3-dddf-f4c7-c3c1-96d8b710e4ae" LABEL="vault:data0" TYPE="linux_raid_member"
        /dev/sdc: UUID="afc63939-1834-e73e-9bf7-8ef86f2436cc" UUID_SUB="c464428a-a774-4a9e-1c55-535349600c44" LABEL="vault:data0" TYPE="linux_raid_member"
        /dev/md0: LABEL="data0" UUID="777b5f67-b0d2-448d-a744-9b4f9fb846fb" TYPE="ext4"
      3. fdisk -l | grep "Disk "
        root@vault:~# fdisk -l | grep "Disk "
        Disk /dev/sda: 14.8 GiB, 15837691904 bytes, 30932992 sectors
        Disk identifier: 0x9e7337ab
        Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
        Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
        Disk /dev/md0: 3.7 TiB, 4000652787712 bytes, 7813774976 sectors
      4. cat /etc/mdadm/mdadm.conf
        root@vault:~# cat /etc/mdadm/mdadm.conf
        # mdadm.conf
        #
        # Please refer to mdadm.conf(5) for information about this file.
        #

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
        # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
        # used if no RAID devices are configured.
        DEVICE partitions

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # definitions of existing MD arrays
        ARRAY /dev/md0 metadata=1.2 name=vault:data0 UUID=afc63939:1834e73e:9bf78ef8:6f2436cc
      5. mdadm --detail --scan --verbose
        root@vault:~# mdadm --detail --scan --verbose
        ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=vault:data0 UUID=afc63939:1834e73e:9bf78ef8:6f2436cc
        devices=/dev/sdb,/dev/sdc
      6. Post type of drives and quantity being used as well.
        2x ST4000VN008 Seagate IronWolf 4TB
      7. Post what happened for the array to stop working? Reboot? Power loss?
        Power loss, but the array seems fine. The filesystem just doesn't mount automatically; I can mount it manually without issue.
    • Source Code

      root@vault:~# cat /etc/fstab
      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sda1 during installation
      UUID=abb0efc3-5e5c-430f-9e17-7fc42830fae7 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=19569405-dfde-4258-b5cc-e787741461c4 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/data0 /srv/dev-disk-by-label-data0 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      # <<< [openmediavault]
      tmpfs /tmp tmpfs defaults 0 0
      Note: I changed the default mount options to remove noexec.
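
      To test the edited entry without another reboot, something like this should do (though note that OMV may regenerate the block between the [openmediavault] markers on its own):

      Source Code

      umount /srv/dev-disk-by-label-data0
      mount -a
      findmnt -o TARGET,OPTIONS /srv/dev-disk-by-label-data0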
    • This is from your first systemctl:

      Source Code

      Active: inactive (dead) since Thu 2019-05-09 10:58:44 CDT; 1h 22min ago
      Where: /srv/dev-disk-by-label-data0
      What: /dev/disk/by-label/data0

      This is your second systemctl after you manually mounted:

      Source Code

      Active: active (mounted) since Thu 2019-05-09 12:23:17 CDT; 35s ago
      Where: /srv/dev-disk-by-label-data0
      What: /dev/md0
      The RAID is initially coming up inactive. To correct that, you would have to run mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc] from the CLI; that would get it back up and running properly (see the sketch below). Can you run that now that you have manually mounted? I don't know.
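
      A sketch of that sequence (the --stop step is the usual precaution before a forced assemble, not strictly required):

      Source Code

      # run while the filesystem is unmounted
      mdadm --stop /dev/md0
      mdadm --assemble --verbose --force /dev/md0 /dev/sd[bc]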

      I just went through everything again after I couldn't initially see anything wrong.
    • I have yet to try your suggestion. I have looked back over some things again after a fresh boot...

      Early in boot I see the kernel finding the drives and the RAID going active.

      Source Code

      May 11 19:37:37 vault kernel: scsi 4:0:0:0: Direct-Access ATA ST4000VN008-2DR1 SC60 PQ: 0 ANSI: 5
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] 4096-byte physical blocks
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] Write Protect is off
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] Mode Sense: 00 3a 00 00
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      May 11 19:37:37 vault kernel: scsi 7:0:0:0: Direct-Access ATA ST4000VN008-2DR1 SC60 PQ: 0 ANSI: 5
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] 7814037168 512-byte logical blocks: (4.00 TB/3.64 TiB)
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] 4096-byte physical blocks
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] Write Protect is off
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] Mode Sense: 00 3a 00 00
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      May 11 19:37:37 vault kernel: sd 4:0:0:0: [sdb] Attached SCSI disk
      May 11 19:37:37 vault kernel: sr 3:0:0:0: [sr0] scsi3-mmc drive: 32x/32x cd/rw xa/form2 cdda tray
      May 11 19:37:37 vault kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
      May 11 19:37:37 vault kernel: sr 3:0:0:0: Attached scsi CD-ROM sr0
      May 11 19:37:37 vault kernel: firewire_core 0000:04:00.0: created device fw0: GUID 0010dc0001a91b52, S400
      May 11 19:37:37 vault kernel: sd 7:0:0:0: [sdc] Attached SCSI disk
      May 11 19:37:37 vault kernel: usb 3-3: new full-speed USB device number 2 using ohci-pci
      May 11 19:37:37 vault kernel: md/raid1:md0: active with 2 out of 2 mirrors
      May 11 19:37:37 vault kernel: md0: detected capacity change from 0 to 4000652787712
      And, in fact, /proc/mdstat shows the RAID active.

      Source Code

      root@vault:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid1 sdc[1] sdb[0]
      3906887488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/30 pages [0KB], 65536KB chunk
      unused devices: <none>
      Back to the boot log, the one thing that struck me as strange is seeing it start an fsck on /dev/disk/by-label/data0 immediately before it says it mounted it, and the results of that check are never posted to the log... Additionally, it had already done a scan earlier, which does return results.

      Source Code

      May 11 19:37:37 vault systemd[1]: Found device /dev/disk/by-label/data0.
      May 11 19:37:37 vault systemd[1]: Started MD array monitor.
      May 11 19:37:37 vault systemd[1]: Starting File System Check on /dev/disk/by-label/data0...
      May 11 19:37:37 vault mdadm[479]: mdadm: No mail address or alert command - not monitoring.
      May 11 19:37:37 vault systemd[1]: mdmonitor.service: Main process exited, code=exited, status=1/FAILURE
      May 11 19:37:37 vault systemd[1]: mdmonitor.service: Unit entered failed state.
      May 11 19:37:37 vault systemd[1]: mdmonitor.service: Failed with result 'exit-code'.
      May 11 19:37:37 vault systemd[1]: Started File System Check Daemon to report status.
      May 11 19:37:37 vault kernel: kvm: Nested Virtualization enabled
      May 11 19:37:37 vault kernel: kvm: Nested Paging enabled
      ...skipping...
      May 11 19:37:38 vault systemd-fsck[480]: data0: clean, 649606/244187136 files, 755176121/976721872 blocks
      May 11 19:37:38 vault kernel: [drm] ring test on 5 succeeded in 2 usecs
      May 11 19:37:38 vault kernel: [drm] UVD initialized successfully.
      May 11 19:37:38 vault kernel: [drm] ib test on ring 0 succeeded in 0 usecs
      May 11 19:37:38 vault kernel: [drm] ib test on ring 3 succeeded in 0 usecs
      May 11 19:37:38 vault kernel: random: crng init done
      May 11 19:37:38 vault kernel: random: 7 urandom warning(s) missed due to ratelimiting
      May 11 19:37:38 vault systemd[1]: Started File System Check on /dev/disk/by-label/data0.
      May 11 19:37:38 vault systemd[1]: Mounting /srv/dev-disk-by-label-data0...
      May 11 19:37:38 vault systemd[1]: Created slice system-arm.slice.
      May 11 19:37:38 vault kernel: EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqf
      May 11 19:37:38 vault systemd[1]: Mounted /srv/dev-disk-by-label-data0.

      I believe the RAID is properly assembled and active at the end of boot; it's just the fstab mount and the autogenerated systemd mount unit that fail. As an experiment, I tried disabling the fsck in fstab (set <pass> to 0) and it mounted fine after reboot. So it really seems to be related to the fsck during boot, not to issues with the RAID array itself. Any ideas why the fsck is happening twice during boot? I'd much rather be able to leave it enabled...
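
      For anyone wanting to dig further, the units systemd generated from fstab (and any fsck jobs tied to them) can be inspected directly; the unit name is the one from the systemctl output above:

      Source Code

      root@vault:~# systemctl cat "srv-dev\x2ddisk\x2dby\x2dlabel\x2ddata0.mount"
      root@vault:~# systemctl list-units --all '*fsck*'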
    • jzooor wrote:

      Any ideas why the fsck is happening twice during boot?
      No, sorry, and a net search reveals very little.

      Going back to your original post: this started after a power outage, the RAID was coming up inactive but became active after you ran a mount command, and the usual procedure to restart/mount the RAID from an inactive state is to run what I posted. Looking at your last post, I have no idea how you would proceed.

      Is there any particular reason why you're using a RAID rather than two drives and using rsync? See the sketch below.
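
      Something along these lines on a schedule would do it (paths illustrative; the second disk mounted wherever you like):

      Source Code

      # one-way mirror of the data disk onto a second disk, e.g. from cron
      rsync -a --delete /srv/dev-disk-by-label-data0/ /srv/dev-disk-by-label-backup0/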
    • geaves wrote:

      No, sorry, and a net search reveals very little.
      I haven't been able to find anything either. It seems it might be more of a Debian issue than an OMV issue.

      geaves wrote:

      Is there any particular reason why you're using a RAID rather than two drives and using rsync?
      It seemed like the path of least resistance. I don't need to back this stuff up off-site, but RAID 1 at least provides some easy redundancy.