[SOLVED] RAID vanishes after reboot

    • OMV 1.0
    • Resolved


    • Hi everyone,
      I'm starting a new thread because I haven't found a solution yet. I have a NAS with OMV installed on a USB drive plus two 1TB hard drives, which I want to use as a backup target. The problem: once I create the RAID mirror from the two drives, mount it, and share it on the LAN, it disappears after I reboot the NAS or after a few days of inactivity, and I always have to re-create the mirror from scratch. I have another NAS, same OMV version and same purpose, and it shows no such behaviour.

      Searching the forum for a solution, I found this: [RESOLVED] RAID on LVM logical vols. not avail. after reboot, but I don't know whether it is the same problem as mine.

      Thank you all!!
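
      For reference, the usual cause of an md array vanishing at reboot is that its ARRAY line never made it into /etc/mdadm/mdadm.conf and the initramfs; a minimal check-and-fix sketch, assuming the array is /dev/md0:

      cat /proc/mdstat                                 # arrays the kernel currently knows about
      mdadm --detail --scan                            # ARRAY line(s) for every assembled array
      # if the ARRAY line is missing from the config, append it and rebuild
      # the initramfs so the array is assembled at boot:
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u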
    • Having the same problem here, using pure software RAID.


      Fresh build, new machine, three-drive RAID5. Got everything working and started transferring content.


      I was going to add a USB drive, so I shut down (was looking for a cable, etc., then gave up). Rebooted: no RAID. SMART polling shows no issues. It looks like the drives just forgot they were ever part of an array.

      On a plane right now or I would post the cat results.

      BTW, I tried hardware RAID last weekend and after a reboot OMV corrupted something, so I'm trying software RAID now.
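
      For anyone following along, these are the dumps that usually tell the story here (a sketch; the device names are examples):

      cat /proc/mdstat               # what the kernel assembled, if anything
      blkid                          # do the members still carry linux_raid_member signatures?
      cat /etc/mdadm/mdadm.conf      # is there an ARRAY line for the array?
      mdadm --examine /dev/sd[bcd]   # per-device superblock state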
    • ryecoaaron wrote:

      Are the drives connected via USB?



      Negative. They are SATA, connected directly to the motherboard.

      I kicked off the resync this morning and it is moving s...l...o...w... :) 3.4% done and only 400 minutes to go :o
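
      The resync can be watched, and its floor speed raised if the box is otherwise idle (a sketch; the value is an example, not a recommendation):

      watch -n5 cat /proc/mdstat                  # live rebuild progress
      sysctl -w dev.raid.speed_limit_min=50000    # minimum rebuild speed, KB/s per device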

      For Subzero:


      Fri Dec 5 07:41:05 2014: Setting parameters of disc: (none).
      Fri Dec 5 07:41:05 2014: Activating swap...done.
      Fri Dec 5 07:41:05 2014: Checking root file system...fsck from util-linux 2.20.1
      Fri Dec 5 07:41:05 2014: /dev/sda1: clean, 36530/29507584 files, 2233851/118008320 blocks
      Fri Dec 5 07:41:05 2014: done.
      Fri Dec 5 07:41:05 2014: Cleaning up temporary files... /tmp.
      Fri Dec 5 07:41:05 2014: Loading kernel module loop.
      Fri Dec 5 07:41:05 2014: Generating udev events for MD arrays...done.
      Fri Dec 5 07:41:05 2014: Setting up LVM Volume Groups... No volume groups found
      Fri Dec 5 07:41:05 2014: No volume groups found
      Fri Dec 5 07:41:05 2014: done.
      Fri Dec 5 07:41:06 2014: Activating lvm and md swap...done.
      Fri Dec 5 07:41:06 2014: Checking file systems...fsck from util-linux 2.20.1
      Fri Dec 5 07:41:06 2014: done.
      Fri Dec 5 07:41:06 2014: Mounting local filesystems...done.
      Fri Dec 5 07:41:06 2014: Activating swapfile swap...done.
      Fri Dec 5 07:41:07 2014: Cleaning up temporary files....
      Fri Dec 5 07:41:07 2014: Cleaning up...done.
      Fri Dec 5 07:41:07 2014: Setting kernel variables ...done.
      Fri Dec 5 07:41:07 2014: Setting up resolvconf...done.
      Fri Dec 5 07:41:08 2014: Configuring network interfaces...RTNETLINK answers: File exists
      Fri Dec 5 07:41:09 2014: Failed to bring up lo.
      Fri Dec 5 07:41:10 2014: done.
      Fri Dec 5 07:41:10 2014: Starting rpcbind daemon....
      Fri Dec 5 07:41:10 2014: Starting NFS common utilities: statd idmapd.
      Fri Dec 5 07:41:10 2014: Cleaning up temporary files....
      Fri Dec 5 07:41:10 2014: INIT: Entering runlevel: 2
      Fri Dec 5 07:41:10 2014: Using makefile-style concurrent boot in runlevel 2.
      Fri Dec 5 07:41:10 2014: Starting rpcbind daemon...Already running..
      Fri Dec 5 07:41:10 2014: Starting NFS common utilities: statd idmapd.
      Fri Dec 5 07:41:11 2014: ERROR: could not insert 'softdog': Device or resource busy
      Fri Dec 5 07:41:11 2014: Starting watchdog keepalive daemon: wd_keepalive.
      Fri Dec 5 07:41:11 2014: Starting enhanced syslogd: rsyslogd.
      Fri Dec 5 07:41:12 2014: Exporting directories for NFS kernel daemon....
      Fri Dec 5 07:41:12 2014: Starting NFS kernel daemon: nfsd mountd.
      Fri Dec 5 07:41:12 2014: Starting bittorrent daemon: transmission-daemon.
      Fri Dec 5 07:41:12 2014: Starting RRDtool data caching daemon: rrdcached.
      Fri Dec 5 07:41:12 2014: Loading ACPI kernel modules....
      Fri Dec 5 07:41:12 2014: Starting ACPI services....
      Fri Dec 5 07:41:13 2014: Starting anac(h)ronistic cron: anacron.
      Fri Dec 5 07:41:13 2014: Starting MD monitoring service: mdadm --monitor.
      Fri Dec 5 07:41:14 2014: Starting statistics collection and monitoring daemon: collectd.
      Fri Dec 5 07:41:14 2014: Starting periodic command scheduler: cron.
      Fri Dec 5 07:41:14 2014: Starting NTP server: ntpd.
      Fri Dec 5 07:41:14 2014: Starting system message bus: dbus.
      Fri Dec 5 07:41:14 2014: Starting nginx: nginx.
      Fri Dec 5 07:41:14 2014: Starting Samba daemons: nmbd smbd.
      Fri Dec 5 07:41:16 2014: Starting Avahi mDNS/DNS-SD Daemon: avahi-daemon.
      Fri Dec 5 07:41:17 2014: Starting quota service: rpc.rquotad.
      Fri Dec 5 07:41:17 2014: Starting OpenMediaVault engine daemon: omv-engined.
      Fri Dec 5 07:41:18 2014: Starting Plex Media Server: done
      Fri Dec 5 07:41:20 2014: Starting S.M.A.R.T. daemon: smartd.
      Fri Dec 5 07:41:20 2014: Starting Postfix Mail Transport Agent: postfix.




    • blkid
      root@Caligula:~# blkid
      /dev/sda1: UUID="736ef0ad-5fcc-485b-8436-da7f151e50d4" TYPE="ext4"
      /dev/sda5: UUID="697154cd-8ebb-4fb5-954d-884411c030ea" TYPE="swap"
      /dev/sdd: UUID="ac5fda77-e2b0-9bad-58c5-cb4a08de2ab8" UUID_SUB="3ef60df6-5226-258b-f17e-d34e761b41ba" LABEL="Caligula:raiddevice" TYPE="linux_raid_member"
      /dev/sdb: UUID="ac5fda77-e2b0-9bad-58c5-cb4a08de2ab8" UUID_SUB="238d55c0-3770-15cb-b0ad-3ee8358fb94b" LABEL="Caligula:raiddevice" TYPE="linux_raid_member"
      /dev/sdc: UUID="ac5fda77-e2b0-9bad-58c5-cb4a08de2ab8" UUID_SUB="e109ef7e-2761-a2fa-1b48-8e16436c8aa7" LABEL="Caligula:raiddevice" TYPE="linux_raid_member"
      /dev/md0: LABEL="data" UUID="40a9ff9c-baa8-4c17-a633-a5c48952b701" TYPE="ext4"
      root@Caligula:~#


      lsmod | grep raid
      root@Caligula:~# lsmod | grep raid
      raid456 48453 1
      async_raid6_recov 12574 1 raid456
      async_memcpy 12387 2 async_raid6_recov,raid456
      async_pq 12605 2 async_raid6_recov,raid456
      raid6_pq 82624 2 async_pq,async_raid6_recov
      async_xor 12422 3 async_pq,async_raid6_recov,raid456
      async_tx 12604 5 async_xor,async_pq,async_memcpy,async_raid6_recov,raid456
      md_mod 87742 2 raid456
      root@Caligula:~#
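
      Two more views of the same array that complement blkid (a sketch, assuming the assembled device is /dev/md0 as shown above):

      cat /proc/mdstat          # kernel view: members, level, sync progress
      mdadm --detail /dev/md0   # array view: state, degraded or not, device roles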
      I see three devices there that are members of the same array. Your first statement said only two devices were used to create (my guess) a mirror.

      Can you confirm that, and say which of the three devices (sdb, sdc, sdd) you used to create the array?
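
      A quick way to answer that from the superblocks themselves (device names taken from the blkid output above):

      mdadm --examine /dev/sdb /dev/sdc /dev/sdd
      # compare "Raid Level", "Raid Devices" and "Device Role" in each dump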
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
    • /etc/mdadm/mdadm.conf and /etc/default/mdadm:

      root@Caligula:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=Caligula:raiddevice UUID=ac5fda77:e2b09bad:58c5cb4a:08de2ab8

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR XXX@gmail.com
      MAILFROM root

      root@Caligula:~# cat /etc/default/mdadm

      # INITRDSTART:
      # list of arrays (or 'all') to start automatically when the initial ramdisk
      # loads. This list *must* include the array holding your root filesystem. Use
      # 'none' to prevent any array from being started from the initial ramdisk.
      #INITRDSTART='none'

      # AUTOSTART:
      # should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically
      # during boot?
      AUTOSTART=true

      # AUTOCHECK:
      # should mdadm run periodic redundancy checks over your arrays? See
      # /etc/cron.d/mdadm.
      AUTOCHECK=true

      # START_DAEMON:
      # should mdadm start the MD monitoring daemon during boot?
      START_DAEMON=true

      # DAEMON_OPTIONS:
      # additional options to pass to the daemon.
      DAEMON_OPTIONS="--syslog"

      # VERBOSE:
      # if this variable is set to true, mdadm will be a little more verbose e.g.
      # when creating the initramfs.
      VERBOSE=false
      root@Caligula:~#


      This is the output. There is an AUTOSTART entry, and the ARRAY line is present; meanwhile the resync is now at 17%:
      ARRAY /dev/md0 metadata=1.2 name=Caligula:raiddevice UUID=ac5fda77:e2b09bad:58c5cb4a:08de2ab8
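
      It is worth cross-checking that this ARRAY line matches what a live scan reports, and letting OMV regenerate the file through its own tooling (a sketch; omv-mkconf is, as far as I recall, the config generator shipped with OMV 1.x):

      mdadm --detail --scan                  # live view of the assembled array
      grep ^ARRAY /etc/mdadm/mdadm.conf      # what will be assembled at boot
      omv-mkconf mdadm                       # regenerate mdadm.conf via OMV (assumption)
      update-initramfs -u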


      Off to work - will be back in a few hours.


    • Hi all.

      I think I have the same issue. For reference, I had four drives in a RAID6 array and added a fifth; it was about halfway through the reshape when the server froze for whatever reason, so I had to do a hard reboot. Ever since then the RAID array doesn't show up, although it is still present in all the configs (see the assemble sketch after the dumps below).

      blkid

      root@DAMONSTER:~# blkid
      /dev/sda1: UUID="e5477dd0-ed1e-49a3-9705-cee63a271343" TYPE="ext4"
      /dev/sda5: UUID="ba0e7955-8967-4430-8341-d70262c209cf" TYPE="swap"
      /dev/sdb: UUID="daa561f3-03dd-1a2a-0dce-1874ecf7b928" UUID_SUB="1d11c96e-610c-fa41-d3b1-9273e355deb4" LABEL="DAMONSTSER:RAID6" TYPE="linux_raid_member"
      /dev/sdc: UUID="daa561f3-03dd-1a2a-0dce-1874ecf7b928" UUID_SUB="3dd12876-70ab-456a-4553-2048d8d0b4e0" LABEL="DAMONSTSER:RAID6" TYPE="linux_raid_member"
      /dev/sde: UUID="daa561f3-03dd-1a2a-0dce-1874ecf7b928" UUID_SUB="59e40195-e490-6c01-0090-c072e61ad717" LABEL="DAMONSTSER:RAID6" TYPE="linux_raid_member"
      /dev/sdf: UUID="daa561f3-03dd-1a2a-0dce-1874ecf7b928" UUID_SUB="3e2c8010-65fb-789e-b722-89f9fbdda2b0" LABEL="DAMONSTSER:RAID6" TYPE="linux_raid_member"
      /dev/sdg: UUID="daa561f3-03dd-1a2a-0dce-1874ecf7b928" UUID_SUB="3a91410e-bf2e-b40f-7177-5bc7be998ab6" LABEL="DAMONSTSER:RAID6" TYPE="linux_raid_member"
      root@DAMONSTER:~#


      lsmod | grep raid
      root@DAMONSTER:~# lsmod | grep raid
      raid456 48453 0
      async_raid6_recov 12574 1 raid456
      async_memcpy 12387 2 async_raid6_recov,raid456
      async_pq 12605 2 async_raid6_recov,raid456
      async_xor 12422 3 async_pq,async_raid6_recov,raid456
      async_tx 12604 5 async_xor,async_pq,async_memcpy,async_raid6_recov,raid456
      raid6_pq 82624 2 async_pq,async_raid6_recov
      md_mod 87742 1 raid456
      raid_class 12832 1 mpt2sas
      scsi_mod 162321 8 scsi_transport_sas,libata,libsas,mvsas,raid_class,mpt2sas,sd_mod,sg
      root@DAMONSTER:~#


      /etc/mdadm/mdadm.conf

      root@DAMONSTER:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=DAMONSTSER:RAID6 UUID=daa561f3:03dd1a2a:0dce1874:ecf7b928

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR ...@gmail.com
      MAILFROM root
      root@DAMONSTER:~#


      /etc/default/mdadm

      root@DAMONSTER:~# cat /etc/default/mdadm
      # INITRDSTART:
      # list of arrays (or 'all') to start automatically when the initial ramdisk
      # loads. This list *must* include the array holding your root filesystem. Use
      # 'none' to prevent any array from being started from the initial ramdisk.
      #INITRDSTART='none'

      # AUTOSTART:
      # should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically
      # during boot?
      AUTOSTART=true

      # AUTOCHECK:
      # should mdadm run periodic redundancy checks over your arrays? See
      # /etc/cron.d/mdadm.
      AUTOCHECK=true

      # START_DAEMON:
      # should mdadm start the MD monitoring daemon during boot?
      START_DAEMON=true

      # DAEMON_OPTIONS:
      # additional options to pass to the daemon.
      DAEMON_OPTIONS="--syslog"

      # VERBOSE:
      # if this variable is set to true, mdadm will be a little more verbose e.g.
      # when creating the initramfs.
      VERBOSE=false
      root@DAMONSTER:~#
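
      Given the interrupted grow, a cautious first step is to let mdadm attempt the assembly verbosely and read what it complains about before forcing anything; the reshape position lives in the member superblocks (a sketch; the device list matches the blkid output above):

      mdadm --assemble --scan --verbose          # try auto-assembly, with reasons on failure
      mdadm --assemble /dev/md0 /dev/sd[bcefg]   # or name the members explicitly
      mdadm --examine /dev/sd[bcefg] | grep -iE 'reshape|state|events'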
      DAMONSTER - OMV 1.8 - 42TB RAID6
      XEON 1270 v3 - 16GB SAMSUNG ECC - X10SL7-F - ANTEC 1200 - HIGHPOINT 2720 - HIGHPOINT 640L - CORSAIR RM750 -


    • Plain LVM user here; I'd had the VG for about a year without changes and without issues. The root mountpoint has always been 2775.

      Tried OMV last night and it seemed to work fine. Looked today and found that only some of the users group could write, and only some could read. Checked and double-checked everything, set 777 on everything on the LVM, set the owner to root:users (as suggested several times in other threads), and so on.

      After many hours, root (or any other standard Linux user) can ls the LVM and see all contents including permissions, but nothing installed by OpenMediaVault can.
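
      When plain users can read but OMV's services cannot, the effective modes and any ACLs layered on top are worth dumping (a sketch; the mount path is an example, OMV mounts filesystems under /media/<uuid>):

      ls -ld /media/<uuid>                            # mode, owner, group on the mountpoint
      getfacl -R --skip-base /media/<uuid> | head     # any ACLs beyond the base entries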

      Good luck to everyone else!