RAID rebuild on reboot?

    • RAID rebuild on reboot?

      I've had a drive on my RAID5 array go bad. I replaced it, formatted the new one (GPT, since it's 3TB), and added it to the array using the webUI. Took a long time to sync, but it was working fine.

      Then I got a SparesMissing event email, so I edited mdadm.conf to set spares = 0, and it was all good.

      But recently, I had to reboot the server, and the array was back to 3 devices with a failed one (removed). So I re-added my new hard disk and waited for the resync to finish (again). This time I didn't get the SparesMissing event email, but now I'm worried... will this happen again on reboot? It takes a long time to re-sync... :/ I'm using OMV 2.1.23 now.
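
      (In case it matters, the shell equivalent of that re-add is, I believe, roughly the following; I'm using /dev/md127 and /dev/sda from my output below, so adjust the device names:)

      # add the replacement disk back into the degraded array
      mdadm /dev/md127 --add /dev/sda
      # watch the resync progress
      cat /proc/mdstat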

      My array (/dev/md127 - sda is the new drive):

      Source Code

      Version : 1.2
      Creation Time : Tue Apr 23 20:41:25 2013
      Raid Level : raid5
      Array Size : 5860538880 (5589.05 GiB 6001.19 GB)
      Used Dev Size : 1953512960 (1863.02 GiB 2000.40 GB)
      Raid Devices : 4
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Tue Jan 5 10:35:31 2016
      State : active
      Active Devices : 4
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Name : omv:raid5device
      UUID : 76ecf19d:6eff237c:48bdc781:f074fd09
      Events : 30140

      Number  Major  Minor  RaidDevice  State
         0      8     16        0       active sync   /dev/sdb
         1      8     48        1       active sync   /dev/sdd
         4      8      0        2       active sync   /dev/sda
         3      8     64        3       active sync   /dev/sde


      My mdadm.conf:

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md/raid5device metadata=1.2 spares=0 name=omv:raid5device UUID=76ecf19d:6eff237c:48bdc781:f074fd09

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR XXXXXXXXXX (redacted)
      MAILFROM root
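
      (A side thought: maybe hand-editing the spares= value isn't the right way to keep the config in sync. A general Debian/mdadm approach I've seen suggested, and which I can't vouch for on OMV specifically, is to regenerate the ARRAY line from the running array and then refresh the copy baked into the initramfs, so the config used at boot isn't stale:)

      # print the ARRAY line exactly as mdadm sees the running array;
      # compare it with /etc/mdadm/mdadm.conf and update the conf to match
      mdadm --detail --scan
      # rebuild the initramfs so its embedded mdadm.conf is refreshed
      update-initramfs -u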


      Any help appreciated!
    • Right forum but we don't know if the array will have to resync on next reboot.
      omv 4.1.15 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Well, I just had to reboot and I got the same thing - mdstat shows a "removed" disk, I re-add the new drive I bought, and it's off to rebuild again! Do I need to "save" the array's configuration somehow? Isn't adding it in the WebUI enough? And I'm pretty sure I'll get the SparesMissing email again when the rebuild ends.

      Please help :/ I'm on OMV 2.1.25
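
      (Or is the idea that OMV should rewrite the config itself? I think OMV 2.x ships an omv-mkconf helper that can do that, but I'm not certain of the exact usage, so treat this as a guess:)

      # regenerate /etc/mdadm/mdadm.conf from OMV's own settings (assuming
      # I understand the helper correctly), then refresh the initramfs copy
      omv-mkconf mdadm
      update-initramfs -u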
    • I have the same issue: 8x4TB disks off an IBM 1015 in IT mode, RAID 6, a fresh array created a month or less ago.

      Every time I reboot, I get a removed disk. It's not always the same disk; the system boots up and shows a disk removed.

      I rebuild, then the same thing happens again. I was on version 2.1, now 2.2 as of this morning.

      Current stats

      /dev/md1:
      Version : 1.2
      Creation Time : Sun Dec 27 16:42:53 2015
      Raid Level : raid6
      Array Size : 23441323008 (22355.39 GiB 24003.91 GB)
      Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
      Raid Devices : 8
      Total Devices : 8
      Persistence : Superblock is persistent

      Update Time : Wed Feb 24 10:25:30 2016
      State : clean
      Active Devices : 8
      Working Devices : 8
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Name : OMV:stor1 (local to host OMV)
      UUID : d793de05:ac9bbfdc:326b3fbb:74949d64
      Events : 507463

      Number Major Minor RaidDevice State
      0 8 96 0 active sync /dev/sdg
      10 8 32 1 active sync /dev/sdc
      2 8 64 2 active sync /dev/sde
      3 8 112 3 active sync /dev/sdh
      5 8 80 4 active sync /dev/sdf
      4 8 48 5 active sync /dev/sdd
      8 8 0 6 active sync /dev/sda
      9 8 16 7 active sync /dev/sdb

      I rebuilt after this last boot. If I start the OS with mdadm off, run an assemble, and add the disk back in, all is well, but the OS keeps overriding my attempts to stop mdadm from autostarting, so mostly it just starts with a missing disk.

      Here are some of the logs from when the disk is kicked:

      Feb 10 13:10:38 OMV kernel: [ 10.629412] md/raid:md1: device sdb operational as raid disk 2
      Feb 10 13:18:01 OMV kernel: [ 8.890599] sd 0:0:18:0: [sdb] physical block alignment offset: 4096
      Feb 10 13:18:01 OMV kernel: [ 8.890602] sd 0:0:18:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
      Feb 10 13:18:01 OMV kernel: [ 8.890604] sd 0:0:18:0: [sdb] 4096-byte physical blocks
      Feb 10 13:18:01 OMV kernel: [ 9.013598] sd 0:0:18:0: [sdb] Write Protect is off
      Feb 10 13:18:01 OMV kernel: [ 9.020049] sd 0:0:18:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
      Feb 10 13:18:01 OMV kernel: [ 9.153070] sdb: unknown partition table
      Feb 10 13:18:01 OMV kernel: [ 9.311346] sd 0:0:18:0: [sdb] Attached SCSI disk
      Feb 10 13:18:01 OMV kernel: [ 9.969457] md: bind<sdb>
      Feb 10 13:18:01 OMV kernel: [ 9.970922] md: kicking non-fresh sdb from array!
      Feb 10 13:18:01 OMV kernel: [ 9.970925] md: unbind<sdb>
      Feb 10 13:18:01 OMV kernel: [ 9.994521] md: export_rdev(sdb)
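
      Once a member is kicked as "non-fresh" like that, my understanding (which may be off) is that its event counter no longer matches the rest of the array, and without a write-intent bitmap the only way back in is a full rebuild, roughly:

      # try a re-add first; without a bitmap this is usually refused
      mdadm /dev/md1 --re-add /dev/sdb
      # if it is refused, add the disk back as a new member (full resync)
      mdadm /dev/md1 --add /dev/sdb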

      This behavior seems to be entirely reproducible.

      For now I am trying to avoid reboots while I figure out how to start OMV without it trying to assemble the array.

      PS: the disks all pass SMART short and long tests and read/write tests, and it seems to pick a different disk to kick each time.
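
      PPS: if it helps anyone compare notes, checking the per-member event counters before rebuilding should show which disk went stale; something like this (my disks are sda-sdh, adjust to taste):

      # the kicked member should report a lower Events value than the rest
      mdadm --examine /dev/sd[a-h] | grep -E '^/dev/sd|Events'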
    • Got the same problem.

      After adding a new hard disk to my RAID 5 everything worked fine, but after the reboot it went missing and I had to resync.

      This is the mail:

      This is an automatically generated mail message from mdadm running on NAS-OMV.

      A SparesMissing event had been detected on md device /dev/md0.

      Faithfully yours, etc.

      P.S. The /proc/mdstat file currently contains the following:

      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda[0] sde[3] sdd[2] sdc[1]
            11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]

      unused devices: <none>


      Please let me know if you need any other information. I'm new to Linux, so I need things explained in a bit more detail.
    • neodata wrote:

      Please let me know if you need any other information. I'm new to Linux, so I need things explained in a bit more detail.

      Degraded or missing raid array questions

      I have no idea why people are losing their array on reboot. After doing lots of research, this isn't an OMV issue or even a Debian issue. It is an mdadm issue. I see report after report on all the distros (Arch, CentOS, Debian, ubuntu, etc).
    • If it is mdadm raid10, you could still potentially have the problem. While I haven't had any issues, this is happening on all raid levels.
    • ryecoaaron wrote:

      Quote from neodata: "Please let me know if you need any other information. I'm new to Linux, so I need things explained in a bit more detail."
      Degraded or missing raid array questions (http://forums.openmediavault.org/index.php/Thread/8631-Degraded-or-missing-raid-array-questions/)

      I have no idea why people are losing their array on reboot. After doing lots of…


      Source Code

      cat /proc/mdstat


      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda[0] sde[3] sdd[2] sdc[1]
      11720540160 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]

      unused devices: <none>


      Source Code

      blkid



      /dev/sdc: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="2aff9e51-39ad-3bf5-9d39-417d5400e7e6" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
      /dev/sdd: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="a3aa903a-b1ed-b4b1-eb90-69c49e3106f4" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
      /dev/sda: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="58952015-7d8d-9eba-5a15-18e897b8003b" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
      /dev/md0: LABEL="Daten" UUID="1fed2e7a-967b-4473-877f-11a947f88b38" TYPE="ext4"
      /dev/sde: UUID="32d3f50c-d25c-4482-7835-bf0b90cd4649" UUID_SUB="ecc83b31-85a5-9802-c0f5-b3482e7c137d" LABEL="NAS-OMV:meinRAID5" TYPE="linux_raid_member"
      /dev/sdf1: UUID="e18308d2-eea9-4a61-aba0-b86b84025eab" TYPE="ext4"
      /dev/sdf5: UUID="473d9cc6-c276-4122-8fcd-b12ad6475a14" TYPE="swap"


      Source Code

      cat /etc/mdadm/mdadm.conf


      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 spares=1 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649

      # instruct the monitoring daemon where to send mail alerts
      MAILADDR ************
      MAILFROM root

      Source Code

      mdadm --detail --scan --verbose


      ARRAY /dev/md0 level=raid5 num-devices=5 metadata=1.2 name=NAS-OMV:meinRAID5 UUID=32d3f50c:d25c4482:7835bf0b:90cd4649
      devices=/dev/sda,/dev/sdc,/dev/sdd,/dev/sde

      This is with the missing hard drive. I'll let it resync over night and post again if needed.
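
      (I guess the next step is to look at what the kicked disk's own superblock says; as far as I understand it, that would be roughly:)

      # inspect the md superblock on the missing disk to see whether it still
      # carries the array UUID and a stale event count
      mdadm --examine /dev/sdb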


    • No output of fdisk -l. I don't see any sign of a fifth drive. Did it fail?
    • Ah sorry, I forgot to copy it, my mistake :> sdb is the missing hard drive; it seems it has an error. Am I right?

      Source Code

      fdisk -l


      Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sde doesn't contain a valid partition table

      Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table

      Disk /dev/sdf: 32.0 GB, 32017047552 bytes
      255 heads, 63 sectors/track, 3892 cylinders, total 62533296 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00079103

      Device Boot Start End Blocks Id System
      /dev/sdf1 * 2048 59895807 29946880 83 Linux
      /dev/sdf2 59897854 62531583 1316865 5 Extended
      /dev/sdf5 59897856 62531583 1316864 82 Linux swap / Solaris

      WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
      256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0086110b

      Device Boot Start End Blocks Id System
      /dev/sdb1 1 4294967295 2147483647+ ee GPT
      Partition 1 does not start on physical sector boundary.

      Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda doesn't contain a valid partition table

      Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
      255 heads, 63 sectors/track, 364801 cylinders, total 5860533168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/md0: 12001.8 GB, 12001833123840 bytes
      2 heads, 4 sectors/track, -1364832256 cylinders, total 23441080320 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 2097152 bytes
      Disk identifier: 0x00000000
    • Try (as root):

      mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcde]
    • There is something wrong with /dev/sdb. You could try wiping it and then retrying the previous commands to rebuild again.
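
      A sketch of what that could look like, assuming /dev/sdb really is the bad member; double-check the device name first, since this destroys whatever is on the disk:

      # clear any stale md superblock (ignore the error if none is found)
      # and wipe old partition-table/filesystem signatures
      mdadm --zero-superblock /dev/sdb
      wipefs -a /dev/sdb
      # add the disk back and let the array rebuild
      mdadm /dev/md0 --add /dev/sdb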