So this afternoon I got emails from OMV (which I did not see until half an hour ago) stating that a drive had failed, then another email about another drive, then another email about another drive... all told, emails identifying six drive failures. The email just before those was one from OMV telling me that my resource limit had been exceeded.
Anyhow, fast forward to this evening. I saw the emails and logged on via the OMV web interface and ssh. OMV shows RAID md0 as "Clean, FAILED", and the volume no longer shows up in the volume management area.
Over ssh, /dev/md0 was still mounted on the filesystem and I could see some directories and files. Everything seemed fine, but I didn't want anything writing to the filesystem at that point, so I rebooted, expecting things to clean themselves up, since there is a very low likelihood that all the drives failed at once (and /dev/md0 was accessible without problems). I don't have any hot or cold spares.
After the reboot the RAID isn't starting (not surprising), but I am now left with this:
root@CHOMEOMV:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sun Sep 20 00:13:11 2015
     Raid Level : raid6
  Used Dev Size : -1
   Raid Devices : 6
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Oct 29 14:42:47 2016
          State : active, FAILED, Not Started
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : CHOMEOMV:Storage001  (local to host CHOMEOMV)
           UUID : 7ff2875d:f8466166:ed83b1d8:5d486d37
         Events : 1853

    Number   Major   Minor   RaidDevice State
       6       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       2       8       64        2      active sync   /dev/sde
       3       0        0        3      removed
       4       0        0        4      removed
       5       0        0        5      removed
No drives are reporting any kind of hardware errors, and they all show up fine in the Intel RAID BIOS screen.
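(If SMART data would help, I can pull that too. I haven't run this yet; I assume something along these lines is the way to do it with smartmontools, with the six data disks being /dev/sdc through /dev/sdh:)

```shell
# Hypothetical SMART check for each array member
# (assumes the smartmontools package is installed and the
#  data disks are /dev/sdc..sdh as in the blkid output)
for d in /dev/sd[c-h]; do
    echo "=== $d ==="
    smartctl -H "$d"    # overall health self-assessment
done
```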
Here is the other information:
root@CHOMEOMV:/etc/mdadm# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : inactive sdf[3](S) sdh[5](S) sdg[4](S)
11717412864 blocks super 1.2
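Before touching anything, I'm also planning to compare the event counters and roles on all six members to see how far out of sync the dropped drives are. From what I've read this is the usual first step, but correct me if that's the wrong approach:

```shell
# Compare event counters and device roles across all members.
# Assumes the six data disks are still /dev/sdc through /dev/sdh,
# as shown in the blkid output below.
for d in /dev/sd[c-h]; do
    echo "=== $d ==="
    mdadm --examine "$d" | grep -E 'Events|Device Role|Array State'
done
```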
root@CHOMEOMV:/etc/mdadm# blkid
/dev/sdc: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="caf3e2d2-1fdf-5748-34ab-d7b1ebe203dc" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
/dev/sdd: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="b42f435f-a0f1-4cf5-8231-381f0dfae572" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
/dev/sde: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="24939b24-ab9b-e0de-9cd8-a988a84beb17" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
/dev/sdf: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="89d4d596-ddf9-369e-ddd6-d65ecc50e2e4" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
/dev/sdb1: UUID="7cab3464-94c9-4a02-bceb-85ec8aa9b523" TYPE="ext4"
/dev/sdb5: UUID="2b207325-2a6e-4608-9f90-1356de34208c" TYPE="swap"
/dev/sda1: UUID="7cab3464-94c9-4a02-bceb-85ec8aa9b523" TYPE="ext4"
/dev/sda5: UUID="2b207325-2a6e-4608-9f90-1356de34208c" TYPE="swap"
/dev/sdg: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="27192ce4-adbb-5ca3-8af6-fe8821b9189c" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
/dev/sdh: UUID="7ff2875d-f846-6166-ed83-b1d85d486d37" UUID_SUB="3398896a-a781-4ef3-4ed0-c3e7f81730cc" LABEL="CHOMEOMV:Storage001" TYPE="linux_raid_member"
root@CHOMEOMV:/etc/mdadm# fdisk -l
Disk /dev/sdb: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008b16b
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 299839487 149918720 83 Linux
/dev/sdb2 299841534 312580095 6369281 5 Extended
/dev/sdb5 299841536 312580095 6369280 82 Linux swap / Solaris
Disk /dev/sda: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0008b16b
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 299839487 149918720 83 Linux
/dev/sda2 299841534 312580095 6369281 5 Extended
/dev/sda5 299841536 312580095 6369280 82 Linux swap / Solaris
Disk /dev/sdc: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/sdf: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 3999.7 GB, 3999677808640 bytes
255 heads, 63 sectors/track, 486266 cylinders, total 7811870720 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/sdh doesn't contain a valid partition table
root@CHOMEOMV:/etc/mdadm# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md127 metadata=1.2 name=CHOMEOMV:Storage001 UUID=7ff2875d:f8466166:ed83b1d8:5d486d37
# instruct the monitoring daemon where to send mail alerts
The question is: where do I go from here? How can I re-add the drives in a way that tells the RAID they are the original drives, all good? I can't add them in a way that triggers a rebuild, because I don't have enough drives for a rebuild.
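From the articles I've found, it sounds like the answer might be a forced assemble, roughly like the following. I have NOT run this yet and would really like a sanity check first; the device names are my assumption based on the blkid output above:

```shell
# Stop the half-assembled array that grabbed three of the disks as spares
mdadm --stop /dev/md127

# Force-assemble with all six original members; --force tells mdadm to
# accept members whose event counters are slightly out of date
mdadm --assemble --force /dev/md0 /dev/sd[c-h]

# Check the result before mounting anything
cat /proc/mdstat
mdadm --detail /dev/md0
```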
Help is appreciated.
Cheers
Update #1
I also did this based on an article I found somewhere, and I think this is also a good sign that I could assemble the RAID again? I just have no experience with this type of recovery, so I need some advice before proceeding: