Hi everybody,
I have a problem with my RAID 5 array after a reboot.
One drive was listed as removed.
I tried to re-add the missing drive, which has left the array in the state shown below.
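For reference, the add command I used was along these lines (typed from memory, so the exact device name may not be accurate):
Code
root@nas-kriwi:~# mdadm /dev/md0 --add /dev/sdc   # device name from memory, may have been different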
cat /proc/mdstat shows:
Code
root@nas-kriwi:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdb[0] sdd[2] sdc[3]
11720662536 blocks super 1.2
unused devices: <none>
mdadm -D /dev/md0 shows:
Code
root@nas-kriwi:~# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Mar 20 12:14:12 2015
Raid Level : raid5
Used Dev Size : -1
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sun Oct 9 18:42:08 2016
State : active, degraded, Not Started
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Name : nas-kriwi:Daten (local to host nas-kriwi)
UUID : 37e3f043:a8132bf9:55122d62:11073655
Events : 2174
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
3 8 32 1 spare rebuilding /dev/sdc
2 8 48 2 active sync /dev/sdd
Here is some additional information:
blkid:
Code
root@nas-kriwi:~# blkid
/dev/sda1: UUID="526f5cda-3ac1-4914-ac2a-5c623afe7cff" TYPE="ext4"
/dev/sda5: UUID="ff79c131-a4b2-44b1-a758-a12f4c96cf52" TYPE="swap"
/dev/sdb: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="7d1ca78e-7d4a-3ad6-dbf6-708f888de353" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
/dev/sdd: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="4e35f91f-5aee-5dcf-061c-f59b1f08b2dd" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
/dev/sdc: UUID="37e3f043-a813-2bf9-5512-2d6211073655" UUID_SUB="0a575562-0a7b-250a-716c-b9eabdc98f53" LABEL="nas-kriwi:Daten" TYPE="linux_raid_member"
fdisk -l:
Code
root@nas-kriwi:~# fdisk -l
Disk /dev/sda: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders, total 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x6315bd60
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 112383999 56190976 83 Linux
/dev/sda2 112386046 117229567 2421761 5 Extended
/dev/sda5 112386048 117229567 2421760 82 Linux swap / Solaris
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdb doesn't contain a valid partition table
Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Disk /dev/sdd doesn't contain a valid partition table
cat /etc/mdadm/mdadm.conf:
Code
root@nas-kriwi:~# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=nas-kriwi:Daten UUID=37e3f043:a8132bf9:55122d62:11073655
# instruct the monitoring daemon where to send mail alerts
MAILADDR xxx@xxx
MAILFROM root
ARRAY /dev/md/Daten metadata=1.2 UUID=37e3f043:a8132bf9:55122d62:11073655 name=nas-kriwi:Daten
mdadm --detail --scan --verbose:
Code
root@nas-kriwi:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 spares=1 name=nas-kriwi:Daten UUID=37e3f043:a8132bf9:55122d62:11073655
devices=/dev/sdb,/dev/sdc,/dev/sdd
What does this state mean, and what is my NAS doing? The RAID is no longer listed in OMV, I can't see any process rebuilding the array, and the server is idle...
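Would it be safe to stop the inactive array and try a fresh assemble, along the lines of the following (just my guess, I have not run anything yet because I don't want to make things worse), or could that put the data at risk?
Code
# stop the inactive array, then let mdadm re-assemble it from the superblocks
mdadm --stop /dev/md0
mdadm --assemble --scan --verbose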
Thanks in advance for your help!!
Best regards,
Marco