I set up OMV in a VM on ESXi a couple of weeks ago, and the RAID array state is now showing "clean, degraded". I'm using 4 x 8TB WD80EFZX 5400RPM drives in RAID10. I'm not sure what happened to cause this; I only finished copying all the data onto it a couple of days ago, and about 3TB of the 14.4TB is in use. Here's the output of the commands requested in the pinned post. If you need any more information, please let me know. The SMART status of all the drives is showing green/good.
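(I checked SMART through the OMV web UI. If it helps, I can also post the per-drive command-line output; I believe the quick health check would be something like the following, assuming the members show up as /dev/sdb through /dev/sde inside the VM, as blkid below suggests:
smartctl -H /dev/sdb
smartctl -H /dev/sdc
smartctl -H /dev/sdd
smartctl -H /dev/sde
Just say the word and I'll attach the full smartctl -a output for each one.)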
cat /proc/mdstat
Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
md0 : active raid10 sde[3] sdc[1]
15627790336 blocks super 1.2 512K chunks 2 near-copies [4/2] [_U_U]
bitmap: 117/117 pages [468KB], 65536KB chunk
blkid
/dev/sda1: UUID="5fd7c9d7-d9b4-4c03-ba51-7017ae8018fa" TYPE="ext4" PARTUUID="9856ddaf-01"
/dev/sda5: UUID="1671fa99-4244-4fab-9cad-975e63b1b012" TYPE="swap" PARTUUID="9856ddaf-05"
/dev/sdc: UUID="d856f092-8499-7949-b3f9-705e26a12002" UUID_SUB="3e5f75fb-76e5-6ebb-bd0c-77473af3ad0f" LABEL="acmomv:acmraid" TYPE="linux_raid_member"
/dev/sdb: UUID="d856f092-8499-7949-b3f9-705e26a12002" UUID_SUB="56d31a5b-4c6d-258d-b997-5e807d463250" LABEL="acmomv:acmraid" TYPE="linux_raid_member"
/dev/sdd: UUID="d856f092-8499-7949-b3f9-705e26a12002" UUID_SUB="944b81c7-e25c-988d-971f-2d1f5a3cc058" LABEL="acmomv:acmraid" TYPE="linux_raid_member"
/dev/md0: LABEL="acmraid" UUID="3301097f-2458-4e36-94e5-e633cd21dfcc" TYPE="ext4"
/dev/sde: UUID="d856f092-8499-7949-b3f9-705e26a12002" UUID_SUB="6158fc62-6e62-d339-ecb2-ca05b94822b9" LABEL="acmomv:acmraid" TYPE="linux_raid_member"
fdisk -l | grep "Disk "
Disk /dev/sda: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk identifier: 0x9856ddaf
Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk /dev/md0: 14.6 TiB, 16002857304064 bytes, 31255580672 sectors
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=acmomv:acmraid UUID=d856f092:84997949:b3f9705e:26a12002
# instruct the monitoring daemon where to send mail alerts
MAILADDR example@domain.com
mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=acmomv:acmraid UUID=d856f092:84997949:b3f9705e:26a12002
devices=/dev/sdc,/dev/sde
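I can also post the superblock details for each member disk if that would help, e.g. something like (assuming sdb through sde are the four array members, as blkid above suggests):
mdadm --examine /dev/sd[bcde]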
I also noticed that when I click Details in the RAID Management section, I see the following. What does "removed" mean?
Number Major Minor RaidDevice State
- 0 0 0 removed
1 8 32 1 active sync set-B /dev/sdc
- 0 0 2 removed
3 8 64 3 active sync set-B /dev/sde
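From what I've read so far, I'm guessing the two missing disks (presumably /dev/sdb and /dev/sdd, since blkid still shows them as linux_raid_member) might need to be added back with something along the lines of:
mdadm /dev/md0 --re-add /dev/sdb
mdadm /dev/md0 --re-add /dev/sdd
or, if re-add is refused, --add so they rebuild from scratch. But I don't want to guess on something like this, so I'd rather ask here first before touching anything.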
I also notice that my read/write speeds are nothing to rave about: roughly 67 MB/s write and 95 MB/s read on average. This being the first time I've ever set up RAID, I don't know what to expect. Any help is appreciated.
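If it's useful, I can also run a quick local test directly on the box and post the numbers, something like this (the path is a guess based on OMV's usual /srv/dev-disk-by-label-* mount points, so it may differ on my setup):
dd if=/dev/zero of=/srv/dev-disk-by-label-acmraid/testfile bs=1M count=4096 oflag=direct
dd if=/srv/dev-disk-by-label-acmraid/testfile of=/dev/null bs=1M iflag=direct
That should at least separate the array's own performance from anything network-related.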