I have a two-bay hard drive dock that disconnects every drive in it whenever a new drive is inserted, and one of my RAID drives was in this dock. I added a new Seagate IronWolf 4TB drive to the dock and began copying files from my RAID array to it, without noticing that inserting the drive had degraded the array. I had moved about 20GB of files from the array to the new 4TB drive before I noticed the array was degraded. I immediately stopped the transfer and rebooted my server, but that did not fix anything. How can I repair this degraded array? Thanks in advance for taking the time to help me out.
One of my hard drives is reported as removed even though it is still physically connected, and it is detected in the "Physical Disks" tab of the webGUI. This leads me to believe I need to re-add it to the array; the problem is I don't actually know how to do this. Rather than experimenting with 2TB of my files, I have come here to ask for help. Thanks.
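From reading other threads, I gather the fix is roughly the following. This is only a sketch of what I think I should run, not something I have tried yet; it assumes the dropped member is /dev/sda, since in the blkid output further down it carries the array UUID but is missing from /proc/mdstat. Please correct me if this is wrong:

```shell
# Sketch only, NOT yet run. Assumes /dev/sda is the dropped member
# (it has the array's linux_raid_member UUID in blkid but is absent
# from /proc/mdstat). All commands as root.

# Compare event counters to see how far behind the dropped disk is:
mdadm --examine /dev/sda | grep -i events
mdadm --detail /dev/md0  | grep -i events

# Try a re-add first (fast if the array has a write-intent bitmap):
mdadm /dev/md0 --re-add /dev/sda

# If the re-add is refused, do a full add, which triggers a complete
# rebuild onto that disk:
mdadm /dev/md0 --add /dev/sda

# Monitor the rebuild:
cat /proc/mdstat
```

Is this the right approach, or is there something I should check first before writing to the disk?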
My RAID 10 array is as follows:
4 Seagate Barracuda drives of 1TB each.
One of these drives is connected via SATA; the other three are connected via USB through two-bay 3.5-inch external hard drive docks.
The webGUI outputs the following:
Version : 1.2
Creation Time : Tue May 16 20:37:37 2017
Raid Level : raid10
Array Size : 1953262592 (1862.78 GiB 2000.14 GB)
Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Tue May 23 19:48:14 2017
State : clean, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : screamserver:ScreamRaid (local to host screamserver)
UUID : ddaf6947:8c3f9552:e1ec6bbc:4be83769
Events : 493
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
2 0 0 2 removed
3 8 64 3 active sync /dev/sde
Output of commands from RAID help sticky thread:
cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdc[0] sde[3] sdd[1]
1953262592 blocks super 1.2 512K chunks 2 near-copies [4/3] [UU_U]
unused devices: <none>
mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=screamserver:ScreamRaid UUID=ddaf6947:8c3f9552:e1ec6bbc:4be83769
devices=/dev/sdc,/dev/sdd,/dev/sde
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=screamserver:ScreamRaid UUID=ddaf6947:8c3f9552:e1ec6bbc:4be83769
fdisk -l | grep "Disk "
Disk /dev/sda doesn't contain a valid partition table
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde doesn't contain a valid partition table
Disk /dev/md0 doesn't contain a valid partition table
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
Disk identifier: 0x00000000
Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 120.0 GB, 120034123776 bytes
Disk identifier: 0x00099f0a
Disk /dev/md0: 2000.1 GB, 2000140894208 bytes
Disk identifier: 0x00000000
blkid
/dev/sdc: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="9a520635-d41c-8a42-4f2a-85129f667b55" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
/dev/sdb1: UUID="4086908c-fcf7-467d-923a-867222729129" TYPE="ext4" LABEL="Ironwolf"
/dev/sda: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="f1901490-fb9c-2847-5d14-b10584fda9d9" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
/dev/sde: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="81e429b8-f7bd-7e4c-fbaa-ae82daa59b09" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
/dev/sdd: UUID="ddaf6947-8c3f-9552-e1ec-6bbc4be83769" UUID_SUB="72d5b57e-9acb-115e-f890-92c69155af79" LABEL="screamserver:ScreamRaid" TYPE="linux_raid_member"
/dev/sdf1: UUID="6ce9b17c-b25e-4ead-bb1a-96ed4982cf5f" TYPE="ext4"
/dev/sdf5: UUID="f898a3d3-b314-4b29-a0d7-9e2fbea99ccc" TYPE="swap"
/dev/md0: LABEL="ScreamDrive" UUID="f33e8fe4-1951-4061-90e6-f3241fe7401d" TYPE="ext4"
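If I am reading these outputs right, the member that dropped out is /dev/sda: blkid tags it as a linux_raid_member with the array's UUID, but it does not appear in /proc/mdstat. As a sanity check I diffed the two lists (the relevant lines are reproduced inline from the outputs above, so the snippet is self-contained):

```shell
# Every device blkid tags as a linux_raid_member, minus the devices
# /proc/mdstat lists as active in md0, should leave exactly the
# dropped member. Data reproduced from the pasted outputs above.
blkid_members='/dev/sdc: TYPE="linux_raid_member"
/dev/sda: TYPE="linux_raid_member"
/dev/sde: TYPE="linux_raid_member"
/dev/sdd: TYPE="linux_raid_member"'
mdstat_line='md0 : active raid10 sdc[0] sde[3] sdd[1]'

members=$(printf '%s\n' "$blkid_members" | awk -F: '/linux_raid_member/ {print $1}')
active=$(printf '%s\n' "$mdstat_line" | grep -o 'sd[a-z]' | sed 's|^|/dev/|')

# Report whichever member is not in the active list:
missing=""
for m in $members; do
    printf '%s\n' "$active" | grep -qx "$m" || missing="$m"
done
echo "$missing"   # → /dev/sda
```

So I believe /dev/sda is the disk I need to get back into the array, matching the "removed" slot 2 in the mdadm detail output.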