What I had before was:
root@CHOMEOMV:~# lsscsi -d
[1:0:0:0] disk ATA WDC WD1600BEKX-0 01.0 /dev/sda [8:96]
[2:0:0:0] disk ATA WDC WD1600BEKX-0 01.0 /dev/sdb [8:112]
[0:1:0:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdc [8:0]
[0:1:1:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdd [8:16]
[0:1:2:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sde [8:32]
[0:1:3:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdf [8:48]
[0:1:4:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdg [8:64]
[0:1:6:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdh [8:80]
sda was the main OS boot drive and sdb was an unmounted Clonezilla clone of my boot drive, kept as a backup.
I created a RAID 6 array, /dev/md0, consisting of sdc through sdh.
It had completed its sync overnight and was active.
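For context, the create command was roughly the following (from memory, so take the exact invocation as approximate rather than a copy of my shell history):
root@CHOMEOMV:~# mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[c-h]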
I then used LVM to create a physical volume, volume group, and logical volume on it, but had not created a filesystem yet.
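The LVM part was along these lines (again from memory; the VG and LV names here are just examples, not necessarily what I actually typed):
root@CHOMEOMV:~# pvcreate /dev/md0
root@CHOMEOMV:~# vgcreate vg_storage /dev/md0
root@CHOMEOMV:~# lvcreate -l 100%FREE -n lv_storage vg_storage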
This morning I noticed that the LED on one of the array drive bays (the one holding sdc) was much dimmer than the others. Instead of powering down and pulling the drive to take a look, I decided to treat it as a failure test anyhow: I hot-pulled sdc, inspected the drive for a few minutes, and then put it back in the same slot. The RAID array then showed clean, degraded.
cat /proc/mdstat showed the other drives fine but sdc as failed. (I cannot show you this output because that SSH session has since been closed.)
I tried re-adding the drive to the array, but I was getting:
root@CHOMEOMV:~# mdadm --manage /dev/md0 --re-add /dev/sdc
mdadm: Cannot open /dev/sdc: Device or resource busy
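In hindsight I suppose I should have checked whether the kernel still had the old sdc tied up in the array, or had given the re-inserted disk a new device name, before trying the re-add. This is the sort of thing I would look at; I don't have the output, since that session is gone:
root@CHOMEOMV:~# cat /proc/mdstat
root@CHOMEOMV:~# mdadm --detail /dev/md0
root@CHOMEOMV:~# mdadm --examine /dev/sdc
root@CHOMEOMV:~# lsscsi -d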
so I rebooted my NAS box. However, what I have now is very puzzling:
lsscsi now reports:
root@CHOMEOMV:~# lsscsi -d
[0:1:0:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sda [8:0]
[0:1:1:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdb [8:16]
[0:1:2:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdc [8:32]
[0:1:3:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdd [8:48]
[0:1:4:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sde [8:64]
[0:1:6:0] disk WDC WD40EFRX-68WT0N0 82.0 /dev/sdf [8:80]
[1:0:0:0] disk ATA WDC WD1600BEKX-0 01.0 /dev/sdg [8:96]
[2:0:0:0] disk ATA WDC WD1600BEKX-0 01.0 /dev/sdh [8:112]
root@CHOMEOMV:~#
root@CHOMEOMV:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sda[0](S)
3905804288 blocks super 1.2
md127 : active (auto-read-only) raid6 sdb[1] sdf[5] sde[4] sdd[3] sdc[2]
15623215104 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [_UUUUU]
unused devices: <none>
and mdadm --detail reports:
root@CHOMEOMV:~# mdadm --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sun Sep 20 00:13:11 2015
Raid Level : raid6
Array Size : 15623215104 (14899.46 GiB 15998.17 GB)
Used Dev Size : 3905803776 (3724.86 GiB 3999.54 GB)
Raid Devices : 6
Total Devices : 5
Persistence : Superblock is persistent
Update Time : Sun Sep 20 11:45:00 2015
State : clean, degraded
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : CHOMEOMV:Storage001 (local to host CHOMEOMV)
UUID : 7ff2875d:f8466166:ed83b1d8:5d486d37
Events : 654
Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
So my questions are:
1. Why couldn't I re-add my drive back?
2. Why did all my drives change device pointers after the reboot?
3. Why do I now have 2 arrays (md devices)?
4. Where do I go from here?
Perhaps #3 happened because the drive I failed was the first drive in the array, and re-inserting a device that hadn't actually failed somehow caused it to register as a separate md array. But I don't believe that should have happened, and I don't know why it would have, nor what the repercussions are.
Again, no filesystem was added yet, so none of the RAID stuff has ever been mounted.
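Since nothing is mounted, my rough guess at a next step, which I have deliberately not run yet and would like a sanity check on, is to stop both md devices, reassemble from the superblocks, and re-add the detached member. This assumes the degraded array comes back as /dev/md127 and the lone disk stays /dev/sda:
root@CHOMEOMV:~# mdadm --stop /dev/md0
root@CHOMEOMV:~# mdadm --stop /dev/md127
root@CHOMEOMV:~# mdadm --assemble --scan
root@CHOMEOMV:~# cat /proc/mdstat
root@CHOMEOMV:~# mdadm --manage /dev/md127 --re-add /dev/sda   # sda is the disk currently sitting alone in md0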
Also, this isn't a critical issue because I have nothing on the array. I just wanted to start learning about the process before I have to deal with it in a real scenario with live data.
Any and all help would be appreciated. I am not going to touch it for the moment, in the hope that you folks can tell me what I should do next, what I should have done differently, or anything else.
I am really liking this setup so far. I came from a QNAP 469L (4x3TB RAID 5) and wanted to expand. I am pretty happy with my setup, I am in learning mode, and it's pretty fun so far. This system gives me the control and flexibility I wanted, as well as the ability to recover things myself rather than having to send a device back to a vendor for repair if I ever got into that situation.
Thanks in advance.