Hello,
I had a 3-disk RAID5 array. A few weeks ago one of the disks failed, and the array carried on working in a degraded state with the two remaining disks.
Today I received a replacement disk and powered off the system to install it. After powering back on, the RAID array was missing from the UI. From the command line I can see the following:
$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive sdc[1](S) sdb[0](S)
7813772976 blocks super 1.2
unused devices: <none>
$ sudo /sbin/mdadm --misc --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 2
Persistence : Superblock is persistent
State : inactive
Working Devices : 2
Name : openmediavault:MonkeyFiles (local to host openmediavault)
UUID : edbda77a:1cc0c766:adcc8679:bd0c59f0
Events : 235096
Number Major Minor RaidDevice
- 8 32 - /dev/sdc
- 8 16 - /dev/sdb
The Raid Level here should be raid5, not raid0.
$ sudo /sbin/mdadm --examine /dev/sd[cb]
/dev/sdb:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : edbda77a:1cc0c766:adcc8679:bd0c59f0
Name : openmediavault:MonkeyFiles (local to host openmediavault)
Creation Time : Thu Sep 30 21:10:09 2021
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : f73725f7:ea9072fd:b382784e:33b60a46
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Dec 6 12:12:43 2023
Checksum : 54170199 - correct
Events : 235096
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : edbda77a:1cc0c766:adcc8679:bd0c59f0
Name : openmediavault:MonkeyFiles (local to host openmediavault)
Creation Time : Thu Sep 30 21:10:09 2021
Raid Level : raid5
Raid Devices : 3
Avail Dev Size : 7813772976 (3725.90 GiB 4000.65 GB)
Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
Used Dev Size : 7813772288 (3725.90 GiB 4000.65 GB)
Data Offset : 264192 sectors
Super Offset : 8 sectors
Unused Space : before=264112 sectors, after=688 sectors
State : clean
Device UUID : f7f0e83a:b2ea11aa:0937ad91:03a52345
Internal Bitmap : 8 sectors from superblock
Update Time : Wed Dec 6 12:12:43 2023
Checksum : 86c7b787 - correct
Events : 235096
Layout : left-symmetric
Chunk Size : 512K
Device Role : spare
Array State : A.. ('A' == active, '.' == missing, 'R' == replacing)
Both disks seem okay, but for some reason sdc is marked as a spare.
I have tried to reassemble the array, but with no joy:
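As a sanity check, the geometry reported in both superblocks is self-consistent for a 3-disk RAID5 (array size = 2 × per-device data size), so the metadata itself doesn't look corrupted. A quick check with the numbers from the --examine output above:

```python
# Sanity-check the geometry reported by `mdadm --examine` on both disks.
# All values are copied verbatim from the output above.
SECTOR = 512  # bytes per sector

raid_devices = 3                    # Raid Devices : 3
used_dev_size_sectors = 7813772288  # Used Dev Size (in 512-byte sectors)
array_size_kib = 7813772288         # Array Size (in KiB)

# A RAID5 array stores (n - 1) devices' worth of data.
data_bytes = (raid_devices - 1) * used_dev_size_sectors * SECTOR
assert data_bytes == array_size_kib * 1024  # geometry is self-consistent

print(f"{data_bytes / 2**30:.2f} GiB")  # → 7451.79 GiB, matching Array Size
```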
$ sudo /sbin/mdadm /dev/md0 --assemble /dev/sd[bc]
mdadm: /dev/sdb is busy - skipping
mdadm: /dev/sdc is busy - skipping
$ sudo /sbin/mdadm --stop /dev/md0
mdadm: stopped /dev/md0
$ sudo /sbin/mdadm /dev/md0 --assemble --force /dev/sd[bc]
mdadm: /dev/md0 assembled from 1 drive and 1 spare - not enough to start the array.
I am assuming the issue is either that the array is incorrectly reported as raid0 or that sdc is marked as spare rather than active; I cannot find a way to resolve either.
I'm not sure how best to proceed. I'm at the point of creating a fresh array and restoring from backups, but since mdadm --examine /dev/sd[cb] reports both disks as fine, I feel I should be able to bring the array up in a degraded state and then add the replacement drive (sdd).
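For reference, these are the steps I was expecting to run once the array comes up degraded (a sketch only; the device names /dev/sdb, /dev/sdc, /dev/sdd are from the output above and may change across reboots, so I'd re-check them first with lsblk or mdadm --examine):

```shell
# Stop the half-assembled array, then force-assemble the two survivors.
sudo mdadm --stop /dev/md0
sudo mdadm --assemble --force --run /dev/md0 /dev/sdb /dev/sdc

# If md0 starts degraded, add the replacement disk and let it rebuild.
sudo mdadm /dev/md0 --add /dev/sdd
cat /proc/mdstat   # watch the resync progress
```

This is exactly what fails at the moment because sdc is treated as a spare, which is why I'm stuck.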
Any guidance would be much appreciated, thank you!