In my case I have to delete sdc because mdadm says it is "possibly out of date". Am I right?
Possibly not. Add --force to your assemble command.
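For example, if the stopped array were /dev/md0 built from /dev/sdb, /dev/sdc and /dev/sdd (device names here are only placeholders, substitute your own), a forced assemble would look like this:
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
The --force flag lets mdadm accept a member whose event count is slightly behind (the "possibly out of date" case) instead of leaving it out of the array.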
Hello,
I have the exact same problem.
I run two arrays in my OMV server. One is a RAID0, which works fine and shows "clean", and the other is a mirror (RAID1), which shows "clean, degraded".
The RAID1 works OK too, but my most important files are in that array, so I'm a bit nervous about its degraded state...
The drives in the RAID1 array are 2 x WD Red 4TB.
I'm under the impression that the degraded state came up after a sudden power loss.
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid0 sdd[1] sdc[0]
976420864 blocks super 1.2 512k chunks
md0 : active raid1 sdb[1]
3906887488 blocks super 1.2 [2/1] [_U]
bitmap: 20/30 pages [80KB], 65536KB chunk
unused devices: <none>
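The [2/1] [_U] on the md0 line means the mirror expects two members but only one is active. A quick, read-only way to see which slot is empty (md0 is the device name from the output above):
mdadm --detail /dev/md0
In a degraded mirror the output lists one working member and one slot marked "removed".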
blkid
/dev/mmcblk1p6: SEC_TYPE="msdos" LABEL="boot" UUID="C0F8-560C" TYPE="vfat" PARTLABEL="boot" PARTUUID="32eef3ba-12f6-4212-84e2-5b0d76f4a993"
/dev/mmcblk1p7: LABEL="linux-root" UUID="2554df01-b8d0-41c1-bdf7-b7d8cddce3b0" TYPE="ext4" PARTLABEL="root" PARTUUID="529dbef1-9df3-4bdd-ac34-01de150ad7d8"
/dev/sda: UUID="7e4bee7e-c759-d5e9-3a7c-3b4f29675188" UUID_SUB="7b6b7f61-8872-f687-ffbf-3b896263b623" LABEL="rockpro64:0" TYPE="linux_raid_member"
/dev/sdb: UUID="7e4bee7e-c759-d5e9-3a7c-3b4f29675188" UUID_SUB="4c994784-29f0-2972-bd52-9900cc19108d" LABEL="rockpro64:0" TYPE="linux_raid_member"
/dev/md0: LABEL="RAID1" UUID="fa71c879-fd37-4fe1-8936-e5d730e3ac50" TYPE="ext4"
/dev/sdc: UUID="b1196f1a-4e11-83fc-59f0-41f2cc3e4aec" UUID_SUB="0249ea81-7454-7012-9eed-8ad7df2e9549" LABEL="rockpro64:0" TYPE="linux_raid_member"
/dev/md127: LABEL="RAID0" UUID="b14b3921-f446-4ca6-8d2d-360618d8f1f8" TYPE="ext4"
/dev/sdd: UUID="b1196f1a-4e11-83fc-59f0-41f2cc3e4aec" UUID_SUB="de2dfc68-3105-b22c-6bb9-d5780b1e64d1" LABEL="rockpro64:0" TYPE="linux_raid_member"
/dev/mmcblk1: PTUUID="86a3793b-4859-49c5-a070-0dd6149749ab" PTTYPE="gpt"
/dev/mmcblk1p1: PARTLABEL="loader1" PARTUUID="762eed66-a529-4c13-904d-33b8ff2d163e"
/dev/mmcblk1p2: PARTLABEL="reserved1" PARTUUID="c72e8fa0-8797-4c20-9ebd-3740dcbd03d4"
/dev/mmcblk1p3: PARTLABEL="reserved2" PARTUUID="afc8a236-0220-4f9a-a5ab-016019452401"
/dev/mmcblk1p4: PARTLABEL="loader2" PARTUUID="02cec096-2af6-46de-8007-2637eb8edc15"
/dev/mmcblk1p5: PARTLABEL="atf" PARTUUID="effa07ab-4894-4803-8d36-3a52d1da9f44"
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md/rockpro64:0 metadata=1.2 name=rockpro64:0 UUID=7e4bee7e:c759d5e9:3a7c3b4f:29675188
ARRAY /dev/md/rockpro64:0_0 metadata=1.2 name=rockpro64:0 UUID=b1196f1a:4e1183fc:59f041f2:cc3e4aec
mdadm --detail --scan --verbose
ARRAY /dev/md/rockpro64:0 level=raid1 num-devices=2 metadata=1.2 name=rockpro64:0 UUID=7e4bee7e:c759d5e9:3a7c3b4f:29675188
devices=/dev/sdb
ARRAY /dev/md/rockpro64:0_0 level=raid0 num-devices=2 metadata=1.2 name=rockpro64:0 UUID=b1196f1a:4e1183fc:59f041f2:cc3e4aec
devices=/dev/sdc,/dev/sdd
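For what it's worth, this scan shows md0 running on /dev/sdb alone, while the blkid output above shows /dev/sda carrying the same array UUID (7e4bee7e-...), so /dev/sda looks like the member that dropped out. Its superblock can be inspected first with a read-only check:
mdadm --examine /dev/sda
This prints the metadata on the disk, including the array UUID and the event count, which can be compared against the same output for /dev/sdb.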
Any help will be greatly appreciated!
Just try re-adding the drive, as the drive is there in blkid.
How am I going to do this? There's no option anywhere and the "recover" button is greyed out.
So under RAID Management, with the degraded RAID selected, Recover is greyed out? Is this USB?
They are not USB attached. They are SATA.
I'm sorry, my bad. It's not greyed out, but if you click the "Recover" option, there's no drive or array to choose. Please refer to the attached screenshots.
Then you'll have to try from the CLI. Assuming it's /dev/sda, try mdadm --add /dev/md0 /dev/sda
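If it helps, the rebuild can be watched from the same shell once the disk has been re-added (same device names as above, which are an assumption until confirmed):
mdadm --add /dev/md0 /dev/sda
watch cat /proc/mdstat
The md0 line should show a recovery percentage climbing until the array is back to [UU].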
It started recovering! Thank you so much!
How am I going to prevent this from happening again in the future?
Safely shut down the server and probably attach it to a UPS?
If you are prone to power cuts, a UPS would prevent this.
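One more small check, as a sketch rather than a requirement: the mdstat output earlier already shows an internal write-intent bitmap on md0, which keeps the resync after an unclean shutdown short, so it is worth confirming it stays enabled:
mdadm --detail /dev/md0 | grep -i bitmap
If an array ever lacks one, it can be added with mdadm --grow --bitmap=internal /dev/md0.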