Hello,
I have this config in my NAS:
- OMV 5.6.22 on a 32 GB USB stick
- two ~230 GB SSDs in RAID0
- one RAID1 array mirroring a third ~470 GB SSD with the RAID0 array (rough creation commands below)
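For reference, this is roughly how the nested layout was created in the first place. I'm quoting from memory, so take the exact options and device names with a grain of salt:
Code
# RAID0 across the two ~230 GB SSDs (sdb + sdc)
mdadm --create /dev/md/Deux230 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# RAID1 mirroring the ~470 GB SSD (sda) with the RAID0 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/md/Deux230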
There was a power loss, but the UPS handled it fine. I triggered the shutdown with the power button and heard the beep confirming that the soft shutdown had started.
Once the NAS started again, I received an email with the subject "DegradedArray event on /dev/md0:grange".
/dev/md0 is still mounted and readable, so it seems I didn't lose any data (I'm only using ~40 GB anyway).
IIUC, from /proc/mdstat:
- md127 is missing from md0
- sdb is "disabled" in md127
I don't know what to do to restore a clean state.
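From what I've read, getting md127 back into the mirror should look something like the sketch below (device names taken from my setup). I haven't run anything yet, so please tell me if this is wrong or unsafe:
Code
# Inspect both arrays before touching anything
mdadm --detail /dev/md0
mdadm --examine /dev/md127
# If the member superblock is still intact, try a re-add first:
mdadm /dev/md0 --re-add /dev/md127
# Otherwise add it back as a fresh member (triggers a full resync):
mdadm /dev/md0 --add /dev/md127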
SMART tests look OK.
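For completeness, this is roughly what I ran to check each SSD (the loop is just shorthand):
Code
# Overall health self-assessment for each member disk
for d in sda sdb sdc; do smartctl -H /dev/$d; done
# I had also started a short self-test on each disk beforehand:
smartctl -t short /dev/sda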
blkid
Code
# blkid
/dev/sr0: UUID="2007-09-30-21-03-00-0" LABEL="Photos_2006_2007" TYPE="iso9660"
/dev/sdc: UUID="bb6dc4fa-6340-95e9-e456-8765c5bcf9ab" UUID_SUB="57e1c067-8015-5683-724f-8dc116859fcf" LABEL="grange:Deux230" TYPE="linux_raid_member"
/dev/sdb: UUID="bb6dc4fa-6340-95e9-e456-8765c5bcf9ab" UUID_SUB="52c5b38d-1598-419c-e5d7-84dcdc2e5dd9" LABEL="grange:Deux230" TYPE="linux_raid_member"
/dev/md127: UUID="b55fa23a-352e-6aa8-d591-105992535c4a" UUID_SUB="4df40837-4ae0-4b0d-d7d6-b363bb2554aa" LABEL="grange:0" TYPE="linux_raid_member"
/dev/sda: UUID="b55fa23a-352e-6aa8-d591-105992535c4a" UUID_SUB="a647c19e-4599-f2d1-9c89-695f0addfdd6" LABEL="grange:0" TYPE="linux_raid_member"
/dev/md0: LABEL="data" UUID="b207ff0e-7941-4359-89a4-3415d0928de3" TYPE="ext4"
/dev/sdd1: UUID="5a787299-6e09-4939-9b4a-7765bcd5c689" TYPE="ext4" PARTUUID="a2b1c244-01"
/dev/sdd5: UUID="7eae7069-fbbc-4a85-a085-a530abcdada9" TYPE="swap" PARTUUID="a2b1c244-05"
fdisk -l | grep "Disk "
Code
# fdisk -l | grep "Disk "
Disk /dev/sdc: 238,5 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SAMSUNG SSD PM83
Disk /dev/sdb: 232,9 GiB, 250059350016 bytes, 488397168 sectors
Disk model: CT250MX500SSD1
Disk /dev/sda: 465,8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 860
Disk /dev/md127: 471,1 GiB, 505848791040 bytes, 987985920 sectors
Disk /dev/md0: 465,7 GiB, 499972571136 bytes, 976508928 sectors
Disk /dev/sdd: 29,7 GiB, 31914983424 bytes, 62333952 sectors
Disk model: STORAGE DEVICE
Disk identifier: 0xa2b1c244
/proc/mdstat
Code
# cat /proc/mdstat
Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda[1]
      488254464 blocks super 1.2 [2/1] [_U]
      bitmap: 2/4 pages [8KB], 65536KB chunk

md127 : active raid0 sdc[1] sdb[0]
      493992960 blocks super 1.2 512k chunks
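If I read this correctly, the [2/1] [_U] on md0 means the mirror is currently running on only one of its two members (sda).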
/etc/mdadm/mdadm.conf
Code
# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR ****@*******
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md/grange:Deux230 metadata=1.2 name=grange:Deux230 UUID=bb6dc4fa:634095e9:e4568765:c5bcf9ab
ARRAY /dev/md0 metadata=1.2 name=grange:0 UUID=b55fa23a:352e6aa8:d5911059:92535c4a
mdadm --detail --scan --verbose
Code
# mdadm --detail --scan --verbose
ARRAY /dev/md/grange:Deux230 level=raid0 num-devices=2 metadata=1.2 name=grange:Deux230 UUID=bb6dc4fa:634095e9:e4568765:c5bcf9ab
   devices=/dev/sdb,/dev/sdc
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=grange:0 UUID=b55fa23a:352e6aa8:d5911059:92535c4a
   devices=/dev/sda
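So the scan also shows md0 with num-devices=2 but only /dev/sda attached. What is the safe way to get md127 back into the mirror without risking the data? Thanks in advance.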