Hello everyone.
I'm running OMV version 6.0.46-1.
I have created two RAID arrays; one is working, but the other (md0) is in a BROKEN state.
The diagnostic output for the broken array is pasted below.
Is there any chance to restore it?
Thank you!
root@nas:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Oct 29 14:06:23 2022
Raid Level : raid0
Array Size : 2929890816 (2794.16 GiB 3000.21 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Sat Oct 29 14:06:23 2022
State : broken
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : -unknown-
Chunk Size : 512K
Consistency Policy : none
Name : nas:Anna (local to host nas)
UUID : 31167f13:46400c9b:eba60ea1:875f57a0
Events : 0
Number Major Minor RaidDevice State
0 8 32 0 active sync /dev/sdc
1 8 48 1 active sync /dev/sdd
2 8 112 2 active sync
root@nas:~# cat /proc/mdstat
Personalities : [raid0] [linear] [multipath] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid0 sdb[0] sdg[2] sde[1]
5860147200 blocks super 1.2 512k chunks
md0 : active raid0 sdh[2] sdc[0] sdd[1]
2929890816 blocks super 1.2 512k chunks
unused devices: <none>
root@nas:~# fdisk -l | grep "Disk "
Disk /dev/sdg: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20PURZ-85G
Disk /dev/sdf: 298,09 GiB, 320072933376 bytes, 625142448 sectors
Disk model: TOSHIBA MK3276GS
Disk identifier: 0xe1a99502
Disk /dev/sde: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20PURZ-85A
Disk /dev/sdd: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10PURZ-85U
Disk /dev/sdc: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10EZEX-00W
Disk /dev/sdb: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WD20PURZ-85A
Disk /dev/sda: 465,76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WD5000AAKX-0
Disk identifier: 5C6BFB13-D813-4980-B209-D1719FAD7F71
Disk /dev/md0: 2,73 TiB, 3000208195584 bytes, 5859781632 sectors
Disk /dev/md1: 5,46 TiB, 6000790732800 bytes, 11720294400 sectors
root@nas:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=nas:Anna UUID=31167f13:46400c9b:eba60ea1:875f57a0
ARRAY /dev/md1 metadata=1.2 name=nas:1 UUID=a3747a6f:6aea4af3:204d8c4f:c818a923
root@nas:~# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid0 num-devices=3 metadata=1.2 name=nas:Anna UUID=31167f13:46400c9b:eba60ea1:875f57a0
devices=/dev/sdc,/dev/sdd
ARRAY /dev/md1 level=raid0 num-devices=3 metadata=1.2 name=nas:1 UUID=a3747a6f:6aea4af3:204d8c4f:c818a923
devices=/dev/sdb,/dev/sde,/dev/sdg
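If it helps, I can also post the raw superblock info for the md0 members. My understanding is that `mdadm --examine` reads the metadata directly off each disk (device names taken from my /proc/mdstat output above; sdh is the member that shows no device path in the --detail table), so something like this should show whether sdh still carries a matching superblock:

```shell
# Read the on-disk md superblock of each md0 member.
# sdc, sdd, sdh are the members listed in /proc/mdstat above.
mdadm --examine /dev/sdc /dev/sdd /dev/sdh

# Compare the array UUID, event counts, and device roles across
# the three disks to see whether they still agree.
mdadm --examine /dev/sd[cdh] | grep -E 'Array UUID|Events|Device Role'
```

I haven't run these on the broken array yet; please let me know if other output would be more useful.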