Hi all,
I have been using a RAID5 with 5 disks without issues for over a year now, but today I moved everything to a new case to get more space and to expand the RAID to maybe 8 disks later.
After the move I had several issues with disks not being recognised, but that turned out to be an issue with the SATA expansion card with two Marvell 88SE9215 controllers, of which only one is usable.
Now all disks from the RAID are recognised and shown as being in good condition in the GUI, but the software RAID itself is not listed, and consequently the file system on the RAID is marked as missing.
I then checked mdadm in the terminal; the output shows the array as raid0 instead of raid5, but the member devices are the correct ones:
root@omv-j5040:/proc# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Raid Level : raid0
Total Devices : 5
Persistence : Superblock is persistent
State : inactive
Working Devices : 5
Name : omv-j5040:RAID5 (local to host omv-j5040)
UUID : 180db90e:5dc436bc:d1266620:e9385686
Events : 60673
Number Major Minor RaidDevice
- 8 64 - /dev/sde
- 8 80 - /dev/sdf
- 8 48 - /dev/sdd
- 8 16 - /dev/sdb
- 8 96 - /dev/sdg
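In case it helps, the inactive state can also be double-checked with read-only commands like these (they only read the superblocks and should not change anything):
cat /proc/mdstat
mdadm --examine --scan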
When using mdadm --examine on each disk, I see two different pictures: two disks have a newer update time and show three devices as missing, while three disks have an older update time, from when all disks were still active.
Older superblock of a disk
root@omv-j5040:/proc# mdadm --examine /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 180db90e:5dc436bc:d1266620:e9385686
Name : omv-j5040:RAID5 (local to host omv-j5040)
Creation Time : Wed Sep 8 14:51:27 2021
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 27344502784 (13038.88 GiB 14000.39 GB)
Array Size : 54689005568 (52155.50 GiB 56001.54 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=0 sectors
State : clean
Device UUID : 7811787d:544074f5:c515d20c:847a67ef
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 14:11:45 2022
Bad Block Log : 512 entries available at offset 64 sectors
Checksum : 24e3c9fe - correct
Events : 60673
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 1
Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
Newer superblock of a disk
root@omv-j5040:/proc# mdadm --examine /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 180db90e:5dc436bc:d1266620:e9385686
Name : omv-j5040:RAID5 (local to host omv-j5040)
Creation Time : Wed Sep 8 14:51:27 2021
Raid Level : raid5
Raid Devices : 5
Avail Dev Size : 27344502784 (13038.88 GiB 14000.39 GB)
Array Size : 54689005568 (52155.50 GiB 56001.54 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=0 sectors
State : clean
Device UUID : caa0a466:43d02013:6f374468:be1c50be
Internal Bitmap : 8 sectors from superblock
Update Time : Mon Oct 17 14:17:11 2022
Bad Block Log : 512 entries available at offset 64 sectors
Checksum : 5617ae6e - correct
Events : 60679
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 3
Array State : ...AA ('A' == active, '.' == missing, 'R' == replacing)
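A quick way to compare the update times and event counters of all five members at once (using the device names from the --detail output above) is something like:
mdadm --examine /dev/sd[bdefg] | grep -E '/dev/sd|Update Time|Events'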
mdadm.conf:
root@omv-j5040:/proc# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR notify@muerwald.de
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=omv-j5040:RAID5 UUID=180db90e:5dc436bc:d1266620:e9385686
I tried mdadm --assemble /dev/md0; it takes some time, produces no output, and afterwards the details are still the same.
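From what I have read so far, the usual suggestion for out-of-sync event counters is to stop the inactive array and force-assemble it with the known members, roughly like this (device names taken from the output above); I have not run it yet because I am not sure it is safe for my data:
mdadm --stop /dev/md0
mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdd /dev/sde /dev/sdf /dev/sdg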
I strongly suspect there is an easy fix for this, as it looks like the information on the disks is simply no longer consistent. But I am being careful, since I don't want anything to be lost (I don't have a backup, because it is nearly 50 TB; the files are not "unique", but it would take a lot of time to get them back).
Does anyone know what I could try to fix the RAID5?
Thank you very much in advance.