After upgrading and rebooting, my RAID 1 stopped working. I get a "Software Failure" error when I click on Storage in the web UI. On the command line I'm told the group descriptors are corrupted, but the syslog shows a mismatch with the mount point. I can't tell whether my RAID is gone or what the actual problem is, and I can't access the RAID via SMB.
I've been running OMV6 on Proxmox with hard drive passthrough for 5 years.
Some information:
/etc/fstab:
Code: /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=91512fa9-66a6-4e0c-966f-ee484cf7882d / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=44aa4dfd-e989-4162-aa49-b4c7390b1107 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
# >>> [openmediavault]
/dev/disk/by-id/md-name-omv.local:RedRaid /srv/dev-disk-by-id-md-name-omv.local-RedRaid ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/srv/dev-disk-by-id-md-name-omv.local-RedRaid/RedRaid/ /export/RedRaid none bind,nofail 0 0
# <<< [openmediavault]
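If I understand OMV's naming scheme correctly (an assumption on my part), the `/srv` mount point in the fstab entry is derived from the device path by replacing `/` and `:` with `-`, so the fstab device and the directory monit complains about should refer to the same filesystem:

```shell
# Hypothetical illustration of OMV's mount-point naming: replace
# '/' and ':' in the device path with '-' and prefix /srv.
dev="/dev/disk/by-id/md-name-omv.local:RedRaid"
mnt="/srv/$(echo "${dev#/}" | tr '/:' '--')"
echo "$mnt"   # /srv/dev-disk-by-id-md-name-omv.local-RedRaid
```

If the `md-name-*` symlink under `/dev/disk/by-id/` disappeared after the upgrade, the `nofail` option would let the boot continue with the filesystem simply left unmounted.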
Code: cat /proc/mdstat
cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdc[1] sdb[0]
3906886464 blocks super 1.2 [2/2] [UU]
bitmap: 0/30 pages [0KB], 65536KB chunk
Code: mdadm -D /dev/md0
mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 28 13:39:25 2020
Raid Level : raid1
Array Size : 3906886464 (3725.90 GiB 4000.65 GB)
Used Dev Size : 3906886464 (3725.90 GiB 4000.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Sep 3 22:11:01 2022
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : omv.local:RedRaid
UUID : cd3cf365:95c40760:74b080e1:9ede0634
Events : 33343
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
Code: mdadm --examine --brief --scan
mdadm --examine --brief --scan
ARRAY /dev/md/RedRaid metadata=1.2 UUID=cd3cf365:95c40760:74b080e1:9ede0634 name=omv.local:RedRaid
Code: blkid
blkid
/dev/sdc: UUID="cd3cf365-95c4-0760-74b0-80e19ede0634" UUID_SUB="44aa5744-4c01-7163-e93a-3a02560eb36c" LABEL="omv.local:RedRaid" TYPE="linux_raid_member"
/dev/sdb: UUID="cd3cf365-95c4-0760-74b0-80e19ede0634" UUID_SUB="3169127b-6147-0151-4643-72425592e250" LABEL="omv.local:RedRaid" TYPE="linux_raid_member"
/dev/sda1: UUID="91512fa9-66a6-4e0c-966f-ee484cf7882d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9cebf7d0-01"
/dev/sda5: UUID="44aa4dfd-e989-4162-aa49-b4c7390b1107" TYPE="swap" PARTUUID="9cebf7d0-05"
/dev/md0: LABEL="RedRaid" UUID="297ce47a-fa80-41f7-a8ff-f5662b8676aa" BLOCK_SIZE="4096" TYPE="ext4"
/dev/sdd2: SEC_TYPE="msdos" UUID="78C8-F588" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI boot partition" PARTUUID="c4b7b64c-96b4-4b94-84a9-bb80b6ef09de"
/dev/sdd3: BLOCK_SIZE="2048" LABEL="PVE" TYPE="hfsplus" PARTLABEL="HFSPLUS" PARTUUID="c4b7b64c-96b4-4b94-84a8-bb80b6ef09de"
/dev/sdd1: PARTLABEL="Gap0" PARTUUID="c4b7b64c-96b4-4b94-84aa-bb80b6ef09de"
/dev/sdd4: PARTLABEL="Gap1" PARTUUID="c4b7b64c-96b4-4b94-84af-bb80b6ef09de"
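The UUID blkid reports for the member disks looks different from the mdadm array UUID, but as far as I can tell it is the same 128-bit value in a different format; reformatting one yields the other (quick sketch in plain shell):

```shell
# mdadm prints the array UUID as four colon-separated 8-digit hex
# groups; blkid shows it dash-separated in 8-4-4-4-12 layout.
mdadm_uuid="cd3cf365:95c40760:74b080e1:9ede0634"
echo "$mdadm_uuid" | tr -d ':' \
  | sed 's/\(.\{8\}\)\(.\{4\}\)\(.\{4\}\)\(.\{4\}\)/\1-\2-\3-\4-/'
# -> cd3cf365-95c4-0760-74b0-80e19ede0634
```

So the blkid and mdadm outputs agree that both disks belong to the same array.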
Code: lsblk -fs
lsblk -fs
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda1 ext4 1.0 91512fa9-66a6-4e0c-966f-ee484cf7882d 2,4G 70% /
└─sda
sda2
└─sda
sda5 swap 1 44aa4dfd-e989-4162-aa49-b4c7390b1107 [SWAP]
└─sda
sdd1
└─sdd iso9660 PVE 2022-05-04-07-02-32-00
sdd2 vfat FAT12 78C8-F588
└─sdd iso9660 PVE 2022-05-04-07-02-32-00
sdd3 hfsplus PVE
└─sdd iso9660 PVE 2022-05-04-07-02-32-00
sdd4
└─sdd iso9660 PVE 2022-05-04-07-02-32-00
md0 ext4 1.0 RedRaid 297ce47a-fa80-41f7-a8ff-f5662b8676aa
├─sdb linux_raid_member 1.2 omv.local:RedRaid cd3cf365-95c4-0760-74b0-80e19ede0634
└─sdc linux_raid_member 1.2 omv.local:RedRaid cd3cf365-95c4-0760-74b0-80e19ede0634
Code: fdisk -l
fdisk -l
Disk /dev/sdc: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 3,64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 12 GiB, 12884901888 bytes, 25165824 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x9cebf7d0
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 20973567 20971520 10G 83 Linux
/dev/sda2 20975614 25163775 4188162 2G 5 Extended
/dev/sda5 20975616 25163775 4188160 2G 82 Linux swap / Solaris
Disk /dev/md0: 3,64 TiB, 4000651739136 bytes, 7813772928 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
GPT PMBR size mismatch (2036127 != 61282630) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/sdd: 29,22 GiB, 31376707072 bytes, 61282631 sectors
Disk model: Extreme
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C4B7B64C-96B4-4B94-84AB-BB80B6EF09DE
Device Start End Sectors Size Type
/dev/sdd1 64 499 436 218K Microsoft basic data
/dev/sdd2 500 6259 5760 2,8M EFI System
/dev/sdd3 6260 2035479 2029220 990,8M Apple HFS/HFS+
/dev/sdd4 2035480 2036079 600 300K Microsoft basic data
Code: Syslog
Sep 3 21:38:34 omv monit[895]: Filesystem '/srv/dev-disk-by-id-md-name-omv.local-RedRaid' not mounted
Sep 3 21:38:34 omv monit[895]: 'filesystem_srv_dev-disk-by-id-md-name-omv.local-RedRaid' unable to read filesystem '/srv/dev-disk-by-id-md-name-omv.local-RedRaid' state
Sep 3 21:38:34 omv monit[895]: 'filesystem_srv_dev-disk-by-id-md-name-omv.local-RedRaid' trying to restart
Sep 3 21:38:34 omv monit[895]: 'mountpoint_srv_dev-disk-by-id-md-name-omv.local-RedRaid' status failed (1) -- /srv/dev-disk-by-id-md-name-omv.local-RedRaid is not a mountpoint
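Since md0 itself assembles clean, monit's complaint seems to be only that the filesystem isn't mounted. Monit's check can be reproduced without monit (a sketch; the path is taken from the log above):

```shell
# Check whether a path appears as a mount point in /proc/mounts,
# which is essentially what monit's "is not a mountpoint" test does.
is_mounted() {
    awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /srv/dev-disk-by-id-md-name-omv.local-RedRaid; then
    echo "mounted"
else
    echo "not mounted"
fi
```

If it reports "not mounted", `mount -a -v` should show why the fstab entry fails, and `e2fsck -n /dev/md0` would check the ext4 filesystem read-only without touching it (given the "group descriptors corrupted" message, I'd want a read-only check before anything else).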