Hello,
my OMV is installed in VMware Workstation on Windows 10. Everything has worked great for years, but today, maybe after a power loss or something, I don't know what happened, my RAID no longer shows up. Below are the results of the commands I ran via PuTTY.
OMV Version 5.6.16-1 (Usul)
Kernel Linux 5.10.0-0.bpo.8-amd64
Processor: Intel(R) Core(TM) i7-5820k CPU @ 3.30GHz
RAM: 4GB
cat /proc/mdstat
Code
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
blkid
Code
/dev/sdc1: LABEL="3TB" UUID="a19001f7-455b-4bb7-8d7a-49a46da8caac" TYPE="ext4" PARTUUID="b4d8871d-9de8-4bbf-9d24-efa1e5105bdf"
/dev/sda1: UUID="9b6ae31c-dda5-4c9c-b774-9b68dc7c5632" TYPE="ext4" PARTUUID="9567fe56-01"
/dev/sda5: UUID="7bff33e1-2ab3-432e-b122-48ec345816f1" TYPE="swap" PARTUUID="9567fe56-05"
/dev/sdd1: LABEL="10TB" UUID="4cea569e-ae6d-4b4a-8c62-4ba75f55e74c" TYPE="ext4" PARTUUID="b4eee4f6-c2f2-42f7-9732-f684a4324e6f"
/dev/sde1: PARTLABEL="Microsoft reserved partition" PARTUUID="0340836d-d691-4079-a28f-ca046c64f40c"
/dev/sdb1: PARTLABEL="Microsoft reserved partition" PARTUUID="a76b3ba6-f82a-4f6c-a5cc-341519c6304b"
fdisk -l | grep "Disk "
Code
Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: VMware Virtual S
Disk identifier: 10ADF2DB-363C-48C8-AC02-A9703F3FD4E2
Disk /dev/sde: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: VMware Virtual S
Disk identifier: 8E5B25EB-3C72-47A0-A87C-BF553ED62D34
Disk /dev/sda: 12 GiB, 12884901888 bytes, 25165824 sectors
Disk model: VMware Virtual S
Disk identifier: 0x9567fe56
Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: VMware Virtual S
Disk identifier: CF25B582-1FF6-4908-A9F1-F439428E4F14
Disk /dev/sdd: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
Disk model: VMware Virtual S
Disk identifier: E8BC4C59-8A0B-44C6-A074-0597CF1CC433
cat /etc/mdadm/mdadm.conf
Code
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR sysopcgi@gmail.com
MAILFROM root
# definitions of existing MD arrays
ARRAY /dev/md/serverBackup.local:6tb6tb12TB metadata=1.2 name=serverBackup.local:6tb6tb12TB UUID=59d06c69:9655613a:3f6b2d83:39254200
mdadm --detail --scan --verbose
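For context, here is what I was planning to try next. I have not run any of this yet, and I am only assuming /dev/sdb and /dev/sde are the array members, because they are the two 5.5 TiB disks and blkid shows nothing on them except a "Microsoft reserved partition":

```shell
# NOT run yet — a sketch of my next diagnostic steps.
# Assumption: /dev/sdb and /dev/sde are the two 6 TB members of the
# "6tb6tb12TB" array, since blkid shows no filesystem on them.

# Check whether md superblocks still exist on the suspected members:
mdadm --examine /dev/sdb /dev/sde

# If superblocks are found, attempt a non-destructive assemble using
# the array UUID from /etc/mdadm/mdadm.conf:
mdadm --assemble --scan --uuid=59d06c69:9655613a:3f6b2d83:39254200

# Then re-check the array state:
cat /proc/mdstat
```

Does this look like the right order of steps, or is there something safer I should check first?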