Hello,
I am having problems after the latest OMV update: two RAID0 arrays disappeared from the WebUI under Storage - MD.
I have confirmed that the update is what makes them disappear, because I did a fresh install and both RAID0 arrays are visible there in WebUI - Storage - MD. As soon as I install the latest OMV updates, they are gone.
Code
root@FAQNAS:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : inactive sdf[0] sdg[1]
5860531120 blocks super 1.2
md127 : inactive sdd[0] sde[1]
5860531120 blocks super 1.2
unused devices: <none>
Code
root@FAQNAS:~# blkid
/dev/sdf: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="48478ffa-71e1-0fff-cb60-67718f6c3e8e" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
/dev/sdd: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="57b64adf-0548-6657-1dcc-b3ee65b156c0" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
/dev/sdb1: LABEL="Datos" UUID="de0a1396-8d80-434b-8391-71fc4fb62bb4" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="e6f19f49-d581-4b72-a32c-2020cb02f6c3"
/dev/sdg: UUID="9e12811f-80b3-c8cf-b69d-4c88f852332c" UUID_SUB="98bf4334-b6ab-2564-27bf-56f2ebb7bc56" LABEL="FAQNAS:RAID0Series" TYPE="linux_raid_member"
/dev/sde: UUID="967af789-e703-4e2a-8a41-cd8cefb1bcba" UUID_SUB="c2a76c84-f934-47c3-9fb2-ce183056dfcc" LABEL="FAQNAS:RAID0HD" TYPE="linux_raid_member"
/dev/sdc1: LABEL="FHD" UUID="23478bde-7f63-438c-becf-0894ed000803" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="21c0f58d-e080-4331-b7f8-9c0789f6c190"
/dev/sda5: UUID="2bd1b59f-2779-4a70-9d2d-c395e2a83cb1" TYPE="swap" PARTUUID="241c8ebf-05"
/dev/sda1: UUID="d3592b93-6cba-42de-9f28-747963e33069" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="241c8ebf-01"
Code
root@FAQNAS:~# fdisk -l | grep "Disk "
Partition 1 does not start on physical sector boundary.
Disk /dev/sdd: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00D
Disk /dev/sde: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00D
Disk /dev/sdf: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00M
Disk identifier: 0x00000000
Partition 1 does not start on physical sector boundary.
Disk /dev/sdg: 2,73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00M
Disk identifier: 0x00000000
Disk /dev/sdc: 7,28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000AS0002-1NA
Disk identifier: 86CADA20-2846-47F4-9CC9-1976B15F8548
Disk /dev/sdb: 931,51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10JPVX-22J
Disk identifier: DE4C6F09-8F09-4E82-A8DA-0896FE6EC462
Disk /dev/sda: 14,9 GiB, 15999172608 bytes, 31248384 sectors
Disk model: USB DISK 2.0
Disk identifier: 0x241c8ebf
Code
root@FAQNAS:~# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
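Looking at the mdadm.conf dump above, I notice there are no ARRAY lines after the "# definitions of existing MD arrays" comment. If that is what's missing, would it be safe to regenerate them? This is only my guess at the usual procedure (assemble first so the scan reports active arrays, then append and rebuild the initramfs), not something I've run yet:

```shell
# Once the arrays are active again, append their definitions to mdadm.conf
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the arrays are assembled at boot
update-initramfs -u
```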
Code
root@FAQNAS:~# mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0HD UUID=967af789:e7034e2a:8a41cd8c:efb1bcba
devices=/dev/sdd,/dev/sde
INACTIVE-ARRAY /dev/md126 level=linear num-devices=2 metadata=1.2 name=FAQNAS:RAID0Series UUID=9e12811f:80b3c8cf:b69d4c88:f852332c
devices=/dev/sdf,/dev/sdg
- Each array consists of 2 x 3 TB HDDs.
I hope somebody can help me recover them.
Thanks in advance!
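PS: Since both arrays show up as INACTIVE-ARRAY, I was thinking of trying to stop them and re-assemble them by hand, something like the sketch below (device names taken from the blkid/mdstat output above; I haven't run this yet, so please correct me if this is wrong or risky for the data):

```shell
# Stop the inactive arrays so they can be re-assembled
mdadm --stop /dev/md126
mdadm --stop /dev/md127

# Try to re-assemble each array from its member disks
mdadm --assemble /dev/md126 /dev/sdf /dev/sdg
mdadm --assemble /dev/md127 /dev/sdd /dev/sde
```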