Hi all
I was trying to go from RAID1 to RAID5.
I am stuck at creating the filesystem: the new array is not offered in the drop-down list for creating a new filesystem. It does show up under mount filesystem, but mounting fails because there is no valid filesystem present.
My issue seems to be that the old RAID1 md0 filesystem UUID is still present and cannot be removed, so I cannot make a new filesystem; OMV thinks there is already a filesystem on the disks.
Events (as remembered):
Copied all data from the RAID1 set to a USB drive.
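If I remember right the copy itself was a plain rsync, roughly like this (the two mount point paths are placeholders, not my exact ones):
# rsync -aH /srv/dev-disk-by-uuid-OLDRAID/ /srv/dev-disk-by-uuid-USB/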
Deleted the old filesystem, then deleted the RAID set (got a weird error), but after a reboot the GUI did delete the RAID config.
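As far as I understand, the GUI delete does roughly the equivalent of this (just my sketch, and the device names are placeholders for the two old RAID1 members):
# mdadm --stop /dev/md0
# mdadm --zero-superblock /dev/sdX /dev/sdY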
Then I erased the two old disks and added a new disk.
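The erase was done with the wipe option in the GUI, which I assume corresponds to something like (device letter again a placeholder):
# wipefs -a /dev/sdX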
Created a RAID5 volume; it seems it used the same UUID as the old RAID1.
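I created it in the GUI; I believe the CLI equivalent would be something like:
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sde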
Looked in fstab and it still referred to the old RAID1 volume UUID 09339e6b-8b30-4674-9b5d-1ab41c2abc22.
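A quick way to see the stale entry:
# grep 09339e6b /etc/fstab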
Found it in the OMV config.xml, deleted the lines referring to the old UUID (made a backup of the config first), ran the required command to apply the change to fstab, and verified it was removed from fstab.
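If I remember right, the apply command from the documentation was:
# omv-salt deploy run fstab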
Rebooted.
Created the RAID5 volume again; it seems OK.
Again stuck at filesystem creation: the new RAID5 is not present in the create-filesystem drop-down.
OMV still sees the old RAID1 filesystem UUID (09339e6b-8b30-4674-9b5d-1ab41c2abc22) and keeps adding it back to fstab, even though it has been removed from the OMV config.xml (and applied according to the documentation; it disappears from fstab but comes back). I would not have expected the new array to reuse the old UUID; shouldn't it use the new UUID from mdadm.conf?
So how do I get this old reference removed, so that the new array shows up in the create-filesystem drop-down list (and not under mount filesystem) in the OMV GUI?
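For what it's worth, I assume the leftover signature on the array could be listed (read-only; without -a, wipefs only prints what it finds) with:
# wipefs /dev/md0
but I have not wiped anything on the array while the resync is still running.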
System information:
# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sde[2] sdb[1] sda[0]
1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
[======>..............] resync = 32.6% (319052524/976630272) finish=57.8min speed=189361K/sec
bitmap: 6/8 pages [24KB], 65536KB chunk
unused devices: <none>
-------------------
# blkid (I added # comments to mark the lines that seem not to be relevant)
/dev/sdf1: UUID="a392cd93-eec8-4484-a8f4-f7897965f397" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="a0c051fc-c85e-4647-b95a-15fb8648e27f" #snapraid
/dev/nvme0n1p1: UUID="ddeeebfe-f5ee-4e1e-afdd-a840b4e3f25a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a7c923e6-7b91-4fcb-bce5-09e879990a65"
/dev/sdd1: UUID="4d3e0df1-aeb8-47c2-8927-bb5f19303c59" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="b3755943-3ce2-4c91-b45c-d9430103c0eb"
/dev/sdk1: UUID="4d099531-0998-42db-9864-4a4ebc3f9943" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="15931fb7-291b-454e-bb8c-503f3e435064"
/dev/sdi1: UUID="df1e1b71-3b04-4b8c-a336-d21f24cb2f15" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="5f34c242-8189-42d7-acbb-0dc7b320f7fa"
/dev/sdc1: UUID="8aeb5223-60cb-4cdc-aebc-ef9df0922487" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ebf9c08a-16c5-450e-9ef4-c679e0049a39" #usb drive with backup files
/dev/sdl3: UUID="f7555e5e-e5f0-45b4-856a-19daccf94218" TYPE="swap" PARTUUID="b7dfd815-ae72-4531-9187-87190e714189"
/dev/sdl1: UUID="2D61-5B15" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="4efec54e-524f-4024-bce8-a5d49f998bb1" #seems to be system drive
/dev/sdl2: UUID="fae7a71d-1987-4d7d-9031-78e6d767f7c0" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="460daf4c-f986-4440-9c2f-3c189eb86623" #seems to be system drive
/dev/sdj1: UUID="9f440498-27c3-4e39-82b6-4a82f6baee2e" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="4af17b24-4153-4031-b89a-dc79a605b6fb" #snapraid
/dev/sdh1: UUID="cba04627-56b3-45cb-b29b-73d285eff09a" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="cb9dea85-ad20-469c-905b-5fdcfafd4988" #snapraid
/dev/sdb: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="e49a9783-551c-ded9-6905-78902ec48890" LABEL="omv1:0" TYPE="linux_raid_member"
/dev/md0: UUID="09339e6b-8b30-4674-9b5d-1ab41c2abc22" BLOCK_SIZE="4096" TYPE="ext4" # no filesystem created here yet (this is the new raid5 volume), but blkid still shows the old ext4 UUID
/dev/sde: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="fefdfc38-cc13-d18a-1ec4-52e44fe3275a" LABEL="omv1:0" TYPE="linux_raid_member"
/dev/sda: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="b5bc9cba-7e6e-18d0-5d79-8011fe17997c" LABEL="omv1:0" TYPE="linux_raid_member"
------
~# fdisk -l | grep "Disk "
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T1R0A
Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WDS100T1R0A
Disk /dev/sdf: 10.91 TiB, 12000138625024 bytes, 23437770752 sectors
Disk model: TOSHIBA HDWG21C
Disk identifier: 83095D4B-93CD-48CF-8027-5DA84D45C951
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000MX500SSD1
Disk /dev/sdj: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EFAX-68J
Disk identifier: 75640C63-92B0-4B7B-A391-5F23E92DB786
Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: SSD 990 PRO
Disk identifier: B9C2584C-ED2D-40B1-9921-2F400B3DB80A
Disk /dev/sdi: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: TOSHIBA HDWG480
Disk identifier: 03A10A6B-A8D0-44CB-AF43-B4264C39696B
Disk /dev/sdd: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EFAX-68J
Disk identifier: 7674FAEC-8D30-4255-A6E4-F4525F28D2BF
Disk /dev/sdh: 10.91 TiB, 12000138625024 bytes, 23437770752 sectors
Disk model: ST12000VN0007-2G
Disk identifier: 8EC0A096-E130-42F0-870D-6D3EB90303BF
Disk /dev/sdk: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Disk identifier: 3F152642-ED02-4560-B7AB-B7C002442132
Disk /dev/sdl: 232.89 GiB, 250059350016 bytes, 488397168 sectors
Disk model: Samsung SSD 840
Disk identifier: FB9212AD-4A38-4B83-84C6-A9F8DA512285
Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 980 PRO 2TB
Disk identifier: DF5C7095-5E89-43AA-ADD8-5B069603B47E
Disk /dev/md0: 1.82 TiB, 2000138797056 bytes, 3906521088 sectors
----------------
# cat /etc/mdadm/mdadm.conf
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=omv1:0 UUID=da6e190f:ff245fa9:e07ae029:e9a2bd8f
-------------
# mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=omv1:0 UUID=da6e190f:ff245fa9:e07ae029:e9a2bd8f
devices=/dev/sda,/dev/sdb,/dev/sde