Old RAID1 filesystem UUID keeps returning when creating a RAID5 set from the old disks, so a new filesystem cannot be created (and the RAID5 cannot be mounted, as no filesystem has been created yet)

  • Hi all
    I was trying to go from RAID1 to RAID5.

    I'm stuck at creating the filesystem, as the new array is not presented in the drop-down list for creating a new filesystem. It does show up under mount filesystem, but mounting fails because no valid filesystem is present yet.
    My issue seems to be that the old RAID1 md0 UUID is still present and cannot be removed, so I cannot create a new filesystem; OMV thinks there is already a filesystem on the disks.
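
    For what it is worth, a read-only check like this is what shows the stale ext4 signature on the new array (assuming the array is assembled as /dev/md0, as in my output further down; as far as I know wipefs without options only lists signatures and changes nothing):

    # list any filesystem/raid signatures still present on the new array (read-only)
    wipefs /dev/md0
    blkid /dev/md0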


    Events (as remembered):

    Copied all data from the RAID1 set to a USB drive.

    Deleted the old filesystem and deleted the RAID set (got a weird error), but after a reboot it did delete the RAID config in the GUI.

    Then I erased the two old disks and added a new disk.

    Created a RAID5 volume; it seems like it used the same UUID as the old RAID1.

    Looked in fstab and it still referred to the old RAID1 volume UUID 09339e6b-8b30-4674-9b5d-1ab41c2abc22.

    Found it in the OMV config.xml, deleted the lines referring to the old UUID (made a backup of the config first), ran the required command to apply the change to fstab, and verified it was removed from fstab.

    Rebooted.

    Created the RAID5 volume; it seems OK.

    Again stuck at filesystem creation: the new RAID5 is not present in the create-filesystem drop-down.
    OMV still sees the old RAID1 filesystem UUID (09339e6b-8b30-4674-9b5d-1ab41c2abc22) and has added it back to fstab, even though it has been removed from the OMV config.xml file (and applied according to the documentation; it does disappear from fstab but comes back). I would not expect it to reuse the UUID; should it not use the new UUID from mdadm.conf?
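
    As far as I understand, the array UUID in mdadm.conf (da6e190f:...) and the filesystem UUID in fstab (09339e6b-...) are two different things; something like this should show both, assuming the array is assembled as /dev/md0:

    # UUID of the md array itself (stored in the raid superblock, referenced by mdadm.conf)
    mdadm --detail /dev/md0 | grep -i uuid
    # UUID of the filesystem on top of the array (referenced by fstab / OMV)
    blkid /dev/md0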


    So how do I get this old reference removed, so the array becomes available in the drop-down list for creating a new filesystem (and not under mount filesystem) in the OMV GUI?



    System information:



    #cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : active raid5 sde[2] sdb[1] sda[0]

    1953260544 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

    [======>..............] resync = 32.6% (319052524/976630272) finish=57.8min speed=189361K/sec

    bitmap: 6/8 pages [24KB], 65536KB chunk


    unused devices: <none>

    -------------------

    #blkid (added comments to the lines that do not seem relevant)

    /dev/sdf1: UUID="a392cd93-eec8-4484-a8f4-f7897965f397" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="a0c051fc-c85e-4647-b95a-15fb8648e27f" #snapraid

    /dev/nvme0n1p1: UUID="ddeeebfe-f5ee-4e1e-afdd-a840b4e3f25a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a7c923e6-7b91-4fcb-bce5-09e879990a65"

    /dev/sdd1: UUID="4d3e0df1-aeb8-47c2-8927-bb5f19303c59" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="b3755943-3ce2-4c91-b45c-d9430103c0eb"

    /dev/sdk1: UUID="4d099531-0998-42db-9864-4a4ebc3f9943" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="15931fb7-291b-454e-bb8c-503f3e435064"

    /dev/sdi1: UUID="df1e1b71-3b04-4b8c-a336-d21f24cb2f15" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="5f34c242-8189-42d7-acbb-0dc7b320f7fa"

    /dev/sdc1: UUID="8aeb5223-60cb-4cdc-aebc-ef9df0922487" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="ebf9c08a-16c5-450e-9ef4-c679e0049a39" #usb drive with backup files

    /dev/sdl3: UUID="f7555e5e-e5f0-45b4-856a-19daccf94218" TYPE="swap" PARTUUID="b7dfd815-ae72-4531-9187-87190e714189"

    /dev/sdl1: UUID="2D61-5B15" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="4efec54e-524f-4024-bce8-a5d49f998bb1" #seems to be the system drive

    /dev/sdl2: UUID="fae7a71d-1987-4d7d-9031-78e6d767f7c0" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="460daf4c-f986-4440-9c2f-3c189eb86623" #seems to be system drive

    /dev/sdj1: UUID="9f440498-27c3-4e39-82b6-4a82f6baee2e" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="4af17b24-4153-4031-b89a-dc79a605b6fb" #snapraid

    /dev/sdh1: UUID="cba04627-56b3-45cb-b29b-73d285eff09a" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="cb9dea85-ad20-469c-905b-5fdcfafd4988" #snapraid

    /dev/sdb: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="e49a9783-551c-ded9-6905-78902ec48890" LABEL="omv1:0" TYPE="linux_raid_member"

    /dev/md0: UUID="09339e6b-8b30-4674-9b5d-1ab41c2abc22" BLOCK_SIZE="4096" TYPE="ext4" # no filesystem made yet, this is a new raid5 volume

    /dev/sde: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="fefdfc38-cc13-d18a-1ec4-52e44fe3275a" LABEL="omv1:0" TYPE="linux_raid_member"

    /dev/sda: UUID="da6e190f-ff24-5fa9-e07a-e029e9a2bd8f" UUID_SUB="b5bc9cba-7e6e-18d0-5d79-8011fe17997c" LABEL="omv1:0" TYPE="linux_raid_member"


    ------


    ~# fdisk -l | grep "Disk "

    Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: WDC WDS100T1R0A

    Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: WDC WDS100T1R0A

    Disk /dev/sdf: 10.91 TiB, 12000138625024 bytes, 23437770752 sectors

    Disk model: TOSHIBA HDWG21C

    Disk identifier: 83095D4B-93CD-48CF-8027-5DA84D45C951

    Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: CT1000MX500SSD1

    Disk /dev/sdj: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors

    Disk model: WDC WD60EFAX-68J

    Disk identifier: 75640C63-92B0-4B7B-A391-5F23E92DB786

    Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors

    Disk model: SSD 990 PRO

    Disk identifier: B9C2584C-ED2D-40B1-9921-2F400B3DB80A

    Disk /dev/sdi: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors

    Disk model: TOSHIBA HDWG480

    Disk identifier: 03A10A6B-A8D0-44CB-AF43-B4264C39696B

    Disk /dev/sdd: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors

    Disk model: WDC WD60EFAX-68J

    Disk identifier: 7674FAEC-8D30-4255-A6E4-F4525F28D2BF

    Disk /dev/sdh: 10.91 TiB, 12000138625024 bytes, 23437770752 sectors

    Disk model: ST12000VN0007-2G

    Disk identifier: 8EC0A096-E130-42F0-870D-6D3EB90303BF

    Disk /dev/sdk: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors

    Disk model: ST6000VN0033-2EE

    Disk identifier: 3F152642-ED02-4560-B7AB-B7C002442132

    Disk /dev/sdl: 232.89 GiB, 250059350016 bytes, 488397168 sectors

    Disk model: Samsung SSD 840

    Disk identifier: FB9212AD-4A38-4B83-84C6-A9F8DA512285

    Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors

    Disk model: Samsung SSD 980 PRO 2TB

    Disk identifier: DF5C7095-5E89-43AA-ADD8-5B069603B47E

    Disk /dev/md0: 1.82 TiB, 2000138797056 bytes, 3906521088 sectors


    ----------------

    # cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>


    # definitions of existing MD arrays

    ARRAY /dev/md0 metadata=1.2 name=omv1:0 UUID=da6e190f:ff245fa9:e07ae029:e9a2bd8f


    -------------

    mdadm --detail --scan --verbose

    ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 name=omv1:0 UUID=da6e190f:ff245fa9:e07ae029:e9a2bd8f

    devices=/dev/sda,/dev/sdb,/dev/sde

  • KM0201

    Approved the thread.

  • Events (as remembered):

    Copied all data from the RAID1 set to a USB drive.

    Deleted the old filesystem and deleted the RAID set (got a weird error), but after a reboot it did delete the RAID config in the GUI.

    Then I erased the two old disks and added a new disk.

    Your "weird error" may have been important. What option did you use to erase the two old disk? Your problem stems from not fully wiping the old disks (needs at least a 25% secure wipe). Previous raid/filesystem signatures have been left on disk, hence blkid shows the presence of ext4 on your newly built raid and as OMV prevents creation of another filesystem over a pre-existing filesystem you are stuck.


    Editing the OMV config.xml by hand can easily leave your system in an inconsistent state.


    To correct your problem you need to:


    1. Destroy the array you've just built via the WEBUI.

    2. Secure wipe all three disks.

    3. Use this command at the CLI:  omv-salt deploy run fstab
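
    For reference, a rough CLI cross-check along these lines should also work (device names are taken from your output above, so verify them first; the mdadm commands destroy the array and anything on it):

    # stop the freshly built array
    mdadm --stop /dev/md0
    # clear the md superblocks so the old array definition cannot be re-assembled
    mdadm --zero-superblock /dev/sda /dev/sdb /dev/sde
    # list any leftover filesystem/raid signatures (read-only)
    wipefs /dev/sda /dev/sdb /dev/sde
    # let OMV regenerate /etc/fstab from its own database
    omv-salt deploy run fstab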

  • Hi

    Yes, I did wipe the disks, but I used the quick erase option. Maybe that is not enough, even though I could create a new RAID5 array?
    I will try to delete the RAID set again, do a full secure wipe (is there not another way, these are SSD disks?) and run the salt command again.
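
    (Maybe something like blkdiscard would be a faster way to blank an SSD, assuming the drives support TRIM, though I do not know if OMV would be happy with that:)

    # discard every block on the device, effectively blanking the SSD (destructive, example device only)
    blkdiscard /dev/sda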


    I did get an error during the delete-filesystem/delete-RAID-set steps. Did it again and it succeeded; not sure if this is related to the issue.

    After this error I have noticed long execution times for some actions, like showing all disks (background tasks hang), but only in the GUI; all CLI commands work fine

    (I assume the issue is because something expects the filesystem to be there).


    Will get back with an update
    /jan

    So I can stop the task in the GUI once the full erase has reached 25%?

    (Just want to be sure, I don't want to do it multiple times ;) )

    It worked. Many thanks for the quick assistance!


    For reference:

    A full secure erase in the GUI of all 3 SSDs took quite a long time, about 3 hours, even though they are SSDs of only 1 TB each.

    Then ran the salt command to update fstab, and rebooted (just to be sure).

    After the reboot I checked fstab: yes, the old UUID was gone ;)
    Created the new RAID5 volume.

    Now it was possible to create a new filesystem and everything seems to work again.
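
    Something like this should confirm it (the old UUID and the md0 device name are taken from my earlier output):

    # the old filesystem UUID should no longer appear in fstab (no output expected)
    grep 09339e6b-8b30-4674-9b5d-1ab41c2abc22 /etc/fstab
    # the array should now show the new filesystem instead of the stale ext4 signature
    blkid /dev/md0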


    FYI, I have been running OMV for about a year on a home-built NAS with an Intel N100, 16 GB RAM, 10 SATA disks (HDDs & SSDs) and one 2 TB NVMe. I have been cleaning up a lot over Christmas (old disks replaced, and cleaning up after some initial poor decisions/designs :-))


    OMV has evolved a lot during my use. Many things I used to do in the CLI can now be done in the GUI, which is really nice. :)
    And I am getting a bit better at Linux (old Microsoft MCSE tech person).


    Also really nice to know that improvements are already planned for this "issue".


    Have a fantastic Sunday.

    /jan

  • macom

    Added the Label resolved
