I have OMV 6.0.632.
I added two new disks as a new mirrored raid yesterday.
When I restarted the server I noticed that an older raid, as well as the new one, had started a resync.
The old raid had paused its sync while the new one was running.
But as I had issues accessing the existing raid, I shut the server down again, removed the two new disks, and restarted.
The reason was to let the old raid finish its resync first.
When that was done, I attached the two new disks again, and their resync finished this afternoon.
But now I can't add a new filesystem, and I noticed mismatched names.
The new array is called "/dev/md124" in the RAID tab. But that is the name the old one has if I open the dropdown in Shared Folders.
And the name the old one has on the RAID tab, "/dev/md126", is missing on the Shared Folders tab.
And there is nothing to select if I try to add a new filesystem.
How do I fix this issue? The new raid is completely empty, so I don't mind reformatting it if that helps.
But I think there are references in Debian that need to be fixed to sort this out, and I am still too much of a Linux rookie to fix them myself.
The risk is that I make it worse.
Mismatched mirror raids
-
- OMV 6.x
- solved
- 7ore
-
-
All of the above is very confusing, so SSH into OMV as root and run the following:
cat /proc/mdstat
blkid
cat /etc/mdadm/mdadm.conf
fdisk -l | grep "Disk "
Post each output into a separate code box (the </> symbol on the forum editor bar); it makes them easier to read.
-
How do you use the </> symbol?
-
-
Thanks.
Here are the results of those commands:

Code
~# cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md124 : active raid1 sdj[1] sdh[0]
      2930135488 blocks super 1.2 [2/2] [UU]
      bitmap: 0/22 pages [0KB], 65536KB chunk

md125 : active raid1 sda[2] sdd[3]
      2930135512 blocks super 1.2 [2/2] [UU]

md126 : active (auto-read-only) raid1 sdb[3] sdg[2]
      5860391512 blocks super 1.2 [2/2] [UU]

md127 : active raid1 sde[2] sdc[3]
      3906887512 blocks super 1.2 [2/2] [UU]

unused devices: <none>
Code
~# blkid
/dev/sda: UUID="0c98e989-12de-39be-be11-9640e4f548e0" UUID_SUB="e2afcfe1-5ec0-1fa9-deaf-18c3ce64b5b4" LABEL="nasse:kangu" TYPE="linux_raid_member"
/dev/sdb: UUID="c38f46f3-382c-20e5-02a9-3b53837294a3" UUID_SUB="96a88c34-6ab2-ee77-ffa1-91d4ad35c28c" LABEL="nasse:puh" TYPE="linux_raid_member"
/dev/sdd: UUID="0c98e989-12de-39be-be11-9640e4f548e0" UUID_SUB="9bf6fdb9-ea20-738b-2807-352983160b68" LABEL="nasse:kangu" TYPE="linux_raid_member"
/dev/sdj: UUID="4dcd0afb-6a85-5a11-9a61-5b51e7d6af77" UUID_SUB="b7b4af13-5c48-5ed0-189c-c16215f13efe" LABEL="nasse:tiggr" TYPE="linux_raid_member"
/dev/sdi1: UUID="123bb5a1-1840-4c01-8b7f-0e70c4429816" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2dcc77d4-01"
/dev/sdi5: UUID="1fb4ca6e-b72e-6a1b-a274-4c256505e16f" TYPE="swap" PARTUUID="2dcc77d4-05"
/dev/sdh: UUID="4dcd0afb-6a85-5a11-9a61-5b51e7d6af77" UUID_SUB="b34ab0e5-4c97-78c2-fc36-9b46c7b68377" LABEL="nasse:tiggr" TYPE="linux_raid_member"
/dev/sdg: UUID="c38f46f3-382c-20e5-02a9-3b53837294a3" UUID_SUB="c5f4fec4-ef84-d6ce-7653-712dac0a4f59" LABEL="nasse:puh" TYPE="linux_raid_member"
/dev/sdf1: UUID="ecd69112-7a50-4b01-b135-5e58697b857e" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="bc636c9b-01"
/dev/sdf5: UUID="43f56fdc-7054-4c6c-b8d0-c3896e244977" TYPE="swap" PARTUUID="bc636c9b-05"
/dev/sdc: UUID="edb0fb6c-b865-e482-9100-54591f536e48" UUID_SUB="492cfb70-58a5-c3b2-ebb4-ca9723d16ea2" LABEL="nasse:ior" TYPE="linux_raid_member"
/dev/sde: UUID="edb0fb6c-b865-e482-9100-54591f536e48" UUID_SUB="35354ed1-0240-b4e3-3672-4331c1d92f36" LABEL="nasse:ior" TYPE="linux_raid_member"
/dev/md127: LABEL="ior" UUID="3b3cf06f-6ce6-4ac4-8d3f-a1568af6beec" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md126: LABEL="puh" UUID="b984e9b0-911b-42d4-816b-03a0713696ba" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md125: LABEL="ru" UUID="30a53389-7564-4002-a5c0-122b869fc485" BLOCK_SIZE="4096" TYPE="ext4"
/dev/md124: LABEL="puh" UUID="b984e9b0-911b-42d4-816b-03a0713696ba" BLOCK_SIZE="4096" TYPE="ext4"
I removed the instructions and default settings from the file.

Code
~# cat /etc/mdadm/mdadm.conf
...
# definitions of existing MD arrays
ARRAY /dev/md/ior metadata=1.2 name=nasse:ior UUID=edb0fb6c:b865e482:91005459:1f536e48
ARRAY /dev/md/puh metadata=1.2 name=nasse:puh UUID=c38f46f3:382c20e5:02a93b53:837294a3
ARRAY /dev/md/kangu metadata=1.2 spares=1 name=nasse:kangu UUID=0c98e989:12de39be:be119640:e4f548e0
Code
~# fdisk -l | grep "Disk "
Disk /dev/sda: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdb: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Disk /dev/sdd: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EFRX-68E
Disk /dev/sdj: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00D
Disk /dev/sdi: 298.09 GiB, 320072933376 bytes, 625142448 sectors
Disk model: HITACHI HTS72503
Disk identifier: 0x2dcc77d4
Disk /dev/sdh: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: WDC WD30EZRX-00D
Disk /dev/sdg: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: ST6000VN0033-2EE
Disk /dev/sdf: 74.53 GiB, 80026361856 bytes, 156301488 sectors
Disk model: INTEL SSDSA2M080
Disk identifier: 0xbc636c9b
Disk /dev/sdc: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000NE001-2MA1
Disk /dev/sde: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: ST4000NE001-2MA1
Disk /dev/md127: 3.64 TiB, 4000652812288 bytes, 7813775024 sectors
Disk /dev/md126: 5.46 TiB, 6001040908288 bytes, 11720783024 sectors
Disk /dev/md125: 2.73 TiB, 3000458764288 bytes, 5860271024 sectors
Disk /dev/md124: 2.73 TiB, 3000458739712 bytes, 5860270976 sectors
I can see that md124 and md126 have the same UUID in blkid, and that the md124 array is missing from mdadm.conf.
How do I fix that without making more of a mess of things?
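From what I have read, a missing definition can usually be regenerated from the running array, roughly as sketched below. The grep pattern "tiggr" is just the name my new array reports in blkid; I have not run this yet, since I'm not sure it is safe while the names are mismatched:

Code
# Show the ARRAY lines mdadm would write for the running arrays
mdadm --detail --scan
# Append only the missing array's line to the config, then rebuild
# the initramfs so the name sticks across reboots
mdadm --detail --scan | grep tiggr >> /etc/mdadm/mdadm.conf
update-initramfs -u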
MultiUser: "</>" is a button in the editor that creates the boxes above. -
how do you use the </> symbol
When you reply or post, the symbol/icon/button is on the forum editor bar. It creates a code box; copying and pasting output into the code box makes it easier to read, like this:
-
How do I fix that without making more of a mess of things
How many actual arrays do you have, and which arrays have data on them? This is a first: duplicate UUIDs for two different arrays.
This is a mess. mdadm references go in reverse order during creation (md127 first, then md126, and so on); there are reasons why mdadm uses md127 rather than md0, then md1, but that is not going to resolve this.
Therefore md127 would be the first reference.
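If it helps to untangle the naming, mdadm also creates named symlinks that point at whichever numbered device each array currently got; something like:

Code
# Each named array is a symlink to its current mdXXX device node
ls -l /dev/md/
# Full detail (name= and UUID) for a single device
mdadm --detail /dev/md124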
-
-
md124 (3TB) is new and empty.
The other three have data and have been around for years. -
You also have two references to a boot OS:
sdi && sdf
What is this all about?
-
You also have two references to a boot OS:
sdi && sdf
What is this all about?
Even I didn't spot that.
md124 (3TB) is new and empty
Does it display in File Systems?
-
-
Does it display in File Systems?
No, it doesn't.
You also have two references to a boot OS:
sdi && sdf
What is this all about?
That is also an issue; I suspect that one has grub and the other the latest OMV.
So I need to fix that too, but that is for later. -
No, it doesn't
OK, well, that's something... therefore I would suggest you delete it from raid management and subsequently apply the changes.
/dev/md126 is in an auto-read-only state; to correct this, run mdadm --readwrite /dev/md126 from the cli.
If both of the above work, then the two drives you want to use (I assume /dev/sdj and /dev/sdh) will both need to be securely wiped before a new array is created. Whilst a secure wipe does take a long time, it has been noted that it can be stopped after approx. 25% completion. See the sketch after the caveat below.
Caveat: DO NOT REBOOT or SHUTDOWN, this can cause drive and mdadm references to change!!
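For reference, the cli side of that would be roughly the following. The member devices are my assumption from your outputs, and wipefs only clears the raid signatures; it is not the full secure wipe mentioned above:

Code
# Return the auto-read-only array to normal read-write mode
mdadm --readwrite /dev/md126
# Confirm the state change
cat /proc/mdstat
# After md124 is deleted in raid management, clear the old raid
# signatures from its former members before creating a new array
wipefs -a /dev/sdj /dev/sdh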
-
I suspect that one has grub and the other the latest OMV
That would suggest that one of those drives is the installation media and grub was not configured correctly on the boot media. This may or may not open another can of worms.
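A quick way to see which of the two disks the system actually booted from (standard util-linux tools, nothing OMV-specific):

Code
# Show which device holds the running root filesystem
findmnt -no SOURCE /
# Compare the two candidate disks, their partitions and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdf /dev/sdi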
-
-
OK, so I tried to remove the raid from the GUI, but could only remove one of the disks in it.
The other one is still there, and the option to remove the complete disk can't be selected. But I have formatted both disks now; that worked.
I have not restarted the machine yet.
I have a bigger issue now, though. One of the shared folders on md126 appears empty. Other folders are intact and the filesystem seems untouched. This seems like another issue outside of this thread, but as it is related I will continue here.
What steps can I take to recover or rescan the filesystem to restore the content of that folder?
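In case it is only a mount problem rather than lost data, I plan to check something like this first (the by-uuid path is my guess at the mount point, built from the md126 filesystem UUID in the blkid output above):

Code
# Check where (and whether) the md126 filesystem is mounted
findmnt /dev/md126
# List the mount point directly, bypassing the shared-folder layer
ls -la /srv/dev-disk-by-uuid-b984e9b0-911b-42d4-816b-03a0713696ba
-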
Solved the empty folder issue.
I moved one of the drives in that RAID to another instance of OMV (I started up another computer with an old version of OMV that I had).
The drive had all the files on it, so I recovered the raid there.
Then back to this instance to continue the cleanup: removing the extra disk and fixing the other issues.
Thanks for the help so far. I might have follow-up questions, but I think that I know the path forward from here. -
7ore
Added the label "solved".