In OMV-Extras there is an option to install SystemRescueCD and boot into it once; that would be the option to use.
BTW I use an identical server.
Alright, I'll try that first, thanks.
Does fdisk -l | grep "Disk " see them?
Ok, this will give some info on the drives: wipefs -n /dev/sdc. This will not wipe the drive but will report information on the drive's signatures.
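To be clear, something like this on both of the former array members is what I have in mind; with -n it only reads and reports, nothing gets wiped (the device names are just the ones used so far in the thread, so double-check them first):

wipefs -n /dev/sdc    # --no-act: only list any filesystem/RAID/LUKS signatures found
wipefs -n /dev/sdd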
Don’t have any more answers for you unfortunately. Looks very weird.
All I can think of is to try to assemble the raid in degraded mode.
Try lsblk and see if it reports more info.
Nothing happened when I ran the wipefs command... (see screenshot below)
lsblk - not sure if that helps:
How would I assemble in degraded mode?
I did some more research and found this thread: https://unix.stackexchange.com…onger-a-valid-luks-device
Unfortunately, my Linux knowledge is somewhat limited, so I wonder if I could follow the same route or not. It sure sounds like the same problem, doesn't it?
BTW
I tried to simply decrypt one of the drives with cryptsetup luksOpen - obviously that didn't work either. The drives are not recognized as LUKS devices.
Try that on /dev/md0 as it's not the drives that are encrypted but the array
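Roughly like this, once /dev/md0 actually exists (i.e. once the array has been assembled); "raid4tb" is just an arbitrary mapper name I picked:

cryptsetup isLuks /dev/md0 && echo "LUKS header found"   # quick read-only check
cryptsetup luksOpen /dev/md0 raid4tb                     # prompts for the passphrase if the header is intact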
Try this
lsblk -o name,uuid,type,fstype
sde and sdd still show as empty. Try to assemble without sde.
mdadm --assemble /dev/md0 /dev/sdd
You probably need to add the --force flag.
Isn't /dev/sde the other drive in the array? If it is, mdadm --assemble --force --verbose /dev/md0 /dev/sde
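If both suspected members show up, I'd try something along these lines; --run tells mdadm to start the array even if it comes up degraded, and the device names need to match whatever your current boot reports:

mdadm --assemble --force --verbose --run /dev/md0 /dev/sdd /dev/sde
cat /proc/mdstat    # see whether anything was actually assembled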
Unfortunately not, the two 4TB drives (sdc & sdd) used to be the array, see screenshot above - but I'll include the overview in the screenshot below as well.
sde is the SSD that OMV runs on (and sda & sdb are the two independent 8TB drives)
Nevertheless I tried your command - adding the verbose option - again for those two drives; sadly it gives the same result:
---
edit
I see the confusion about the drive labels - in my very first post in this thread, sdd & sde are the 4TB drives. When booting SystemRescueCD the labels seem to change - why, I don't know. Whenever I boot OMV, they're back to sdd & sde as well.
I looked at your first post where they were sdd and sde.
The verbose definitely wouldn't help assemble.
So, I went back and read the whole thread since this situation was very confusing. Your drives don't show up as array members or LUKS devices. Did you wipe these drives or ever use the mdadm --create command? There is really no way to assemble these drives if neither one has an mdadm signature.
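Just to double check what is (or isn't) on them, something like this would show whether either drive still carries an mdadm superblock or a LUKS header (read-only; again, adjust the device names to whatever the current boot reports):

mdadm --examine /dev/sdc /dev/sdd     # prints superblock details, or an error if no superblock is found
cryptsetup isLuks /dev/sdc; echo $?   # 0 means a LUKS header was found, anything else means none
cryptsetup isLuks /dev/sdd; echo $?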
Concerning the label confusion: it's really strange! I double-checked OMV again and now the drives have changed their labels there as well. The 8TB drives are sda & sdb, the 4TB drives (the former array) are sdc & sdd, and the SSD containing OMV is sde (just as when booting RescueCD).
As you can see in comparison to the screenshot on page 1, the labels have indeed completely changed. That is _very_ strange. Might that be a source of the problem as well?
Concerning your question regarding the array situation: I never used the mdadm --create command as I created the RAID via the OMV frontend, but I would think that's what happened in the background; you would probably know best. I never did anything to the drives after creating the LUKS-encrypted RAID, except for editing the config.xml and rebooting as explained in the beginning...
I am not entirely sure of the correct order of encryption / creating the RAID, but I believe there was only one way to get a LUKS encrypted RAID1. At this time, I'm unable to check again because obviously I do not want to overwrite data on those drives.
I believe the order was:
* creation of a mirror RAID device using Storage/RAID management
* creation of an encrypted device using Storage/Encryption
* creation of an ext4 filesystem on that encrypted device called RAID4TB
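If I understand it right, what OMV did in the background for those steps would roughly correspond to something like the following; this is just my guess at the equivalent commands (device names from the current boot, "raid4tb" as an arbitrary mapper name), I have not run any of it:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd    # Storage / RAID management
cryptsetup luksFormat /dev/md0                                          # Storage / Encryption
cryptsetup luksOpen /dev/md0 raid4tb
mkfs.ext4 -L RAID4TB /dev/mapper/raid4tb                                # filesystem labelled RAID4TB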
That is _very_ strange. Might that be a source of the problem as well?
Nope. Not strange. Some BIOSes initialize their drives in a different order on every boot. It shouldn't cause the problem since mdadm is looking for a specific signature on the drive (which your drives don't seem to have).
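If the shuffling letters bother you, something like this shows the stable names (based on model/serial), so you can always tell which physical disk is which regardless of the boot order:

ls -l /dev/disk/by-id/ | grep -v part    # filter out the per-partition entries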
I never used the mdadm --create command as I created the RAID via the OMV frontend, but I would think that's what happened in the background; you would probably know best.
OMV does use this command, but I was wondering if you tried it yourself afterwards to fix the situation. Sounds like you didn't.
but I believe there was only one way to get a LUKS encrypted RAID1.
You should be able to create an array on encrypted disks or create an encrypted disk on an array. From the output, it looks like neither was done, even though I know one of them was used. My only suggestion at this point would be to try the create flag with mdadm, but it usually doesn't fix the problem and wipes the drives. Good suggestion, I know, but it is really the only one since you can't use recovery tools due to the encryption being used.
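To spell out what I mean, and strictly as a last resort: recreating the array would look roughly like this. The metadata version has to match whatever was used originally (1.2 is the usual default), the drive order matters, and --assume-clean is what prevents a resync; if any of those assumptions is wrong, this will overwrite whatever might still be recoverable:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 --assume-clean /dev/sdc /dev/sdd
cryptsetup luksOpen /dev/md0 raid4tb    # only works if a LUKS header actually reappears on the array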