I am moving my OMV setup from an Intel 5000PSL to an HP 290 with a 4-bay USB to SATA drive enclosure.
sda > internal drive (stock with the HP 290) = data4 (keyfile LUKS)
sdb > first drive in USB enclosure
sdb1 > /boot partition
sdb2 > / (root) system partition (password LUKS)
sdb3 > swap partition (password LUKS)
sdb4 > data1 (keyfile LUKS)
sdc > data2 (second drive in USB enclosure) (keyfile LUKS)
sdd > data3 (third drive in USB enclosure) (keyfile LUKS)
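Since the drives changed controllers, the sdX letters above are just what the HP 290 enumerates them as. To sanity-check which drive landed where, something like this shows every device with its label and UUID (nothing OMV-specific, just util-linux):

lsblk -o NAME,FSTYPE,LABEL,UUID,MOUNTPOINT   # full device tree with labels and UUIDs
blkid /dev/sda                               # the internal drive's LUKS UUID should match the data4 line in crypttab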
When I moved the OMV drives to the new PC, I thought it would be as simple as loading the drives from the old server into the USB enclosure and connecting it to a USB port.
OMV boots fine off the first drive in the enclosure, but once it loads, only the LUKS-encrypted data partition on the boot drive (data1) auto-mounts. The LUKS partitions on the other drives (data2, data3, data4) do not auto-mount.
I've always used this guide to auto-mount drives with no issues:
https://www.howtoforge.com/aut…ted-drives-with-a-keyfile
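For anyone who doesn't want to click through, the gist of the guide (from memory, so treat the exact commands as approximate) is to generate a random keyfile, enroll it as an extra key slot on each LUKS volume, and point crypttab at it:

dd if=/dev/urandom of=/root/keyfile.key bs=1024 count=4   # create a random 4 KiB keyfile
chmod 0400 /root/keyfile.key                              # readable by root only
cryptsetup luksAddKey /dev/sdb4 /root/keyfile.key         # prompts for an existing passphrase, then adds the keyfile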
My fstab (I also tried removing the other data drives and auto-mounting only the internal drive, data4, with no improvement):
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/sda3_crypt / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=d61a4271-e5f7-4c46-9b52-791e5ee0fc36 /boot ext4 defaults 0 2
/dev/mapper/sda2_crypt none swap sw 0 0
# >>> [openmediavault]
/dev/disk/by-label/data1 /srv/dev-disk-by-label-data1 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/data2 /srv/dev-disk-by-label-data2 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/data3 /srv/dev-disk-by-label-data3 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
/dev/disk/by-label/data4 /srv/dev-disk-by-label-data4 ext4 defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
# <<< [openmediavault]
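One thing worth noting: the /dev/disk/by-label/dataN sources only exist after the corresponding LUKS mappings are opened, so boot ordering could matter here. A variant I've been meaning to try (the x-systemd.device-timeout option is my own guess, not something from the guide) gives systemd longer to wait for the mapped device:

/dev/disk/by-label/data2 /srv/dev-disk-by-label-data2 ext4 defaults,nofail,x-systemd.device-timeout=30s,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2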
My crypttab:
# <target name> <source device> <key file> <options>
#System disks - DO NOT EDIT
sda2_crypt UUID=e07a2c60-9fb4-4ff6-85b1-e2ad4c767ebf none luks,swap,discard
sda3_crypt UUID=716b232a-f44a-4f46-a276-4e563856c0c6 none luks,discard
#Data disks
data1 UUID=1dfccdef-7d88-4d5e-9f42-59bd36020c37 /root/keyfile.key luks
data2 UUID=298a0d0f-52ce-48f8-817f-d75d293a4836 /root/keyfile.key luks
data3 UUID=134a1f5e-46b1-422a-b08d-ccc8476340b9 /root/keyfile.key luks
data4 UUID=b9eb750a-5c20-4673-b652-e7e7915cb470 /root/keyfile.key luks
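For what it's worth, those UUIDs were carried over from the old server, and they can be double-checked against the actual LUKS headers on the moved drives (cryptsetup prints a container's UUID directly):

cryptsetup luksUUID /dev/sdb4   # should print the UUID on the data1 line above
cryptsetup luksUUID /dev/sdc    # should print the UUID on the data2 line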
I've tried everything I can think of to get data2, data3, and data4 to auto-mount: replacing /dev/disk/ in fstab with /dev/mapper, replacing UUID= in crypttab with the device path, and manually removing the drives from the OMV config and re-adding them.
I know OMV detects all the drives normally, because everything works if I go into the web interface and manually unlock and mount the partitions; it's just not auto-mounting the encrypted partitions on the non-boot drives. The only thing I haven't tried is a clean install of OMV. Otherwise I could keep manually unlocking the drives, but the automatic method always worked on the old server, so why isn't it working now? It can't be a USB issue, because the fourth drive (data4) is on internal SATA.
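For reference, I believe the unlock the web interface performs is roughly equivalent to this from the CLI (UUID taken from my crypttab above):

cryptsetup open /dev/disk/by-uuid/298a0d0f-52ce-48f8-817f-d75d293a4836 data2 --key-file /root/keyfile.key
ls /dev/mapper   # data2 should now appear, ready to mount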
EDIT (added from my follow-up posts, 12/16/2020):
Now manually unlocking and mounting the drives isn't working either.
When I manually unlock the drives, mounting them fails with this error because the drives aren't listed in fstab:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-label/data2' 2>&1' with exit code '1': mount: /dev/disk/by-label/data2: can't find mount source /dev/disk/by-label/data2 in /etc/fstab.
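If I'm reading that right, mount with only a --source argument has to look the target up in fstab, which explains the failure; giving it an explicit target should work as a stopgap (paths from my fstab above):

mount -v /dev/disk/by-label/data2 /srv/dev-disk-by-label-data2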
If they ARE listed in fstab, startup fails instead: the system tries to mount them while they are still locked, can't, and OMV drops to an emergency shell.
So unless I comment the drives out of fstab when it drops to emergency mode, finish booting OMV, uncomment them, and then manually unlock and mount the drives, my OMV just doesn't work now.
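To spell out that workaround, every boot currently looks like this from the emergency shell:

mount -o remount,rw /   # in case root came up read-only
nano /etc/fstab         # comment out the four dataN lines
exit                    # let the boot continue
# ...then uncomment the lines and unlock/mount everything by hand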
Syslog shows these errors:
Dec 15 17:39:03 omv-server monit[996]: Filesystem '/srv/dev-disk-by-label-data1' not mounted
Dec 15 17:39:03 omv-server monit[996]: 'filesystem_srv_dev-disk-by-label-data1' unable to read filesystem '/srv/dev-disk-by-label-data1' state
Dec 15 17:39:03 omv-server monit[996]: 'filesystem_srv_dev-disk-by-label-data1' trying to restart
Dec 15 17:39:03 omv-server monit[996]: 'mountpoint_srv_dev-disk-by-label-data1' status failed (1) -- /srv/dev-disk-by-label-data1 is not a mountpoint
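Those monit lines just confirm that the mountpoint check is failing; the same check can be run by hand (mountpoint is from util-linux):

mountpoint /srv/dev-disk-by-label-data1   # reports "is not a mountpoint" until the drive is unlocked and mounted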
Additionally, as a result of manually editing config.xml in an attempt to fix this, OMV now also shows the errors below, and the filesystems tab no longer loads.
Couldn't extract an UUID from the provided path '/dev/disk/by-label/data2'.
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1': omv-server: Data failed to compile: ---------- Rendering SLS 'base:omv.deploy.quota.default' failed: while constructing a mapping in "<unicode string>", line 42, column 1 found conflicting ID 'quota_off_no_quotas_' in "<unicode string>", line 69, column 1
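If I understand that salt error, the quota state is being rendered with two entries sharing the same ID, presumably because my manual edits left a duplicated (or empty) filesystem entry in config.xml. Something like this might confirm it (I'm assuming OMV 5's omv-confdbadm works the way I think it does):

grep -n 'data2' /etc/openmediavault/config.xml        # a duplicated <mntent> block would match twice
omv-confdbadm read conf.system.filesystem.mountpoint  # dump the mount entries OMV thinks it has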
So essentially I broke the OMV system.
Ideally I could just reinstall OMV from scratch, figure out why my data drives aren't auto-mounting, fix it, and get everything working again. But if trying to fix one issue causes this much of a headache with no easily explainable solution, maybe I'm better off going back to Windows 10, turning off all the security and update nonsense, and using that as a server instead. It would definitely throw a lot fewer errors.