LUKS partition used to mount after unlock; now it doesn't. Containers running on the wrong disk, too.

  • I'm running the latest OMV on an RPi 4B with an external USB dual-drive enclosure, with BTRFS set up as a RAID1 mirror and encrypted with LUKS. The Docker containers' data and YML files live on the encrypted drive.


    After working fine for a couple of months, out of nowhere my NAS started acting up. All my Docker containers got reset and I thought I had lost all my data. It turned out the external drives were simply not mounted; a "mount -a" and a restart of all containers fixed the issue.


    However, it used to be that after a reboot a simple unlock of the drives would get everything running; the Docker containers would "wait" for the drive to show up. Now, out of the blue, the containers are already running (on the main disk instead of the external one) with no data, and unlocking LUKS does not trigger a mount. The unlocked disk shows in the list, but as unmounted.


    I created a temporary script to remount the filesystem and restart the containers, which works around the issue. However, the containers create initialization data on my main drive when they start, which is a huge waste, so I see this only as a stopgap until I find a proper fix.
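    For reference, the stopgap is roughly the sketch below (the mount point and compose path are placeholders for my actual setup):

      #!/bin/bash
      # Stopgap: remount the unlocked LUKS/BTRFS filesystem, then restart the
      # containers so they pick up the real data instead of the empty mount
      # point. /srv/dev-disk-by-uuid-XXXX is a placeholder path.
      set -e
      mount -a                                  # pick up the fstab entry for the unlocked disk
      findmnt /srv/dev-disk-by-uuid-XXXX > /dev/null || exit 1
      cd /srv/dev-disk-by-uuid-XXXX/compose
      docker compose down                       # stop containers started against the empty dir
      docker compose up -d                      # bring them back up on the encrypted data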


    Any idea why this behaviour suddenly started? Any idea how I could fix this?

  • Are you sure you shut down the containers before any type of reboot was performed? I found with my LUKS setup that all containers needed to be stopped and not set to restart automatically. If you don't do this, the containers will restart before the encryption is unlocked and will write new folders to the unencrypted filesystem.


    I'm not saying this is the root cause, but it sounds like it might be partially causing your issues.
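    One low-effort way to do that is sketched below, using the standard Docker CLI ("docker ps -aq" just selects every container):

      # Turn off automatic restart for all existing containers, so nothing
      # comes up before the LUKS devices are unlocked and mounted.
      docker update --restart=no $(docker ps -aq)

      # After unlocking and mounting, start them again manually:
      docker start $(docker ps -aq)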

  • I'm indeed not shutting down the containers, and they are set to "restart: unless-stopped". But this wasn't an issue in the past.


    I think the bigger issue is why the LUKS+BTRFS RAID no longer mounts as soon as the disk is decrypted. That used to be the behaviour, but not anymore...
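    In case it helps anyone hitting the same thing, these are the kinds of checks I've been running (a sketch; the mount point is a placeholder for my actual one):

      lsblk -f                                # is the /dev/mapper device there with a btrfs filesystem?
      grep btrfs /etc/fstab                   # is there still an fstab entry for it?
      findmnt /srv/dev-disk-by-uuid-XXXX      # is the mount point actually mounted?
      journalctl -b | grep -i mount           # any systemd messages about skipped or failed mounts?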

  • Hopefully you'll get that resolved. This is why I didn't use RAID with LUKS in the end; I felt it was too complex, with more things that could go wrong. Instead I went for two disks, each with LUKS, and use rsync to back up between them (sketched at the end of this post).


    RAID is fundamentally for high availability, whereas I see the NAS use case as 'backup', and rsync meets that with fewer complications. If something goes wrong, I control when and how backups are created, so issues are only replicated on my command.


    I would definitely recommend not using 'unless-stopped' with LUKS. The containers will probably restart before you get to unlock the LUKS encryption, creating new config on the unencrypted filesystem. It won't break the system, but it will leave unneeded, unencrypted files and folders behind.
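    The backup between the two disks is essentially this (a minimal sketch; the two mount points are placeholders for wherever each unlocked LUKS disk ends up mounted):

      # Mirror the primary data disk onto the second LUKS disk, run on demand.
      # --archive preserves permissions/ownership/timestamps; --delete makes
      # the target an exact mirror, so deletions only propagate when I run this.
      rsync --archive --delete --info=progress2 \
          /srv/disk1/data/ /srv/disk2/backup/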

  • How do you handle reboots, then? Do you go to each container and start it? And what if a container crashes?

  • I stop the containers before I shut the system down, so all my config and state is maintained.


    I also store all container mount points on my data drives; it just keeps things clean for me. Once everything has restarted and LUKS is unlocked, I simply restart the containers.


    One thing to note: this won't work if the system crashes, because the containers were never stopped and will restart on reboot. In that case you'll probably end up with the unencrypted files being created again.
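    Spelled out, the routine is roughly this (a sketch; the device name, mapper name, and paths are placeholders):

      # Planned shutdown: stop everything cleanly first.
      docker compose --project-directory /srv/disk1/compose down
      poweroff

      # After boot: unlock, mount, then bring the containers back.
      cryptsetup open /dev/sdX1 datadisk      # prompts for the LUKS passphrase
      mount /dev/mapper/datadisk /srv/disk1
      docker compose --project-directory /srv/disk1/compose up -d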

  • OK.


    I think I'd like to try to fix the LUKS partition not mounting on its own first, and then move on to some optimizations. Because that part should work, right?

  • Yes, you need to fix that. I can't really help there, but I'm sure someone else will be able to advise. I was just pointing out some practices I use with LUKS, such as rsync instead of RAID, and how I manage containers.

  • Sorry to jump in with a different setup (RAID+LUKS+Docker, but ext4 instead of BTRFS).

    But I also believe that docker and docker-compose should not start if the drive is not up, even if the container restart policy is set to always or unless-stopped.



    When I was using TrueNAS Scale, that issue never occurred. The major difference I observe is that on TrueNAS Scale my drives were unlocked via a keyfile, while on OpenMediaVault it's with a password.


    So I added keyfiles to my RAIDs, and voilà!
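    For anyone wanting to try the same, adding a keyfile to an existing LUKS device goes roughly like this (a sketch; the device path and keyfile location are placeholders, and the keyfile must live somewhere readable at boot, such as the root filesystem):

      # Create a random keyfile, readable by root only.
      dd if=/dev/urandom of=/root/.luks-key bs=512 count=1
      chmod 600 /root/.luks-key

      # Add it to a free key slot; the existing passphrase keeps working.
      cryptsetup luksAddKey /dev/sdX1 /root/.luks-key

      # Then reference it in /etc/crypttab so the device unlocks at boot:
      #   datadisk  /dev/sdX1  /root/.luks-key  luks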

    • Official Post

    LUKS is a pain for all services, not just docker. The compose plugin creates an override file to tell docker to start after the filesystems are mounted; it would be difficult to create Requires statements for just the disks that docker is using.


    docker compose itself doesn't know anything about filesystems being mounted, so not much can be done there.
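    To illustrate the idea (a hypothetical drop-in, not the plugin's actual file; the path and mount point are made up), systemd's standard RequiresMountsFor directive is how a unit gets tied to specific mounts:

      # /etc/systemd/system/docker.service.d/wait-for-data.conf (hypothetical)
      [Unit]
      # Do not start docker until this mount point is mounted.
      RequiresMountsFor=/srv/dev-disk-by-uuid-XXXX

      # Reload units afterwards:  systemctl daemon-reload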

    Are you automatically unlocking the LUKS container(s) with the keyfiles? Otherwise, I don't see how keyfiles are any different from a password (I use LUKS at work on a good number of machines).

