Posts by yordogonfihr

    I stop the containers before I shut down the system - all my config and state are maintained.


    I also store all container mount points on my data drives. It just keeps things clean for me anyway. Once everything is restarted and LUKS is unlocked, I just restart the containers.
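
    Roughly, my routine looks something like this (a minimal sketch; the compose path and LUKS mapper name below are placeholders, not my actual ones):

    ```bash
    # Before shutting down: stop the stack so nothing writes after the
    # encrypted mount goes away.
    docker compose -f /srv/encrypted/compose/docker-compose.yml down

    # ...power off, power back on, then unlock LUKS and remount the data drive...
    cryptsetup open /dev/sda1 data-crypt
    mount /dev/mapper/data-crypt /srv/encrypted

    # Only then bring the containers back up.
    docker compose -f /srv/encrypted/compose/docker-compose.yml up -d
    ```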


    One thing to note: this won't work if the system crashes, because the containers will restart on reboot since they were never stopped. In that case you'll probably end up with the unencrypted files being created again.

    OK


    I think I'd like to try to fix the LUKS volume not mounting on its own first, and then I'll move on to some optimizations. That part should work, right?

    Hopefully you'll get that resolved. This is why I didn't use RAID with LUKS in the end; I felt it was too complex and more things could go wrong. Instead I went for two disks, each with LUKS, and used rsync to back up between them.


    RAID is fundamentally for high availability, whereas I see a NAS use case as 'backup', and rsync meets that use case with fewer complications. If something goes wrong, I can control when and how backups are created, so issues are only replicated on my command.
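
    The backup itself is just a manual rsync run, something along these lines (the two mount points are placeholders for my two LUKS disks):

    ```bash
    # One-way mirror from the primary disk to the backup disk.
    # -aHAX keeps permissions, hard links and extended attributes;
    # --delete makes the backup an exact mirror of the source.
    rsync -aHAX --delete /srv/disk1/ /srv/disk2/
    ```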


    I would definitely recommend not using 'unless-stopped' with LUKS. You will find that the containers create new config unencrypted, as they will probably restart before you get to unlock the LUKS encryption. It won't break the system, but it will leave unneeded, unencrypted files and folders behind.
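
    If you already have containers deployed, one way to drop the automatic restart is something like this (a sketch using the plain Docker CLI; with compose-managed stacks you would set `restart: "no"` in the YML instead):

    ```bash
    # Switch every existing container to restart policy "no" so nothing comes
    # back up on boot before the LUKS volume has been unlocked.
    docker update --restart=no $(docker ps -aq)
    ```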

    How do you handle reboots then? Do you go to each container and start it? What if the container crashes?

    Are you sure you shut down the containers before any type of reboot was performed? I found with my LUKS setup that all containers needed to be stopped and not set to restart automatically. If you don't do this, the containers will restart before the encryption is unlocked and will write new folders to the unencrypted file system.


    I'm not saying this is the root cause, but it sounds like it might be contributing to your issues.
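
    A quick way to check is to list the restart policy Docker has recorded for each container, for example:

    ```bash
    # Anything reporting "unless-stopped" or "always" will come back up on
    # boot, before the LUKS volume is unlocked.
    docker inspect -f '{{.Name}}: {{.HostConfig.RestartPolicy.Name}}' $(docker ps -aq)
    ```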

    I'm indeed not shutting down the containers, and they are set to "restart: unless-stopped". But this wasn't an issue in the past.


    I think the bigger issue is why the LUKS+BTRFS RAID doesn't mount as soon as the disks are decrypted. That used to be the behaviour, but not anymore...

    I'm running the latest OMV on an RPi 4B with an external USB dual-drive enclosure, with BTRFS set up as a mirrored RAID and encrypted with LUKS. The Docker containers' data and YML files are located on the encrypted drive.


    After working fine for a couple of months, my NAS started acting up out of nowhere. All my Docker containers got reset and I thought I had lost all my data. It turned out that the external drives were not mounted; a simple "mount -a" and restarting all containers fixed the issue.


    However, it used to be that after a reboot, a simple unlock of the drives would get everything running. The Docker containers would "wait" for the drive to show up. Now, out of the blue, the containers are already running (on the main disk instead of the external one) with no data, and unlocking LUKS does not initiate a mount. The unlocked disk shows in the list, but it shows as unmounted.


    I created a temporary script to remount and restart the containers, which fixes the issue. However, the containers still create initialization data on my main drive when starting, which is a huge waste... So I see this as a temporary workaround until I find a way to fix this properly.
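
    Roughly, the script does something like this (a sketch; the compose path is a placeholder for where the YML files live on the encrypted drive):

    ```bash
    #!/bin/sh
    # Temporary workaround: remount everything from fstab, then restart the
    # compose stack so the containers pick up their real data again.
    set -e
    COMPOSE_DIR=/path/to/compose   # placeholder

    mount -a
    docker compose -f "$COMPOSE_DIR/docker-compose.yml" down
    docker compose -f "$COMPOSE_DIR/docker-compose.yml" up -d
    ```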


    Any idea why this behaviour suddenly started? Any idea how I could fix this?

    Hello,

    I'm building a NAS machine with two drives running as a BTRFS RAID1 array.

    My current setup consists of:

    1. Both drives encrypted using LUKS, `/dev/sda-encrypt` and `/dev/sdb-encrypt`

    2. Both drives running BTRFS as RAID1 - `/dev/dm-0`

    3. OMV mounts `/dev/dm-0` as `/srv/SOME_UUID`
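
    For context, the equivalent command sequence for that stack is roughly the following (a sketch; `/dev/sda` and `/dev/sdb` stand in for the raw drives):

    ```bash
    # Encrypt and unlock both drives (mapper names match the ones above).
    cryptsetup luksFormat /dev/sda
    cryptsetup luksFormat /dev/sdb
    cryptsetup open /dev/sda sda-encrypt
    cryptsetup open /dev/sdb sdb-encrypt

    # One BTRFS filesystem across both mappers, data and metadata mirrored.
    mkfs.btrfs -m raid1 -d raid1 /dev/mapper/sda-encrypt /dev/mapper/sdb-encrypt

    # OMV then mounts it via one of the device-mapper nodes.
    mount /dev/dm-0 /srv/SOME_UUID
    ```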


    So far so good. Works great after setup, but not so much after performing a reboot.


    As expected, LUKS doesn't automatically unlock the drives, so the OS cannot mount the RAID1 array. This is fine and the kind of behaviour I'm looking for.

    However, after unlocking both drives, OMV shows the RAID1 filesystem as unmounted. There's no option in the UI to remount the filesystem, so I'm forced to:

    1. Log in to a terminal.

    2. Identify the UUID of the unlocked drive.

    3. Mount the drive at the correct `/srv/SOME_UUID` path.
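
    In shell terms, that recovery boils down to something like this (SOME_UUID stays a placeholder):

    ```bash
    # Find the filesystem UUID of the unlocked BTRFS array.
    blkid /dev/dm-0

    # Mount it back where OMV expects it.
    mount /dev/dm-0 /srv/SOME_UUID
    ```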


    Is there an easier way to remount a filesystem that OMV should already have all the information about?

    Best would be through the UI, of course. But even some kind of "fix" command would be nice. Does it exist?