- LXC should be as secure as docker from inside the container. Outside the container, you can't have loose permissions on the OS files since they are now visible on the host. If someone has that much access to your host, your other security has already failed.
- id mapping for what?
- Not sure. You could definitely mount it readonly in the guest OS.
- Don't : ) I don't plan to create my own templates since there are so many available. I would rather put effort into a script that will do the setup on a new container.
- For now. I need to see what is needed to get the existing kvm plugin backup stuff to work with a container. I have a feeling there is a lot of work involved but I haven't played with lxc snapshots much.
LXC support for openmediavault-kvm plugin
-
-
2. e.g., root in LXC not same id/gid as root on host, etc. Privileged vs. unprivileged container.
4. I meant once the template is downloaded, what can be done to the LXC rootfs from outside, as opposed to just scripting changes to a running LXC.
5. No problem with as is.
-
root in LXC not same id/gid as root on host, etc. Privileged vs. unprivileged container.
They are privileged containers. I don't even know if you can specify id/gid. Feel free to research that. I have no plans to add unprivileged containers unless it is very painless codewise.
I meant once the template is downloaded, what can be done to the LXC rootfs from outside, as opposed to just scripting changes to a running LXC.
You should be able to add, edit, or remove files. It should be the same as changing files in a docker volume path. What did you have in mind? Not that I can change how LXC works.
-
id mapping is supported but it breaks things when the container boots. Seems like it requires unprivileged containers. https://libvirt.org/formatdomain.html#container-boot
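For reference, the id mapping the libvirt docs describe is an `<idmap>` element in the domain XML. The start/target/count values below are purely illustrative; this is the documented syntax, not something tested with the plugin:

```xml
<!-- Map container uid/gid 0-9 to host uid/gid 1000-1009 (illustrative values). -->
<idmap>
  <uid start='0' target='1000' count='10'/>
  <gid start='0' target='1000' count='10'/>
</idmap>
```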
-
I would not expect to be creating many LXC containers myself, but it would be good to know what is possible and how best to use this new functionality.
At the moment, to get an LXC rootfs on OMV6 via the plugin you have to create at least one LXC container, but not necessarily run it. Following that you might want to clone this rootfs (using rsync or zfs send/recv?) before making changes to it.
Changes might involve adding/altering files in the LXC rootfs before it's run, e.g. ssh keys, no ssh password auth, a systemd service definition, etc. You might also want to alter one or more running LXC containers, e.g. loop through a list of containers to run "apt update && apt upgrade -y", or act on a single container.
IIRC, Proxmox has "pct push" and "pct exec" commands to do this kind of thing from the host, the pct command being a wrapper for LXC commands. I might be wrong, but the libvirtd LXC driver does not seem to have anything equivalent.
So as you say, you could be editing the LXC rootfs directly in some cases, but with no equivalent of lxc exec I'm not sure how you'd script something like "apt update && apt upgrade -y" for all running LXC containers.
-
IIRC, Proxmox has "pct push" and "pct exec" commands to do this kind of thing from the host, the pct command being a wrapper for LXC commands. I might be wrong, but the libvirtd LXC driver does not seem to have anything equivalent.
Definitely exists. If proxmox has it, it exists in lxc and libvirt mostly exposes the same.
virsh -c lxc:///system lxc-enter-namespace LXC_CONTAINER_NAME -- /bin/ls -al /dev
-
ryecoaaron I had found that, but the command fails when the LXC container is off (as expected) and also when it is running:
```
root@omv6:~# virsh -c lxc:///system lxc-enter-namespace ublxc -- /bin/ls -al /root
error: Requested operation is not valid: domain is not running
root@omv6:~# virsh -c lxc:///system lxc-enter-namespace ublxc -- /bin/ls -al /root
libvirt: Cgroup error : Unable to write to '/sys/fs/cgroup/machine.slice/machine-lxc\x2d10354\x2dublxc.scope/cgroup.procs': Device or resource busy
error: internal error: Child process (10584) unexpected exit status 125
root@omv6:~#
```
Also, there's one minor bug I've found. Just navigating to the KVM / VMs web page with an LXC container listed causes the syslog to be spammed with this message:
```
Nov 11 15:38:22 omv6 libvirtd[1512]: this function is not supported by the connection driver: virDomainSnapshotNum
Nov 11 15:38:22 omv6 libvirtd[1512]: this function is not supported by the connection driver: virDomainSnapshotNum
Nov 11 15:38:32 omv6 libvirtd[1512]: this function is not supported by the connection driver: virDomainSnapshotNum
Nov 11 15:38:32 omv6 libvirtd[1512]: this function is not supported by the connection driver: virDomainSnapshotNum
```
-
I had found that, but the command fails when the LXC container is off (as expected) and also when it is running:
No idea. I am new to LXC too.
Also, there's one minor bug I've found. Just navigating to the KVM / VMs web page with an LXC container listed causes the syslog to be spammed with this message:
Thanks. I will fix that.
-
ryecoaaron A brief follow up on the "virsh -c lxc:///system lxc-enter-namespace lmslxc -- /bin/ls -al /root" error. I realise libvirt-lxc and lxc are not the same thing, but if the lxc-checkconfig command is still relevant it may point to a problem with cgroups on OMV6.
```
root@omv6:~# lxc --checkconfig
-bash: lxc: command not found
root@omv6:~# lxc-checkconfig
LXC version 4.0.6
Kernel configuration not found at /proc/config.gz; searching...
Kernel configuration found at /boot/config-5.19.7-2-pve
--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
newuidmap is not installed
newgidmap is not installed
Network namespace: enabled
--- Control groups ---
Cgroups: enabled
Cgroup v1 mount points:
Cgroup v2 mount points:
/sys/fs/cgroup
Cgroup v1 systemd controller: missing
Cgroup v1 freezer controller: missing
Cgroup namespace: required
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled, loaded
Macvlan: enabled, loaded
Vlan: enabled, not loaded
Bridges: enabled, not loaded
Advanced netfilter: enabled, loaded
CONFIG_NF_NAT_IPV4: missing
CONFIG_NF_NAT_IPV6: missing
CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_IP6_NF_TARGET_MASQUERADE: enabled, not loaded
CONFIG_NETFILTER_XT_TARGET_CHECKSUM: enabled, loaded
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled, loaded
FUSE (for use with lxcfs): enabled, not loaded
--- Checkpoint/Restore ---
checkpoint restore: enabled
CONFIG_FHANDLE: enabled
CONFIG_EVENTFD: enabled
CONFIG_EPOLL: enabled
CONFIG_UNIX_DIAG: enabled
CONFIG_INET_DIAG: enabled
CONFIG_PACKET_DIAG: enabled
CONFIG_NETLINK_DIAG: enabled
File capabilities:

Note : Before booting a new kernel, you can check its configuration
usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig
root@omv6:~#
```
I don't know for sure if the output is significant, but on Debian 11 you have this for cgroups:
```
root@omv6:~# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
root@omv6:~#
```
In this thread at the Debian forum there's an exchange about a problem caused by the absence of v1 cgroups. The notes at https://libvirt.org/drvlxc.html#control-groups-requirements imply you need v1 cgroups. Supposedly the answer is to use systemd in some way.
In contrast to Debian, the cgroups on my Kubuntu desktop are this:
```
chris@kubuntu:~$ sudo mount | grep cgroup
[sudo] password for chris:
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,misc)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
chris@kubuntu
```
Using a virt-manager LXC connection on my desktop shows that a command like "virsh -c lxc:///system lxc-enter-namespace lmslxc -- /bin/ls -al" works only if you add the --noseclabel option, e.g.:
```
root@kubuntu:~# virsh -c lxc:///system
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id      Name         State
-------------------------------
 12837   container1   running

virsh # lxc-enter-namespace container1 --noseclabel -- /bin/ls -l
total 2097232
lrwxrwxrwx   1 root root          7 Mar 17  2021 bin -> usr/bin
drwxr-xr-x   3 root root       4096 Nov  2 20:04 boot
drwxrwxr-x   2 root root       4096 Mar 17  2021 cdrom
drwxr-xr-x   3 root root        320 Nov 12 11:28 dev
drwxr-xr-x 162 root root      12288 Nov 11 06:40 etc
drwxr-xr-x   3 root root       4096 Mar 17  2021 home
lrwxrwxrwx   1 root root          7 Mar 17  2021 lib -> usr/lib
lrwxrwxrwx   1 root root          9 Mar 17  2021 lib32 -> usr/lib32
lrwxrwxrwx   1 root root          9 Mar 17  2021 lib64 -> usr/lib64
lrwxrwxrwx   1 root root         10 Mar 17  2021 libx32 -> usr/libx32
drwx------   2 root root      16384 Mar 17  2021 lost+found
drwxr-xr-x   5 root root       4096 Nov 12 06:27 media
drwxr-xr-x   2 root root       4096 May  9  2022 mnt
drwxr-xr-x   2 root root       4096 Feb  4  2021 opt
dr-xr-xr-x 306 root root          0 Nov 12 11:28 proc
drwx------   8 root root       4096 Nov 12 09:11 root
drwxr-xr-x  37 root root       1120 Nov 12 10:09 run
lrwxrwxrwx   1 root root          8 Mar 17  2021 sbin -> usr/sbin
drwxr-xr-x   2 root root       4096 Mar 17  2021 snap
drwxr-xr-x   2 root root       4096 Feb  4  2021 srv
-rw-------   1 root root 2147483648 Mar 17  2021 swapfile
dr-xr-xr-x  13 root root          0 Nov 12 11:28 sys
drwxrwxrwt  20 root root       4096 Nov 12 11:23 tmp
drwxr-xr-x  14 root root       4096 Feb  4  2021 usr
drwxr-xr-x  14 root root       4096 Feb  4  2021 var
virsh #
```
-
If you find an easy way to enable cgroups v1, maybe the plugin could do that.
-
ryecoaaron The solution is buried here https://www.debian.org/release…en.html#openstack-cgroups.
Quote: "add the parameters systemd.unified_cgroup_hierarchy=false and systemd.legacy_systemd_cgroup_controller=false to the kernel command line in order to override the default and restore the old cgroup hierarchy."
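On a Debian-based system like OMV6, those parameters would normally be added via GRUB. A hedged sketch, assuming a standard /etc/default/grub layout (back the file up first; the helper name is made up for illustration):

```shell
# add_cgroup_v1_params FILE
# Append the two systemd cgroup parameters to the GRUB_CMDLINE_LINUX
# line in the given GRUB defaults file.
add_cgroup_v1_params() {
    grub_file="$1"
    params="systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller=false"
    sed -i "s/^GRUB_CMDLINE_LINUX=\"\(.*\)\"/GRUB_CMDLINE_LINUX=\"\1 $params\"/" "$grub_file"
}

# On a real system, then regenerate the boot config and reboot:
#   add_cgroup_v1_params /etc/default/grub && update-grub && reboot
```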
Testing in OMV6 shows the old cgroup behaviour is restored, with a PVE kernel:
```
root@omv6:/# uname -a
Linux omv6 5.19.7-2-pve #1 SMP PREEMPT_DYNAMIC PVE 5.19.7-2 (Tue, 04 Oct 2022 17:18:40 + x86_64 GNU/Linux
root@omv6:/# dmesg | grep "command line"
[    0.027003] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.19.7-2-pve root=UUID=3987ad82-2a0f-4b8f-82ce-e6d7ca7c194e ro systemd.unified_cgroup_hierarchy=false systemd.legacy_systemd_cgroup_controller= false quiet
[    0.027120] Unknown kernel command line parameters "false BOOT_IMAGE=/boot/vmlinuz-5.19.7-2-pve", will be passed to user space.
root@omv6:/# mount | grep cgroup
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755,inode64)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,misc)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
root@omv6:/#
```
Unfortunately starting an LXC container now generates an error from the KVM plugin:
```
Unable to - poweronerror from service: GDBus.Error:org.freedesktop.machine1.NoMachineForPID: PID 3709 does not belong to any known machine
OMV\Exception: Unable to - poweronerror from service: GDBus.Error:org.freedesktop.machine1.NoMachineForPID: PID 3709 does not belong to any known machine in /usr/share/openmediavault/engined/rpc/kvm.inc:2174
Stack trace:
#0 [internal function]: OMVRpcServiceKvm->doCommand(Array, Array)
#1 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#2 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('doCommand', Array, Array)
#3 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Kvm', 'doCommand', Array, Array, 1)
#4 {main}
```
I tested this outside of OMV6 in another Debian 11 VM which has libvirt installed, and after adding those two boot params libvirt-lxc seems to work OK, e.g.:
```
root@debian11vm:~# virsh -c lxc:///system list
 Id     Name         State
------------------------------
 2586   container1   running

root@debian10vm:~# virsh -c lxc:///system lxc-enter-namespace container1 --noseclabel -- /bin/bash -c "apt update && apt upgrade -y"
Hit:1 http://archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]
Fetched 224 kB in 0s (484 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up-to-date.
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
#
# News about significant security updates, features and services will
# appear here to raise awareness and perhaps tease /r/Linux ;)
# Use 'pro config set apt_news=false' to hide this and future APT news.
#
0 to upgrade, 0 to newly install, 0 to remove and 0 not to upgrade.
root@debian11vm:~#
```
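Building on that, the earlier wish to script "apt update && apt upgrade -y" across all running containers could be sketched as a loop over `virsh list --name`. This is untested beyond the single-container case shown above, and it assumes lxc-enter-namespace actually works on the host (i.e. after the cgroup change, with --noseclabel); the virsh binary is a parameter only so the function is easy to test with a stub:

```shell
# run_in_all_lxc VIRSH CMD [ARGS...]
# Run CMD inside every running libvirt LXC container.
# Normally invoked as: run_in_all_lxc virsh /bin/bash -c "apt update && apt upgrade -y"
run_in_all_lxc() {
    virsh_bin="$1"; shift
    # "list --name" prints one running domain name per line.
    "$virsh_bin" -c lxc:///system list --name | while read -r name; do
        [ -n "$name" ] || continue
        echo "=== $name ==="
        "$virsh_bin" -c lxc:///system lxc-enter-namespace "$name" --noseclabel -- "$@"
    done
}
```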
Of course, I have no idea of the possible unwanted side effects of using those kernel boot params.
-
Unfortunately starting an LXC container now generates an error from the KVM plugin:
Just for existing LXC containers or new containers as well? Not a huge fan of editing kernel parameters with plugins.
Of course, I have no idea of the possible unwanted side effects of using those kernel boot params.
Maybe docker issues?
-
ryecoaaron The error occurs for both existing and newly added containers. If editing kernel parameters doesn't fit with OMV6, then is this addition to the KVM plugin worth pursuing? Manipulating cgroups looks like a tricky thing to do unless there is a systemd way to do it. I doubt you'd want to develop a separate proper LXC plugin.
From my casual reading about containers, libvirt LXC seems to have a tiny footprint on the web, often seen as a quick and dirty way of creating a privileged container. Ubuntu may still promote LXD/LXC and Proxmox persists with its use of LXC containers, but LXC is dwarfed by docker/kubernetes use.
Do you think many OMV6 users would even make much use of such an option, as opposed to using docker?
-
If editing kernel parameters doesn't fit with OMV6
It might fit in omv6 but I don't want to add it to the plugin.
then is this addition to the KVM plugin worth pursuing?
Because you can't run commands inside the container from outside the container? If you can't do what you need, I'm sorry, but the containers are working perfectly for my needs. I spent a lot of hours adding this and I'm not going to rip it out because some use cases don't work. And if you aren't using lxc, it shouldn't affect your system at all.
I doubt if you'd want to develop a separate proper LXC plugin.
Definitely not. I am able to use a lot of code that VMs use and it doesn't make sense to separate them.
From my casual reading about containers, libvirt LXC seems to have a tiny footprint on the web, often seen as a quick and dirty way of creating a privileged container. Ubuntu may still promote LXD/LXC and Proxmox persists with its use of LXC containers, but LXC is dwarfed by docker/kubernetes use.
Do you think many OMV6 users would even make much use of such an option, as opposed to using docker?
Probably not, but OMV users are not my motivation for adding it. I am my motivation for adding it. No one has to use it. It isn't supposed to replace docker at all. I do a lot of testing where I need a different full distro or version, and I can spin up an lxc container faster than I could install and configure a VM.
-
ryecoaaron It's not about what I need; I was thinking of your development time, not knowing how complete your work was.
Obviously it's not meant to replace docker. I suppose commands over ssh are an alternative way to maintain a container you wanted to keep. I read that those systemd kernel boot params are deprecated from systemd 252 onward, and Debian testing is already at systemd 251. So changing from unified back to hybrid cgroups looks like a non-starter anyway, or at least would have a limited lifetime.
Creating an LXC container via libvirt is much faster than a full VM, so it is a useful option to have. Happy to keep testing.
The templates are dependent on the full LXC package, and the default config creates an lxc bridge, but is it ever going to be used? It can be turned on in the relevant LXC config file.
-
I suppose commands over ssh are an alternative way to maintain a container you wanted to keep.
I guess I still don't know why you are focusing on remote commands. I am using the containers like a VM. I login via ssh or virsh console and do what I need. This is great for when I need something for a very short time (happens often) and now I don't have to do the install.
The templates are dependent on the full LXC package and the default config creates a lxc bridge, but it is ever going to be used? It can be turned on in the relevant LXC config file.
I guess I could add it to the list of network types. I hadn't needed it for anything. So, I wasn't worried about it.
-
I tried this by installing the image "openwrt; 22.03; amd64; default; 20221112_11:57".
The following error message was received:
```
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; awk -i inplace -F":" 'BEGIN{OFS = ":"} /root/{$2="$y$j9T$wp.Vxne12yx8NoEPBC0YX/$oPuRduHdwj01CRgFpfwJEZv.hL8Aw2Lip1TdJ0Io7i1"}{ print }' /srv/omv2/Tool/LXC/openwrt/etc/shadow' with exit code '2':
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; awk -i inplace -F":" 'BEGIN{OFS = ":"} /root/{$2="$y$j9T$wp.Vxne12yx8NoEPBC0YX/$oPuRduHdwj01CRgFpfwJEZv.hL8Aw2Lip1TdJ0Io7i1"}{ print }' /srv/omv2/Tool/LXC/openwrt/etc/shadow' with exit code '2': in /usr/share/php/openmediavault/system/process.inc:217
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/kvm.inc(2901): OMV\System\Process->execute(Array, 2)
#1 /usr/share/openmediavault/engined/rpc/kvm.inc(1183): OMVRpcServiceKvm->resetLxcPassword('/srv/omv2/Tool/...')
#2 [internal function]: OMVRpcServiceKvm->setVm(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('setVm', Array, Array)
#5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Kvm', 'setVm', Array, Array, 1)
#6 {main}
```
-
It failed to reset the root password. I have no plans to test every image that linuxcontainers offers but I can make that error non-fatal.
-
It failed to reset the root password. I have no plans to test every image that linuxcontainers offers but I can make that error non-fatal.
Thanks, wait for better features.
-
wait for better features.
What better features?