Have a look at
/var/log/libvirt/qemu
on the host. I don't have the plugin running, but that is where I find the VM logs on my laptop.
That was it, thank you very much macom !
Hi! How can I move an LXC container to another OMV instance?
Just copy the LXC folder to the other machine?
When I try to create a new LXC container and use the folder with the files from the previous container, I see an error that the folder is not empty. I can't understand how I can use my LXC container on a different OMV instance on a different server.
Create the LXC first, then replace the folder with one you want to copy over.
As you discovered, unlike a VM where you can select an existing virtual drive, LXC requires pulling a base image during the initial creation. Once the base is pulled, the machine is created. Simply replacing that folder with the one that is already made should be enough to make it work. However, if you plan on running both the old LXC and the new one on the same network, you may run into naming issues, as an LXC defaults to LXC_NAME as the visible name on the network.
If you want to change that, you need to edit the name in two locations, /etc/hostname and /etc/hosts, and then restart the LXC.
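To illustrate, a minimal sketch of that rename. It runs against a scratch copy so it is safe to try anywhere; to apply it for real, point ROOTFS at the container's actual rootfs (for example /var/lib/lxc/&lt;name&gt;/rootfs — that path is an assumption and depends on your setup) with the container stopped.

```shell
# Build a scratch "rootfs" so this demo touches nothing real.
ROOTFS="$(mktemp -d)"
mkdir -p "$ROOTFS/etc"
echo "oldname" > "$ROOTFS/etc/hostname"
printf '127.0.0.1\tlocalhost\n127.0.1.1\toldname\n' > "$ROOTFS/etc/hosts"

# The actual rename: write the new name into /etc/hostname, then
# replace every occurrence of the old name in /etc/hosts.
NEW="newname"
OLD="$(cat "$ROOTFS/etc/hostname")"
echo "$NEW" > "$ROOTFS/etc/hostname"
sed -i "s/$OLD/$NEW/g" "$ROOTFS/etc/hosts"
```

Afterwards restart the container (lxc-stop / lxc-start) so the new name shows up on the network.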
I have almost no info. How are you starting the container? docker-compose? If you aren't binding the container to a network or IP address, you are binding it to 0.0.0.0, which is all network interfaces. I don't have enough info to know what 10.0.3.1 is, but the output of ip a would help.
I am starting with this script:
#!/bin/bash
IP_LOOKUP="$(ip route get 8.8.8.8 | awk '{ print $NF; exit }')" # May not work for VPN / tun0
IPv6_LOOKUP="$(ip -6 route get 2001:4860:4860::8888 | awk '{ print $10; exit }')" # May not work for VPN / tun0
IP="${IP:-$IP_LOOKUP}" # use $IP, if set, otherwise IP_LOOKUP
IPv6="${IPv6:-$IPv6_LOOKUP}" # use $IPv6, if set, otherwise IPv6_LOOKUP
DOCKER_CONFIGS="/srv/dev-disk-by-id-ata-Micron_1100_MTFDDAV256TBN_17501A32891E-part3/dockerconfig/piHole"
echo "IP: ${IP} - IPv6: ${IPv6}"
# Default ports + daemonized docker container
docker run -d \
--name pihole \
-p 53:53/tcp -p 53:53/udp -p 80:80 \
-v "${DOCKER_CONFIGS}/pihole/:/etc/pihole/" \
-v "${DOCKER_CONFIGS}/dnsmasq.d/:/etc/dnsmasq.d/" \
-e ServerIP="$IP" \
-e TZ="Europe/Berlin" \
-e WEBPASSWORD="jule123" \
--restart=always \
pihole/pihole:latest
docker logs pihole 2> /dev/null | grep 'password:'
# -e ServerIPv6="${IPv6:-$(ip -6 route get 2001:4860:4860::8888 | awk '{ print $10; exit }')}" \
# -e VIRTUAL_HOST="${IP}":81 \
ip a tells me:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 90:1b:0e:fc:2c:60 brd ff:ff:ff:ff:ff:ff
inet 192.168.177.3/24 brd 192.168.177.255 scope global enp0s31f6
valid_lft forever preferred_lft forever
inet6 2a00:xxxx:4509:ac00:xxxx:eff:fefc:2c60/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 2662sec preferred_lft 2662sec
inet6 fe80::921b:eff:fefc:2c60/64 scope link
valid_lft forever preferred_lft forever
3: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
valid_lft forever preferred_lft forever
4: wgnet0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 10.192.122.1/24 scope global wgnet0
valid_lft forever preferred_lft forever
5: br-6566530af5f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:6b:a9:a1:aa brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/16 brd 172.18.255.255 scope global br-6566530af5f3
valid_lft forever preferred_lft forever
inet6 fe80::42:6bff:fea9:a1aa/64 scope link
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:9e:09:26:34 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:9eff:fe09:2634/64 scope link
valid_lft forever preferred_lft forever
7: br-05e0294fb85d: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ee:85:07:6a brd ff:ff:ff:ff:ff:ff
inet 172.31.0.1/16 brd 172.31.255.255 scope global br-05e0294fb85d
valid_lft forever preferred_lft forever
inet6 fe80::42:eeff:fe85:76a/64 scope link
valid_lft forever preferred_lft forever
8: br-1549d5198bb3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e1:6c:ce:a8 brd ff:ff:ff:ff:ff:ff
inet 172.19.0.1/16 brd 172.19.255.255 scope global br-1549d5198bb3
valid_lft forever preferred_lft forever
inet6 fe80::42:e1ff:fe6c:cea8/64 scope link
valid_lft forever preferred_lft forever
...
88: macvtap0@enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
link/ether 52:54:00:e5:86:5d brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fee5:865d/64 scope link
valid_lft forever preferred_lft forever
90: veth4e3b6e3@if89: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-bceb55143ce1 state UP group default
link/ether 46:c0:c1:36:4d:20 brd ff:ff:ff:ff:ff:ff link-netnsid 13
inet6 fe80::44c0:c1ff:fe36:4d20/64 scope link
valid_lft forever preferred_lft forever
91: macvtap1@enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 500
link/ether 52:54:00:84:97:d6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:fe84:97d6/64 scope link
valid_lft forever preferred_lft forever
95: vethf2b1d18@if94: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6566530af5f3 state UP group default
link/ether 76:d2:7a:a1:bf:de brd ff:ff:ff:ff:ff:ff link-netnsid 7
inet6 fe80::74d2:7aff:fea1:bfde/64 scope link
valid_lft forever preferred_lft forever
97: vethe474ade@if96: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6566530af5f3 state UP group default
link/ether 46:f4:93:1b:6c:fe brd ff:ff:ff:ff:ff:ff link-netnsid 17
inet6 fe80::44f4:93ff:fe1b:6cfe/64 scope link
valid_lft forever preferred_lft forever
99: veth22debcf@if98: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6566530af5f3 state UP group default
link/ether fa:98:00:a6:6e:e7 brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::f898:ff:fea6:6ee7/64 scope link
valid_lft forever preferred_lft forever
101: veth28f4b14@if100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-6566530af5f3 state UP group default
link/ether 3a:9d:b4:c1:11:8d brd ff:ff:ff:ff:ff:ff link-netnsid 18
inet6 fe80::389d:b4ff:fec1:118d/64 scope link
valid_lft forever preferred_lft forever
So, it is lxcbr0
Quote: "Why would you purge it after I told you it wasn't causing the problem? You could try disabling all NAT networks in the kvm plugin. It shouldn't start a dnsmasq listener on that network then."
That was before you said it.
Quote: "A bridge network defined in the plugin or a bridge network interface defined at the OMV network level? The latter should be used and is described thoroughly in the kvm guide."
A bridge defined in the plugin.
The guide states what I need to do. I am a bit worried, though, about the remark that "If you have services configured on the host with this network interface (for example, Wireguard), you will need to reconfigure them to work with the bridge".
What except for wireguard could be affected?
PostUp = iptables -A FORWARD -i wgnet0 -j ACCEPT; iptables -A FORWARD -o wgnet0 -j ACCEPT; iptables -t nat -A POSTROUTING -o enp0s31f6 -j MASQUERADE
PostDown = iptables -D FORWARD -i wgnet0 -j ACCEPT; iptables -D FORWARD -o wgnet0 -j ACCEPT; iptables -t nat -D POSTROUTING -o enp0s31f6 -j MASQUERADE
So, I would have to replace the enp0s31f6 by br0?
Best regards,
Hendrik
What except for wireguard could be affected?
Anything explicitly referring to the enp0s31f6 adapter. Wireguard is about the only plugin I can think of that does that.
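For the PostUp/PostDown lines posted above, a sketch of the adjusted hooks, assuming the new bridge ends up named br0 (the bridge name is an assumption; use whatever name you gave it in the OMV network settings):

```ini
# /etc/wireguard config fragment -- sketch only, not tested here.
# Only the MASQUERADE rules change: the NAT egress interface moves
# from the physical NIC (enp0s31f6) to the bridge (br0).
PostUp = iptables -A FORWARD -i wgnet0 -j ACCEPT; iptables -A FORWARD -o wgnet0 -j ACCEPT; iptables -t nat -A POSTROUTING -o br0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wgnet0 -j ACCEPT; iptables -D FORWARD -o wgnet0 -j ACCEPT; iptables -t nat -D POSTROUTING -o br0 -j MASQUERADE
```

Restart the WireGuard interface (wg-quick down/up) after editing so the new rules are installed.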
-p 53:53/tcp -p 53:53/udp -p 80:80 \
If you just change this line to the following, it would only listen on one interface.
-p 192.168.177.3:53:53/tcp -p 192.168.177.3:53:53/udp -p 192.168.177.3:80:80 \
Hi everyone!
I found an issue: I can't normally install and start an SSH server on Ubuntu in an LXC container.
When I try apt install openssh-server in the LXC, I see this:
Setting up openssh-server (1:8.2p1-4ubuntu0.5) ...
rescue-ssh.target is a disabled or a static unit, not starting it.
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.
invoke-rc.d: initscript ssh, action "start" failed.
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/system/service.d
└─zzz-lxc-service.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-04-05 07:28:40 UTC; 8ms ago
Docs: man:sshd(8)
man:sshd_config(5)
Process: 3941 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=1/FAILURE)
CPU: 12ms
Apr 05 07:28:40 LXCNAME systemd[1]: ssh.service: Failed with result 'exit-code'.
Apr 05 07:28:40 LXCNAME systemd[1]: Failed to start OpenBSD Secure Shell server.
dpkg: error processing package openssh-server (--configure):
installed openssh-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
openssh-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Can somebody help me understand why I can't normally use the OpenSSH server in LXC?
I tried reinstalling Ubuntu 20.04 in the container two times, but this issue persists.
Thanks!
This
If you just change this line to the following, it would only listen on one interface.
-p 192.168.177.3:53:53/tcp -p 192.168.177.3:53:53/udp -p 192.168.177.3:80:80 \
unfortunately, does not work:
./docker_run.sh
IP: 0 - IPv6: src
55bf877fb5b2a15cbed8438e4a657bc234984aeacd0f47c28b2550168bc92c79
docker: Error response from daemon: driver failed programming external connectivity on endpoint pihole (af4ea568bc3269a4a7ed60ebbcf8ffc2642b447f0de063c1e860e188b5118312): Bind for 192.168.177.3:53 failed: port is already allocated.
netstat -tulpn |grep ":53 "
tcp 0 0 10.0.3.1:53 0.0.0.0:* LISTEN 2246/dnsmasq
udp 0 0 10.0.3.1:53 0.0.0.0:* 2246/dnsmasq
Do you understand this?
Greetings,
Hendrik
I'm not sure why you are using that script. You really should use docker-compose. But since something is still listening on 0.0.0.0 (all interfaces), your docker container can't bind the port. Is the container still running? What is the output of: ps aux | grep dnsmasq
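As a sketch, the same container translated into a compose file. This is untested against your setup: the host IP, timezone, and bind-mount paths are taken from the script above, and WEBPASSWORD is a placeholder you should change.

```yaml
# docker-compose sketch of the pihole container from the script above.
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    ports:
      # Binding to the host IP keeps pihole off the other interfaces.
      - "192.168.177.3:53:53/tcp"
      - "192.168.177.3:53:53/udp"
      - "192.168.177.3:80:80/tcp"
    environment:
      TZ: "Europe/Berlin"
      ServerIP: "192.168.177.3"
      WEBPASSWORD: "changeme"   # placeholder -- set your own
    volumes:
      - "/srv/dev-disk-by-id-ata-Micron_1100_MTFDDAV256TBN_17501A32891E-part3/dockerconfig/piHole/pihole:/etc/pihole"
      - "/srv/dev-disk-by-id-ata-Micron_1100_MTFDDAV256TBN_17501A32891E-part3/dockerconfig/piHole/dnsmasq.d:/etc/dnsmasq.d"
    restart: always
```

Start it with docker compose up -d from the directory containing the file. Note this alone won't resolve a port conflict with a dnsmasq that is already listening on port 53.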
Quote: "can somebody help me why i cant normally use openssh server on LXC? Tried reinstall Ubuntu 20.04 in container two times, but shis issue still on"
I have done this many times in an LXC without problems. However, if you just use the basic apt-get install openssh-server, it may not work, as it doesn't install the full server setup.
Try using apt-get -y install openssh-server^. The ^ at the end installs the full server meta-package and is the same package that is installed using tasksel. I normally also install the regular server package too, apt-get -y install server^, so that the LXC operates pretty much the same as an Ubuntu minimal install from an ISO.
if you just use the basic apt-get install openssh-server it may not work, as it doesn't install the full server setup.
Odd, I have never had to do this. I build all of the Ubuntu templates at work and they are very minimal + openssh-server. What have you seen that makes it not work? openssh-server is not a meta package.
Odd, I have never had to do this. I build all of the Ubuntu templates at work and they are very minimal + openssh-server. What have you seen that makes it not work? openssh-server is not a meta package.
From my experience it doesn't install the server daemon (sshd) and doesn't generate keys. Using the meta-package, either via tasksel or by appending the ^ to the install, does though.
I have done this many times in an LXC without problem, however, if you just use the basic apt-get install openssh-server it may not work, as it doesn't install the full server setup.
Try using apt-get -y install openssh-server^. The ^ at the end installs the full server meta-package, and is the same package that is installed using tasksel. I normally also install the regular server package too, apt-get -y install server^ so that the LXC operated pretty much the same as an ubuntu minimal install from an ISO.
Can't understand why, but I tried this and got the same result.
root@LXCNAME:~# apt-get -y install openssh-server^
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'python3-distro' for task 'openssh-server'
Note, selecting 'python3-urllib3' for task 'openssh-server'
Note, selecting 'libwrap0' for task 'openssh-server'
Note, selecting 'openssh-sftp-server' for task 'openssh-server'
Note, selecting 'python3-idna' for task 'openssh-server'
Note, selecting 'python3-six' for task 'openssh-server'
Note, selecting 'python3-requests' for task 'openssh-server'
Note, selecting 'ncurses-term' for task 'openssh-server'
Note, selecting 'openssh-server' for task 'openssh-server'
Note, selecting 'ssh-import-id' for task 'openssh-server'
Note, selecting 'python3-certifi' for task 'openssh-server'
Note, selecting 'python3-chardet' for task 'openssh-server'
libwrap0 is already the newest version (7.6.q-30).
libwrap0 set to manually installed.
ncurses-term is already the newest version (6.2-0ubuntu2).
ncurses-term set to manually installed.
python3-certifi is already the newest version (2019.11.28-1).
python3-chardet is already the newest version (3.0.4-4build1).
python3-distro is already the newest version (1.4.0-1).
python3-idna is already the newest version (2.8-1).
python3-requests is already the newest version (2.22.0-2ubuntu1).
python3-six is already the newest version (1.14.0-2).
ssh-import-id is already the newest version (5.10-0ubuntu1).
ssh-import-id set to manually installed.
openssh-server is already the newest version (1:8.2p1-4ubuntu0.5).
openssh-sftp-server is already the newest version (1:8.2p1-4ubuntu0.5).
openssh-sftp-server set to manually installed.
python3-urllib3 is already the newest version (1.25.8-2ubuntu0.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up openssh-server (1:8.2p1-4ubuntu0.5) ...
rescue-ssh.target is a disabled or a static unit, not starting it.
Job for ssh.service failed because the control process exited with error code.
See "systemctl status ssh.service" and "journalctl -xe" for details.
invoke-rc.d: initscript ssh, action "start" failed.
● ssh.service - OpenBSD Secure Shell server
Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/system/service.d
└─zzz-lxc-service.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2023-04-05 11:40:50 UTC; 6ms ago
Docs: man:sshd(8)
man:sshd_config(5)
Process: 12389 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=1/FAILURE)
CPU: 14ms
dpkg: error processing package openssh-server (--configure):
installed openssh-server package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
openssh-server
E: Sub-process /usr/bin/dpkg returned an error code (1)
Can't understand what's wrong, or why I can't correctly install and start the SSH server :(
it doesn't install the server daemon (sshd)
It most certainly does - https://packages.debian.org/bu…4/openssh-server/filelist. You will see /usr/sbin/sshd and the unit file /lib/systemd/system/ssh.service in the list.
doesn't generate keys
It should do that as well unless they already exist - https://salsa.debian.org/ssh-t…enssh-server.postinst#L59
Cant understand what's wrong, why i cant correctly install and start ssh server=(
What is the output of: sudo /usr/sbin/sshd
It most certainly does - https://packages.debian.org/bu…4/openssh-server/filelist. You will see /usr/sbin/sshd and the unit file /lib/systemd/system/ssh.service in the list.
It should do that as well unless they already exist - https://salsa.debian.org/ssh-t…enssh-server.postinst#L59
I just tried again in a fresh LXC, and it seems to work. It might have been a short-lived quirk in a package when I ran into the problem, but the tasksel route worked, so I started using the ^.
What is the output of: sudo /usr/sbin/sshd
root@LXCNAME:~# sudo /usr/sbin/sshd
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/etc/ssh/ssh_host_rsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/etc/ssh/ssh_host_ecdsa_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/etc/ssh/ssh_host_ed25519_key' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
sshd: no hostkeys available -- exiting.
chmod 400 on my key files helped me.
Looks like you changed the permissions on the private SSH keys.
sudo chmod 600 /etc/ssh/ssh_host_{ecdsa,ed25519,rsa}_key
then start the service
sudo systemctl start ssh
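For reference, the permission fix can be sanity-checked on a scratch file without root; on the real system the files are the /etc/ssh/ssh_host_*_key ones named in the sshd warnings above.

```shell
# Scratch file standing in for a host key -- safe to run anywhere.
KEYDIR="$(mktemp -d)"
KEY="$KEYDIR/ssh_host_ed25519_key"
printf 'fake key material for the demo\n' > "$KEY"

chmod 644 "$KEY"    # reproduces the "too open" state sshd rejects
chmod 600 "$KEY"    # owner read/write only, which sshd accepts
stat -c '%a' "$KEY" # prints: 600
```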
I am planning on closing this thread as well. I think the plugin is very stable - I use it a ton. If people have issues, please open new threads. I will give it a bit before doing this.
Hello,
the output of ps aux is
ps aux | grep dnsmasq
dnsmasq 2071 0.0 0.0 14976 2208 ? S 17:57 0:00 dnsmasq --conf-file=/dev/null -u dnsmasq --strict-order --bind-interfaces --pid-file=/run/l
/dnsmasq.pid --listen-address 10.0.3.1 --dhcp-range 10.0.3.2,10.0.3.254 --dhcp-lease-max=253 --dhcp-no-override --except-interface=lo --interface=lxcbr0 --dhc
leasefile=/var/lib/misc/dnsmasq.lxcbr0.leases --dhcp-authoritative
root 25225 0.0 0.0 6740 644 pts/0 S+ 18:10 0:00 grep --color dnsmasq
I am using that script as I found it in some guide, back then. I do not quite remember why I did not use docker-compose (I do use it elsewhere).
On my other topic, I now configured the VM to use a br0 set up in the main OMV-Network settings.
After I applied the network settings, I lost access to the server from my laptop. It would not reply to a ping. A ping from the shell (physical access) to the server IP (192.168.177.3) worked.
After reboot, it worked.
It may be worth adding a remark in the guide.
Best regards,
Hendrik