That did the trick! Thank you very much. Even though I have backups, I was mightily scared!
Posts by FatherRandom
-
-
I am confused about the latest kernel. I thought the pve version was the correct one; at least, that's the one that was previously used in combination with Proxmox.
-
-
I've updated my OMV installation as well. I noticed a KVM package being held back, so I enabled backports in OMV-Extras, as per the instructions here, to be able to finish the updates. Unfortunately, that caused my ZFS file system to vanish. The server boots fine from the separate SSD, but everything on the ZFS pool on the two data-storage SSDs is inaccessible. The drives themselves are fine according to SMART.
I'm afraid of taking a wrong step and killing something permanently, if that hasn't already happened. Is my problem related to the above, or shall I open a new thread?
Attempting to access the zfs pool yields:
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; zfs list -p -H -t all -o name,type 2>&1' with exit code '1': The ZFS modules cannot be auto-loaded. Try running 'modprobe zfs' as root to manually load them.

OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; zfs list -p -H -t all -o name,type 2>&1' with exit code '1': The ZFS modules cannot be auto-loaded. Try running 'modprobe zfs' as root to manually load them. in /usr/share/php/openmediavault/system/process.inc:247
Stack trace:
#0 /usr/share/omvzfs/Utils.php(450): OMV\System\Process->execute()
#1 /usr/share/omvzfs/Utils.php(262): OMVModuleZFSUtil::exec()
#2 /usr/share/openmediavault/engined/rpc/zfs.inc(297): OMVModuleZFSUtil::getZFSFlatArray()
#3 [internal function]: OMVRpcServiceZFS->listPools()
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array()
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(155): OMV\Rpc\ServiceAbstract->callMethod()
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(628): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}()
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(152): OMV\Rpc\ServiceAbstract->execBgProc()
#8 /usr/share/openmediavault/engined/rpc/zfs.inc(304): OMV\Rpc\ServiceAbstract->callMethodBg()
#9 [internal function]: OMVRpcServiceZFS->listPoolsBg()
#10 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array()
#11 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#12 /usr/sbin/omv-engined(544): OMV\Rpc\Rpc::call()
#13 {main}
I'd be very grateful for assistance!
-
Thank you very much for your help, ryecoaaron! For now the setup works for me, and I have no issue with the missing IPv6. I will look into it later, once I've sorted out my backup ideas for my DMS inside the VM.
-
Code
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
    link/ether a8:a1:59:0c:d3:9b brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:e3:60:49:a9:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.188.32/24 metric 100 brd 192.168.188.255 scope global dynamic br0
       valid_lft 855497sec preferred_lft 855497sec
4: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:eb:f5:95 brd ff:ff:ff:ff:ff:ff
5: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
-
Code
***@server:~$ grep -ir forward /etc/sysctl.d/*
/etc/sysctl.d/99-sysctl.conf:# Uncomment the next line to enable packet forwarding for IPv4
/etc/sysctl.d/99-sysctl.conf:#net.ipv4.ip_forward=1
/etc/sysctl.d/99-sysctl.conf:# Uncomment the next line to enable packet forwarding for IPv6
/etc/sysctl.d/99-sysctl.conf:#net.ipv6.conf.all.forwarding=1
/etc/sysctl.d/ip_forward.conf:net.ipv4.ip_forward=1
-
Another interesting result of the test: after uncommenting the IPv6 deactivation line, saving the conf file, and rebooting the server, the VM would autostart again. Unfortunately, I forgot to perform the second part of mcking230's suggestion after editing sysctl.conf:
This prevented me from logging in via RDP again. So I executed sysctl -p, rebooted, and was able to log in to the VM with RDP again. I don't know whether that is of any diagnostic value to the experts.
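For anyone following along, the two steps together look roughly like this; the exact file and key name are assumptions based on the thread, and disabling IPv6 system-wide is a workaround rather than a fix:

```shell
# in /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/) -- assumed location:
#   net.ipv6.conf.all.disable_ipv6 = 1
#
# then apply the change without waiting for a reboot:
#   sysctl -p
```

Editing the file alone changes nothing until sysctl -p is run or the machine reboots, which matches the behavior described above.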
-
Code
2024-05-31 07:51:35.745+0000: starting up libvirt version: 9.0.0, package: 9.0.0-4 (Debian), qemu version: 7.2.9Debian 1:7.2+dfsg-7+deb12u5, kernel: 6.8.4-3-pve, hostname: ***.fritz.box
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
HOME=/var/lib/libvirt/qemu/domain-1-ecoDMS \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-1-ecoDMS/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-1-ecoDMS/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-1-ecoDMS/.config \
/usr/bin/qemu-system-x86_64 \
-name guest=ecoDMS,debug-threads=on \
-S \
-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-ecoDMS/master-key.aes"}' \
-machine pc-q35-7.2,usb=off,vmport=off,dump-guest-core=off,memory-backend=pc.ram \
-accel kvm \
-cpu host,migratable=on \
-m 10240 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":10737418240}' \
-overcommit mem-lock=off \
-smp 2,sockets=1,dies=1,cores=2,threads=1 \
-uuid e3571aef-a0a1-4304-ac68-06460b2fc0b8 \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=35,server=on,wait=off \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device '{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}' \
-device '{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}' \
-device '{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}' \
-device '{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}' \
-device '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x2.0x4"}' \
-device '{"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x2.0x5"}' \
-device '{"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x2.0x6"}' \
-device '{"driver":"pcie-root-port","port":23,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x2.0x7"}' \
-device '{"driver":"pcie-root-port","port":24,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x3"}' \
-device '{"driver":"pcie-root-port","port":25,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x3.0x1"}' \
-device '{"driver":"pcie-root-port","port":26,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x2"}' \
-device '{"driver":"pcie-root-port","port":27,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x3.0x3"}' \
-device '{"driver":"pcie-root-port","port":28,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x3.0x4"}' \
-device '{"driver":"pcie-root-port","port":29,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x3.0x5"}' \
-device '{"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"}' \
-device '{"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.3","addr":"0x0"}' \
-blockdev '{"driver":"file","filename":"/pooldata/KVM/Volumes/ecoDMS.qcow2","aio":"native","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","cache":{"direct":true,"no-flush":false},"driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
-device '{"driver":"virtio-blk-pci","bus":"pci.4","addr":"0x0","drive":"libvirt-2-format","id":"virtio-disk0","bootindex":1,"write-cache":"on"}' \
-device '{"driver":"ide-cd","bus":"ide.0","id":"sata0-0-0","bootindex":2}' \
-netdev '{"type":"tap","fd":"32","vhost":true,"vhostfd":"34","id":"hostnet0"}' \
-device '{"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:eb:f5:95","bus":"pci.7","addr":"0x0"}' \
-chardev pty,id=charserial0 \
-device '{"driver":"isa-serial","chardev":"charserial0","id":"serial0","index":0}' \
-chardev socket,id=charchannel0,fd=30,server=on,wait=off \
-device '{"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"}' \
-device '{"driver":"usb-tablet","id":"input0","bus":"usb.0","port":"1"}' \
-audiodev '{"id":"audio1","driver":"spice"}' \
-vnc 0.0.0.0:0,audiodev=audio1 \
-spice port=5901,addr=0.0.0.0,disable-ticketing=on,image-compression=off,seamless-migration=on \
-device '{"driver":"virtio-vga","id":"video0","max_outputs":1,"bus":"pcie.0","addr":"0x1"}' \
-device '{"driver":"ich9-intel-hda","id":"sound0","bus":"pcie.0","addr":"0x1b"}' \
-device '{"driver":"hda-duplex","id":"sound0-codec0","bus":"sound0.0","cad":0,"audiodev":"audio1"}' \
-device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.5","addr":"0x0"}' \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device '{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.6","addr":"0x0"}' \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2024-05-31T07:51:35.873311Z qemu-system-x86_64: warning: Spice: ../server/reds.cpp:2551:reds_init_socket: getaddrinfo(0.0.0.0,5901): Address family for hostname not supported
2024-05-31T07:51:35.873381Z qemu-system-x86_64: warning: Spice: ../server/reds.cpp:3442:do_spice_init: Failed to open SPICE sockets
2024-05-31T07:51:35.873413Z qemu-system-x86_64: failed to initialize spice server
2024-05-31 07:51:35.881+0000: shutting down, reason=failed
Commenting out the configuration line that deactivates IPv6 brought back the error messages for the VM, and consequently it did not autostart. So I will disable IPv6 again to get it working, at least for the time being.
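The part that actually fails in the log is the SPICE listener (getaddrinfo on 0.0.0.0:5901). In libvirt, the SPICE listen address comes from the domain XML, so pinning it explicitly to an IPv4 address is one way to sidestep address-family problems without touching IPv6 system-wide. A sketch of the relevant fragment, with the address chosen here as an assumption (edit via virsh edit):

```xml
<!-- inside <devices> of the domain XML; listen address is an assumption -->
<graphics type='spice' port='5901' autoport='no'>
  <listen type='address' address='127.0.0.1'/>
</graphics>
```

This is only a sketch of the mechanism; whether the OMV KVM plugin manages this fragment itself is something the plugin maintainers would have to confirm.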
-
Would you like me to test this by commenting out the statement that disables IPv6? Or is it not as easy as that to revert to the former configuration? And when will the new version of the package (?) be available for installation? Incidentally, I was notified about an update for KVM only today. Does this have something to do with your change?
-
Running ip a as user root yields the following result:
Code
root@HelmsNAS:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
    link/ether a8:a1:59:0c:d3:9b brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:e3:60:49:a9:d6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.188.32/24 metric 100 brd 192.168.188.255 scope global dynamic br0
       valid_lft 827763sec preferred_lft 827763sec
5: lxcbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 00:16:3e:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global lxcbr0
       valid_lft forever preferred_lft forever
6: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:eb:f5:95 brd ff:ff:ff:ff:ff:ff
The mini server is hooked up to a FritzBox and gets its IP address via DHCP. The FritzBox is configured to always assign the same address to this machine.
-
Hello everyone, after 3 versions of OMV over the years I decided to join the forum. First off - thanks very much to all the dedicated contributors of this great software!
I was having the same troubles as mcking230. The VM wouldn't autostart, SPICE wasn't activated, and later it showed the same red error message when I attempted to remove it from the VM.
This worked for me as well.
As a side note: after deactivating IPv6, I could connect to my VM (Debian 12 with Xfce4 and xRDP) via RDP from a Windows machine. That was impossible before.