I found out how to fix that by changing the network adapter of my OMV system to a bridge, as described in the KVM plugin wiki....
Now everything is OK.
Hello,
I need to fix a network issue between my services:
here is my configuration:
- Nginx Proxy Manager running on Docker (network: bridge), internal IP: 172.20.0.8
- KVM virtual machine running a Home Assistant instance, IP: 192.168.1.172
- OMV host, IP: 192.168.1.50
I can't access my HA instance from outside my LAN.
After many tests... I noticed that I cannot ping the HA virtual machine from the Nginx container.
All the other machines on my LAN are seen by the Nginx container.
So I suppose there is a connectivity issue between the container network and the KVM network, but I'm not an expert and can't figure it out....
Anyone to help me? Ask me for the info you need...
Thank you
JR
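A first diagnostic sketch for a situation like this (hedged: the container name <npm> is a placeholder, not from the post, and ping may not be installed inside the image) is to compare what the container can reach versus the rest of the LAN, and to inspect the Docker network it sits on:

```shell
# <npm> is a placeholder for the actual Nginx Proxy Manager container name
docker exec <npm> ping -c 3 192.168.1.172   # the Home Assistant VM
docker exec <npm> ping -c 3 192.168.1.50    # the OMV host itself

# Inspect the bridge network the container is attached to
docker network inspect bridge
```

If the host answers but the VM does not, the break is usually between the Docker bridge and the network the VM is attached to, which matches the bridge-adapter fix described above.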
KVM works differently from VirtualBox, so the networking has to be set up the way KVM requires. What specifically is confusing?
I decided to use the KVM plugin to set up my Home Assistant VM.
The VM is running, but I'm missing some information....
For example:
is there a way to retrieve the IP of the VM from the plugin? I can't find it.
Indeed, I can't find any information on the network side of the VM (MAC address, etc.).
Can you help me?
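In case it helps, libvirt's own CLI (which the KVM plugin uses underneath) can usually report this; a sketch, where <vm-name> is a placeholder for the actual domain name:

```shell
# MAC address and the network/bridge each VM interface attaches to
virsh domiflist <vm-name>

# IP addresses libvirt knows about (via DHCP lease or the guest agent)
virsh domifaddr <vm-name>
```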
Hmm, I don't have any backup of my 16 TB of data.
As for ZFS, I've started to think about it....
Hello,
I'm moving my RAID 6 array (6x4TB) to RAID 5 in order to free up one HDD's worth of space.
I used the mdadm --grow command
but the reshape step is very slow:
md126 : active raid6 sdb[7] sdd[4] sdc[6] sdf[9] sdg[8] sde[10]
15627554816 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[===>.................] reshape = 17.2% (672139880/3906888704) finish=9148.5min speed=5892K/sec
bitmap: 0/30 pages [0KB], 65536KB chunk
I know it could be much faster.
I've tested many of the tips found on the internet (stripe cache, sync speed min and max) but no change.
Do you have any idea what is happening and what I could do?
My setup is an AMD Ryzen 5 5600G with 16 GB RAM, running the latest OMV 6.7.1-2.
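For reference, the tuning knobs mentioned above usually look roughly like this (a sketch, assuming the array is md126 as in the mdstat output; the values are illustrative and the commands need root):

```shell
# Enlarge the per-array stripe cache (value is in pages); this is the
# setting that most often helps slow RAID5/6 reshapes
echo 8192 > /sys/block/md126/md/stripe_cache_size

# Raise the global md resync/reshape speed limits (KB/s)
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
```

If these were already applied and the speed did not change, the bottleneck is likely elsewhere (e.g. one slow member disk), so checking per-disk throughput with iostat may be the next step.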
thank you
Well, I found the solution.
The nvidia runtime no longer needs to be specified in daemon.json.
Now it has to be registered here:
/etc/systemd/system/docker.service.d/override.conf
with these settings:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --host=fd:// --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
then:
systemctl daemon-reload
systemctl restart docker
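To verify the change took effect (a quick check, not from the original post):

```shell
# After the restart, docker should list nvidia among its runtimes
docker info | grep -i runtimes
```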
Hi,
After the last update of OMV, my containers lost their nvidia runtime access.
I've noticed that daemon.json has been reset to its defaults.
But when I edit daemon.json with the nvidia runtime specs... docker can't start.
Do you have any suggestions on how to fix that?
Hello,
I'm still having this issue...
I've just edited my /etc/hosts as suggested, so I need to wait for next OMV update...
but my question is: /etc/hosts begins with:
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
so is this suggestion really useful in our case?
Hello,
Fail2ban has two "Ban time" options:
one in the general settings,
one for each jail.
Why, and what is the difference?
I've tested both...
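For context, in plain fail2ban terms the two options map onto the configuration like this (a sketch of a hypothetical jail.local, not the poster's actual file):

```ini
; /etc/fail2ban/jail.local (illustrative)
[DEFAULT]
bantime = 600      ; general setting: default ban time inherited by every jail

[sshd]
enabled = true
bantime = 3600     ; per-jail setting: overrides the default for this jail only
```

So the general value is a fallback, and a jail-level value, when set, wins for that jail.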
My last test was omv-firstaid, reconfiguring IPv4 & IPv6.
The latest OMV update shows the same warnings...
I think I know where the issue comes from...
My OS partition was cloned from a backup I made using Clonezilla.
Indeed, my previous setup had 1 NVMe only, and I wanted to switch to 2x NVMe in RAID 1 to secure my OS....
After creating the array, I cloned my backup back onto it.... and this is when the ghost appeared....
Do you think there is a chance to clean this?
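Given the Clonezilla restore described above, one hedged first check is whether the md device itself now carries a partition table restored from the old whole-disk image, which would explain the md127p1 ghost:

```shell
# Does /dev/md127 itself contain a partition table?
fdisk -l /dev/md127

# List any partition-table/filesystem signatures on the array
# (wipefs with no options is read-only; it only reports, it erases nothing)
wipefs /dev/md127
```

If a partition table shows up there, root is mounted through that inner partition (md127p1), which matches the fstab observation below.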
Dear all,
I have strange behavior in the RAID 1 array I've built for my OS, using two NVMe SSDs.
OMV reports 2 versions of this array: one active (/dev/md127) and one in "False" state (/dev/md127p1).
mdstat only sees the active array.
fstab shows that root (/) is mounted on the false array (/dev/md127p1),
and here is lsblk output:
jerome@DTC-JEJE:/$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 3,6T 0 disk
└─md125 9:125 0 3,6T 0 raid1 /srv/dev-disk-by-uuid-e5954363-9d99-4c6f-9dd6-7c2ca9fc4d9e
sdb 8:16 1 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdc 8:32 1 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdd 8:48 0 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sde 8:64 0 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdf 8:80 0 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
sdg 8:96 0 3,6T 0 disk
└─md125 9:125 0 3,6T 0 raid1 /srv/dev-disk-by-uuid-e5954363-9d99-4c6f-9dd6-7c2ca9fc4d9e
sdh 8:112 0 3,6T 0 disk
└─md126 9:126 0 14,6T 0 raid6 /srv/dev-disk-by-uuid-7cad0e2e-e1e9-49f9-bb3d-07c5de338a9e
nvme1n1 259:0 0 232,9G 0 disk
├─nvme1n1p1 259:1 0 512M 0 part /boot/efi2
└─nvme1n1p2 259:2 0 232,4G 0 part
└─md127 9:127 0 232,3G 0 raid1
└─md127p1 259:6 0 232,3G 0 part /
nvme0n1 259:3 0 232,9G 0 disk
├─nvme0n1p1 259:4 0 512M 0 part /boot/efi
└─nvme0n1p2 259:5 0 232,4G 0 part
└─md127 9:127 0 232,3G 0 raid1
└─md127p1 259:6 0 232,3G 0 part /
any ideas on how to clean this setup?
thank you
JR
Does the suggestion from morganfw help? See RE: warning during updates.
Unfortunately, no.
As I said before,
net.ipv6.conf.all.disable_ipv6 = 1
does not fix the warnings during the Salt settings configuration...
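For reference, the tip in question amounts to something like this persistent sysctl entry (a sketch; the "default" line is my addition, adapt to your setup):

```shell
# Append to /etc/sysctl.conf (or a file in /etc/sysctl.d/) and reload
echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf
echo 'net.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf
sysctl -p
```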
Still the same warnings.... this timeout makes the update process really long.
Hello,
unfortunately, these warnings came back... I noticed them today when applying OMV's update.
The tip from morganfw is still present in my sysctl.conf.
there was an OMV update today.... no warning anymore, so it seems to be fixed for me! thank you!
I've applied the tip...
I can't test now since it only occurs when applying an OMV update..... it looks like it is related to Salt changes....
I'll let you know with the next update....
What Linux OS are you using?
What version of OMV are you using?
Are you using IPv6 on your network?
It's OMV 6, so Debian 11 (the install was done with the OMV 6 ISO).
I don't use/need IPv6.
sysctl net.ipv6.conf|grep disable_ipv6
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.docker0.disable_ipv6 = 0
net.ipv6.conf.enp5s0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.veth0175baf.disable_ipv6 = 0
net.ipv6.conf.veth1b22d6c.disable_ipv6 = 0
net.ipv6.conf.veth9e5533c.disable_ipv6 = 0
net.ipv6.conf.veth9ee71d6.disable_ipv6 = 0
net.ipv6.conf.vetha8fdac1.disable_ipv6 = 0
net.ipv6.conf.vethc72d718.disable_ipv6 = 0
net.ipv6.conf.vetheec4419.disable_ipv6 = 0
net.ipv6.conf.vethf705c00.disable_ipv6 = 0
net.ipv6.conf.vethfd4a3b3.disable_ipv6 = 0
cat /etc/hosts
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
127.0.0.1 localhost.localdomain localhost
127.0.1.1 DTC-JEJE.local DTC-JEJE
# The following lines are desirable for IPv6 capable hosts.
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
192.168.0.50 DTC-JEJE.local DTC-JEJE