It is a change introduced with v28 of the Docker Engine: Docker Engine v28
I put the line under "networks" because, placing it where you suggest, the MAC address still changed every time the container was restarted.
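For reference, this is roughly how it looks in a compose file; a minimal sketch only, assuming a Compose version recent enough to support the per-network mac_address attribute and a pre-existing macvlan network (the network name, image and MAC are placeholders, not from the original post):

services:
  mycontainer:
    image: nginx
    networks:
      lan_net:
        mac_address: "02:42:c0:a8:14:10"   # per-network MAC, kept across restarts

networks:
  lan_net:
    external: true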
The best way to assign a network card exclusively to a VM is to use the "Add PCI device" function of the KVM plugin.
Otherwise, the fixed IP assigned to OMV will be detected on all of the machine's physical network adapters that are connected to the LAN.
See:
Is it possible for a disk that is reported as defective by the S.M.A.R.T. monitor to be marked as "failed" automatically?
As mentioned, I had the opportunity to use the terminal command twice and, after replacing the disk, the mirror was rebuilt without problems.
The "remove" option is de-activated in the WEBUI for good reason to prevent users shooting themselves in the foot as my examples show. Using the CLI directly removes those checks. What you said in #3 and now in #12 are two different error conditions. First you talked about the "remove" button being de-activated now you are talking about not being able to select a given drive to remove which is an entirely different case due to some unknown error condition specific to your system.
It was not possible to select the disk to be removed because the "remove" button was disabled; it is not that I could click it and it then refused to let me choose the disk.
The fact that it was disabled was probably due to a bug in the GUI or to the version I was using at the time (it was OMV 7, but honestly I don't remember the exact version).
At the moment the remove button is active and I can freely choose which disk to remove from the RAID, even though I have no error reports.
I suppose the "remove" button does exactly the same thing as the terminal command, also because I don't think there are dozens of ways to remove a disk from a RAID.
Using that command on an active array will generate an error, e.g.
root@omvt:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdd[4] sda[0] sdc[2] sdb[1]
      31429632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
root@omvt:~# mdadm /dev/md0 --remove /dev/sdd
mdadm: hot remove failed for /dev/sdd: Device or resource busy
root@omvt:~#
You need to fail the device before removing it, e.g.
root@omvt:~# mdadm /dev/md0 --fail /dev/sdd
mdadm: set /dev/sdd faulty in /dev/md0
root@omvt:~# mdadm /dev/md0 --remove /dev/sdd
mdadm: hot removed /dev/sdd from /dev/md0
root@omvt:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sda[0] sdc[2] sdb[1]
      31429632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

unused devices: <none>
root@omvt:~#
Using that command on an already degraded array will kill your array, e.g.
root@omvt:~# mdadm /dev/md0 --remove /dev/sdc
mdadm: hot removed /dev/sdc from /dev/md0
root@omvt:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sda[0] sdb[1]
      31429632 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [UU__]

unused devices: <none>
root@omvt:~# mdadm /dev/md0 --add /dev/sdc
mdadm: /dev/md0 has failed so using --add cannot work and might destroy
mdadm: data on /dev/sdc. You should stop the array and re-assemble it.
root@omvt:~#
Please don't advise people to use CLI commands which have the potential to lose their data.
I needed to use this command twice, on two different installations, because in both cases I had a disk reporting various S.M.A.R.T. warnings; for reasons I don't know, the "Remove" button did not allow me to select the disk to be removed, preventing me from continuing.
Using the terminal command, however, the removal of the failing disk was successful both times; once I shut the machine down, swapped the disk, and went back to the web interface, I was always able to add the new disk to the volume without any problems.
In case the "remove" button is not available (it happened to me once, and I still don't know why), you can remove the disk from the terminal with the following command:
mdadm /dev/md0 --remove /dev/sdd
Obviously, you have to adapt it to your needs; in this example, "md0" is the name of the RAID volume, while "/dev/sdd" is the disk to be removed.
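For completeness, the full sequence might look roughly like this; a sketch only, assuming /dev/md0 and /dev/sdd as in the example above (a disk that is still seen as active usually has to be marked as failed before it can be removed):

cat /proc/mdstat                     # check the current state of the array
mdadm /dev/md0 --fail /dev/sdd       # mark the disk as failed (if it is not already)
mdadm /dev/md0 --remove /dev/sdd     # remove it from the array
cat /proc/mdstat                     # confirm the array is now degraded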
Try Syncthing!
services:
  syncthing:
    image: syncthing/syncthing
    container_name: syncthing
    hostname: syncthing
    network_mode: bridge
    environment:
      - PUID=1001
      - PGID=100
    volumes:
      - /srv/dev-disk-by-uuid-e777083c-6f6e-4e91-a00f-0a15d4192385/laboratorio:/var/syncthing
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    restart: always
I need to increase the number of SATA ports and I was thinking of using a PCIe-to-SATA adapter.
Choosing between cards based on the ASM1166 and the JMB585, for a number of reasons I was leaning towards the JMB585, but then I discovered RAID cards like the LSI MegaRAID 9272-8i...
They will surely offer higher performance, considering that they use at least 8 PCIe lanes and so on, but I was wondering whether they can be used with OMV, whether I can still build the RAID in software from OMV or only from the card's BIOS, how much more complicated the configuration would be, and whether it is worth going for one of these.
So:
Yes, you can use PCIe-to-SATA cards to get additional ports.
Yes, RAID on OMV can be done in software, directly from the web interface.
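Under the hood OMV uses mdadm for its software RAID; just as a sketch of what the equivalent CLI command looks like (disk names are placeholders, and creating an array wipes the member disks):

# RAID 5 across four disks, then watch the initial sync
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
cat /proc/mdstat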
If you have changed the admin password from the web interface, to log in as root via the terminal you still have to use the password that was set before the change.
I noticed (after a reboot), in all 4 OMV installations I have, that the logs (in the Kernel section) contain references to WireGuard.
The fact is that none of the 4 NAS has ever had WireGuard installed.
None of these machines is directly exposed to the Internet, except for a few Docker containers behind NPM.
All installations are updated to version 7.5.1-1, while the kernel in use is 6.11.0-2-pve.
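If you want to verify that these are only references to the in-kernel WireGuard module and not an actual configuration, something like this should be enough (a sketch; neither command changes anything):

lsmod | grep wireguard              # is the kernel module loaded?
ip -br link show type wireguard     # are any WireGuard interfaces actually configured?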
Hi, try this from terminal:
ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off
Replace eno1 with your network adapter.
NOTE: this command must be run every time the system starts (or at least that's what I read on the Proxmox forum); you can create a script that runs at every boot, as sketched below.
This problem happened to me on one of my systems using the KVM plugin; it's not the plugin's fault, it seems to be a bug in the Intel driver or something similar.
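One way to make it persistent is a small systemd unit; this is only a sketch (the unit name, the ethtool path and the interface eno1 are assumptions to adapt to your system):

# /etc/systemd/system/ethtool-offload.service
[Unit]
Description=Disable NIC offloading on eno1
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off

[Install]
WantedBy=multi-user.target

Then enable it with "systemctl daemon-reload" followed by "systemctl enable --now ethtool-offload.service".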
Hi, I followed this article: https://learn.microsoft.com/it…nd-smb3?tabs=group-policy
but I still can't connect to the NAS; are there any settings I need to check in the UI?
Hi, the article you posted explains how to access a shared folder over SMB from Windows.
In your case you want to access a folder on the NAS from Windows; the credential prompt does not come from Windows but from the NAS.
Create a user (with a password) on the NAS, give it access to the shared folder and, when Windows asks for credentials, enter those.
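To test the credentials quickly from the Windows side, you can also map the share from a command prompt; a sketch with placeholder names (the NAS IP, share name and user are examples):

net use Z: \\192.168.1.10\shared_folder /user:nasuser
rem Windows will prompt for the password; "net use Z: /delete" removes the mapping again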
Ok, I've done a bit of testing with the passthrough, and it works! Thanks for the tip!
OMV doesn't really use them. Most OMV configs are just telling the service to listen on all network interfaces. If you really want OMV to not use them, you should pass the nic thru to the VM.
It's a good idea; I'll try it!
Thank you!
Why did you configure the cards in OMV? Just so they would show up in the KVM plugin?
Because even without doing so, the problem still arose.
I thought that by loading and disabling them, I would prevent OMV from using them.
In the KVM plugin you can see them either way, whether you load them or do nothing.
One of my installations is equipped with 3 Ethernet ports: one integrated, while the other 2 are PCIe cards recovered from an old HP server.
The integrated port is the one used as the main one, and is directly connected to the LAN.
One of the 2 additional ones has been assigned to a VM (in this case a firewall) as the WAN interface, and is connected to the router.
The remaining one has been assigned to a "Windows Server 2022" VM and is connected to the LAN, just like the main one.
The problem with having 2 LAN ports connected to the same network is that software such as Fing or NetAlertX, at random intervals, sends me notifications that the OMV NAS, which responds at the IP 192.168.20.244, has disconnected and then reconnected with a new MAC address.
In the Network -> Interfaces section, I tried to "load" the 2 PCI-e cards, setting them as manual IP and disabled, but the problem remains the same.
Is there a way to force OMV to respond only on the main LAN port?
In the firewall section, you can create rules, but they apply to all network interfaces at the same time.
In any case, apart from the annoyance of receiving these notifications, the operation of the NAS and the VMs does not seem to be affected.
You need to mount the individual folders you want to save:
services:
  duplicati:
    image: lscr.io/linuxserver/duplicati:development
    container_name: duplicati
    network_mode: bridge
    environment:
      - PUID=0
      - PGID=0
      - TZ=Europe/Rome
      - CLI_ARGS=--webservice-allowedhostnames=* --webservice-password=MY_WEB_PASSWORD
    volumes:
      - /srv/dev-disk-by-uuid-e777083c-6f6e-4e91-a00f-0a15d4192385/AppData/Duplicati/config:/config
      - /srv/dev-disk-by-uuid-e777083c-6f6e-4e91-a00f-0a15d4192385/AppData/Duplicati/script:/script
      - /srv/dev-disk-by-uuid-e777083c-6f6e-4e91-a00f-0a15d4192385/AppData/Duplicati/dummy:/source
      - /srv/dev-disk-by-uuid-e777083c-6f6e-4e91-a00f-0a15d4192385/AppData/Duplicati/dummy:/backups
      - /srv/dev-disk-by-uuid-3cb31ed8-7969-4d9d-a88e-3b5406d415cd/backup-AppData:/BACKUP_AppData:ro
      - /srv/dev-disk-by-uuid-3cb31ed8-7969-4d9d-a88e-3b5406d415cd/backup-owncloud:/BACKUP_Dati_Owncloud:ro
    ports:
      - 8200:8200
    restart: always
Hi all,
Yesterday I updated OMV to 7.4.12-3.
I received this notification today:
/etc/cron.daily/openmediavault-apticron:
run-parts: failed to exec /etc/cron.daily/openmediavault-apticron: Exec format error
run-parts: /etc/cron.daily/openmediavault-apticron exited with return code 1
Same error on 4 different OMV installations.
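"Exec format error" from run-parts usually means the script is empty, truncated, or missing its shebang line; a quick, non-destructive way to check what is actually in that file (just a sketch):

file /etc/cron.daily/openmediavault-apticron      # what does the file contain?
head -n 1 /etc/cron.daily/openmediavault-apticron # the first line should be a shebang, e.g. #!/bin/sh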