Posts by Thormir84

    Containers created in host or bridge mode can be reached by pointing to the IP of the host machine, followed by the port mapped to the container; why do you need to assign a private LAN IP to the NPM container?

    Containers created with the MACVLAN adapter have no access to the host machine, nor can they reach other containers running in bridge or host mode. Run the NPM container in bridge mode; it doesn't need to be on MACVLAN. If the issue is with the ports, you can map everything like this:

    Code
        ports:
          - '8484:80'
          - '8485:81'
          - '8443:443'

    Of course, on the router/firewall, you will need to create port-forwarding rules like this:

    IN->80 -> TO->8484

    IN->81 -> TO->8485

    IN->443 -> TO->8443
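
    Putting it together, a minimal compose sketch for running NPM in bridge mode could look like this; the image tag, restart policy and volume paths are assumptions to adapt to your setup:


    Code
        services:
          npm:
            image: jc21/nginx-proxy-manager:latest
            restart: unless-stopped
            ports:
              - '8484:80'    # HTTP
              - '8485:81'    # admin UI
              - '8443:443'   # HTTPS
            volumes:
              - ./data:/data
              - ./letsencrypt:/etc/letsencrypt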

    In the last few weeks, Netalert has started to report issues related to swap usage. According to the details displayed, the problem concerns "qemu-system-x86_64". I don't know why it has started behaving this way, especially since, as you can see from the screenshots, the host machine still has free memory.


    The system is still functioning and is not showing any problems.

    Why do you want to access the root file system to back up the shares?

    You simply need to mount the folders you want to save, using volume mappings like these:


    Code
        volumes:
          - /srv/dev-disk-by-label-DATI/AppData/Duplicati/config:/config
          - /srv/dev-disk-by-label-DATI/AppData/Duplicati/script:/script
          - /srv/dev-disk-by-label-DATI/AppData/Duplicati/dummy:/source
          - /srv/dev-disk-by-label-DATI/AppData/Duplicati/dummy:/backups
          # ":ro" mounts the folder read-only inside the container
          - /srv/dev-disk-by-label-ARCHIVIO/BACKUP_AppData:/BACKUP_AppData:ro
          - /srv/dev-disk-by-label-ARCHIVIO/BACKUP_Dati_Owncloud:/BACKUP_Dati_Owncloud:ro


    Putting ":ro" at the bottom will mount the folders as read-only, if you want.

    It is a change introduced with v28 of the Docker Engine: Docker Engine v28


    I put the string under "networks", because when I put it where you suggest, the MAC address still changed every time the container was restarted.


    Code
        networks:                # service-level: attach this container
          macvlan_home:
            ipv4_address: xxx.xxx.xxx.xxx
            mac_address: "xx:xx:xx:xx:xx:xx"

    networks:                    # top-level: reference the existing network
      macvlan_home:
        external: true
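
    For reference, an external network like macvlan_home must already exist on the host; it is typically created once with something like this (subnet, gateway and parent interface are placeholders for your LAN):


    Code
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 \
          --gateway=192.168.1.1 \
          -o parent=eth0 \
          macvlan_home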

    The best way to assign a network card exclusively to a VM is to use the "Add PCI device" function of the KVM plugin.

    Otherwise, the fixed IP assigned to OMV will be detected on all network adapters physically present in the machine and connected to the LAN.



    Is it possible for a disk that is reported as defective by the S.M.A.R.T. monitor to be marked as "failed" automatically?

    As mentioned, I had the opportunity to use the terminal command twice and, after replacing the disk, the mirror was rebuilt without problems.

    The "remove" option is de-activated in the WEBUI for good reason to prevent users shooting themselves in the foot as my examples show. Using the CLI directly removes those checks. What you said in #3 and now in #12 are two different error conditions. First you talked about the "remove" button being de-activated now you are talking about not being able to select a given drive to remove which is an entirely different case due to some unknown error condition specific to your system.

    I could not select the disk to be removed because the "remove" button was disabled; it is not that I could click it and it then refused to let me choose the disk.

    The fact that it was disabled was probably due to a bug in the GUI or in the version I was using at the time (it was OMV 7, but honestly I don't remember the exact version).

    At the moment, the remove button is active and I can safely choose which disk to remove from the RAID, even though I have no error reports.

    I suppose the "remove" button does exactly the same thing as the terminal command, if only because I don't think there are dozens of ways to remove a disk from a RAID.

    I needed to use this command twice, on two different installations, because in both cases I had a disk showing S.M.A.R.T. warnings; for reasons I don't know, the "Remove" button did not allow me to select the disk to be removed, preventing me from continuing.

    Using the terminal command, however, the removal of the failed disk was successful both times; once the system was shut down, the disk removed, and the web interface reopened, the new disk could always be added to the volume without any problems.

    In case the "remove" button is not available (it happened to me once, and I still don't know why), you can remove the disk from the terminal with the following command:


    mdadm /dev/md0 --remove /dev/sdd


    Obviously, you have to modify it according to your needs; in this example, "md0" is the name of the RAID volume, while "/dev/sdd" is the disk to be removed.
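
    As a side note, mdadm normally only removes devices that are already marked as failed; the usual full sequence looks like this (device names are just examples to adapt):


    Code
        # mark the failing disk as faulty, then remove it from the array
        mdadm /dev/md0 --fail /dev/sdd
        mdadm /dev/md0 --remove /dev/sdd
        # after physically swapping the disk, add the new one
        mdadm /dev/md0 --add /dev/sde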

    Try Syncthing!



    I need to increase the number of SATA ports and was thinking of using a PCIe-to-SATA adapter.


    Weighing cards based on the ASM1166 or the JMB585, for a number of reasons I was leaning toward the JMB585; then I discovered RAID cards like the LSI MegaRAID 9272-8i...


    They will surely deliver higher performance, considering they use at least 8 PCIe lanes and so on, but I was wondering whether they can be used with OMV, whether I can still do the RAID in software from OMV or necessarily from the BIOS, and therefore how much more complicated the configuration could get and whether it is worth going in that direction.

    So:

    Yes, you can use PCIe-to-SATA cards to get additional ports.


    Yes, RAID on OMV can be done in software, directly from the web interface.

    I noticed (after a reboot), on all 4 OMV installations I have, that the logs (in the kernel section) contain references to WireGuard.

    The fact is, none of the 4 NAS machines has ever had WireGuard installed.

    None of these machines is directly exposed on the Internet, except for a few Docker containers behind NPM.

    All installations are updated to version 7.5.1-1, while the kernel in use is 6.11.0-2-pve.

    Hi, try this from the terminal:

    ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off

    Replace eno1 with the name of your network adapter.

    NOTE: this command must be run every time the system starts (or at least that's what I read on the Proxmox forum); you can create a script that runs at every boot.
    This problem occurred on one of my systems using the KVM plugin; it's not the plugin's fault, but it seems to be a bug in the Intel driver or something similar.
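
    For example, one way to run it at every boot, assuming a systemd-based install and eno1 as the adapter name, is a small oneshot unit like this (a sketch to adapt, not a tested recipe):


    Code
        # /etc/systemd/system/nic-offload-off.service
        [Unit]
        Description=Disable NIC offloading (Intel driver workaround)
        After=network-online.target
        Wants=network-online.target

        [Service]
        Type=oneshot
        ExecStart=/usr/sbin/ethtool -K eno1 gso off gro off tso off tx off rx off rxvlan off txvlan off

        [Install]
        WantedBy=multi-user.target


    Enable it once with "systemctl enable nic-offload-off.service".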

    Hi, I followed this article: https://learn.microsoft.com/it…nd-smb3?tabs=group-policy

    but I still can't connect to the NAS. Are there any settings I need to check in the UI?

    Hi, the article you posted explains how to access a shared folder, via SMB, on Windows.
    In your case, you want to access a folder on the NAS from Windows; the credentials prompt does not come from Windows, but from the NAS.

    Create a user (with a password) on the NAS, associate it with the shared folder and, when Windows asks for credentials, enter those.
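
    For example, you can then test the access from a Windows command prompt like this (hostname, share and user name are placeholders for your own):


    Code
        net use Z: \\NAS\shared_folder /user:youruser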