Posts by kavejo

    Thank you mi-hol,


    Unfortunately that's not the case: the same card works just fine on the same server with a live ISO, as well as on another server. I have tested it in the MicroServer as well as in the DL360 and it works fine on both. Only on the DL380 do I see this behaviour.


    The firmware is up to date, as I made sure it was updated through the latest HP SPP and verified on the HPE website that I was running the latest version.


    I might try to put in a NC365T or a 331T; the latter has a Broadcom chip as opposed to the former, which has an Intel one.


    Any other idea?


    Thank you!

    Good evening,


    I am trying to work out why 2 of my 4 network interfaces do not show up in OMV.

    I have 4 integrated ports, which all work and are listed as enoN. Of the 4 ports on the NC364T, 2 are listed as ens4fN and 2 are listed as renameN but disabled.



    I have swapped the cables on the switch end, to check if it was an issue with the switch port, however it does not seem to be an issue with the cables or the switch.

    I have booted a live CD and tried the adapter on another server, and I can see all 4 ports working just fine. Only on OMV do these 2 not show up, and they are not correctly renamed.


    I can see all 8 PCI devices listed: the 4 integrated ones and the 4 from the NC364T.


    Code
    root@DL380e:~# lspci | grep net
    02:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    02:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    02:00.2 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    02:00.3 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
    05:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
    05:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
    06:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)
    06:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) (rev 06)


    Same goes for the hardware info.


    In /etc/udev/rules.d/ I can see just the default rules and none that catch these interfaces.


    The only strange thing I can spot is that the rename8 and rename9 interfaces show link "no" when exporting the data via lshw -c network -json.
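
    From what I have read, interfaces stuck as renameN mean udev could not apply its predictable name. One thing I may try, assuming I take the MAC addresses from lshw or ip link, is pinning the names with a systemd .link file (the file name, MAC address and interface name below are made-up examples):

    Code
    # /etc/systemd/network/70-nc364t-port3.link  (example)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:03

    [Link]
    Name=ens4f2

    One file per port, then update-initramfs -u and a reboot, should apply the names.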


    Would anyone have any suggestion on how to get these 2 ports working?


    Thanks!

    Hi all!


    Just a quick question - I am looking to deploy a DNS AdBlocker like AdGuard Home or PiHole but I'm having some struggles as port 53 TCP and UDP are in use.

    As such the container fails to deploy, and I don't want to map 53 to another port as otherwise the clients would not be able to connect.


    From netstat I can see that the port for both UDP and TCP is allocated to systemd-resolved.

    Code
    user@server:~# netstat -tulpn | grep ":53 "
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      794/systemd-resolve
    udp        0      0 127.0.0.53:53           0.0.0.0:*                           794/systemd-resolve


    I'm not hosting any other DNS-serving container on Docker and, on OMV itself, I only have enabled SMB/CIFS, SSH and RSync Server.


    Could anyone help me understand why that port is in use and how I could free it so I can map it to the container?
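
    From what I have gathered so far, it is systemd-resolved's local stub resolver that binds 127.0.0.53:53. If that is right, a commonly suggested way to free the port (assuming DNS for the host itself is then pointed at a reachable resolver via /etc/resolv.conf) is to disable the stub listener:

    Code
    # /etc/systemd/resolved.conf
    [Resolve]
    DNSStubListener=no

    followed by systemctl restart systemd-resolved.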


    Thank you!

    Good morning all,


    I have just re-installed OMV 5 on top of Debian 10 on one of my OMV servers.
    I used to access the shared folders via /sharedfolders, however this directory now appears empty.
    The folders (and data) are available under /srv/dev-disk-by-label-XXXXXXX/.


    Is there any easy way to make them re-appear under the /sharedfolders directory so that I can retain all my Docker configuration without recreating all the containers?
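
    One approach I have seen suggested, assuming the old container paths should keep working unchanged, is to bind-mount each data folder back under /sharedfolders via /etc/fstab (the folder name below is a made-up example):

    Code
    # /etc/fstab  (example entry, one per shared folder)
    /srv/dev-disk-by-label-XXXXXXX/Media  /sharedfolders/Media  none  bind  0  0

    followed by mkdir -p /sharedfolders/Media and mount -a.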


    Thank you.


    Regards,
    Tommy

    Hi guys,


    I have just followed @votdev's guide Install OMV5 on Debian 10 (Buster) to install OMV 5 on top of Debian 10.
    I had gone this way as I wanted no swap (192GB of RAM) and I wanted to encrypt the root file system (with the only exception of /boot).


    Going down this route, however, I noticed that after following the guide the WebUI was not reachable and at first only the nginx welcome page was shown.


    I then bumped into https://www.reddit.com/r/OpenM…gateway_in_web_interface/ and, upon running the following commands, the WebUI became accessible.

    Code
    omv-salt deploy run nginx
    omv-salt deploy run phpfpm
    omv-salt deploy run fstab


    I am wondering, is there anything more that needs to be deployed via OMV-SALT?
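
    If I read the omv-salt help correctly, the whole set of deployment states can also be run in one go rather than service by service:

    Code
    omv-salt stage run deploy

    which, as far as I understand, deploys every configured service and not just nginx, phpfpm and fstab.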


    I want to make sure I don't miss any piece.


    Thanks!

    Hi @subzero79, thanks for the reply.


    Yes, the device shows up in the File System section and I have actually recreated an EXT4 partition after wiping the existing one.
    I wonder if that could have something to do with Primary vs Logical/Extended partitions but, given the existing one was removed, I am tempted to exclude this cause.


    Would you have any suggestion on how to format the device as LUKS and then create a partition from terminal, please?
    Shall I try to follow https://www.cyberciti.biz/hard…-luks-cryptsetup-command/?
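
    If I read that guide correctly, the basic sequence from the terminal would be something like the following (destructive, and /dev/sdb plus the mapper name are just examples):

    Code
    cryptsetup luksFormat /dev/sdb           # initialise the LUKS header (wipes the device)
    cryptsetup open /dev/sdb sdb-crypt       # unlock it as /dev/mapper/sdb-crypt
    mkfs.ext4 /dev/mapper/sdb-crypt          # create the file system inside the container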


    Thank you!

    Hi,


    I have just tried to create a LUKS-Encrypted device via the plugin but I seem unable to do so.


    I had a hardware RAID-1 drive (/dev/sdb) mounted and in use. I have removed the shared folders, unmounted and deleted the file system.
    At this point the drive /dev/sdb was unused but still showed up under "Disks".


    I then moved to the "Encryption" tab after installing the LUKS plug-in, however this drive was not showing up at all.
    Can someone let me know what I am doing wrong and how I could get the device encrypted, please?


    Thank you.

    Hi,


    I am contemplating adding encryption to my OMV-based NAS and I am trying to understand what the best practices are.
    I am rather a newbie when it comes to encryption in Linux so please forgive this question if it has already been discussed.
    I have read a number of threads about LUKS however I struggle to find information about what types of encryption are available (i.e. passphrase, key file, etc.) and where these can be stored (i.e. must be provided at boot, on file system, etc.).


    What I would love to achieve is a setup whereby if a USB key with a decryption key is plugged into the server, then all the data drives can be decrypted, otherwise, if this USB key is removed the data should not be accessible.
    Is this possible?
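
    From what I have read, LUKS supports multiple key slots, so a key file kept on the USB stick could be enrolled alongside a passphrase and referenced from /etc/crypttab (device names and paths below are made-up examples; the exact options for locating the stick at boot seem to vary by distribution):

    Code
    # enrol a key file stored on the USB stick
    cryptsetup luksAddKey /dev/sdb /media/usbkey/data.key

    # /etc/crypttab  (example entry)
    data-crypt  /dev/sdb  /media/usbkey/data.key  luks

    With the stick removed, that key slot cannot be used and the data stays locked.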


    As an additional security measure, I would love to have all the data wiped after N attempts (let's say 3) to boot the system without the decryption key inserted.
    Is this something achievable?


    Would this idea of storing the decryption key on a USB drive be overkill, and would perhaps a passphrase suffice instead?
    Supposing that is the case, and a passphrase would be enough, does the encryption strength depend on the length of the passphrase itself, or would a 24-character passphrase be as safe as a 48-character one?


    Thank you!

    I'm using the P420 and P822 in all my NASes and I have no problem running OMV with them.
    They run in proper RAID mode and are not flashed to IT (HBA) mode.


    On one server I have a P420 with 4 SSDs attached (2 in RAID-1 for the OS and 2 in RAID-1 for VirtualBox and Docker), then I have the data drives on a dedicated disk shelf connected to the P822.
    On the other I have a P420 with 2 SSDs (RAID-1 for OS, swap and Docker) and then 4 WD RED drives.


    I have had no problems whatsoever with these RAID cards, as the drivers for the HP P-series cards are fully open source and have been built into the kernel for a long time.
    That doesn't hold true for the H-series, which are HBA-only.

    Hi @ness1602, thank you for your reply.


    That's what I realized; is there any way to use all the bandwidth in a 1:1 connection?
    Perhaps by switching from LACP to balance-xor/balance-rr or another bonding mode?
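
    If I understand the bonding documentation correctly, balance-rr is the only mode that stripes a single TCP flow across slaves (at the cost of possible packet reordering, and with plain switch ports rather than an LACP group). In /etc/network/interfaces terms it would look roughly like this (interface names are examples):

    Code
    # /etc/network/interfaces  (example bond stanza)
    auto bond0
    iface bond0 inet dhcp
        bond-mode balance-rr
        bond-slaves eth0 eth1 eth2 eth3
        bond-miimon 100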


    Thank you.

    Good morning all,


    I'm sure this is something that has been discussed already, but I can't seem to find any reference at the moment.


    I have a couple of servers, both with a number of NICs (4 and 6, to be precise) configured with LACP (the switch supports LACP).
    Now, I was expecting NAS-to-NAS transfers to hit approx. 4 Gbps (the slowest server has 4 × 1 Gbps interfaces), however I am only able to reach 1 Gbps as, apparently, rsync only creates a single TCP session, which cannot span multiple NICs.


    I have tried to start multiple rsync jobs concurrently, hoping that this would allow all the bandwidth to be used, however that still topped out at 1 Gbps.
    I'm sure LACP works just fine as I can move data from/to each NAS faster than 1 Gbps (i.e. with 3 wired clients copying from each NAS I can consume ~3 Gbps).


    Is there any way to tell RSync to take advantage of all the available bandwidth and spread the load across multiple NICs configured with Link Aggregation?
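
    One workaround I have seen suggested, rather than changing rsync itself, is to run one rsync per top-level directory in parallel, so that the bond can hash the separate TCP sessions onto different links (paths and host name are made-up examples):

    Code
    # up to 4 concurrent rsync sessions, one per top-level directory
    ls /srv/data | xargs -n1 -P4 -I{} rsync -a /srv/data/{} backup-nas:/srv/data/

    Whether the sessions actually land on different NICs would still depend on the bond's transmit hash policy.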


    Thank you.

    Hi all,


    I've just installed portainer/portainer:latest so as to get used to this new tool.
    I can see my containers and their information and details.


    Is there an easy way to export the configuration (settings, volumes, etc.) of a running container so as to be able to re-create it "as is"?


    For example, let's say I delete Transmission-OpenVPN and, in a month's time, I want to re-deploy it as it was prior to deletion (with username, password, PUID, volumes etc. set).
    Is there a way to do so?
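
    As far as I can tell there is no one-click export, but the full configuration of a running container (command, env vars, volumes, port mappings) can at least be dumped for later reference (the container name is an example):

    Code
    docker inspect transmission-openvpn > transmission-openvpn.json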


    Thanks!


    In order for updating to work with the official image when restarting the container, you have to specify the proper image Tag when you initially pull and run the image.

    You lost me here.


    I had pulled an image with the "latest" tag, then I pulled it again when a new version was released; I ended up with a single Plex container running but 2 Plex images, one of which was the original and the other the newly released one.


    How could I tell Docker to use the latest (newly released) image and not the one that was originally used when the container was created?
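
    My understanding so far is that pulling a new "latest" does not touch existing containers; the container has to be recreated from the new image (the names below, including the Plex image, are just my guesses):

    Code
    docker pull plexinc/pms-docker:latest    # fetch the newly released image
    docker stop plex && docker rm plex       # remove the container built from the old image
    # then re-run the original "docker run" command with all its options;
    # it will now use the freshly pulled image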


    Thanks!

    Thank you @subzero79.


    So, docker-compose has the ability to rely on configuration files; I will need to look into this further and understand how to export the configuration of running containers.


    I’m still running OMV 4 so I don’t think there is an option to run Portainer (unless I run it as a Docker container itself), however I’ve tested OMV 5 with Portainer in a VM and I found it much more complex than docker-gui.
    I must admit I struggled to find a way to search for images on Portainer.
    Would you have any resource you’d recommend for better understanding docker-compose and Portainer?


    Thank you!