Makes sense - thanks!
Thank you @gderf,
In Portainer I had mapped 0.0.0.0:53 to port 53 in the container.
I guess that takes any address, including 127.0.0.1.
Would you suggest binding it to just one of the server's IPs instead?
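For what it's worth, Docker lets you prefix the host port with an IP address in the port mapping, so the container only answers on that one address. A minimal sketch (the address 192.168.1.10 and the AdGuard Home image name are placeholders, not your actual setup):

```shell
# Bind the container's DNS ports to a single host IP instead of 0.0.0.0
# (192.168.1.10 is a placeholder for one of the server's addresses)
docker run -d --name adguardhome \
  -p 192.168.1.10:53:53/tcp \
  -p 192.168.1.10:53:53/udp \
  adguard/adguardhome
```

The same `IP:hostPort:containerPort` syntax works in Portainer's port-mapping fields and in compose files.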
Just a quick question - I am looking to deploy a DNS ad-blocker like AdGuard Home or Pi-hole, but I'm running into trouble as port 53 TCP and UDP are already in use.
As such the container fails to deploy, and I don't want to map 53 to another port, as otherwise the clients would not be able to connect.
From netstat I can see that the port for both UDP and TCP is allocated to systemd-resolved.
I'm not hosting any other DNS-serving container on Docker and, on OMV itself, I have only enabled SMB/CIFS, SSH and RSync Server.
Could anyone help me understand why that port is in use and how I could free it so that I can map it to the container?
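In case it helps anyone hitting the same wall: systemd-resolved runs a stub DNS listener on 127.0.0.53:53, which is usually what shows up in netstat holding port 53. A hedged sketch of how to check and disable it (1.1.1.1 is just an example upstream resolver):

```shell
# Show what is listening on port 53 (typically systemd-resolved's stub listener)
sudo ss -lntup | grep ':53'

# Disable the stub listener in /etc/systemd/resolved.conf
# (sed keeps a .bak copy of the original file)
sudo sed -i.bak 's/^#\?DNSStubListener=.*/DNSStubListener=no/' /etc/systemd/resolved.conf
sudo systemctl restart systemd-resolved

# Make sure the host itself still has a resolver to talk to
# (1.1.1.1 is an example; you could also point this at the new container)
echo 'nameserver 1.1.1.1' | sudo tee /etc/resolv.conf
```

After this, port 53 should be free for the container to bind.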
Good morning all,
I have just re-installed OMV 5 on top of Debian 10 on one of my OMV servers.
I used to access the Shared Folders via /sharedfolders, however this directory now appears empty.
The folders (and data) are still available under /srv/dev-disk-by-label-XXXXXXX/.
Is there any easy way to make them re-appear under the /sharedfolders directory so that I can retain all my Docker configuration without recreating all the containers?
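From what I've read, OMV 5.3 and later no longer create the /sharedfolders bind mounts by default, but they can reportedly be re-enabled with an environment variable plus a salt redeploy. A sketch, assuming the stock config path and deployment units:

```shell
# Re-enable the legacy /sharedfolders bind mounts
# (they are disabled by default from OMV 5.3 onwards)
echo 'OMV_SHAREDFOLDERS_DIR_ENABLED="YES"' | sudo tee -a /etc/default/openmediavault

# Regenerate and apply the corresponding systemd mount units
sudo omv-salt stage run prepare
sudo omv-salt deploy run systemd
sudo systemctl daemon-reload
```

With the mounts back, the existing Docker volume paths should resolve again without recreating any containers.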
I have just followed @votdev's guide, Install OMV5 on Debian 10 (Buster), to install OMV 5 on top of Debian 10.
I had gone this way as I wanted no swap (192GB of RAM) and I wanted to encrypt the root file system (with the only exception of /boot).
Going down this route, however, I noticed that after following the guide the WebUI was not reachable and at first only showed the nginx welcome page.
I then bumped into https://www.reddit.com/r/OpenM…gateway_in_web_interface/ and, after running the two commands suggested there, the WebUI became accessible.
I am wondering: is there anything more that needs to be deployed via omv-salt?
I want to make sure I don't miss any piece.
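As far as I understand it, omv-salt can deploy every configured service in one go rather than unit by unit, which should cover anything that was missed. A sketch (subcommand availability may vary by OMV version):

```shell
# List the available deployment states (if supported by your omv-salt version)
sudo omv-salt deploy list

# Run the full deploy stage, applying every configured service state
sudo omv-salt stage run deploy
```

This is heavier than deploying individual services, but for a fresh manual install it's a reasonable way to make sure nothing is left unconfigured.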
Hi @subzero79, thanks for the reply.
Yes, the device shows up in the File System section and I have actually recreated an EXT4 partition after wiping the existing one.
I wonder if it could be something to do with primary vs logical/extended partitions but, given the existing one was removed, I am tempted to rule out this cause.
Would you have any suggestions on how to format the device as LUKS and then create a partition from the terminal, please?
Shall I try to follow https://www.cyberciti.biz/hard…-luks-cryptsetup-command/?
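For anyone following along, the terminal route is roughly what that article describes: wipe the old signatures, create the LUKS container, open it, and put a filesystem on the mapped device. A sketch, assuming the device really is /dev/sdb:

```shell
# WARNING: this destroys all data on /dev/sdb -- double-check the device name first
sudo wipefs -a /dev/sdb                       # remove old filesystem/RAID signatures
sudo cryptsetup luksFormat /dev/sdb           # create the LUKS container (prompts for a passphrase)
sudo cryptsetup open /dev/sdb sdb-crypt       # unlock it as /dev/mapper/sdb-crypt
sudo mkfs.ext4 -L data /dev/mapper/sdb-crypt  # create the filesystem on the mapped device
```

Leftover signatures are also a plausible reason the plugin refused to list the drive, so the `wipefs` step may be worth trying on its own first.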
I have just tried to create a LUKS-Encrypted device via the plugin but I seem unable to do so.
I had a hardware RAID-1 drive (/dev/sdb) mounted and in use. I removed the shared folders, unmounted the drive and deleted the file system.
At this point the drive /dev/sdb was unused but still showed up under "Disks".
I then moved to the "Encryption" tab after installing the LUKS plug-in; however, the drive was not showing up there at all.
Can someone let me know what I am doing wrong and how I could get the device encrypted, please?
I am contemplating adding encryption to my OMV-based NAS and am trying to understand what the best practices are.
I am rather a newbie when it comes to encryption in Linux so please forgive this question if it has already been discussed.
I have read a number of threads about LUKS, however I struggle to find information about what unlocking methods are available (e.g. passphrase, key file, etc.) and where the keys can be stored (e.g. must be provided at boot, on the file system, etc.).
What I would love to achieve is a setup whereby if a USB key with a decryption key is plugged into the server, then all the data drives can be decrypted, otherwise, if this USB key is removed the data should not be accessible.
Is this possible?
As an additional security measure, I would love to be able to have all the data wiped after N attempts (let's say 3) to boot the system without the decryption key inserted.
Is this something achievable?
Would this idea of storing the decryption key on a USB drive be overkill, and would a passphrase perhaps suffice instead?
Supposing that is the case, and a passphrase would therefore be enough, does the encryption's strength depend on the length of the passphrase itself, or would a 24-character passphrase be as safe as a 48-character one?
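On the USB-key part of the question: LUKS supports multiple key slots, so a random keyfile on a USB stick can be added alongside (or instead of) a passphrase, and crypttab can unlock from it at boot. A sketch (/media/usbkey and /dev/sdb are placeholders):

```shell
# Generate a 4 KiB random keyfile on the USB stick (path is a placeholder)
sudo dd if=/dev/urandom of=/media/usbkey/data.key bs=512 count=8
sudo chmod 0400 /media/usbkey/data.key

# Add it as an extra key slot on the LUKS device
sudo cryptsetup luksAddKey /dev/sdb /media/usbkey/data.key

# /etc/crypttab entry so the drive unlocks at boot only when the stick is present:
#   data-crypt  /dev/sdb  /media/usbkey/data.key  luks,nofail
```

On passphrase length: LUKS runs the passphrase through a deliberately slow key-derivation function, so what matters is entropy; past a point, extra characters add little, and a random keyfile sidesteps the issue entirely. The wipe-after-N-failed-boots idea is not a built-in LUKS feature, as far as I know.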
Ah, that's a pity.
I guess I will have to upgrade my boxes by looking into 10Gbps NICs then.
I'm using P420 and P822 cards in all my NASes and I have no problem running OMV with them.
They run in proper RAID mode and are not flashed to IT (HBA) mode.
On one server I have a P420 with 4 SSDs attached (2 in RAID-1 for the OS and 2 in RAID-1 for VirtualBox and Docker), then I have the data drives in a dedicated disk shelf connected to the P822.
On the other I have a P420 with 2 SSDs (RAID-1 for the OS, swap and Docker) and then 4 WD Red drives.
I have had no problems whatsoever with these RAID cards, as the drivers for HP's P-series cards are open source and have been built into the kernel for a long time.
That doesn't hold true for the H-series, which are HBA-only.
Hi @ness1602, thank you for your reply.
That's what I realized, is there any way to use all the bandwidth in a 1:1 connection?
Perhaps by switching from LACP to balance-xor/balance-rr or another bonding mode?
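One thing worth checking before changing modes: with 802.3ad (LACP) the link chosen for each flow depends on the bond's transmit hash policy, and the default layer2 policy hashes on MAC addresses, so every flow between the same two hosts lands on the same link. A sketch, assuming the bond interface is bond0:

```shell
# Inspect the current bonding mode and transmit hash policy
cat /sys/class/net/bond0/bonding/mode
cat /sys/class/net/bond0/bonding/xmit_hash_policy

# Hash on IP addresses and ports so distinct TCP connections between the
# same two hosts can be placed on different member links
echo layer3+4 | sudo tee /sys/class/net/bond0/bonding/xmit_hash_policy
```

balance-rr is the only mode that stripes a single flow across links, but it can cause TCP reordering; layer3+4 with multiple parallel connections is usually the saner option.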
Good morning all,
I'm sure this is something that has been discussed already but I seem not to be able to find any reference at the moment.
I have a couple of servers, both with a number of NICs (4 and 6, to be precise) configured with LACP (the switch supports LACP).
Now, I was expecting NAS-to-NAS transfers to hit approx. 4 Gbps (the slowest server has 4 × 1 Gbps interfaces), however I can see I am only able to reach 1 Gbps as, apparently, rsync only creates a single TCP session, which cannot span multiple NICs.
I have tried to start multiple rsync jobs concurrently, hoping that this would allow all the bandwidth to be used, however that still topped out at 1 Gbps.
I'm sure LACP works just fine, as I can move data from/to each NAS faster than 1 Gbps (e.g. with 3 wired clients copying from each NAS I can consume ~3 Gbps).
Is there any way to tell RSync to take advantage of all the available bandwidth and spread the load across multiple NICs configured with Link Aggregation?
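A single rsync is one TCP flow, so under LACP it can never exceed one link. With a layer3+4 transmit hash policy on the bond, running several rsync streams in parallel over different subtrees may spread across the member links. A sketch using xargs (paths and the nas2 hostname are placeholders):

```shell
# Run up to 4 rsync streams in parallel, one per top-level directory
# (/srv/data, nas2 and /srv/backup are placeholders for your layout)
ls /srv/data | xargs -P 4 -I{} rsync -a /srv/data/{}/ nas2:/srv/backup/{}/
```

There is no guarantee of a full 4 Gbps, since the hash may still place two flows on the same link, but multiple flows are a prerequisite for exceeding 1 Gbps at all.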
I've just installed portainer/portainer:latest to get familiar with this new tool.
I can see my containers and their information and details.
Is there an easy way to export the configuration (settings, volumes, etc.) of a running container so as to be able to re-create it "as is"?
For example, let's say I delete Transmission-OpenVPN and, in a month's time, I want to re-deploy it as it was prior to deletion (with username, password, PUID, volumes, etc. set).
Is there a way to do so?
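Outside Portainer, Docker itself can dump a container's full configuration, which is usually enough to reconstruct the original run command; third-party tools such as docker-autocompose can reportedly even turn a running container into a compose file. A sketch (the container name is an example):

```shell
# Save the complete configuration of a running container to JSON
docker inspect transmission-openvpn > transmission-openvpn.json

# Or extract just the pieces you care about, e.g. environment and mounts
docker inspect -f '{{json .Config.Env}}' transmission-openvpn
docker inspect -f '{{json .Mounts}}' transmission-openvpn
```

The JSON is verbose, but everything needed to rebuild the container (image, env vars, ports, volumes, restart policy) is in there.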
In order for updating to work with the official image when restarting the container, you have to specify the proper image Tag when you initially pull and run the image.
You lost me here.
I had pulled an image with the "latest" tag, then pulled it again when a new version was released; I ended up with a single Plex container running but two Plex images, one being the original and the other the newly released one.
How could I tell Docker to use the latest (newly released) image and not the one that was originally used for the container creation?
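As I understand it, pulling a newer :latest only updates the image; an existing container keeps running the image it was created from, so it has to be recreated to pick up the new one. A sketch (the Plex image name and container name are examples):

```shell
docker pull plexinc/pms-docker:latest  # fetch the new image
docker stop plex && docker rm plex     # remove the old container (data in volumes survives)
# re-run with the same options you originally used, e.g.:
docker run -d --name plex plexinc/pms-docker:latest
docker image prune                     # optionally clean up the now-untagged old image
```

This is also why the second, untagged image lingered: it was still in use by the old container.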
This is not correct. See the Tags section of the documentation:
Thank you for the correction @gderf; I did restart the container but I kept getting the notification of a new version available, so I assumed it wasn't auto-updating.
Good to know I was wrong as that’d simplify my life.
Thank you @subzero79.
So docker-compose has the ability to rely on configuration files; I will need to look into this more and understand how to export the configuration of running containers.
I'm still running OMV 4 so I don't think there is an option to run Portainer (unless I run it in a Docker container itself); however, I've tested OMV 5 with Portainer in a VM and found it much more complex than docker-gui.
I must admit I struggled to find a way to search for images on Portainer.
Would you have any resource you’d recommend for better understanding docker-compose and Portainer?
Thank you @KM0201.
I had seen the video from TechnoDadLife about Watchtower but I haven't used it yet.
Is that generally what people do?
But I’m wondering, is there an easy way to export the configuration (i.e. to a *.yml file) and then recreate the container based on that file?
I am wondering because, let's say, one day I might need to migrate to new hardware and/or re-image my NAS, and it'd be handy to have a way to just recreate the containers as they were (provided share names do not change and configuration folders are retained).
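A compose file is exactly that kind of portable description: you write (or generate) one YAML file per stack and `docker-compose up -d` recreates everything from it. A minimal hand-written sketch (the image, paths and variables are examples, not your actual settings):

```shell
# Write a minimal docker-compose.yml describing the container declaratively
cat > docker-compose.yml <<'EOF'
version: "2.4"
services:
  transmission-openvpn:
    image: haugene/transmission-openvpn
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /srv/appdata/transmission:/data
    restart: unless-stopped
EOF
# docker-compose up -d   # recreate the container from the file at any time
```

Keep the YAML files alongside your config folders and a migration becomes: copy both to the new box, run `docker-compose up -d`.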