Posts by Shakesbeer

    Ok. I grabbed a coffee and started to feel human again. I dug deeper and here is what I found:

    This is stupid but reasonable.

    Obviously there is a block entry for my desktop IP in iptables, which includes http and https.
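    For reference, the entry can be listed directly; the chain name f2b-omv-webgui is simply the one fail2ban created for my jail, so adjust it if yours is named differently:

    Code
    iptables -L f2b-omv-webgui -v -n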

    So where does it come from?


    I remembered that I had activated fail2ban to prevent brute-force attacks.


    I found an entry in /var/log/fail2ban.log

    Code
    2021-10-19 00:07:12,635 fail2ban.actions        [516]: NOTICE  [omv-webgui] Restore Ban 192.168.1.20
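    In case someone wants to search their own log: a plain grep on the ban actions is enough, assuming the default log location:

    Code
    grep Ban /var/log/fail2ban.log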


    Ok. So I checked fail2ban


    Code
    fail2ban-client status omv-webgui
    Status for the jail: omv-webgui
    |- Filter
    |  |- Currently failed: 1
    |  |- Total failed:     1
    |  `- File list:        /var/log/auth.log
    `- Actions
       |- Currently banned: 1
       |- Total banned:     1
       `- Banned IP list:   192.168.1.20


    There it is. Obviously my Desktop simply gets blocked on http and https.

    This is the reason why there is no error message anywhere and everything looks fine. It is because everything IS fine and behaving as intended.


    So I removed my IP from the fail2ban jail with:

    Code
    fail2ban-client set omv-webgui unbanip 192.168.1.20


    and switched the omv webgui ports back to 80 and 443 again with omv-firstaid.


    I tried accessing the webgui on 443 and.... Tadaaaa it works.

    IT is amazing ... just do it the correct way, and it works. 8o


    So how did this happen?

    Following scenario:

    - My root disk was 100% full. The shares started, but nginx and the PHP web gui had an issue with that and did not start properly.

    - I tried to connect to the webgui during this, but it failed. However, the failed connection attempts left log entries in /var/log/auth.log.

    - These failed connection attempts were picked up by fail2ban, as intended, and my IP was banned.

    - After I emptied the disk and updated OMV, everything was fine again, but the fail2ban ban persisted.
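    As far as I understand it, this also explains the "Restore Ban" line in the log: fail2ban keeps its bans in a small local database and restores them after a restart, so a reboot alone does not clear them. If you are unsure which jail holds a ban, listing the jails first helps (plain fail2ban-client commands):

    Code
    fail2ban-client status              # lists all active jails
    fail2ban-client status omv-webgui   # shows the banned IPs of a single jail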


    When starting my investigation I got misled by some posts describing a similar issue that turned out to be caused by unintended port conflicts. That is why I first started looking there.



    My issue is solved now, and I hope my description will help others in the future :D

    While writing my last entry, I thought about iptables, even though I did not activate or configure it.

    I see there is a REJECT entry for the IP of my desktop computer in the f2b-omv-webgui chain, but I do not see any reference to port 80 or 443.
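    For reference, the listing I am looking at is roughly the output of (run as root):

    Code
    iptables -L -v -n --line-numbers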

    Can anyone please have a look and help me interpret the iptables output? I had a horrible night and cannot even focus on walking straight :D

    By the way ... I do not think it is a port conflict issue anymore, because:

    - I do not see any duplicate port usage in IPv4

    - IPv6 is deactivated


    I suspect something is broken in the webserver. The nginx configuration looks fine, but something must be preventing it from delivering anything on ports 80 and 443.

    Any ideas where to look? I am not an nginx specialist.

    I had a closer look but could not find any hint about the root cause.


    The file /etc/nginx/sites-available/openmediavault-webgui picks up the correct values whenever I change the ports.


    working:

    Code
    listen 8080 default_server;
    listen 9443 default_server ssl deferred;


    not working:

    Code
    listen 80 default_server;
    listen 443 default_server ssl deferred;


    When checking netstat, everything looks OK as well: nginx is listening on ports 80 / 443, only on IPv4 (as I disabled IPv6). But when I try to connect, the connection just times out.
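    For the record, roughly what I checked, assuming the usual tools are installed:

    Code
    netstat -tlnp    # look for 0.0.0.0:80 and 0.0.0.0:443 and which process owns them
    ss -tlnp         # same information via iproute2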


    I checked the logs in /var/log but could not find anything suspicious.

    /var/log/nginx/error.log does not show anything useful either:

    Code
    2021/10/25 21:03:19 [alert] 533#533: *78856 open socket #11 left in connection 4
    2021/10/25 21:03:19 [alert] 533#533: *78689 open socket #7 left in connection 10
    2021/10/25 21:03:19 [alert] 533#533: aborting
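    One more thing I want to try, to separate an nginx problem from something sitting between my desktop and the server: requesting the pages locally on the NAS itself (plain curl; -k just ignores the self-signed certificate):

    Code
    curl -v http://127.0.0.1/
    curl -vk https://127.0.0.1/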


    Do you have any further ideas where I could look to identify the root cause? This seems to be reproducible behaviour, as there is already a second post in this forum describing the exact same issue.

    I built one too. It is not really much different from a normal RPi, apart from the power supply. Here is a pic of mine:

    Nice setup. I see you are running an HDD setup. Using an available computer tower and its power supply is a good way to power the HDDs and put (maybe) already available hardware to use.

    I wanted it in a small form factor, as these computer towers need so much space. No issue for me, but my relatives do not want to have too much computer hardware visibly standing around. With the smaller case I was able to put it behind their TV screen on a board.


    I am curious. What disks are you using exactly, in which setup and what filesystem? Did you do some performance testing? I only have the RPi4 Compute Module SSD experience.

    The real end to end read/write performance is influenced by many factors.

    While copying a large amount of data, please monitor your system with the dstat command and provide some information about your CPU, RAM and network load.
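    Something like this, run on the NAS while the copy is in progress, samples CPU, memory, network and disk load every 5 seconds (adjust the interval as you like):

    Code
    dstat -c -m -n -d 5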

    And please provide some information about your end to end setup.

    E.g.

    - How do you access your NAS share? SMB?

    - What System are you copying and reading data from? Windows 10?

    - What filesystem are you using on your OMV disks?

    - What disks in general? (The exact type, as some have a limited write cache, which heavily impacts performance when transferring larger amounts of data.)

    etc.


    Why is this of interest?

    Because all these things are in the way of your data being copied from a client to your NAS.

    E.g. (I know, this is a simplified description. Bear with me.)

    The data gets read by your Windows client from the local disk in NTFS format, then pushed through your local network adapter in nice chunks (according to your network settings like MTU / frames; make sure to use jumbo frames). On your NAS the network card receives all these packets, and they get pieced together again into the original data. This data then runs through your NAS system, the SMB protocol takes its toll on performance for encoding/decoding, and finally it is to be stored on your NAS disk. But as a different filesystem is in place there (e.g. ext4 / btrfs / whatever), your data runs through the CPU again before it finally gets written to the disk. The disk itself receives the data and puts it into its internal write cache. Depending on the size of this cache and the write speed of your disk, this can take time again, because if the write cache is full your data needs to "wait" in the system before even being accepted by the disk. Only when the disks (e.g. spinning magnetic platters) have taken the data from the cache is it actually written.
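    As a side note on the jumbo frames: checking and, only if every device in the path supports it, raising the MTU is quick. eth0 is just a placeholder for your interface name:

    Code
    ip link show eth0            # shows the current mtu
    ip link set eth0 mtu 9000    # jumbo frames; only if switch and client support it too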


    All these steps in between can affect performance. It is not unusual to fall short of the theoretical read/write speed that looks mathematically possible when you only consider your disk metrics and the network. Always keep in mind that your data runs through multiple stacks on both sides of the network, which can heavily affect real-world performance.


    I learned this the hard way: I built a NAS but was only able to write to my share at 14 MB/s from my Windows system. In the end, my CPU was the bottleneck. It was a multi-core system, but only one core was used for encoding/decoding all the SMB and filesystem work, which created a bottleneck there.


    So if you want to find your bottleneck you need to look at the full stack your data package is running through and trigger your little Sherlock Holmes to find it.


    ################################################


    Edit: Sorry, it is late, and I have been a bit unclear. When copying over the network using SMB, the source and target filesystems on your disks do of course not have a big impact. The main impact comes from the SMB protocol's load on your CPU, software RAID configurations, the cache limits of the disks used, the MTU / frame settings of your network adapters and the hardware in between (switch and router).

    When copying locally from an ext4 disk to an NTFS disk, or vice versa, the NTFS Linux driver takes its toll on the CPU when converting the data. At least on my system (yes, the hardware is not the latest top-notch) it only uses one core and puts significant load there, creating a bottleneck.

    When using the network, the bottleneck is again the CPU, but this time created by the SMB implementation.

    I get a satisfying speed out of my setup, but it is not the theoretical maximum.

    Have you been able to verify the connection? 100Mbit vs 1000Mbit?
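    On the OMV side the negotiated link speed can be read with ethtool (run as root; eth0 is a placeholder for your interface name):

    Code
    ethtool eth0 | grep -i speed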

    If you have verified that your network connection is not an issue, please post some more details about your used file systems and how you access it.

    Are you using SMB? What filesystem are you using on your USB stick and on the client you are copying from? I have had some interesting experiences and test results regarding the capabilities of Intel Celeron CPUs when using SMB to copy from an NTFS filesystem to an internal Linux filesystem (like ext4). Depending on your setup this can become a bottleneck.

    In addition please provide some information about your system load when copying data, using the dstat command.

    Do you still have this issue?

    If yes, please provide more information. Is the web interface reachable? Do your shares work? etc.

    For example the output of journalctl:

    Code
    journalctl -xe


    Looks like your boot sequence has an issue.

    I just came across this thread, and it reminded me heavily of an article that I recently read. So I thought I could share my information with you.


    The Heise Make Magazine (German) recently had a really nice build. They called it the "SATANAS". Here is the link.

    https://www.heise.de/select/make/2021/4/2033811215351927251


    The article itself is paid content, but it is worth it, as it contains the build specifically for OMV, including a detailed step-by-step guide. If you want to do an OMV build based on the RPi 4 Compute Module, I recommend this read. If you are an English speaker, maybe you can use Google Translate. Anyway, the commands and pictures convey the information even if you are not a German speaker.


    I did their build and it works like a charm. Basically I used:

    Code
    Raspberry Pi Compute Module 4 (4 GB RAM)
    Raspberry Pi CM4 IO-Board
    4-Port SATA Controller Card Marvell 9215 (which is, together with the Compute Module and the IO board, the core.)
    Micro-SD-Card of 64 GB
    
    SSD Disks, mainly because of speed and the small size, I suppose.
    Power Supply (internal) of 12 V and 1.5 A
    
    Wireless Antenna for the Compute Module.

    They even made laser cutting templates for the case available for everyone on GitHub:

    https://github.com/MakeMagazinDE/SATANAS


    Regarding the power supply: I built it directly into the case, as mentioned.

    I went with this one specifically, but any other with the same specifications and roughly the same size should work.


    https://www.conrad.de/de/p/h-t…-bereich-5-24-190008.html


    The power supply should deliver 12 V and at least 1500 mA, depending on which peripherals and disks you include. Take care that it comes with a plug that allows a connection to the IO board. I first took a different one and needed to switch later.


    As I am already running an HDD-based OMV NAS, I will give this nice little piece of speed to some relatives. I was just curious and wanted to build it. Luckily I knew people who were looking for a custom NAS solution rather than an out-of-the-box system, so this was my chance :)

    Hello,


    I ran into some really weird behaviour of my OMV server and would like to get your opinion and experience before I dig deeper and maybe bother the community with a GitHub issue.


    Code
    uname -a
    Linux MyNAS 5.10.0-0.bpo.8-amd64 #1 SMP Debian 5.10.46-4~bpo10+1 (2021-08-07) x86_64 GNU/Linux


    Hardware: a homebuilt NAS tower with 4 HDD bays (currently 3 in use), one 120 GB SSD as root disk, and an extra eSATA PCI adapter for an external eSATA docking station (for external backup disks).



    I started with a Debian Stretch installation, installed OMV manually following the guide, and finally updated to Debian Buster in August 2021. (I took my time, as I did not want to be an early adopter this time :) ). I did not install any additional software besides some Linux tooling like the lshw package etc. No Pi-hole or other stuff. This server's only purpose is to run my OMV.

    I also updated OMV on the CLI and verified it on the web gui shortly after. Yes, I did a reboot.

    I was running with the default configuration for the web gui, which means port 80 and 443.


    Everything was fine until I made a small stupid mistake: I created a folder on my root disk instead of on my backup disk and synced backup data there, resulting in the root disk being 100% full. I did not realize it immediately, as all the shares were still working fine, even after a reboot. I recognized that something was wrong when I wanted to access the web gui and it was not accessible (timeout). So I plugged a screen into the NAS and saw a bunch of error messages during boot. The journalctl entries clearly pointed to the full disk. I found the full root disk and deleted the misplaced backup data, rebooted and did an apt-get update / upgrade plus autoremove and cleanup, as this had been my original intent.

    All error messages that I saw disappeared and everything looked fine.


    But weirdly enough, the web gui was still not accessible. I was assuming some side effect of the full disk, but now I think this is not the case.

    I used omv-firstaid to reset the web gui configuration (IPv4, Port 80 and 443, http and https enabled, IPv6 disabled), but nothing changed.

    The nginx service was up and running, and looked fine.


    When having a look at netstat -a, I saw that no ports 80 and 443 were listening on IPv4, but there were IPv6 listening entries like :::443 and :::80.

    A Google search showed some results in which IPv6 interfered with the IPv4 configuration when using the same ports (weird). I checked with ifconfig -a and saw that my ethernet adapter had no IPv6 configured, but localhost still had an IPv6 entry. To be sure, I disabled IPv6 manually by creating /etc/sysctl.d/70-disable-ipv6.conf with the content net.ipv6.conf.all.disable_ipv6 = 1, applied it and rebooted.
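    For reference, a minimal sketch of what I did, with the file name and value exactly as described above:

    Code
    echo 'net.ipv6.conf.all.disable_ipv6 = 1' > /etc/sysctl.d/70-disable-ipv6.conf
    sysctl --system    # reload all sysctl configuration files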

    The IPv6 entry finally disappeared for localhost, but the web gui was still not there.

    As another post mentioned a possible port conflict, I switched my https port to 9443 and http to 8080 just to give it a test.

    The web gui worked!!!

    Now I checked on this with multiple combinations of 80, 8080, 443 and 9443.


    I can verify that nginx starts correctly, but it seems to run into a conflict on ports 80 and 443 and as a result does not serve the web gui. I was not able to find the conflict, but obviously there is one. As I have written, I do not have any other software running, and especially no other software that provides a web interface. My setup has been stable for quite some time and everything worked fine, until I did the update and filled the disk right after it.
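    For completeness, this is roughly how I looked for another listener on 80/443, in case I overlooked something:

    Code
    lsof -i :80
    lsof -i :443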


    I did not activate the firewall on the OMV server, and I am trying to access it from my regular Windows desktop, which is located in the same network.




    My questions are:

    - Did anyone have the same experience?

    - Do you have any idea about the root cause? What could possibly cause such a weird conflict?

    - Am I blind? I do not see anything else using 443 or 80, even though I assumed a port conflict. Any ideas what this could be?


    I have reached the point where I want to ask for your opinion and help in investigating this. I no longer think this is related to my full-disk issue, but rather something that got introduced with my last package update.


    Edit: Removed typos and corrected formatting.