Posts by puterfixer

    Basically, you cannot use the same FTP server instance and access it via two different IP addresses. You should run two instances of the FTP server, one configured to be used on the LAN and one on the WAN. I'm going to deconstruct the FTP protocol so that you understand how things work.


    Objective: connect to the FTP server from the Internet (WAN IP)
    Steps:

    • get a more or less fixed IP address for the internet connection (either a static IP or a hostname that is updated whenever the IP changes - noip, dyndns etc.)
    • configure the router with port forwarding rules for the FTP command port (21) and the FTP transfer ports (a range of ports >1024 and <65535)
    • configure the FTP server to use that WAN IP and port range when communicating with clients.

    Now for a bit of theory: the FTP command protocol is the one which exchanges messages about source and destination IPs and ports, in order to prepare and establish the TCP connections that transfer the binary data of files. For each transfer, one peer opens a TCP port in the listening state and expects an incoming connection request, while the other peer initiates the connection towards it. (In FTP terms, "active mode" means the client listens and the server connects out to it; "passive mode" means the server listens and the client connects, which is the usual setup when routers and firewalls are involved.) The data connection needs the listening peer to be reachable through any routers and firewalls in between, and that's why you do the router configuration: to ensure that any remote client can connect to your FTP server.


    Here's how the communication occurs in passive mode (Server listening, Client connecting):
    Client tells the Server: I want to send/receive a file.
    Server responds: OK, I have opened a socket on IP:port, please connect.
    Client connects to the specified IP:port and transfers the data.
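
    To make this concrete, here's roughly what that exchange looks like from the client's side, as a sketch using Python's ftplib (the host and credentials are placeholders):

    Code
    from ftplib import FTP

    ftp = FTP()
    ftp.connect("ftp.example.com", 21)   # placeholder host; this is the command connection
    ftp.login("user", "password")        # placeholder credentials
    ftp.set_pasv(True)                   # client will connect to the IP:port the server announces
    with open("file.bin", "wb") as f:
        ftp.retrbinary("RETR file.bin", f.write)  # the data travels over the second connection
    ftp.quit()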


    So, as part of the protocol, the FTP server communicates not only the (random) port on which it is listening for connections, but also the IP address to which the client must connect. For an FTP server configured to be accessed from the Internet through a router, the FTP server will announce the WAN IP in this message, and never the LAN IP (which would not be routable, so the remote client would not be able to connect to it), even though the server's machine is operating on a LAN IP itself. It is the configuration of the FTP instance which instructs the FTP server to advertise the WAN IP.
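
    For the curious, this announcement is literally part of the reply text on the command channel; here's a quick sketch in Python of decoding a passive-mode reply (the IP and port are made up):

    Code
    import re

    # "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)" - the data port is p1*256 + p2
    reply = "227 Entering Passive Mode (203,0,113,10,195,80)"   # made-up WAN IP
    h1, h2, h3, h4, p1, p2 = map(int, re.search(r"\((.+)\)", reply).group(1).split(","))
    print(f"connect to {h1}.{h2}.{h3}.{h4}:{p1 * 256 + p2}")    # 203.0.113.10:50000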


    This will work for a client on the Internet: it gets a routable IP address and a port, connects to it and ends up at the router, the router forwards the connection to the LAN IP where the server is, and the transfer proceeds.


    However, a client on the LAN side will also receive a message to connect to the WAN IP and port, instead of being told to use the LAN IP directly. The FTP server can't differentiate between LAN and WAN clients to send different messages, so it always sends the same IP address, as instructed in its configuration.


    So what happens then with the LAN client? It attempts to initiate a TCP connection to the routable IP address on the WAN side. The operating system identifies that the desired target is outside the local subnet, so it forwards the connection request to the default gateway - the router. The router performs Network Address Translation on the connection and forwards it to the Internet interface, but the target is actually its own WAN port. The router would then need to identify that this connection must be forwarded back through the Port Forwarding rule to a LAN IP address - a feature known as NAT loopback or hairpinning - and this is where things usually stop. Routers commonly have a built-in security mechanism to prevent spoofing of source IP addresses in packets received on the WAN port, so that malicious people can't attack internal servers by making packets appear to originate from another LAN client.


    And this is why the connection from LAN will not work to the same FTP server instance which is otherwise accessible from the Internet.


    The solution for this is to have 2 instances of the FTP server running, with 2 separate network configurations: one for LAN clients, configured to work on one port and advertise its LAN IP and its own range of passive data ports (which don't need to be forwarded in the router), and another instance for WAN clients, configured on another port, advertising the WAN IP and another range of passive data ports which match the Port Forwarding rules in the router. A sketch of the idea follows below.
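
    Just to illustrate the two-instance idea (not necessarily with your FTP server software), here is a sketch using the third-party Python library pyftpdlib; the IPs, ports, port ranges, paths and credentials are all assumptions to adapt:

    Code
    from threading import Thread
    from pyftpdlib.authorizers import DummyAuthorizer
    from pyftpdlib.handlers import FTPHandler
    from pyftpdlib.servers import FTPServer

    authorizer = DummyAuthorizer()
    authorizer.add_user("user", "password", "/srv/ftp", perm="elradfmw")  # placeholder account

    class LanHandler(FTPHandler):
        pass                                   # advertises its real LAN IP by default

    class WanHandler(FTPHandler):
        masquerade_address = "198.51.100.7"    # assumed WAN IP announced to clients
        passive_ports = range(60000, 60100)    # must match the router's forwarding rules

    # one instance per audience: LAN clients on port 21, WAN clients on port 2121
    for handler, port in ((LanHandler, 21), (WanHandler, 2121)):
        handler.authorizer = authorizer
        Thread(target=FTPServer(("0.0.0.0", port), handler).serve_forever).start()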

    If I may expand a bit on @subzero79's first point :) NIC teaming/bonding only works if you have a smart or managed switch which supports port teaming/bonding (link aggregation) as well. Also, the internal switching fabric of the switch must be able to sustain that volume of traffic. These two requirements pretty much exclude any SoHo router with a built-in switch.

    Uh, this looks like a spaghetti of assumptions.


    As a basic concept: "a proxy speeds up the Internet" does not mean it will always make Internet access faster. It can only make some pieces download faster (from a local cache rather than from the original site), as long as they have been accessed before and are still kept in the cache. But your browser already does this anyway, even without a proxy. So the actual traffic savings become visible when large groups of people access pretty much the same Internet content, and when you tweak the proxy's content retention period to be longer than the browser's cache.


    To explain where the proxy fits into the picture, consider that you have 3 ways of connecting to the Internet:

    • You want your workstations to connect "directly" to the Internet. For this, you get a router to do packet routing between your LAN and any other destination, and configure the workstations with the router as "default gateway". The router can prioritize various types of traffic and allow/disallow some connections according to a whitelist/blacklist, but other than that you're not doing any sort of connection logging, content inspection, etc.
    • You want to fully control what your users are accessing, and for this you do not define a default gateway. Only specific machines on your LAN are able to access both your LAN and the Internet - for instance, a web proxy server. For this, you set up a proxy server, and your clients are required to explicitly define it in the browser's settings, otherwise they don't have Internet access. The proxy server works at the application layer of the OSI model, so this allows you to do some fancy stuff which a router would not be able to do, for instance: content caching, whitelisting/blacklisting URLs based on wildcards, full logging of all traffic for audit purposes, etc. But clients must be expressly configured to use the proxy (see the sketch after this list).
    • You want the advantages of having a proxy, but you don't want your users to configure the browser manually, or to even know that the traffic is passing through a proxy and being logged. Or maybe you need the clients to first authenticate themselves in a browser before getting Internet access (like some corporations, hotels, etc.). In this case, a transparent proxy is used. From the client's perspective, the configuration is similar to a router setup (the DHCP server gives you an IP, a DNS server and a default gateway), however that gateway is not just doing packet routing, but also traffic inspection and classification. It then decides whether the traffic is HTTP(S) or not, and what to do with it based on some rules. And for HTTP it can leverage a caching mechanism in a way that is completely transparent to the end user.
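
    As a tiny illustration of scenario 2 (the explicitly configured client), here's a sketch in Python; the proxy address is an assumption:

    Code
    from urllib.request import ProxyHandler, build_opener

    # every request made through this opener goes via the proxy, not the default gateway
    opener = build_opener(ProxyHandler({"http": "http://192.168.1.10:3128"}))  # assumed proxy IP:port
    print(opener.open("http://example.com/").read()[:200])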


    Not to be confused with a reverse proxy - this one sits on the server side. For instance, I may be running a website on a web host somewhere, and the database is getting hammered by requests for every single pageview. One of the options to limit this is to route all traffic to the website through a reverse proxy, so that the proxy can cache the pretty much static pages/content and deliver them to the visitors, avoiding a load of database hits; only the unique requests (specific to one user or another) would pass through the reverse proxy to the web server to be generated.
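
    To show just the principle (this sketch ignores headers, expiry and concurrency), here's a minimal caching reverse proxy in Python, with a made-up backend address:

    Code
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    BACKEND = "http://127.0.0.1:8080"   # made-up upstream web server
    cache = {}                          # path -> (status, body)

    class CachingReverseProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path not in cache:                 # miss: one trip to the backend
                with urlopen(BACKEND + self.path) as resp:
                    cache[self.path] = (resp.status, resp.read())
            status, body = cache[self.path]            # hit: no work for the backend/database
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), CachingReverseProxy).serve_forever()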


    So, theory aside :) If you want a proxy on the LAN, you need to figure out whether you're in scenario 2 or 3 above. You can even set up a proxy server on a networked machine just for caching (while still allowing direct traffic through the router), and point your other machines' browsers to it as an HTTP proxy. For a transparent proxy, things are trickier.


    Some other things to keep in mind:

    • the proxy machine needs to be seriously beefed up with RAM and spare disk space for cache;

    • the traffic savings become relevant for large organizations, and pretty low for a home use scenario - see the basic concept at the beginning;
    • a lot of the web content these days is dynamic, so the savings will vary a lot depending on the internet use patterns of your people;
    • in order to be effective, the proxy needs a very fine balance and sensible configuration/management, so that you don't run into various issues; for instance, you may force items to expire only after a looooong time so that they are delivered from cache, but this may prevent visitors from seeing new content; on the other hand, you may set the expiration time too short, in which case the proxy rarely serves content from cache and just forwards the request to the Internet most of the time, which is not really effective, or at least not significantly more effective than the browser's own caching mechanism.

    My OMV 2.1.6 is producing a new set of symptoms - rebooting randomly, transmission web interface freezing, not responding to a reboot/shutdown command even as root...


    I'm starting to suspect Transmission of causing this, but the processes involved appear to be random (kswapd, flush)...


    Attached is the Messages log, and the support info log.


    Can anyone help me pinpoint the issue, please? It's not the RAM - I tested it for 12 hours and it passed all tests 6 times.

    More things to try:

    • F10 during POST to get into Computer Setup

      • Computer Setup > Advanced: WOL after power loss - enabled
      • Computer Setup > Advanced: Remote wakeup boot source - local hard drive (not remote server)
    • Ctrl-P during POST to get into ME BIOS Extension Setup and fiddle with the settings there

    (Source, page 9)


    Then, in the Technical Reference Guide, chapter 5.10.1, you get some details on WOL. Chapter 5.10.3 below it mentions the PROSet application software (for Windows), which allows you to set various wakeup events. It also mentions that the Magic Packet needs to contain the MAC address repeated 16 times.
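
    That "MAC address repeated 16 times" structure is simple enough to build and send by hand; here's a minimal sketch in Python (the MAC is a placeholder for your NIC's address):

    Code
    import socket

    mac = bytes.fromhex("001122334455")       # placeholder MAC, hex digits only
    packet = b"\xff" * 6 + mac * 16           # 6 bytes of 0xFF, then the MAC 16 times

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.sendto(packet, ("255.255.255.255", 9))  # UDP broadcast; port 9 is customary for WOL
    s.close()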


    There's also a Service Reference Guide with a LOT more technical data and references to various tools for management. This is a BOOK, 264 pages!!!


    [later edit] Just to note, the symptoms indicate that WOL is indeed working correctly: the system wakes up from suspend/standby (S1/S3). However, after that, something else controls the boot sequence in a different way than the standard boot. This is unusual for a generic PC but quite possible on a corporate box due to its built-in management subsystems. This led me to think that the BIOS has a separate setting for the boot sequence/source in case of a WOL event, and indeed it does: "Remote wakeup boot source" should be set to "Local hard drive". In a corporate managed environment, it wouldn't be unusual to have a scenario in which a sysadmin needs to update a workstation remotely, so a network-wide broadcast of a WOL packet with the right MAC address could be used to trigger a boot from a specific network location - perhaps a wipe & fresh install, or a standalone utility to run a patch & antivirus scan.

    Actually, it looks like the NIC's own "BIOS" (configuration program) may not be set up correctly, and could be attempting a PXE (network) boot. There should be a key combination you can press during POST (Power-On Self-Test); you have about 2 seconds to do that to get into the NIC's configuration menu - Alt-S, Alt-I, something like that.


    Oh, and also check the boot order in BIOS: leave just the USB stick as the primary boot device and disable secondary/tertiary boot or "Try other boot media" if such an option exists. These brand-name CMOS setup programs are a bit odd.

    Hi there @Eryan


    Sorry to hear about your sickness, glad to hear you're putting it to good use :D


    WOL is a fiddly topic. I have seen cases when specific motherboards could use only specific extension slots or none at all for WOL, specific NICs would wake up only in certain scenarios, and BIOSes that had limited capability.


    A couple of things you could test:


    Does this happen only with OMV, or also with other operating systems? This would narrow down the root cause to the OS. Since you have an on-board USB header, it would be fairly easy to replace the current stick with another one, install another OS of your choice, and see if you can get it to boot up via WOL.


    Also, check the settings in BIOS for the level of energy saving (S1, S2 or S3), and make sure the WOL (or "ring") trigger is enabled.

    Apologies for the frustration, @tekkb and all. It was just incredibly aggravating to not be able to figure out where the issue was coming from, or to get any expert support on the matter. In hindsight, nobody else would have been able to pinpoint the root cause anyway :D


    To close the loop, here's what happened meanwhile:

    • disabled the on-board NIC (Realtek) and installed an Intel add-on Gigabit Ethernet card with less CPU load - nada;
    • replaced the suspect drive with another one, connected to on-board SATA, then tried installing CentOS and Fedora Server - neither would work in graphical mode, and in low res graphical mode both froze during the install;
    • reduced the shared memory allocated to the graphics - no change;
    • tested the RAM exhaustively, passed 5 times over 9 hours - no issue here;
    • had a rare moment of divine inspiration :D and removed the HP Smart Array P410 hardware RAID controller from the PCI Express slot - installation of Fedora Server completed without glitches;
    • found out that HP had actually released an advisory related to this card: in a particular combination of firmware + SATA drives + status polling, the board would cause the system to freeze or reboot unexpectedly;
    • HP had published an updated firmware to correct the issue, and fortunately I had just installed Fedora, which is on HP's list of supported OS's, so I could apply the RPM patch easily; (last time, I had to install a Windows Server trial just to apply the firmware, because it was delivered as an executable which would run only under a Windows Server environment)
    • as I'm not familiar with either Fedora or CentOS and could not get Cockpit to work for remote server management, I gave up on that and returned to Debian;
    • I could not get OMV installed on top of Debian (a screenful of dependencies which it couldn't handle), so I switched the hard drive again and performed a clean install of OMV 2.x.


    It's been 12 hours now and the new system is still working; all settings/shares/users/plugins were manually added back in like half an hour, and I'm keeping an eye on it to see if it freezes again. The new hard drive has less than 1,000 power-on hours and has passed its long self-test.


    Sorry again for venting here. The issue was caused by a firmware bug in the hardware RAID controller, corrected by the vendor, and not related to lsof or another software component of OMV or Debian. I need to subscribe to support alerts for this piece of hardware.

    The box froze again just after midnight, and has been staying at full load for 7 hours. What now, shut down the system every time I am not using it? I am really, really pissed off. Even Windows would be a more stable option than this.


    Please provide instructions on how to back up the settings regarding users, shares and permissions, so that I can do a clean install from the ISO. I'm at the point where this is the last chance I'm giving OMV before moving on.

    I can replicate the issue. The problem appeared with the upgrade to 2.x, and it's limited to the ETH0 graphs. Data is still being collected:


    Code
    root@openmediavault:/var/lib/rrdcached/db/localhost/interface-eth0# ls -lsa
    total 884
      4 drwxr-xr-x  2 root root   4096 Sep 18  2014 .
      4 drwxr-xr-x 24 root root   4096 Jul  3 00:05 ..
    292 -rw-r--r--  1 root root 295232 Jul  7 00:11 if_errors.rrd
    292 -rw-r--r--  1 root root 295232 Jul  7 00:11 if_octets.rrd
    292 -rw-r--r--  1 root root 295232 Jul  7 00:11 if_packets.rrd


    It's just the graphs not being properly displayed.


    For instance, the memory hourly graph works:

    [Image: RRD graph - by hour (rrd.php?name=memory-hour.png)]


    But the ETH0 doesn't:

    [Image: RRD graph - by hour (rrd.php?name=interface-eth0-hour.png)]

    I did an upgrade to 2.1, but I still can't pinpoint the issue or trust the existing system disk. It's an old 20GB IDE drive, and I would love to replace it. The question is: how do I back up my settings/users/shares etc. so that I can easily restore them on a clean install on a new drive?


    Drat, the box I powered up like 15 minutes ago just froze and did a cold reboot by itself. Not sure if it's linked to Transmission traffic; a torrent was downloading, and the new Gigabit internet connection might be too much for it. (Yeah, 1Gbps internet connection, gigabit router with hardware PPPoE offloading, speed tests return between 930-970Mbps... All for €12/month. Wanna move here? :) )

    Alright, with monitoring enabled it crashed a couple more times until I could catch it red-handed.


    At 9:30pm sharp tonight, my box spiked up to 100% CPU load and stayed there for half an hour until I rebooted it. It did send me an alert that the CPU was way up, and then two other interesting mails with identical content:


    From: Cron Daemon
    Message: Segmentation fault
    Subject: Cron <root@openmediavault> [ -x /usr/lib/php5/maxlifetime ] && [ -x /usr/lib/php5/sessionclean ] && [ -d /var/lib/php5 ] && /usr/lib/php5/sessionclean /var/lib/php5 $(/usr/lib/php5/maxlifetime)


    Indeed, the process list showed quite a few processes with >80% CPU load, all related to generating graphs.


    I'd point my finger at the cron job generating graphs, however... as I'm typing this, the system stopped responding again over the network - no web interface or SSH access. It did this as I was browsing through the pages of system logs.


    Any clues? Is it a software bug somewhere? Is it a faulty drive?

    The solution is most effective when applied closest to the source of the problem. Yeah, the idea of restricting some files is interesting; however, it would be more effective to:
    1) use separate user accounts on your Windows PC
    2) not use accounts with admin privileges
    3) use a passive defense against malware-spreading URLs by using the OpenDNS resolvers (208.67.222.222 and 208.67.220.220) instead of your ISP's DNS
    4) use an active defense with a decent antivirus (comparison charts and tests here http://www.av-comparatives.org/ )
    5) disable autorun on all drives in Windows (see the sketch after this list)
    6) limit write permissions at user level for shared drives on the NAS.
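
    For point 5, here's a sketch of the registry tweak in Python (the standard NoDriveTypeAutoRun policy value; 0xFF disables autorun on all drive types; Windows only, current user):

    Code
    import winreg

    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER,
                           r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer")
    winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)  # 0xFF = all drives
    winreg.CloseKey(key)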

    Gotcha. Done. Although it's not very logical to me how a newer kernel can have improved support for a really old motherboard (2nd half of 2007) :D


    And now, we wait.


    Is there a monitoring/watchdog solution I can use to alert me when the CPU load stays elevated for more than a few minutes?
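
    Something along the lines of this sketch is what I mean - the threshold, window and alert action are placeholders:

    Code
    import os, time

    THRESHOLD, WINDOW, INTERVAL = 4.0, 300, 30   # load average, seconds elevated, poll period
    elevated_since = None
    while True:
        load1, _, _ = os.getloadavg()
        if load1 > THRESHOLD:
            if elevated_since is None:
                elevated_since = time.time()
            if time.time() - elevated_since >= WINDOW:
                print(f"ALERT: load {load1:.1f} elevated for {WINDOW}s")  # e-mail/notify here
                elevated_since = None
        else:
            elevated_since = None
        time.sleep(INTERVAL)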


    Any other way to observe what's going on with the box when it freezes - would remote logging help?

    Memory is fine :| I'll run memtest on it in a loop again, just to be sure.


    OMV is installed on an IDE hard drive, the only drive attached directly to the motherboard. All other (SATA) controllers are disabled. Storage is on a hardware RAID controller in a PCI Express slot.


    Backports? What does that do?