Posts by puterfixer

    Hello,


    Every few weeks, for an unknown reason, my OMV freezes. It is no longer accessible through the network, and it's headless so all I can do is hit the reset button.


    This last time, however, it kept logging things so maybe this will help clarify the root cause.


    From zero load, the CPU load jumped to a constant 1 in the afternoon of May 26, and stayed like this for nearly 30 hours. In the evening of May 27, the system stopped logging data - I assume that's when it stopped responding at all. Logging resumed in the evening of May 28 after a cold reboot.


    Here are the messages that came out of the blue, alternating between CPU0 and CPU1 every 30 minutes. Any idea what this is? What's up with "lsof Tainted"?


    This is a system with a low-power CPU (AMD BE-2350) and a massive heatsink with passive cooling, in a case with well-thought-out airflow, to minimize noise from an extremely underloaded system. I'd really like to stop these hangs from ever happening again, since the resulting overheating can't do any good.


    Anything I can try? Any information you need? It's running OMV 1.19 with all updates to date, with OMVExtras and Transmission as extra plugins, and only SMB/SSH/Torrent services running.


    Thanks in advance!



    Maybe I can interject with a comment :) Let's distinguish between the capability and the principle of (not) using USB flash drives as boot disks. As stated, OMV follows the principle that normal functioning of the platform requires a hard drive for the operating system, because USB flash drives cannot sustain the amount of data being written. It's a decent choice and it has been clearly stated - just like other deliberate choices about which functionalities make sense for a file server and should be made available through plug-ins, and which should not.


    Yet here we have a system designed with the flexibility to meet any niche requirement, or even personal quirk :D As long as someone wants to invest the time to add a new capability and make it work, they're free to do so, and this creates no obligation for OMV. Personally, I salute anyone mad (or curious) enough to get their hands dirty, even if the projected end result is a Frankenstein. I mean, I have a friend who purchased a used router - a specialized device custom-built for enterprise customers of a telecom company, focused on reliable 3G and enhanced VPN capabilities. He wants to hack it to add a parallel printer interface and turn it into a print server. He's definitely bonkers for even considering such a radical change to an existing device, but hey - kudos to him for trying. :)


    This being said, the option to boot from a flash drive, with a way to minimize writes to it, does sound appealing. However, I'd raise you this proposal: why not use the USB flash drive for the OS and user settings, and reserve some space on the storage disks for work files, logs, cache and so on?


    Normally a file server will have storage attached to it if you intend to use it, so you could set aside 100MB or whatever seems fit for temporary data (or persistent storage flushed periodically from the ramdisk). And if the server doesn't have any storage attached, then it doesn't fulfill its purpose anyway, so you can safely assume it can use a ramdisk while powered on and discard most of that data when powered off. (Insert here the discussion about which changes need to be written to the flash disk and retained from one boot to the next, i.e. patches and updates, vs. which temporary files, uptime statistics etc. can be ignored.)
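    As a rough sketch of that split (the mount points and sizes here are arbitrary examples, not OMV defaults): the volatile directories go to tmpfs, and persistent work data goes to a small partition reserved on one of the storage disks.

```
# /etc/fstab - example only; device names, mount points and sizes are assumptions
tmpfs      /var/log       tmpfs  defaults,noatime,size=64m   0  0
tmpfs      /tmp           tmpfs  defaults,noatime,size=128m  0  0
# small partition reserved on a storage disk for persistent work files/cache
/dev/sdb2  /var/workdata  ext4   defaults,noatime            0  2
```

    Anything under tmpfs vanishes at power-off, which matches the "disregard most of that data" idea; anything that must survive a reboot would need to be flushed to the small partition (or to the flash drive) periodically.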

    Welp, I think I can mark this as "Solved" by applying the workaround above. The 4 disks are currently connected back to the motherboard's built-in SATA controller, software RAID working fine, and I'm transferring data to other drives connected to the hardware RAID card. The sustained 70 MB/s copying speed is a lot better than the 16 MB/s I got when the backup disks were connected in an external USB 2.0 rack. Plus, partitioning those disks through the rack somehow messed up the partition table, so the backups were not actually retrievable :) So much for trusting USB racks.


    Tonight's fun: after all data is backed up, move the 4 disks to the RAID card and set up a RAID5 there. It has a specialized CPU for accelerating this, dedicated 512MB of ECC RAM, battery backup and a plethora of built-in tools for early identification and recovery from failures. Software RAID might be "free", but when a server-grade hardware RAID controller with all the bells and whistles is under €150... :)

    Hiya,


    I have 4 drives of 2TB each, previously connected directly to the motherboard's SATA controller and configured in a RAID5 array under OMV. Recently I purchased an HP P410 SAS/SATA RAID controller for some extra kick. Today I connected the disks to the controller and ran into trouble: I can't bring up the RAID5 array, for an unknown reason.


    The controller is configured with each drive in its own array and as a single logical drive. Basically, this bypasses any built-in RAID functionality and exposes the disks as JBOD to the operating system. They just appear with a different name - instead of their model, firmware etc., I now see "Logical Volume" as the Model for all of them.


    Here's what I'm getting:


    mdadm --assemble /dev/md127 /dev/sd[abcd] --verbose --force


    If I look in the system log, I get the following:



    I'm baffled, as I can check each drive with mdadm, but can't bring up the array. Any ideas, please?


    Here are some other results:


    blkid


    cat /proc/mdstat

    Quote

    Personalities :
    unused devices: <none>


    mdadm --examine /dev/sd[a-d]

    A bit of theory first :) Active/passive only matters for establishing connections for file transfers, AFTER the main command connection is established.


    In passive mode, the client tells the server "get ready, I want that file". The server opens a new TCP socket in listening mode on a port in its passive range, and waits. The client initiates the TCP connection towards the server; once it's established, the client saves the data stream. This presumes that the server CAN be reached via direct TCP connections on that port range.


    Active mode goes the other way around. The client opens a TCP socket in listening mode, then sends a command to the server: "send me that file, this is my IP and port". The server then initiates an outgoing TCP connection towards the client; once it's established, it pumps the data through and the client saves it. This presumes that the client can be reached.


    If neither end is reachable, no transfer will work at all: neither party can initiate the data connection towards the other. Or, to be more accurate, neither system can be reached on an open TCP port in listening mode, because a router or firewall is blocking the incoming connection. At least one end must be reachable (ports visible through port forwarding or firewall exceptions).


    Considering that clients may connect from all sorts of networks, through routers which may block incoming connections towards them, the safe assumption is that clients' ports are filtered, so active mode will fail. It's much easier to make the server reachable through firewall/router configuration (and use passive mode) than to ask every client to do that.



    Now, the issue you seem to have is two steps before file transfers: external clients can't connect to your server AT ALL - not even for the initial connection for exchanging commands. That is clearly an issue with the router, and possibly with another router or firewall upstream.


    Tips:
    - Most ISPs block the standard ports for services (21=ftp, 25=smtp, 80=http etc.) in case customers' computers become compromised without their knowledge. You should configure your FTP server to use a different port for the command connection, in a higher range (above 1024). Change the listening port from 21 to something else - say, 20021 - and forward that port in your router.
    - Tests performed from your local network (the same network as the server) are not relevant at all. If you try to connect to the server through your router's public IP, the connection goes from the LAN to the router's external IP, and the router is then supposed to forward it back to the server on the LAN. This will not work on most modern routers, due to anti-spoofing mechanisms (the router can't tell whether a connection arriving on its WAN port really originates from the LAN, or whether someone on the Internet is spoofing an internal address to gain unauthorised access). You really have to perform the test from a remote system.
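    As a sketch of the first tip, assuming vsftpd as the FTP server (the directive names are vsftpd's; the port numbers and IP address below are arbitrary examples to adapt):

```
# /etc/vsftpd.conf - example values only
# command connection, moved off the ISP-filtered port 21
listen_port=20021
# pin the passive-mode data connections to a range you can forward in the router
pasv_enable=YES
pasv_min_port=50000
pasv_max_port=50100
# the router's public IP, so the server advertises a reachable address
pasv_address=203.0.113.7
```

    Then forward both 20021 and the 50000-50100 range in the router towards the server's LAN IP.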


    My bet is that the culprit for your connectivity problems is using the default port 21, which is filtered by your ISP - nothing to do with active/passive ports. :)

    My fault, sorry :D I understood that you were looking at a solution to backup the OS drive of the NAS, not of workstation clients. Disregard :D


    I have used Acronis True Image over the past five-ish years for system backups and physical-to-virtual machine conversion. It has evolved a lot, from a bootable tool to back up/restore a drive/partition to/from a local/remote image file (yikes, that's a lot of slashes :D), to a pretty decent Windows client with scheduled full/incremental backups of targets of your choice. Maybe give it a try and see if it suits your needs. I believe it can image the system drive while the system is running. Obviously, you can't restore an image over a running OS.

    I may be slightly off-topic, but overloading the NAS with so many additional services is a serious security issue. If I were you, I would add only the Virtual Machines plugin, then run minimal, highly secured OS images for those particular Internet-facing purposes. If someone breaks in, all they get is a tiny virtual machine, not the whole rig. You can even separate the VM image from the web home directory (which would reside on a separate share created on the NAS with individual permissions), plus some other cool stuff from a security point of view.


    Regarding remote restore of the OS: if this goes bye-bye, you'll have to get the headless system on your workbench, swap the faulty drive or whatever, then attach a console so you can manually restore the OS. The only way you could do all this remotely would be with server-grade hardware that has an out-of-band management module giving you complete remote console access. For instance, the HP ProLiant MicroServer G8, which has a built-in iLO (Integrated Lights-Out) module, or a Dell server with the similar DRAC module. These modules are always on and connected to the LAN on a separate port; through them, you can map a local ISO image as a virtual CD drive, power the system on remotely, and so on.

    That is pretty nice! Well done! :)


    Alternative programs for WOL over local network:
    - the WOL project on SourceForge, get the win32 archive and use the wol.exe program from it; very easy to use from a shortcut like:

    Code
    wol.exe -i 192.168.1.255 -p 7 00:4F:49:07:0B:5F


    where -i specifies the LAN's broadcast address, -p specifies the port (7 is the default), and the target's MAC address follows;
    - NirSoft's WakeMeOnLAN, which scans the network when all computers are on, then allows you to selectively send the WOL Magic Packet when they are turned off.


    If you have a home router running DD-WRT, it has a built-in WOL client.


    Alternatively, you can use the PC Monitor plugin to turn on/off systems remotely, but this would require internet access.


    Notes:
    - You cannot send the Magic Packet to a specific IP; when the system is offline, chances are the switch will not know which physical port to send the datagram to. Use the broadcast address as the target; this way the packet is sent out on all physical ports, to all computers connected to the network.
    - If you want to use this from the Internet, you need to set up port forwarding in your router, sending all incoming UDP datagrams on port 7 to the LAN's broadcast address, port 7. You should also use a dynamic hostname service such as DynDNS or No-IP, so you always know your router's IP address even if it changes (depends on the ISP).
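    For the curious, the Magic Packet format itself is trivial: 6 bytes of 0xFF followed by the target MAC repeated 16 times, 102 bytes total. A minimal sketch of building one by hand (the MAC is the example address from the wol.exe line above; the sending step assumes xxd and socat are installed, which may not be the case on your box):

```shell
# Build a Wake-on-LAN "Magic Packet" as a hex string:
# 6 bytes of FF, then the target MAC repeated 16 times (102 bytes total).
mac="00:4F:49:07:0B:5F"                 # example MAC from the wol.exe line above
machex=$(printf '%s' "$mac" | tr -d ':')
packet="ffffffffffff"
i=1
while [ "$i" -le 16 ]; do
    packet="${packet}${machex}"
    i=$((i + 1))
done
echo "${#packet}"                       # prints 204 (hex chars) = 102 bytes
# To actually send it as a UDP broadcast datagram on port 7 (assumed tools):
#   printf '%s' "$packet" | xxd -r -p | socat - udp-datagram:192.168.1.255:7,broadcast
```

    Dedicated tools like wol.exe or WakeMeOnLAN do exactly this, just wrapped in a friendlier interface.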

    I fully agree that the processor is overpowered. The only reason to get a decent CPU is to be able to run multiple virtual machines or video surveillance. On the other hand, you could get a low-power CPU plus a networked standalone DVR and still come out cheaper than with a powerful CPU. The standalone DVR does the video encoding etc. more efficiently anyway.

    Hi Don,


    I have to say that I'm confused between the use of the term NAS and the list of requirements above. By definition, the purpose of a network attached storage is to have lots of storage available on the network, period. It's a dedicated machine for this role, just as a "mail server" is a server running the mail services, and so on for web server, database server, proxy/gateway server, dns server. It's common practice to keep these services on separate machines with separate roles, to minimize security risks.


    What you want to achieve can be done in two ways: either an "all-in-one" server (your favorite linux server distribution should do), or virtual machines with dedicated roles running on the same physical server.


    From a hardware point of view, a server-grade motherboard should satisfy the needs for 24/7 reliability, remote management and a built-in SAS hardware RAID controller (SAS controllers also work with SATA drives, but not the other way around). But a rack-mountable server case with hot-swap trays is usually bulky and noisy, and assumes it will be placed in a controlled environment (external cooling, controlled humidity, low dust etc.).


    If you are more interested in silence, you could go with a desktop motherboard and work on better cooling and smart airflow. Let's say, a Sandy Bridge/Ivy Bridge system around a LGA 1155 socket:
    - a motherboard with Z77 chipset such as the ASRock Z77 Pro3 ($90 on NewEgg)
    - an Intel Core i5-3450S Ivy Bridge quad core processor at 2.8 GHz and 65W low-power ($200)
    - passive cooling with a monster heatsink, such as the Scythe Ninja 3 or a Mugen 3 ($60)
    - a reliable power supply, such as the Seasonic S12II 520W modular or 620W, 80 Plus active PFC with a low-noise 12cm fan ($79)
    - a large, well-vented case with enough space for multiple drives and 1-2 low-speed 12cm exhaust fans
    - RAM and drives galore


    For a hardware RAID, you could jump on eBay and look for an older server-grade hardware SAS RAID controller with a PCI Express interface and a full-size backplate, such as the HP SmartArray P400 with 512MB memory and battery back-up, which goes for around $50-$80, plus cables.


    I'm not sure the extra expense on a hot-swap drive cage is worth it. You're likely not running a mission-critical system which can't be turned off for maintenance when needed, and you're also not limited to accessing the case only from the front, as you would be with a rack-mounted case. The cash saved should let you be picky about tower cases and find one with adequate cooling for the stack of drives mounted inside. Personally, I put my 4 drives in a Cooler Master cage that eats up three 5.25" bays (SKU N82E16817993002 on NewEgg, $26); I have to take the whole cage out to access a single drive, but I replaced its stock front 12cm fan with a quiet one, so the cooling and noise problems are solved.


    Hope this helps :D

    ruby90, the test tools are all for Windows, so you'll have to connect the drives to a Windows PC to run the tests.


    Toshiba Diagnostic Tool - the Comprehensive Test takes about 2 hours and includes a surface scan. http://storage.toshiba.com/sto…upport/software-utilities


    Seagate SeaTools: check the Seagate drive in the list, then Basic Tests > Long Generic, or Fix All > Long. http://www.seagate.com/support…ries/seatools-win-master/ (needs .NET 4.0)


    Western Digital Data Lifeguard Diagnostic for Windows: run an Extended Test, including the surface test. http://support.wdc.com/product…groupid=608&sid=3&lang=en

    Oh? :| I'm using it and it's pretty decent. It's very easy to install/upgrade it on the forum, then anyone with the client app can use the forum in an easier way than through the browser. Plus, mods/admins have all the tools needed.

    I'd be worried about why the WD drives show those as zeroes all the time. At such high data densities (on the order of gigabytes per square inch), reading errors are bound to happen and the built-in mechanisms will correct them; nevertheless, they should be reported, not hidden just to make users feel better about their choice of hard drive brand.

    Hm, okay. The issue is, I want to access the share anonymously from other computers as well as from the media player, and it doesn't work. From my Windows machine I was prompted to enter my logon credentials, despite the fact that my Windows account is also defined on OMV with the same password.


    [later edit] This is strange, I just had the box offline for a couple of hours, turned it back on and enabled NFS, connected from the media player with user anonymous, and it works. I'm just baffled.

    Magnetic storage on a hard drive uses statistics and heuristics to recover the information stored magnetically. No read is 100% perfect, which is why drives have built-in correction mechanisms. The values you see here are statistical error rates of various kinds. They are of interest only when they exceed a specific threshold; otherwise, it's normal for them to rise and fall as the drive works. There is no failure, or imminent failure, on any of these drives.
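    To illustrate the rule of thumb: in smartctl-style output, the normalized VALUE column only matters when it drops to or below THRESH; the RAW number bouncing around is expected. A toy check over made-up sample attribute lines (the drive data below is invented for illustration, not from a real disk):

```shell
# Columns: ID NAME VALUE WORST THRESH RAW - made-up smartctl-style samples.
# An attribute is only a concern when VALUE <= THRESH; a large, fluctuating
# RAW error rate on its own is normal for a healthy drive.
sample='1 Raw_Read_Error_Rate 116 099 006 104803896
5 Reallocated_Sector_Ct 100 100 036 0
7 Seek_Error_Rate 067 060 030 4893523'
result=$(printf '%s\n' "$sample" | awk '{
    if ($3 + 0 <= $5 + 0) print $2 " FAILING"; else print $2 " OK"
}')
echo "$result"   # all three report OK: every VALUE is well above its THRESH
```

    Note how Raw_Read_Error_Rate has a huge raw count yet is perfectly healthy - exactly the situation described above.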

    I'm baffled.


    Yesterday I was running 0.4.8 and could access the NAS over the network from my Western Digital Live media player. The shares have the user nobody with read-only permissions (except for an upload folder with read-write) and no rights specified for the users I created, so that Samba allows anonymous access. And it worked beautifully.
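    For reference, this anonymous-browsing behaviour maps to smb.conf settings roughly like these (the share name, path, and exact values are illustrative examples, not what OMV generates):

```
[global]
   security = user
   map to guest = Bad User

[media]
   path = /media/data/media
   guest ok = yes
   read only = yes
```

    With "map to guest = Bad User", unknown usernames fall back to the guest account (nobody), which is what makes the unauthenticated browsing work.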


    This morning I upgraded to 0.4.9 and suddenly the anonymous browsing is gone. From my computer I have to authenticate, and I couldn't get it to work yet from the media player.


    Any clues? :evil:

    I've been managing a few forums running vBulletin, Invision Power Board, phpBB and some others over the years. They ALL rely on MySQL's built-in indexing and searching, which is poor at best. It's not a setting you can just turn on; it's the whole design - the most basic statistical keyword indexing and blind index searching. In desperation, some forums added a Google search plugin, which brings some intelligence to the search mechanism. The real alternative for improved search is to install the Sphinx search server - but you can't do that on shared hosting, only on your own server.