One (big/powerful) server? Or several small ones?

  • Background: I have a media server that's been offline for a while. It has several disks between 1TB and 3TB (JBOD) and room to add more. I built it to simply store all my CD, DVD and Blu-ray rips. I played the files on an HTPC, so transcoding wasn't an issue - but I retired the HTPC some time ago and I now use Roku for pretty much everything. I tried setting up Plex (more than once) but had a ton of problems with it, so I gave up and I've just been playing my physical discs on my Blu-ray player when I feel like watching them.



    Recently, I started playing around with OpenMediaVault on a Raspberry Pi 3 and I'm intrigued/enthused about all the possibilities. I set up Emby Media Server on it and plugged in a 2TB external drive that's loaded with a bunch of my favorite movies and TV shows. In my tests so far, everything seems to play really well on my Roku (Ultra) using the Emby client. I set up port forwarding and tried watching some stuff outside the house on my iPhone (6). 480p content plays great. 720p stops and has to be restarted once in a while, but it's functional enough. 1080p doesn't seem to work at all.



    My next step is to get my old server back online. I don't remember its specs, but I built it for low power consumption, so I don't know if it will transcode any better than the Pi 3 — I suspect I'll probably have to upgrade it — especially as I start to rip some UltraHD Blu-ray discs. Anyway, I'm definitely going to install OMV and SnapRAID along with Auto Ripping Machine so I can finally get the rest of my media ripped. (I'm about 50% done with DVDs/BDs and maybe 25% done with CDs.)



    I may stop there and just use that server for media files. But I'm really interested in all the other functionality that I can implement with OMV... I definitely want to set up (using Docker containers for pretty much everything -- see the rough sketch after this list):

    • NextCloud (and stop paying Dropbox $120/year for 1 TB of cloud storage).
    • I think I'd like to move my Home Assistant server from an old Raspberry Pi 2 to a Docker container on this server.
    • My wife is interested in a Calibre ebook server.
    • Some sort of photo server (Lychee? Piwigo? Other?)
    • Radarr and Sonarr look pretty intriguing, and I'd also want to set up Transmission and a reverse proxy/VPN.
    • A personal git server (probably Gitea) for some development projects I'm working on -- including...
    • A couple of websites I'd like to try to self-host using a reverse proxy.
    • I want to implement Duplicati to back up everything to some cloud storage. (Except for the several terabytes of media files, of course, which I can always re-rip if SnapRAID fails to save my bacon.)
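
    To make "Docker containers for pretty much everything" a bit more concrete, here's a minimal sketch of how a couple of these could be started with plain docker run commands. The host paths, published ports and image tags are just examples I'm assuming, not anything OMV-specific:

        # NextCloud, published on port 8080, with its data kept on one of the OMV data disks
        docker run -d --name nextcloud \
          -p 8080:80 \
          -v /srv/dev-disk-by-label-data/nextcloud:/var/www/html \
          --restart unless-stopped \
          nextcloud:latest

        # Gitea for the personal git server (SSH published on 2222 so it doesn't clash with the host)
        docker run -d --name gitea \
          -p 3000:3000 -p 2222:22 \
          -v /srv/dev-disk-by-label-data/gitea:/data \
          --restart unless-stopped \
          gitea/gitea:latest

    The same pattern (one container, one host folder for its data) should carry over to most of the other items on the list.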

    I'm currently doing some of that on the Raspberry Pi 3 and I'm pretty happy with the results. So... here's where I'd really appreciate some input...


    Should I try to do all that on a single, powerful server? Or should I limit my “main” server to Emby/media storage (and Auto Ripping Machine and maybe Radarr/Sonarr) then use Raspberry Pis (I have 4 x Pi 2, 1 x Pi 3, and another old mini-ITX based server that’s sitting idle but was earmarked as a torrent server) as single-function servers? What would the pros/cons be each way? What would you do?

  • I'm not quite there yet, but my aim is to have one more powerful server for the heavy lifting, and a few SBCs as more specialized machines.
    The advantage would be keeping your backup server totally separate from the machines you might be playing around with more, so when you break something, your backups are still running.
    Or you're running Pi-hole or a VPN or something and don't want those services affected by the other machines.


    I'm currently running an Odroid HC2 as my main machine, and a 2011 MacBook Pro running a couple of things.

    • Official Post

    It is just that I don't think it is easy to find a recent version of calibre built to run on ARM. I might be wrong. Not that calibre is very demanding, but I believe it has a crazy amount of dependencies, which makes it not easy to build on ARM.


    And I agree, the HC2 is great if you don't need 64-bit ARM or x86/x86-64.


    And calibre is a great e-book manager, I use it a lot!


    An alternative might be calibre-web for armhf. It allows you to access a calibre library, browse, read and download. You will have to manage your books in calibre and copy/sync the library so calibre-web can access it.
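
    For example (assuming the linuxserver.io calibre-web image and made-up paths, so adjust to taste), something like this would serve an existing library and keep a copy of it synced over from the machine that runs full calibre:

        # calibre-web in Docker; /books just needs to point at a copy of the calibre library
        docker run -d --name calibre-web \
          -p 8083:8083 \
          -v /srv/appdata/calibre-web:/config \
          -v /srv/media/calibre-library:/books \
          --restart unless-stopped \
          linuxserver/calibre-web:latest

        # one-way sync of the library from the desktop that manages it in calibre
        rsync -a --delete desktop:/home/me/CalibreLibrary/ /srv/media/calibre-library/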

  • I say one big, powerful server (and use Docker as much as possible to run several containers). I suggest a desktop workstation (as in a tower case), not something rack-mountable, as those are too loud.


    Armbian on ARM isn't as mature (to put it gently) as stock Debian on a PC. All it takes is a few ARM-specific bugs and annoyances (of which I've personally experienced several), and you'll be wishing you had just gone with a plain-Jane PC, and not incurred all the hassle. That's where I'm at today.


    To me, spending roughly $250 US more to go with a new, decent, budget PC build (instead of ARM) is well worth it, to avoid several or even dozens of hours of extra futzing about on ARM-specific problems. You'll also probably get a very noticeable increase in disk and network performance, over an ARM-based NAS. Like say an extra 10-80 MB/sec, over GbE (just a rough estimate based on my own experiences).


    My time is worth something. For those who enjoy the extra tinkering and problem solving, by all means, don't let me stop you.


    For those who would point out that ARM boards consume less power, I say actually do the math, comparing the power consumption of both and considering what you are paying your power company per kilowatt-hour. I think you'll probably find that power is so cheap (in most cases, for anyone living on-grid) that it hardly matters over several years, from a sheerly economic viewpoint. Those who want to make an environmental statement are of course welcome to pick the lower-power choice.
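
    As a back-of-the-envelope example (the wattages and the $0.13/kWh rate below are assumptions, plug in your own numbers):

        # yearly electricity cost of a ~10 W ARM board vs a ~35 W small x86 box
        awk 'BEGIN { rate=0.13; printf "ARM: $%.2f/yr  PC: $%.2f/yr\n", 10*24*365/1000*rate, 35*24*365/1000*rate }'
        # -> ARM: $11.39/yr  PC: $39.86/yr, i.e. roughly a $28/year difference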

  • As you describe it, it seems you need a pretty beefy x86-64 NAS. I think calibre is the clincher.

    I agree. Calibre is important to me as well. I've tried out the Calibre-web thing, and I don't really like it. I would far rather use the full-blown, normal Calibre from a Docker container. The way it's usable from within a web-browser (over the network) is awesome.


    Thanks so much for the video on how to set that up, BTW, @TechnoDadLife. :)

  • All it takes is a few ARM-specific bugs and annoyances (of which I've personally experienced several)

    For anyone else reading this, it's important to know that @esbeeb is comparing apples with oranges and is quite immune to recommendations on how to avoid the things that make him believe ARM is inferior (using crappy protocols that involve CPU bottlenecks and high latency). There is no reason in 2019 to use FTP, or to rsync within a local network with encryption active...

  • @TechnoDadLife, I love your YouTube channel. I have a special request.


    I see you have several OMV servers, both ARM and PC. Would you be willing to do an OMV SMB transfer performance comparison between ARM and PC (between your best ARM board, hopefully not a Raspberry Pi, and that 6-ish year old Lenovo ThinkServer)?


    I'd love to see how fast you can upload 2GB of tiny files (ebooks and accompanying metadata files, from a Calibre Library folder) over SMB into both of these, using an all-GbE connection. Then an upload of a single 2GB file into both servers. The 2GB of tiny files is the "torture test", and the one 2GB file would be the "best case scenario" test. I'd like to see the MB/sec average, for all these 4 cases.
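
    If it helps, here's roughly how I'd time those four cases from a Linux client (the share name and test folders are placeholders):

        # mount the OMV share, drop the client-side caches, then time both workloads
        sudo mount -t cifs //omv-box/test /mnt/test -o username=me
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

        time cp -r ~/CalibreLibrary /mnt/test/     # "torture test": ~2 GB of tiny files
        time cp ~/some-2GB-file.mkv /mnt/test/     # "best case": one 2 GB file
        # MB/sec average = 2048 / elapsed seconds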


    I'd like to get a sense from you, if you can spare the time, of how realistic my claim was that "You'll also probably get a very noticeable increase in disk and network performance, over an ARM-based NAS. Like say an extra 10-80 MB/sec, over GbE (just a rough estimate based on my own experiences)."

    • Official Post

    I did some "real life" tests on my tiny OMV4 ARM32 2GB HC2s.


    I copied files, over GbE, from one HC2 to another HC2. The source HC2 had a 12 TB HDD and the destination HC2 had a 500 GB SSD. EXT4 & NFS. Cold caches. I only use SMB for Android tablets, phones and e-readers.


    1. A calibre library with ebooks, covers and metadata files:
    50,122 files, 11,954 MB -> 30.9 MB/s.


    2. 12 movies with images, subs and other files:
    69 files, 34,649 MB -> 79.1 MB/s.
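
    For reference, a copy like that can be timed in one go with rsync's built-in statistics (the paths below are placeholders, not my actual mount points):

        # cold caches on the source, then copy to the NFS-mounted destination HC2
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
        rsync -a --stats /srv/media/Calibre/ /mnt/hc2-ssd/Calibre/
        # --stats prints the total bytes and the bytes/sec average at the end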

  • if you can spare the time, of how realistic my claim was that "You'll also probably get a very noticeable increase in disk and network performance, over an ARM-based NAS. Like say an extra 10-80 MB/sec, over GbE (just a rough estimate based on my own experiences)."

    As someone who already did these tests I can answer that for you. Your claim is 100% wrong and it's totally crazy that someone who conducted exactly no reasonable tests here spreads such weird claims.


    All your experiences with 'ARM as NAS' are based on one single ARM device in one single crappy installation (with USB-attached Gigabit Ethernet on your one and only client), and you even suffered from USB connectivity problems between disk and host that I helped you resolve. How does this qualify you to make any claims about 'ARM as NAS'?


    @TechnoDadLife in case you want to take the challenge, please be aware that from this list here https://sourceforge.net/projec…ngle%20Board%20Computers/ only the following boards are NOT severely bottlenecked by design (USB2-attached storage or a crappy SATA implementation):


    • NanoPi M4
    • NanoPi NEO4
    • NanoPC-T4
    • Espressobin
    • Rock64
    • Renegade
    • ODROID XU4/HC1/HC2


    (I have/had all of them in my lab, tested them, and optimized NAS settings on a per-'board family' basis). ARM boards that are not listed there (not relevant enough yet, or currently a work in progress) but are also as fast as or faster than any Gigabit-equipped x86 box:


    • Clearfog Base/Pro
    • the not-yet-available ClearFog ITX, outperforming even all 10GbE x86 NAS boxes
    • Helios4
    • MacchiatoBin
    • Any RK3399 board mentioned here
    • countless others


    Of course it's absolutely pointless to choose ARM boards with obvious bottlenecks like USB2-attached storage and then compare them to an x86 box with SATA or USB3.

    • Official Post

    only the following boards are NOT severely bottlenecked by design

    I can confirm this on the NanoPi M4 (with SATA hat), Renegade, and Odroid Xu4. No problems with the RockPro64 either.

    the not-yet-available ClearFog ITX, outperforming even all 10GbE x86 NAS boxes

    Neat. It has a 100GbE NIC!


  • @esbeeb - Thanks for reviving this thread and injecting some interesting points. While your statement about ARM inferiority is clearly a little controversial, you certainly make good points regarding the low price of electricity and the value of my own time.


    @tkaiser - Clearly you have some history with esbeeb and I appreciate you providing a counterpoint on the ARM inferiority issue. Having said that, what I'd really appreciate is if you actually weighed in on my OP.


    I've thought about this quite a bit since my original post. I have to say that @Wdavery's post made a lot of sense to me. The biggest problem with the "1 big server" approach is that it puts all my eggs in one basket. I don't have the resources to build a server cluster, and even if I did, I don't want a jet engine in my office with me. -- (Having said that, from a noise standpoint, it's interesting to note that my primary (rackmount) PC and media server together are less noisy than my DGS-1024D switch! I intend to put a fan controller in that noisy bastard soon!)


    OTOH, I can't really just go with several SBC-based servers, because I need something with some power for Emby transcoding. And even if there's a high-powered SBC that could do the transcoding, there's still the issue of connecting 12-16 SATA drives to an SBC. (And that's an issue I've already resolved, since I already have the server case and PCI SATA expansion cards - I just need to update the anemic motherboard and processor.)

    A hybrid approach seems to make the most sense. I LOVE how cheap and ubiquitous a RasPi is. If the SD card dies, no worries - it's backed up and I have spares on hand. If the Pi itself dies, a replacement is dirt cheap and BAM! I'm back in business.

    If my (x64) media server fails, well, that's a bummer, but frankly, I can live without Emby/Radarr/Sonarr/Lidarr/Calibre/photo server (TBD) a whole lot easier than I can live without Home Assistant (gods forbid I should have to go around at night shutting off lights and locking doors!) or my Ubiquiti controller. So Home Assistant (in a Docker container, along with containers for Mosquitto, Node-RED, OpenZWave, Portainer, Watchtower, Pi-hole, and the Ubiquiti controller) will live on a dedicated Pi and get backed up redundantly.

    I think I like the idea of putting NextCloud on a dedicated RasPi, too, and having that be my local backup destination for the Home Assistant server as well as for the system drive of my media server (as well as serving Dropbox duties, of course).

    As for my self-hosted web server, I think it needs to be on its own VLAN, cut off from the rest of my network (too much risk there) - and while I want it up and running 24/7, I'd prefer it cost me as little as possible to do so - so I'll put it on its own Pi.
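
    To make the dedicated Home Assistant Pi a bit more concrete, it would basically be a handful of containers along these lines (the image tags are the usual upstream ones, and the /opt paths are just where I'd happen to put the config - adjust as needed):

        # Home Assistant wants host networking for device discovery
        docker run -d --name homeassistant --network=host \
          -v /opt/homeassistant:/config \
          --restart unless-stopped \
          ghcr.io/home-assistant/home-assistant:stable

        # Mosquitto MQTT broker for all the sensors
        docker run -d --name mosquitto \
          -p 1883:1883 \
          -v /opt/mosquitto/config:/mosquitto/config \
          -v /opt/mosquitto/data:/mosquitto/data \
          --restart unless-stopped \
          eclipse-mosquitto:latest
        # (mosquitto expects a mosquitto.conf inside that config folder)

    Node-RED, Pi-hole, Portainer, Watchtower and the Ubiquiti controller would follow the same one-container-per-service pattern.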

  • I can't really just go with several SBC-based servers because I need something with some power for Emby transcoding

    Transcoding on the CPU is IMO the worst choice possible since it wastes energy for nothing. The 'work smarter not harder' approach is letting the video engine do the job. Be it QuickSync on Intel or the specific video engine of an ARM SoC (they're all designed for this, the 'only' problem is driver support within Linux). As an example: Helios - HC2 - Or Microserver?
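
    To illustrate what 'letting the video engine do the job' looks like (just a generic VAAPI/QuickSync example, not anything Emby-specific; /dev/dri/renderD128 is the usual Intel render node):

        # offload both decode and encode to the GPU's video engine instead of the CPU cores
        ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
          -i input.mkv -c:v h264_vaapi -b:v 4M -c:a copy output.mkv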


    connecting 12-16 SATA drives to an SBC

    I hate those setups (I've been doing storage for a living for over two decades now). The fewer disks, the better.

  • Transcoding on the CPU is IMO the worst choice possible since it wastes energy for nothing. The 'work smarter not harder' approach is letting the video engine do the job. Be it QuickSync on Intel or the specific video engine of an ARM SoC (they're all designed for this, the 'only' problem is driver support within Linux). As an example: Helios - HC2 - Or Microserver?

    That's interesting. I didn't know that. I'll look into it. Thanks!

    I hate those setups (I've been doing storage for a living for over two decades now). The fewer disks, the better.

    You may hate it, but it's what I have (and I'm grateful to have it!). What's the alternative? AFAIK, to get the 30TB capacity I have now, I'd need three 10TB drives plus another for parity. And then there's the system drive. Even if the HC2 could support that, I can't afford it. And if I ever want to expand, there's no great path for doing so.


    Having said that, I could leave my power-sipping motherboard and CPU in my existing media server and let it run SnapRAID and manage/share all my storage drives. Emby (and Radarr/Sonarr/Lidarr, etc.) could then run on an HC2. WAY cheaper than upgrading my media server to even a budget Ryzen system. And with the HC2, I've got the same super-cheap replacement capability if it ever fails. Great stuff @tkaiser! Thanks for all the food for thought!
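
    For my own notes, the SnapRAID side of that split stays dead simple - something like this, where the disk labels are placeholders for my actual JBOD drives:

        # /etc/snapraid.conf (excerpt): one parity disk protects the JBOD data disks
        parity /srv/dev-disk-by-label-parity1/snapraid.parity
        content /srv/dev-disk-by-label-disk1/snapraid.content
        content /srv/dev-disk-by-label-disk2/snapraid.content
        data d1 /srv/dev-disk-by-label-disk1/
        data d2 /srv/dev-disk-by-label-disk2/

        # then, from cron on the media server:
        snapraid sync     # update parity after new rips land
        snapraid scrub    # periodically verify the data against parity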

  • Seems like lots of other people have done this already.


    I prefer x86 because everything I want to do runs on it already.



    I am switching to several small, energy-efficient, silent NUCs backed up to the web. My big servers are driving me crazy with noise. If you have your servers in another room, go big. I think once you take into account that you have to add powered hard drives to the small boards, they don't end up being as efficient as they seem. Older NUCs seem to be a good balance between compute power and electrical power.
