Building an SBC NanoPi SATA hat based NAS

    • Building an SBC NanoPi SATA hat based NAS

      I've been reading these forums for a few months, researching a DIY NAS build. I've learned a lot here and really like y'all's community!

      I haven't built my own machine of any kind in many years; the last was a gaming rig back in the early 2000s. So I have a few lingering questions that I thought you folks might be able to help me out with.

      My needs for a NAS aren't too high. I will have at most one device streaming media from my NAS (via Jellyfin or miniDLNA) to my TV, a few IOT devices writing logs/images, and the occasional torrent download (although likely buffered to a dedicated external SBC before being moved to the NAS). For this purpose, I've decided on the SBC NanoPi M4 with SATA hat (partially based on comments by local mod tkaiser). I plan on running four drives once I figure out the right case and cooling situation.

      Here are my remaining questions that I could use some help with:
      • Suggestions for a 12V PSU that can provide sufficient power to the M4 and to four 3.5" hard drives?
      • What are the trade-offs of 2.5" vs 3.5" hard drives? I'm defaulting to 3.5" due to the cost difference.
      • Fan / cooling suggestions/recommendations: I have the large M4 heatsink and have read I should switch to copper shims rather than the thermal pad that came with it. What are the drives likely to need?
      • HDD mounting: I have a few old desktop cases around, but none have 4x 3.5" mounts. Any suggestions for a stand-alone cage to mount these that allows me to blow air through the drives?
      • Cabling: the SATA hat came with an adapter from a 4-pin Molex connection to two SATA power connections. Should I get a 4x splitter, or should I worry about the board not being able to power four drives?


      (Attached image of my smallest old mini-ATX dell desktop case with a test layout of the NAS)
    • I've built a NAS out of a NanoPi Neo 2, and posted about it on both this forum and the Armbian forum. It only has an H5 processor; your M4 has the much more powerful RK3399, if I'm not mistaken.

      While the CPU power of both of these ARM chips is far less than that of an AMD64 chip, and should be adequate for simple file sharing over SMB, be warned that the actual MB/sec delivered by the bus on these ARM SoCs is **also** far lower than on PC motherboards, **even 10-year-old PC motherboards**.

      My Neo2 can't sustain 40MB/sec for very long (like 10ish seconds), even under the most ideal conditions. Then it simmers down to 20 MB/sec.

      With an RK3399, you can probably expect about 40MB/sec, sustained, in the most ideal conditions. You'll be wishing you had just gone with a PC that's 7-ish years old or newer (they already had multiple SATA ports back then), with a good, inexpensive GbE PCI card in it, and a USB 3 PCI card (with a NEC chip on it, **not** VIA).

      You'll achieve way more than 40MB/sec, sustained, that way. The extra CPU and RAM also won't hurt to have.


    • esbeeb wrote:

      With an RK3399, you can probably expect about 40MB/sec, sustained, in the most ideal conditions. You'll be wishing you had just gone with an <= 7ish year old PC at that time (they had multiple SATA ports back at that time), with a good, inexpensive GbE PCI card in it, and a USB 3 PCI card (with a NEC chip on it, **not** VIA).
      Shucks, I feel a bit of a fool then. I was expecting to get faster bandwidth than that, given that I'm using the PCIe interface on the M4.

      This benchmark on the Armbian forums suggests that one user was able to get ~300MB/s sequential read/write with a JMS578 USB3-to-SATA bridge, but their random read/write speed is what you described. If I'm not mistaken, the Neo2 only has USB2, which has a max bandwidth of 60MB/s, and that may be your bottleneck.


      I have absolutely no idea what kind of bandwidth I will be getting out of this build, and will count it a learning effort if it's not fast enough.
    • My Neo 2 NAS was also a good learning experience.

      My benchmarking methods used in my various posts were not formal ones.

      Once your data has to traverse all the way from one machine to another and be stored, crossing any slow motherboard buses along the way, those are the sorts of benchmarks I like hearing, even if they're a little rough, say ±10%.
    • "If I'm not mistaken, the Neo2 only has USB2 which has a max bandwidth of 60MB/s, which may be your bottleneck."

      @sethish, yes, the first 2ish seconds of file transfer on my Neo 2 are at 60 MB/sec. Then down to 40, then down to 20 as above.

      I like knowing what the **sustained** bandwidth will look like, like 3 minutes into a large transfer. Those quick little speed bursts at the start of a transfer are just cruel teasers by which the salesmen of the world fool us into buying their hardware.
    • sethish wrote:

      Suggestions for 12V PSU that can provide sufficient power to the M4 and to four 3.5" hard drives?
      If you want to be on the safe side, budget 2A per drive.

      sethish wrote:

      What are the trade-offs of 2.5" vs 3.5" hard drives? I'm defaulting to 3.5" due to the cost difference?
      2.5" drives are more power efficient, but if you actually want to fill the disks (which I assume) they could become a bottleneck. All drives use ZBR (zoned bit recording), so transfer rates drop as the disks fill up; 2.5" drives can become as slow as less than 50 MB/s. With large 3.5" drives you should get sustained NAS transfer rates above 100 MB/s (with large files, of course).

      sethish wrote:

      Fan / cooling suggestions/recommendations: I have the large M4 heatsink and have read I should switch to copper shims rather than the thermal pad that came with it. What are the drives likely to need?
      I wouldn't worry that much, and would even set max cpufreq to 1.5 GHz, since NAS transfer rates won't suffer anyway compared to letting the big cores run at up to 2 GHz. If it's a pure NAS, no fan is needed.

      sethish wrote:

      HDD mounting: I have a few old desktop cases around. But none have 4x 3.5 mounts. Any suggestions for a stand-alone cage to mount these that allows me to blow air through the drives?
      I would use an old PC case from the scrap yard.

      sethish wrote:

      Cabling: the SATA hat came with an adapter from a 4-pin Molex connection to two SATA power connections. Should I get a 4x splitter, or should I worry about the board not being able to power four drives?
      2 y-cables for SATA power.
    • Thanks @esbeeb! That picture is set up as my Gravatar image, so I didn't think about the fact it would show up here. Three cheers for Gravatar, I guess!

      Thanks @tkaiser, that was exactly the confirmation I was looking for. I picked up this board combo very much on your recommendations around this forum (even though you would prefer a different chipset on the PCIe-to-SATA board). I ordered a 12V PSU last night on Amazon. It's a 340W supply, but it was well reviewed and inexpensive. I assume that it's not going to consume 340W if it's only pulling (12V × 2A) × 4 + (5V × 3A) ≈ 111W. I'll look into mains power monitors later this week. I have an iTead ESP8266-based IoT switch with power monitoring slowly shipping from China, but I should pick up a simple one with an LCD readout.
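      For anyone checking my arithmetic, here's the back-of-the-envelope budget as a few lines of shell (figures from this thread: 2A @ 12V per 3.5" drive at spin-up, 3A @ 5V for the board; integer watts only):

      ```shell
      # Worst-case power budget for the 4-drive NanoPi M4 build.
      # 2A @ 12V per 3.5" drive at spin-up, 3A @ 5V for the M4 itself.
      DRIVES=4
      DRIVE_W=$((12 * 2))                      # 24 W per drive at spin-up
      BOARD_W=$((5 * 3))                       # 15 W for the board
      TOTAL_W=$((DRIVES * DRIVE_W + BOARD_W))
      echo "worst-case draw: ${TOTAL_W} W"     # prints: worst-case draw: 111 W
      ```

      So the 340W supply has plenty of headroom even at simultaneous spin-up.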

      Interesting detour last night:
      I got the eMMC module from FriendlyARM and installed it on my NanoPi. I flashed the M4 OMV image to an SD card... and then forgot to plug in the SD card.
      When I plugged the machine in for the first time, it seemed to boot just fine. The SD card instructions mention needing 30 minutes on first boot before the web interface is ready. I sat around for 30 minutes, watching the network traffic on my router (an EspressoBin I finally got installed earlier yesterday!), and was very confused that the new device wasn't pulling packages from anything I recognized as packages.openmediavault.org. Nor did SSH work, nor did nmap show me what I would expect to see for OMV.

      I finally remembered that there _is_ an HDMI port on the NanoPi M4, dragged a monitor down to my work table, and realized the eMMC module ships with an Android image on it. It had been sitting on the Android home screen for 40 minutes doing nothing.


      When the SD card was actually plugged in, it bootstrapped and ran fine. I provisioned and partitioned a 500MB HDD from an old desktop of mine to test things out and explore the OMV web interface. Everything worked great, and if I get the time before a D&D session this evening, I'll try to get real drives installed in something and do iozone and real-world benchmarks. Is there a standard benchmark that folks use around here for a 'real world' load test?
    • sethish wrote:

      Is there a standard benchmark that folks use around here for a 'real world' load test?

      You could try to simulate 'real world' workloads with fio. But I would focus on use case first.

      If your NAS will be used in a specific way, then simply test this. If it's about storing large files this is something entirely different than storing a bunch of small files. And knowing the bottlenecks also helps interpreting the results (e.g. if storage is the bottleneck on a Gigabit Ethernet equipped NAS, then your read speeds will always be bottlenecked by storage while 'burst' write performance depends on amount of DRAM).
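      As a sketch, a "lots of small files" workload could be approximated with an fio job file along these lines (the directory, size, and block size below are placeholders, not values from this thread):

      ```ini
      ; smallfiles.fio -- hypothetical job approximating many small writes
      [global]
      directory=/srv/dev-disk-by-label-data/fiotest
      size=256m
      ioengine=psync
      end_fsync=1

      [small-writes]
      rw=randwrite
      bs=16k
      numjobs=2
      ```

      Run it with `fio smallfiles.fio`; swap `rw=` and `bs=` to match whatever the real workload looks like.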

      I use Helios Lantest for basic NAS performance tests but it's important to know what the tool does exactly and how/why copy behavior of Windows Explorer or macOS Finder differs.

      Just to be clear: you attached one drive already to the SATA HAT and it simply works?
    • tkaiser wrote:

      Just to be clear: you attached one drive already to the SATA HAT and it simply works?
      Yep, I got the first drive installed and set up a filesystem. Yesterday afternoon I threw everything loose into an empty case and got a second 3TB HDD installed. I got as far as provisioning a filesystem and setting up an NFS share, but I haven't had a chance to test out the NFS share or do any benchmarks on the machine yet.
      I set up DDNS and a port forward so I can access the box from work, and may get the chance to do a disk benchmark today. Otherwise I'll be working on it again this evening.
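      For reference, the NFS export would look something like this on the server side; the export path and subnet here are assumptions, not taken from this thread:

      ```
      # /etc/exports -- example export for the OMV data drive
      /srv/dev-disk-by-label-data  192.168.0.0/24(rw,sync,no_subtree_check)
      ```

      A client would then mount it with something like `mount -t nfs nas:/srv/dev-disk-by-label-data /mnt/nas` (hostname assumed).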

      My file-copy use cases are likely to be rsync, scp, whatever Kodi uses over miniDLNA, and backups from my and my wife's Android phones. I'll have to see what kind of block sizes those operations use going into my I/O benchmarks.
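      An rsync push of the kind described above might look like the commented-out line below (the host and paths are assumptions); the runnable part just demonstrates the same archive-mode copy locally between two temp directories:

      ```shell
      # Hypothetical push to the NAS; a lighter cipher can help on the RK3399,
      # since encrypted transfers tend to be single-core bound:
      #   rsync -a -e "ssh -c aes128-ctr" ~/media/ nas.local:/srv/media/

      # Local demonstration of the same rsync semantics:
      SRC=$(mktemp -d); DST=$(mktemp -d)
      echo "hello" > "$SRC/file.txt"
      rsync -a "$SRC/" "$DST/"         # -a preserves perms, times, symlinks
      cat "$DST/file.txt"              # prints: hello
      ```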
    • I have a NanoPi M4 with the SATA hat, and Helios LanTest says it has no problem maintaining 103 MB/s reads and 94 MB/s writes.
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • sethish wrote:

      My file copy use cases are likely to be rsync, scp, whatever the Kodi is going to use over minidlna, and backups from my and my wife's android phones

      Well, then expect real-world performance to be way lower than what synthetic benchmarks show, since the applications in question use different semantics compared to 'copying huge chunks of data'.

      As an example see the performance numbers here with Gigabit Ethernet and 10 GbE networking: github.com/openmediavault/open…01#issuecomment-468270197

      Backup performance with Gigabit Ethernet is below 30 MB/s, and with 10GbE below 40 MB/s. We tested full restores and got rates of just 20 MB/s with Gigabit Ethernet and around 50 MB/s with 10GbE.

      With rsync and scp you're likely to be CPU bound, since encryption might be involved, and then single-threaded CPU performance starts to matter depending on which cipher got negotiated between the machines.
    • esbeeb wrote:

      maybe I only stand partially corrected then
      What you quoted has no relevance wrt NanoPi M4 at all (the tests were done with a virtualized OMV instance on a huge Intel Xeon box). It was just an example that network and storage access pattern of a specific 'application' (in this case TimeMachine backup/restore for macOS clients) varies a lot compared to other storage access patterns like 'copying huge chunks of data'. And your initial claim below is still just weird!

      esbeeb wrote:

      With an RK3399, you can probably expect about 40MB/sec, sustained, in the most ideal conditions
    • esbeeb wrote:

      which "it" are you referring to? Is it the M4, which can maintain writing 94 MB/sec? If so, that's awesome, and I stand corrected. Are you using SMB? NFS?
      Yes. "It" is the system at the beginning of the sentence. SMB. I have never really used NFS with Windows, and Helios LanTest is a Windows client; I'm sure NFS would be just as fast. On a quick dd test, the M4 was writing to an SSD at over 300 MB/s. So networking is the bottleneck on this board.
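      A quick dd write test like the one mentioned above can be reproduced along these lines (the target path here is /tmp purely for illustration; point `of=` at the mounted SSD to measure the actual disk):

      ```shell
      # Sequential write test; conv=fdatasync forces a flush so the reported
      # rate reflects the disk, not the page cache.
      out=$(dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync 2>&1)
      echo "$out"
      rm -f /tmp/ddtest.bin
      ```

      dd prints its statistics to stderr, hence the `2>&1` when capturing the output.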
    • "With an RK3399, you can probably expect about 40MB/sec, sustained, in the most ideal conditions"

      Sorry, let me clarify. Writing many tiny files, like a whole bunch of small ebooks, over GbE (this use case is the one most important to me) to an RK3399-based SBC, will probably yield no more than about 40MB/sec. I based this upon videos such as the following, see 17:00 min in: