Which energy efficient ARM platform to choose?

  • Indeed, powering all the components from one power supply. The power supply required depends on the number of disks and whether they are 2.5" or 3.5". As per the FriendlyARM SATA HAT page:
    12V/2A can drive one 3.5" hard disk or four 2.5" hard disks
    12V/5A can drive four 3.5" hard disks

  • Now I'm thinking I'd like the ability to have 2 or more 3.5" drives (probably up to 4) connected, possibly even in a RAID1 configuration. I'd like to be able to do general backing up of files and also stream music, movies etc. to a media centre


    Can't really help here since


    • RAID1 is IMO just fooling yourself (talking about mdraid here; a zmirror or a btrfs raid1 is something entirely different)
    • For RAID in general you need absolutely reliable hardware, while SBC setups are usually quite the opposite. In particular, all those RAID setups don't deal that well with power losses and undervoltage (which is one of the most common problems with SBCs, at least those powered with 5V)
    • I like to have my backups physically separated from my productive data (that's the amazing thing with those SBCs: they are so inexpensive that you can buy more than one and put the backup disks in another location --> another room, building or even another site when using a VPN)


    I mentioned the SATA HAT with an if clause: 'If I would want to add up to 4 disks to an SBC' (but I really don't want to add a bunch of disks to an SBC, at least not with all disks active at the same time)


    Also you should keep in mind that this SATA HAT is brand new and not yet tested by any of us here (ryecoaaron ordered a kit, but I believe this will take some time). I would rather wait for people to share their experiences.

  • 12V/5A can drive four 3.5" hard disks

    Be careful with such statements: how did they test? If they tested the way Hardkernel did with their H2, you might be surprised.


    Also, the vast majority of 3.5" HDDs I have come across so far have "2A spin-up current on the 12V rail" written in their datasheets. That's 8A on the 12V rail just for spinning up four HDDs, so 5A for the whole system looks like a nice overload situation (see the quick calculation below).
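
    A quick back-of-the-envelope sketch of that overload scenario (the per-disk and board figures are assumptions taken from typical datasheets, not measurements of this particular HAT):

    Code
    DISKS = 4
    SPINUP_CURRENT_A = 2.0   # typical 3.5" HDD spin-up current on the 12V rail (datasheet value)
    BOARD_CURRENT_A = 0.5    # rough allowance for the SBC itself (assumption)
    PSU_RATING_A = 5.0       # the advertised 12V/5A supply
    peak_a = DISKS * SPINUP_CURRENT_A + BOARD_CURRENT_A
    print(f"Peak draw at spin-up: ~{peak_a:.1f} A vs. {PSU_RATING_A:.1f} A rated")
    # ~8.5 A vs. 5.0 A: only workable if the disks support staggered spin-up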


    We should always keep in mind that SBC manufacturers have a totally different background than 'server builders'. The former create toys for tinkerers and as such are themselves surprised by each and every one of their products lacking useful features or reliability (this applies to all of them).

  • After reading through this thread I thought that I'd settled on getting an Odroid-HC2, but now I'm thinking I'd like the ability to have 2 or more 3.5" drives (probably up to 4) connected, possibly even in a RAID1 configuration.

    Then buy yourself two HC2, or three or even four HC2. Or go for a normal PC / x86 server... And if you really like the idea of ARM as a server, then Gigabyte has a cool R281-T91 :D




    [Photo attachment: finished-setup.jpg]

  • Does that mean that the whole setup, SBC and drives, would be powered by the one power supply? If so, what would that power supply look like? Thanks :)

    I use one 12V 20A PSU, without a fan, to power 6 HC2, a Netgear GS316 GbE switch, an Asus Lyra mesh unit and a Noctua fan. When I bought the stuff I made sure everything ran on 12 volts.


    This is the PSU I use: https://www.amazon.de/dp/B01MRSAT39


    There are some pics in this thread: My new NAS: Odroid HC2 and Seagate Ironwolf 12TB.


    (I bought two PSU, one as a spare, just in case...)


  • I hadn't realised this or really thought about it. But it makes a lot of sense, especially with my experience with SBCs up to this point. I guess that's why I'm posting and asking questions here. So thanks for taking the time to respond.


    Having backups separated physically from your productive data is interesting and again something I really wasn't considering. Having said that, I like the idea! When you say that you have your backups physically separated from your productive data, how are you handling your backups? Is that an automatic process? Or are you doing it manually from your productive data?



    I mentioned the SATA HAT with an if clause: 'If I would want to add up to 4 disks to an SBC' (but I really don't want to add a bunch of disks to an SBC, at least not with all disks active at the same time)


    Also you should keep in mind that this SATA HAT is brand new and not yet tested by any of us here (ryecoaaron ordered a kit, but I believe this will take some time). I would rather wait for people to share their experiences.

    Thank you for clarifying your points there. I think I took what you said out of context.

  • Then buy yourself two HC2, or three or even four HC2. Or go for a normal PC / x86 server... And if you really like the idea of ARM as a server, then Gigabyte has a cool R281-T91

    I think I will go back to the HC2 option. I was really taken with it initially so didn't need to try too hard to convince myself. BTW that Gigabyte R281-T91 is off the chain!! Super cool, but slightly more than what I need I feel ;) .

  • Thanks a heap for the information in your post. That's quite the setup you've got there. And you clearly are a fan and believer in the HC2!

  • how are you handling your backups? Is that an automatic process? Or are you doing it manually from your productive data?

    Manual backup doesn't work in my experience (you always have good reasons to skip backing up prior to a data loss). We're using checksummed filesystems everywhere since data integrity is important; btrfs together with btrbk does the job on ARM SBCs, and with large x86 installations ZFS combined with znapzend is used (or proprietary solutions like Open-e/Jovian). The differentiation between btrfs on ARM and ZFS on x86 is due to the kernel support available on each platform.


    But if you're not a 'storage pro' and not really familiar with those contemporary filesystems, using the older established variants like XFS or ext4 might be a better idea (then combined with traditional approaches like rsnapshot, which integrates nicely with OMV).
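
    For those curious what a tool like btrbk automates under the hood, here is a minimal sketch of the read-only-snapshot plus send/receive mechanism (this is not btrbk itself, and all paths are made-up examples):

    Code
    #!/usr/bin/env python3
    # Minimal sketch: read-only snapshot + btrfs send/receive (the primitives btrbk builds on).
    import subprocess
    from datetime import datetime
    SOURCE = "/srv/data"           # hypothetical btrfs subvolume holding productive data
    SNAP_DIR = "/srv/.snapshots"   # hypothetical snapshot directory on the same filesystem
    TARGET = "/mnt/backup"         # hypothetical btrfs filesystem on the backup disk/box
    snap = f"{SNAP_DIR}/data-{datetime.now():%Y%m%d-%H%M%S}"
    # 1. Create a read-only snapshot (required for btrfs send)
    subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SOURCE, snap], check=True)
    # 2. Stream the snapshot to the backup filesystem
    send = subprocess.Popen(["btrfs", "send", snap], stdout=subprocess.PIPE)
    subprocess.run(["btrfs", "receive", TARGET], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("btrfs send failed")

    Run something like this from cron (as root) and you get dumb but working scheduled backups; btrbk adds incremental sends, retention policies and much more on top of the same primitives.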

  • Manual backup doesn't work in my experience (you always have good reasons to skip backing up prior to a data loss). We're using checksummed filesystems everywhere since data integrity is important; btrfs together with btrbk does the job on ARM SBCs, and with large x86 installations ZFS combined with znapzend is used (or proprietary solutions like Open-e/Jovian). The differentiation between btrfs on ARM and ZFS on x86 is due to the kernel support available on each platform.


    But if you're not a 'storage pro' and not really familiar with those contemporary filesystems, using the older established variants like XFS or ext4 might be a better idea (then combined with traditional approaches like rsnapshot, which integrates nicely with OMV).

    Again, thank you.
    Yeah, I'm not a storage pro and not familiar with btrfs and ZFS. Seeing as I haven't even ordered the SBC or HDDs for my new setup, I've got time to research and investigate btrfs. If I'm struggling to wrap my head around getting it set up, then I can fall back to ext4 (what I'm using now) and use rsnapshot, which I can see is available as a plugin in OMV.

  • Manual backup doesn't work in my experience (you always have good reasons to skip backing up prior to a data loss). We're using checksummed filesystems everywhere since data integrity is important; btrfs together with btrbk does the job on ARM SBCs, and with large x86 installations ZFS combined with znapzend is used (or proprietary solutions like Open-e/Jovian). The differentiation between btrfs on ARM and ZFS on x86 is due to the kernel support available on each platform.
    But if you're not a 'storage pro' and not really familiar with those contemporary filesystems, using the older established variants like XFS or ext4 might be a better idea (then combined with traditional approaches like rsnapshot, which integrates nicely with OMV).

    Does a checksummed filesystem really replace ECC RAM? I'm unsure about this point because I read the opposite very often.


    Edit: Okay, it's not that risky without ECC, if I got it right:

  • Does a checksummed filesystem really replace ECC RAM?


    If you love your data then care about data integrity. That means

    • Use a checksummed filesystem if possible
    • Use ECC RAM if possible
    • Neither one is a requirement for the other

    Some FreeNAS guy spread the rumor that using a checksummed filesystem without ECC RAM would kill your data (most probably to try to get more people to use server-grade hardware with ECC RAM), but that's not true.


    ECC RAM is a bit more expensive; a checksummed filesystem you get for free. But it won't provide any protection if run on really crappy hardware. One example is using a checksummed filesystem like btrfs on a host with a quirky USB implementation combined with USB drives that do not support flush/barrier semantics.

  • Quote from tkaiser

    ECC RAM is a bit more expensive; a checksummed filesystem you get for free. But it won't provide any protection if run on really crappy hardware. One example is using a checksummed filesystem like btrfs on a host with a quirky USB implementation combined with USB drives that do not support flush/barrier semantics.

    Okay, so the Helios4 is the only SBC with native SATA & ECC I've found so far. It looks like most users in this thread are happy with the HC2. Is this a sign of a reliable SATA-USB bridge implementation which I can use with btrfs?

  • It looks like most users in this thread are happy with the HC2. Is this a sign of a reliable SATA-USB bridge implementation which I can use with btrfs?

    I'm not entirely sure. I'm using various devices with JMS578 (the USB-to-SATA bridge on ODROID HC1 and HC2), JMS567 and ASM1153 without any issues with btrfs (and a Samsung Spinpoint 2.5" HDD or various SSDs for testing purposes). Since a USB-to-SATA bridge is involved, its firmware could be relevant, and also the semantics of the SATA drive in question.


    The btrfs FAQ is pretty clear about the problem and mentions it at the top for a reason: https://btrfs.wiki.kernel.org/…m._What_does_that_mean.3F


    So you need to check for this barrier problem first (and I would strongly suggest reading through the whole btrfs FAQ prior to using it). It is also strongly recommended to use different mount options than OMV's defaults (OMV relies on btrfs' default relatime setting, which destroys read performance on shares with a lot of files in them, as explained in the btrfs FAQ).


    With OMV this currently means manually adjusting the opts entry for your btrfs filesystem of choice in config.xml as @votdev explained here. I use the following options and hope they become the new OMV defaults, at least starting with OMV 5.


    Code
    <opts>defaults,noatime,nodiratime,compress=lzo,nofail</opts>
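
    A quick way to double-check after the next remount that those options actually took effect (just a minimal sketch; reading /proc/mounts directly or running mount works just as well):

    Code
    # Print the mount options Linux actually applied to btrfs filesystems.
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options, *_ = line.split()
            if fstype == "btrfs":
                print(mountpoint, options)   # expect noatime and compress=lzo here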
  • Hi all,


    After going through the thread Which energy efficient ARM platform to choose? I was really taken by the Helios4, but thought that it was out of my price range. As a result I then considered the Odroid-HC2 and was very impressed by that machine, but realised that in order to get what I wanted I'd need to get 2-3 of them. Once I realised that, it brought me back to the Helios4. I think that I'm going to get one, and am seriously excited by the prospect (as it offers the opportunity to combine ECC RAM and a checksummed filesystem) and by its performance.


    Just wanted to ask a couple of questions though before I commit to it (sorry, very nervous non-pro, first-time home NAS setup user):

    • The Helios4 has a dual-core ARM Cortex-A9 CPU and the Odroid-HC2 has an octa-core CPU featuring a Samsung Exynos5422 with Cortex-A15 cores at 2GHz and Cortex-A7 cores. Now I know that it's not as simple as saying one is dual core and one is octa core, but I'm just wondering if someone could outline what the difference is between these two CPU configurations? To me it "sounds" as though the CPU in the Odroid-HC2 is more powerful...
    • @tkaiser in this thread Which energy efficient ARM platform to choose?, you counseled that SBCs are not really appropriate for RAID as RAID requires absolutely reliable hardware. I suppose that this is still the case with the Helios4? With the Helios4, I'd be keen to utilise its capability of 4 HDDs down the track and have 1-2 for backing up the other 2 productive disks. Considering this, would the appropriate method to perform backups be btrbk (for btrfs) or rsnapshot (for ext4)?

    Thanks :)

  • If you love your data then care about data integrity. That means

    • Use a checksummed filesystem if possible
    • Use ECC RAM if possible

    I'm at the point where I can't decide whether to get:

    • 2 or 3 HC2s (and have backups physically separated from productive data). Positive = easy to multiply
    • Spend a little more and get a RockPro64 with NAS case (backups connected to the same SBC and therefore in the same physical location), but this has the downside of being limited to 2 hard drives, unless I get another one down the track...
    • Or spend a little more again and get a Helios4 (campaign ending in about 3 weeks) and have backups connected to the same SBC and thus in the same physical location.

    I'm planning on using a checksummed filesystem (btrfs). As far as I can tell, the Helios4 is the only SBC in this thread that has ECC RAM. Is the Helios4 worth the extra cost for a home NAS? I do care about my data, as does everyone I think ;).


    In the long run, I think I'll be able to live with the extra cost (though it might be painful in the short term).

  • I'm planning on using a checksummed filesystem (btrfs)

    The most important thing to know about checksummed filesystems is that they seem to fail in situations where in reality the hardware fails. With old/anachronistic filesystems, in all these situations you simply get silent data corruption (something the average NAS user can happily live with, since silent data corruption will only be noticed way too late).


    With btrfs and ZFS you'll be notified about hardware problems almost immediately (at least with the next scrub you run), but an awful lot of people then start to blame the software or filesystem in question instead of realizing that they're about to lose data if they don't fix their hardware.


    Another important and related (hardware) topic is the so-called 'write barriers'. Back in the old days, when we had neither journaled filesystems nor modern approaches like btrfs and ZFS, a crash or power loss most of the time led to a corrupted filesystem that needed an fsck to hopefully be repaired at the next boot (taking hours up to a day back then; with today's drive sizes and tons of files we might talk about weeks instead).


    With all modern (journaling or checksummed) filesystems, crashes or power losses are not that much of an issue any more, but there is one huge requirement for this to be true: correct flush/write barrier semantics. The filesystem driver needs to be able to trust that the drive has really written the data when 'the drive' reports it has.
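
    To make that requirement tangible at the application level, here is a minimal Python illustration (the file path is a made-up example): a program that wants durability flushes and calls fsync(), and every layer below, from filesystem through bridge firmware to the drive's cache, has to honour that.

    Code
    import os
    # Application-level view of a write barrier: once fsync() returns, the data
    # must really be on stable storage, otherwise every layer above is being lied to.
    with open("/srv/data/important.bin", "wb") as f:   # hypothetical path
        f.write(b"payload that must survive a power loss")
        f.flush()               # push the userspace buffer into the kernel
        os.fsync(f.fileno())    # ask the kernel (and ultimately the drive) to commit
    # If the USB bridge or drive ignores the flush, the data may still sit in a
    # volatile cache, which is exactly the broken-barrier situation described here.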


    If write barriers are not correctly implemented, then a crash or power loss has different consequences: with old filesystems like a journaled ext4, for example, you get silent data corruption, but with modern attempts like ZFS or btrfs you're very likely to lose your whole pool at once.


    Some more details (not mentioning btrfs since the problem is a very old one that should be well understood in the meantime. But people today lose their btrfs pools for the same reason they lost their ZFS pools almost a decade ago: insufficient hardware with broken write barrier implementations).

    If you run into these problems with flush/write barriers not correctly implemented, you have to fear simple crashes as well as power losses, since both can result in your whole filesystem being lost (there's a reason this issue is mentioned at the top of the btrfs FAQ). The same problem applies to mdraid in general, but that's another story.


    How does the above impact the choice of storage? Let's take the worst choice first: USB storage. Whether write barriers are in place or not depends on:

    1) Host controller (USB host controller in this case)
    2) Host controller driver (therefore OS and OS version matter)
    3) USB-to-SATA bridge used in a drive enclosure
    4) The bridge's firmware (controller firmware version matters, and also whether it's a branded one or not)
    5) The drive's own behavior (drive firmware version matters)

    With native SATA (as on the Helios4) or PCIe-attached SATA (RockPro64 with NAS case and a PCIe HBA) you're only affected by 1), 2) and 5) (and the first two are usually not a problem today). So even though I never had any issues in this area with all my USB storage scenarios (using JMS567, JMS578, ASM1153 and VIA VL716 bridges), avoiding USB should be the obvious choice. Please also note that I'm no typical USB storage user, since I would never buy 'USB disks' (e.g. from WD or Seagate) but only drive enclosures and drives separately.


    I mentioned WD and Seagate for a reason: while their disk enclosures rely on the same USB-to-SATA bridges as above, their 'branded' firmwares differ, and this alone causes a lot of problems (see this commit comment fixing broken behavior affecting all Seagate USB3 disks used with Linux).

  • @tkaiser Thank you for this good overview that answered lots of questions I still had.


    @ekent I've also thought about buying 1 or 2 Helios4. If you also live in Germany and are interested in the Helios4, we could order together to save shipping costs. We are not in a hurry because Kobol stated this:


    We are using the same Pre-order approach that we did for previous campaigns. The goal is to reach at least 300 units ordered before starting production. We are planning to manufacture 750 units to have enough inventory for the late buyers, but don’t wait too long since the stock won’t last.
