Building OMV automatically for a bunch of different ARM dev boards

  • there are some reasons..

    Huh? With a new image you would have to start from scratch, and to download one you need Internet connectivity anyway. Also there's no reason to buy/use crappy SD cards (see here), and you might even move the rootfs to a partition on a connected HDD (not recommended though).


    The last time an update was provided it was only to save unfortunate Cloudshell 2 buyers from hassles, since Hardkernel folks still fail so horribly with their NAS attempts.


    If you want to build a new image yourself you're always free to do so (it's easy once your hardware meets the requirements; details here).

  • Just another quick note: currently all the very interesting ARM boards with Gigabit Ethernet and an H3 or H5 SoC are missing here https://sourceforge.net/projec…s/Other%20armhf%20images/ for two simple reasons: the vendor 'BSP kernel' sucks (3.4.39 for H3 and 3.10.65 for H5) and community-driven mainline kernel development is still not entirely ready.


    Though I've been running H3 devices with mainline kernel for 1.5 years already, some stuff is still 'work in progress'; especially the mainline Ethernet driver, which has just been replaced entirely and will only make it into kernel 4.13. Maybe then, or a few weeks later (speaking of 4.14), Armbian will switch support status from 'dev' to 'next', and as soon as that happens I'll let Armbian's build system immediately generate OMV images for the NAS-capable H3/H5 boards: Banana Pi M2+, NanoPi M1 Plus, NanoPi M1 Plus 2, NanoPi NEO 2, NanoPi NEO Plus 2, OrangePi PC 2, OrangePi PC Prime and OrangePi Plus 2E.


    Fortunately the most community-friendly vendor above (FriendlyELEC) could be convinced to drop their shitty vendor kernel and rely on mainline. Same with OMV support: the OMV settings @ryecoaaron and I developed for ARM boards over the last 2 months have been adopted by them, and they already provide an OMV image for the interesting NanoPi NEO2 running a mainline kernel; some details and performance numbers are available on CNX: http://www.cnx-software.com/20…ion-setup-and-benchmarks/


    In other words: the really inexpensive NanoPi NEO2 ($15) is currently the only low-cost Gigabit Ethernet enabled ARM device fully supported by a performant OMV image, since its vendor is a team player, honours open source development and cares about details :)

    • Official post

    NanoPi NEO2

    The single bay NAS dock looks pretty cool as well. Very cheap!

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • The single bay NAS dock looks pretty cool as well. Very cheap!

    Unfortunately shipping costs to some countries are rather high, so it's always better to check first whether there's a local distributor. In the meantime CNX's Jean-Luc fortunately confirmed that their NAS 1.2 kit is directly useable with 3.5" disks too, but let's try to discuss hardware-related issues over there.

    • Official post

    @tkaiser the solidrun CEx7 A8040 looks interesting. Hadn't seen the MacchiatoBIN either.


  • the solidrun CEx7 A8040 looks interesting. Hadn't seen the MacchiatoBIN either.

    This thing is a beast but I doubt it's worth the price unless you also have a 10GbE network infrastructure. Some more details and pricing: http://www.cnx-software.com/20…available-for-349-and-up/


    For me the energy-efficient single-disk NAS thingie currently is, without any doubt, ROCK64 (1GB version starting at $25). Please take the performance numbers with a huge grain of salt since we have not even really started with software. This is the result of us doing first steps; my working dev sample arrived just a few days ago and I'm pretty confident that once the few remaining problems are solved this will perform with a single or maybe even 2 disks as well as a GbE NAS can perform: fully utilizing GbE, of course.


    On a related note: it's fun to do this since board maker and chip maker are both cooperative, provide all the information we need and react immediately to issues (I'm currently in an email conversation directly with Rockchip regarding a few USB3/UAS issues and it's really amazing how quick they are). Not remotely comparable to the mess we have to go through with board makers that behave in a totally brain-dead fashion (most recent example: Banana Pi M2 Berry, which will make a lot of current RPi users unhappy soon :( )

  • Small updates on 3 of the most interesting ARM boards for OMV ever.


    1) The 1 GB DRAM ESPRESSOBin seems to be available in the meantime (for $50 on Amazon with international shipping). Performance is excellent (maxing out a Gigabit Ethernet connection, since this SoC is made for NAS boxes by default), you can add one 2.5" or even 3.5" SATA disk directly (a SATA power connector for 3.5" disks is there, but then you need an additional Molex to SATA power cable), and with the mPCIe slot you can either add a performant Wi-Fi card or more SATA ports (2 or even 4, just like with any other mPCIe-equipped board). I had some hope the internal network switch would be connected to the SoC via 2.5GbE (SGMII) but unfortunately it's only 1 GbE (RGMII), so when more than 1 client accesses an ESPRESSOBin OMV installation at the same time they have to share bandwidth.


    I had an OMV image created 2 months ago already, and in the meantime it turned out that it runs out of the box (though some performance optimizations are still needed; maybe I'll buy a board just for fun and do the work myself).


    2) Helios4 seems to get funded. It's most probably the only ARM 'DIY NAS device' out there featuring ECC DRAM (maybe there are commercial NAS that also have ECC, but at least I don't know of one). OMV performance with this board will be as excellent as with Clearfog now, since it uses the same heart (a MicroSOM based on the Marvell ARMADA 388 NAS SoC). Approximately 24 hours are left to get a $10 discount by using the coupon code KSBACKER10.


    3) The soon to be released ROCK64 board is maybe the first SBC ever that will be supported by OMV perfectly even before sales officially start (expected prices: $25 for 1GB, $35 for 2GB and $50 for the 4GB variant; to be confirmed). Ayufan added an OMV installation variant including all the Armbian/OMV optimizations to his fully automated build system, so tomorrow, when the next image has been built, you'll find ready-to-run OMV images over there: https://jenkins.ayufan.eu/job/linux-build-rock-64/


    Storage performance is excellent (best USB3 results ever seen on an ARM device; see also the post below for an overview), the chip vendor is really helpful and immediately started investigating two potential USB3 problems, and preliminary OMV/NAS benchmark numbers already look really promising: https://forum.armbian.com/inde…findComment&comment=34596 (the numbers will increase for sure)

  • I want to lower the maximum clock of the A15 big cores to reduce the power consumption.
    I already tried to change /sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq from 2 GHz to 1.5 GHz and the CIFS download speed is still over 100MB/s with a Seagate 2.5 inch external 5TB HDD.
    Please let me know which file I need to edit in the XU4 OMV rootfs to change the max clock permanently.

  • Please let me know which file I need to edit in the XU4 OMV rootfs to change the max clock permanently.

    /etc/default/cpufrequtils is the file telling cpufrequtils how to configure cpufreq scaling at boot. Since you only want to adjust the frequency of the big cluster this should be sufficient. Please note that these values get overwritten once you click around in the OMV GUI, so ensure that you adjust 'OMV_CPUFREQUTILS_MAXSPEED' in /etc/default/openmediavault too.
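    Picking up the paragraph above, a minimal sketch of what the two files could contain; this assumes Debian's cpufrequtils packaging, and the 1.5 GHz value is just the cap from the question. Since the little A7 cluster of the XU4 tops out around 1.4 GHz anyway, a global MAX_SPEED like this effectively only caps the big cluster:

```shell
# /etc/default/cpufrequtils -- sketch, values are illustrative
ENABLE="true"
GOVERNOR="ondemand"
MAX_SPEED="1500000"    # in kHz, i.e. cap at 1.5 GHz
MIN_SPEED="0"

# /etc/default/openmediavault -- keep the OMV GUI from restoring 2 GHz
# (variable name as mentioned in the post above)
OMV_CPUFREQUTILS_MAXSPEED="1500000"
```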


    If you want finer-grained control over what's happening you would either have to adjust the .dtb contents (not recommended) or adjust the contents of /etc/rc.local (there you could also configure individual behaviour for big and little cores and so on, but please keep in mind that with cpufrequtils installed your settings will be overwritten a few seconds later). So stuff like


    Code
    (sleep 30 && echo $whatever >/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq) &

    is necessary there. But just give it a try with the stuff from the first paragraph.
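    For the per-cluster case, an /etc/rc.local fragment could look as follows. This is only a sketch: the cpu4-7 numbering (big cluster on the XU4), the 1.5 GHz value and the 30 s delay (so cpufrequtils runs first at boot and doesn't overwrite the value) are all assumptions taken from this discussion; the writability check makes it a harmless no-op elsewhere:

```shell
# Fragment for /etc/rc.local (place it before the final 'exit 0'):
# cap only the big cluster (cpu4-7 on ODROID-XU4), little cores keep
# their defaults. Delayed so cpufrequtils doesn't overwrite the value.
(
  sleep 30
  for cpu in 4 5 6 7; do
    f=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_max_freq
    [ -w "$f" ] && echo 1500000 > "$f"
  done
) >/dev/null 2>&1 &
```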

  • Thank you for the help. I modified both files and the max clock is limited to 1.5 GHz after reboot.
    It obviously reduces the power consumption and the heat. The fan runs very rarely now.
    Anyway, the Samba speed is still impressive even with a quite slow 2.5 inch 5TB HDD.
    The performance is very similar to my CloudShell2's. But I have to build a tiny 2.5" NAS for my sister.


    This is the test result with the 2.5" HDD. So the random access speed is very slow.



    BTW, I hope the ROCK64 can also be a very affordable NAS solution for low-power 2.5" HDDs.
    Do you have a 2.5" HDD to test the ROCK64 NAS performance?

  • This is the test result with the 2.5" HDD. So the random access speed is very slow.


    Please keep in mind that all modern HDDs implement ZBR (search for exactly that from here on, please). If we're not talking about server 2.5" disks (SAS or 10k SATA disks like the WD VelociRaptor) then you'll see sequential speeds exceeding or getting close to 100 MB/s only on the outer tracks of a 2.5" disk. Once the disk gets filled, sequential transfer speeds drop to half on almost all 2.5" disks. It always looks more or less like this:



    In other words: once there is really data on a 2.5" NAS, throughput will continually drop. So almost all benchmarks are wrong (as usual), and that's also the reason why I use a standard test with every disk that arrives here: create 10 partitions of equal size and then run a fully automated quick iozone bench on the outermost, the innermost and the middle partition. I did this just recently with an older 2.5" 7.2k Hitachi I pulled out of a MacBook (Apple branded, full SMART data).


    It's obvious that sequential performance on the outer tracks is a lot higher than on the inner tracks. Random IO performance with small block sizes is not really affected, but with larger block sizes the lower sequential transfer speed always negatively affects random IO performance too.
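    The outer-vs-inner gap can also be eyeballed without iozone by timing plain sequential reads at different offsets. A sketch: a sparse temp file stands in for the disk here so the snippet is safe to run as-is; point DEV at a real device (e.g. /dev/sda, and drop the truncate) to get meaningful numbers, where on a ZBR disk the last read will be clearly slower:

```shell
# Read 64 MiB from the outer, middle and inner region of the "disk" and
# compare the throughput dd reports for each region.
DEV=$(mktemp)
truncate -s 256M "$DEV"            # stand-in for a real disk
BYTES=$(wc -c < "$DEV")
READS=0
for OFF in 0 $((BYTES / 2)) $((BYTES - 64 * 1048576)); do
  dd if="$DEV" of=/dev/null bs=1M count=64 skip=$((OFF / 1048576)) 2>&1 | tail -n1
  READS=$((READS + 1))
done
rm -f "$DEV"
```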


    Please note that these are results with the anachronistic Mass Storage protocol, made on an XU4. I couldn't test with UAS since the only UAS-capable 2.5" disk enclosure left for tests cannot be powered with an external PSU, and the ODROID XU4 is simply too weak to provide enough power on the USB3 ports when switching from boring/slow Mass Storage to UAS here (a well known problem, still happily ignored by the ODROID micro community).


    In other words: if you implement a single 2.5" HDD NAS, overall performance numbers will be a lot lower than a benchmark on an empty disk would let you believe. All due to ZBR (more sectors on the outer than the inner tracks).


    If you have NAS use cases that need high random IO values then, especially with 2.5" disks, it always helps to partition. The first partition should be as small as possible and contain all the 'random IO data'. This way, accessing the data in a random fashion ensures that the actuator positioning the drive's heads doesn't have that much to do, and IOPS increase. A 20% partition on the outer tracks can show twice as many IOPS (or even more) compared to a second partition holding the remaining 80% of data.
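    As a sketch of that layout (the device name and the exact 20% split are illustrative, and an image file stands in for the disk so this is safe to run; on a real disk you'd run parted against /dev/sdX instead):

```shell
# Small outer-track partition for the random-IO data, bulk behind it.
# Partitions start at LBA 0, which sits on the outer (fast) tracks.
IMG=$(mktemp)
truncate -s 64M "$IMG"
if command -v parted >/dev/null 2>&1; then
  parted -s "$IMG" mklabel gpt \
    mkpart fast 1MiB 20% \
    mkpart bulk 20% 100%
  parted -s "$IMG" unit MiB print
fi
rm -f "$IMG"
```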


    Do you have a 2.5" HDD to test the ROCK64 NAS performance?

    Of course, but there's no magic involved. With ROCK64 we're still investigating some network issues, but once they're resolved I assume we're talking about maxing out GbE (940 Mbits/sec in both directions with RPS and appropriate IRQ affinity settings). And then there are one or two USB3/UAS issues currently being investigated by Rockchip directly (one most probably related to the USB3 PHY, since the problem only occurs on RK3328 but not on RK3399, which shares the same xHCI host controller; the other potential problem might be related to the xHCI host controller itself, but one thing after another, I don't want to overstrain Rockchip engineers already ;) )
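    The RPS and IRQ affinity settings mentioned boil down to a couple of procfs/sysfs writes. A sketch with a purely illustrative interface name and CPU masks (the right masks depend on the board's interrupt routing); the guards make it a no-op where the nodes don't exist or aren't writable:

```shell
IFACE=eth0    # illustrative interface name

# Spread receive packet steering over CPUs 2+3 (mask 0xc), leaving the
# CPU that services the NIC interrupt free for IRQ handling.
RPS=/sys/class/net/$IFACE/queues/rx-0/rps_cpus
[ -w "$RPS" ] && echo c > "$RPS" || true

# Pin the NIC's IRQ to CPU 1 (mask 0x2); IRQ number looked up by name.
IRQ=$(awk -F: '/'"$IFACE"'/{gsub(/ /,"",$1); print $1; exit}' /proc/interrupts 2>/dev/null)
[ -n "$IRQ" ] && [ -w "/proc/irq/$IRQ/smp_affinity" ] && \
  echo 2 > "/proc/irq/$IRQ/smp_affinity" || true
```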


    All performance numbers we currently have are preliminary and no real benchmark numbers, since the software is still not ready. I usually only do active benchmarking: collecting numbers not to document how shitty something is but to improve them until they look sufficient. But what we have as the result of a 3-hour IRC session tweaking here and there is already perfect for any 2.5" NAS, since 70/80 MB/s write/read is sufficient for 2.5" disks, and all these numbers will improve within the next weeks anyway.

  • Thank you for the detailed explanation. So my current 100MB/s of Samba speed will be 50MB/s once my sister fills it with 5TB of files in 4~5 years. :(


    My Seagate 2.5" 5TB and 3.5" 4TB Desktop Expansion HDDs are not Linux friendly either. My Mint x86 PC shows this message.


    Code
    UAS is blacklisted for this device, using usb-storage instead

    But I still get sufficient 100MB/s of Samba speed with those external HDDs.
    I believed UAS was not so important for NAS applications if we don't have 10Gbit infrastructure. But I was totally wrong.


    Anyway, I hope you guys can fix the issue in the RK3328 kernel soon because I just ordered two ROCK64 boards. :)
    I guess ROCK64 is rock solid and its Samba performance will be better than my XU4's, because the ROCK64 kernel must fix my shitty UAS compatibility issue soon.

  • So my current 100MB/s of Samba speed will be 50MB/s once my sister fills it with 5TB files in 4~5 years later.

    No, why? Your 5TB 2.5" disk is somewhat special since it contains 5 platters, so even though it rotates pretty slowly, sequential transfer speeds will be rather high with this disk even if it's optimized for low consumption, just due to the amount of sectors passing the heads with every rotation. It would be great if you could run the simple benchmarks from here to get an idea: Slow file polling with OMV 3.x


    If you do an 'fallocate -l 4500G /path/to/disk/bigfile' a 4.5TB file gets created, and when you then test again you see drive performance in an 'almost full' state.


    And then also output from


    Code
    sudo smartctl -x /dev/sda | curl -F 'sprunge=<-' http://sprunge.us
    sudo armbianmonitor -u

    with this disk connected at boot would be great (this uploads debug logs to an online pasteboard service and provides you with URLs showing the results, so if you do, you owe me 2 links ;) and then I might comment on the blacklisting issue).


    UAS vs. Mass Storage is really not that important if we're talking about typical NAS use cases, especially with rather slow HDDs. But there is a small difference, and unfortunately, if storage performance is much lower than network performance, then NAS performance slows down further (so if your disk with UAS shows 110 MB/s tested locally and 100 MB/s measured through the network, and when you measure again with Mass Storage the disk locally scores 100 MB/s, then NAS performance will decrease further to 90-95 MB/s depending on settings).


    Wrt comparisons between UAS and Mass Storage, at least on ODROID-XU4 it's somewhat difficult since we always suffer there from an internal USB3 hub between USB host controller and USB-to-SATA bridge. So once the USB3/UAS issues with RK3328 are resolved, this is already scheduled as a test: a comparison of UAS vs. Mass Storage with the same set of HDDs/SSDs on both XU4 and ROCK64.

  • And that's why vendor micro communities are not only great sometimes but also a real disadvantage since micro communities create their micro realities and are trapped inside. The insanely stupid 'UAS is evil' campaign is still running over at ODROID forum: https://forum.odroid.com/viewtopic.php?f=146&t=27548


    A user is reporting either cabling/contact or powering problems with his disk (it seems to be bus-powered by the ODROID-XU4, which is not a good idea anyway). The problem occurred already before the kernel update ('I had seen this problem before on kernel 3.10 but it was very rare') and there's only one possible fix for this then: check cabling, check powering, try to power the disk externally and not by the ODROID-XU4, since that is known to be troublesome.


    The user reports 'using the original (old) cloudshell'; the USB-to-SATA bridge there is not UAS capable, so there's really no UAS involved anywhere. The first and only reactions: the stupid recommendation to disable UAS. Instead of helping users, the loud voices over in the ODROID micro community are so fascinated by blaming UAS for every problem around that they're only able to waste their time running this campaign instead of helping users.


    It's all understandable since micro communities work this way everywhere else too. Updates also work this way (a problem that already existed long before has been successfully suppressed, then, as with every update, awareness of problems arises, and when the same problem strikes back again after the update, the update will be blamed).


    And the above mechanisms are also responsible for the Internet being full of stupid recommendations to 'disable UAS' other people then rely on to create even more stupid recommendations to 'disable UAS' (since written everywhere) :)

  • UAS vs. Mass Storage is really not that important if we're talking about typical NAS use cases especially with rather slow HDDs.

    Most people are typical NAS users like me. My 3.5" WD 4TB RED NAS HDDs are very slow too.
    So I can live without UAS, since I can't build a NAS with very expensive SSDs. :)


    But can you find any possible way to enable UAS for my poor 2.5" 5TB Seagate HDD in the ROCK64 kernel or XU4 kernel?
    Its USB ID is 0bc2:2322 Seagate RSS LLC.
    I hope the ROCK64 comes with a newer kernel, 4.12 or higher.

  • Its USB ID is 0bc2:2322 Seagate RSS LLC.

    Then please read from here on: https://forum.odroid.com/viewt…?t=26016&p=188387#p188385


    At least in the past Seagate used ASM1153 chips combined with broken firmware. That's one of the reasons I never buy 'USB disks' (always disk + enclosure separately, to know exactly what I get and to be able to flash firmware updates... just in case).

  • Too bad~~.
    I thought a ROCK64 or other kernel update could solve my issue. But there is probably nothing that can be done on the Linux side at the moment.
    That storage works well on my Windows 10 PC in UAS mode.
    Sadly there is no hope Microsoft opens their UAS driver source code for Linux users. :(


    I will open the case to extract the 5TB HDD.
    Can you recommend a 2.5" disk enclosure which works with Linux UAS driver?
