Performance improvements for Odroid H2 (RAM, NVMe, etc.)

  • I've been running OMV on single-board computers for a while now. I first started with a Rockpro64, then moved to an Odroid N2 (both ARM SBCs). I saw a decent improvement switching from the Rockpro64 to the N2, but lately I've been building a lot of things around the Odroid H2, which has an Intel J4105 CPU, dual gigabit NICs, an M.2 slot for NVMe, and SODIMM slots for RAM.


    Right now I have the two NICs bonded (balance-rr), the OS is running on a 256 GB NVMe drive, and I have 8 GB of RAM in dual channel. I know the bonded NICs don't improve single-transfer speeds, but they're already there and should help with multiple simultaneous transfers. I also know that running the OS on NVMe probably isn't improving anything over running it on eMMC, but NVMe is actually cheaper than eMMC these days. As for RAM, I'm not sure what use OMV makes of it; I have 16 GB (2x8 GB) readily available as well if there's some improvement to be had there.
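
    For context, the bond is set up roughly like this in Debian's /etc/network/interfaces (with the ifenslave package installed). The interface names and addresses here are placeholders, not my exact config:

    ```
    # Sketch of a balance-rr bond via ifupdown; enp1s0/enp2s0 and the
    # addresses are assumptions -- check `ip link` for your real names.
    auto bond0
    iface bond0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bond-mode balance-rr
        bond-miimon 100
        bond-slaves enp1s0 enp2s0
    ```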


    My storage is three USB 3.0 enclosures with four HDDs each, just set up as JBOD, no RAID or anything. Each disk is a single ext4 partition, mounted and exported as an NFS share. I'm not trying to build anything crazy; I just figure I have a few hardware options on hand and I'd like to get the most out of them. For example, I could either partition the NVMe, or move the OS to eMMC (I have a 16 GB module on hand) and use the NVMe disk as some sort of cache, though from searching around I can't find anyone mentioning a beneficial way to use an SSD cache. Any thoughts?
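
    In case it matters, the exports are just one line per disk, roughly like this (the paths and subnet below are examples, not my actual setup):

    ```
    # /etc/exports -- one entry per JBOD disk; paths and subnet are examples
    /srv/dev-disk-by-label-disk1  192.168.1.0/24(rw,sync,no_subtree_check)
    /srv/dev-disk-by-label-disk2  192.168.1.0/24(rw,sync,no_subtree_check)
    ```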


    The H2
    https://www.hardkernel.com/shop/odroid-h2/

  • More RAM means bigger disk caches. That is very beneficial for local processing of disk data: indexing, searching, compiling, databases, VMs and so on. It doesn't help a lot with file sharing. How much RAM is desired depends on the workload and the "active set" of data accessed.
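
    You can see how much of your RAM the kernel is already using as disk cache with free. The numbers below are only an illustration:

    ```
    $ free -h
    #               total  used  free  shared  buff/cache  available
    # Mem:           7.6G  1.2G  0.5G     64M        5.9G        6.1G
    #
    # "buff/cache" is the page cache: the kernel grows it to fill idle
    # RAM and shrinks it automatically when applications need the memory.
    ```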


    The old 80/20 rule usually applies. 20% of the files on a disk are requested 80% of the time. But in practice, with bigger drives, it is closer to 95/5 or even more. That is why caches can help a lot.


    By default, write caches are emptied quickly. This is good because it reduces the risk of data loss in case of power failure, but bad because it lowers performance. Writes that are delayed longer can be better scheduled, merged, or even made redundant by a later write to the same block. It is possible to reduce the disk cache write pressure, but a UPS might be nice then...
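
    If you want to experiment with that, the relevant knobs are the vm.dirty_* sysctls. A sketch, with values that are examples rather than recommendations (and keep the power-loss risk in mind):

    ```
    # /etc/sysctl.d/90-writeback.conf -- example values, not recommendations
    vm.dirty_background_ratio = 10       # async writeback starts at 10% of RAM dirty
    vm.dirty_ratio = 40                  # writers are throttled at 40% dirty
    vm.dirty_expire_centisecs = 3000     # a page may stay dirty for up to 30 s
    vm.dirty_writeback_centisecs = 1500  # flusher thread wakes every 15 s
    ```

    Load the new values with sysctl --system (or reboot).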


    It should be possible to set up bcache to provide SSD-cached disk access for OMV, but OMV itself has no support for it. I have used bcache, but not with OMV. It works very well!
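
    Roughly, the setup looks like this. This is a sketch from memory, the device names are assumptions, and it destroys any existing data on both devices:

    ```
    apt-get install bcache-tools
    # /dev/sda = backing HDD, /dev/nvme0n1p2 = SSD cache partition (assumed names)
    make-bcache -B /dev/sda -C /dev/nvme0n1p2   # create and attach in one step
    mkfs.ext4 /dev/bcache0                      # format the combined device
    mount /dev/bcache0 /srv/cached-disk
    ```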


    If the server accesses data over the network, network reads can also be cached, on SSD, HDD or in RAM. This is much more mature using fscache with NFS. It can be used both on servers reading files from other servers and directly on clients. I have used it both ways, but with OMV only as a client. It can hugely benefit a laptop on slow wifi accessing a NAS.
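
    On a Debian-based client the setup is small. The server name and export path below are placeholders:

    ```
    apt-get install cachefilesd
    # Debian requires enabling the daemon explicitly:
    echo 'RUN=yes' >> /etc/default/cachefilesd
    systemctl restart cachefilesd
    # The 'fsc' mount option routes NFS reads through fscache:
    mount -t nfs -o fsc nas:/export /mnt/nas
    ```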


    Extra caches like this are a great way to improve performance, but they make things much more complex and more likely to fail. Backups become more important!


    I'm setting up a small home application server to run VMs and Docker containers, mostly accessing files from other NAS boxes over NFS/GbE. The parts haven't all been delivered yet...


    It will be an ASRock DeskMini A300 with an AMD Ryzen 5 3400G, 32 GB RAM and a 1 TB 970 EVO Plus NVMe, plus perhaps a 2 TB SATA SSD. I intended to use 2x1 TB NVMe, but I received the NVMe drives early and made the mistake of putting one in my laptop. It will have to stay there...


    It will not use any extra disk cache, but it will use a large amount of SSD storage for fscache, to cache NFS reads from several HC2 OMV NAS units.
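
    The cache location and culling thresholds live in /etc/cachefilesd.conf. A sketch with example values; the directory must sit on the big SSD filesystem:

    ```
    # /etc/cachefilesd.conf -- directory and thresholds are examples
    dir /var/cache/fscache   # put this on the SSD
    tag mycache
    brun  10%                # culling stops when more than 10% space is free
    bcull  7%                # culling starts when free space drops below 7%
    bstop  3%                # caching pauses when free space drops below 3%
    ```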

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

