New NAS (NanoPi M4V2) with lvmcache: some questions

  • Hi everybody,


    As a long-time reader, I've just registered on this forum a moment ago. For a while now I've been planning a complete rebuild of my home network. The NAS will only be a small piece of it, but even if I don't have great demands, I think it's important enough to take some time planning. My old QNAP TS-219P, running Debian stretch, has been in service for more than a decade; time to say goodbye.


    To give you an idea of my project here's a small summary:
    Besides the NAS, which will run as a "legacy" install, there will be a number of Docker hosts holding the different services in my home. High availability is an issue, but after a long time of weighing pros and cons, I decided on a special approach. In my opinion, real HA is far too expensive and not necessary in my situation. As soon as one problem is solved the next one will occur, and in the end it would result in clustered file systems, redundant switches and other crazy stuff.



    So what will I do instead?
    My Internet access is LTE only; no other options are available. For now a single router connects to the Internet, providing more than enough bandwidth, more than I ever had with wired connections before. Here I will possibly put a UniFi USG between the internal network and the router, and connect a second router (both in bridge mode, of course) to the USG for load balancing. Not because I need it, but because I have some SIM cards available at no extra cost, and because it's fun to see how far I can go in increasing the bandwidth to the Internet. No HA at all, as the USG can fail and I'm out of Internet. So I will connect a third router to the network, this one in cold standby with the same IP as the USG but powered off. As soon as the USG fails, I just have to cut the power from the USG and fire up the spare router. This can even be automated.
    DHCP, DNS, Pi-hole and so on, even Home Assistant if I find a way to do so, will run in Docker containers, at least two instances of each on different hosts with no network-attached storage, so the services don't depend on the NAS, and even if a Docker host fails completely, all services are still available. To make this even more HA, the hosts will connect to different switches. Of course, if one of them fails not everything will work, but there will still be a path to the Internet, and enough lights should still work in the house for me to find the broken hardware.
    The NAS itself will have no hardware redundancy and no RAID. All it will do is regular backups and snapshots to a second storage. What that will be I haven't decided yet, but probably a Docker container as well. As all services relying on the NAS are non-critical (media streaming, photo archive and so on; I call them comfort services), a manual hardware swap, even if it's next day or next week, will do the job.
    For the NAS I decided to go with a NanoPi M4V2 with SATA hat. Because the introduction to my ideas got longer than expected and I haven't described my plans for the NAS yet, I will stop here and describe my planned NAS setup, and the open points I have questions about, in a second post.
    At least you can already see what crazy stuff is spinning around in my head. :/

    I have long felt that most computers today are not powered by electricity.
    They instead seem to be powered by the "pumping" motion of the mouse! --William Shotts

  • In my first post I wanted to introduce myself and give a short overview of my project before asking my questions. As that got long and confusing, I will get closer to my questions about the planned NAS in this post.


    The NAS I'm building will be based on the NanoPi M4V2 with SATA hat. I decided on a simple setup with one HDD in the range of 4-10 TB. The load on the NAS is not huge and, as I wrote in my first post, only comfort services rely on it. At this point I could already be finished: put an SD card in the board and OMV will be up and running.
    But I would never have come here if I had no other ideas. I want to attach an SSD for the operating system to the board, and a second one for bcache. That leaves a fourth SATA port for future upgrades.
    My question is whether it's worth setting up bcache on such hardware, and what performance gains I can expect, if any.
    How do the network, SATA and USB perform together on this board? I'm just curious how far one can go with such hardware at still low cost. Of course, one could attach four IronWolf 4 TB SSDs, but then it's not low cost anymore. ;-)
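    For reference, what I'm planning would look roughly like this. The device names are placeholders (/dev/sda for the data HDD, /dev/sdb for the caching SSD), and make-bcache destroys whatever is on those devices:

    ```shell
    # Hypothetical bcache setup sketch; requires the bcache-tools package.
    # WARNING: this wipes /dev/sda and /dev/sdb.
    make-bcache -C /dev/sdb -B /dev/sda   # create cache set + backing device, auto-attached
    mkfs.ext4 /dev/bcache0                # the combined, cached block device
    echo writeback > /sys/block/bcache0/bcache/cache_mode   # default is writethrough
    ```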



  • My question is whether it's worth setting up bcache on such hardware, and what performance gains I can expect, if any.

    I don't think it is worth the effort. A single drive can saturate gigabit networking.


    How do the network, SATA and USB perform together on this board?

    Depends on how you are using them. Please don't say raid...

    omv 5.5.5 usul | 64 bit | 5.4 proxmox kernel | omvextrasorg 5.3.5
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • I don't think it is worth the effort. A single drive can saturate gigabit networking.

    With my current NAS this is true when copying one large file. When copying a large number of small files, performance drops badly. My hope is that this can be improved with caching. I've read about this and it seems to be the case; the question is whether a device like a NanoPi has enough resources for this kind of thing.
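    To make the difference visible, this is roughly how I measure it: the same 100 MB copied once as a single file and once as 1000 files of 100 KB. The temp dirs here are just stand-ins; in practice the destination is the NAS share:

    ```shell
    # Rough benchmark sketch: one big file vs. many small files.
    src=$(mktemp -d); dst=$(mktemp -d)

    dd if=/dev/zero of="$src/big.bin" bs=1M count=100 2>/dev/null
    for i in $(seq 1 1000); do
        dd if=/dev/zero of="$src/small_$i.bin" bs=100K count=1 2>/dev/null
    done
    sync

    time cp "$src/big.bin" "$dst/"        # one large file: sequential, fast
    time cp "$src"/small_*.bin "$dst/"    # many small files: per-file overhead dominates
    rm -rf "$src" "$dst"
    ```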


    Depends on how you are using them. Please don't say raid

    I wrote earlier: no RAID at all. ;-) I mean: are the SATA channels independent of each other, and is, for example, the network independent of USB, or is there shared hardware that could decrease performance?


  • I mean: are the SATA channels independent of each other, and is, for example, the network independent of USB, or is there shared hardware that could decrease performance?

    The hat has a SATA controller connected to a PCIe x2 link. This is not shared with USB.


  • The hat has a SATA controller connected to a PCIe x2 link. This is not shared with USB.

    This is good news. How about network and USB, are they shared or not?
    Regarding bcache, is there any experience with it? Otherwise I'll give it a try. One could even try bonding multiple USB Ethernet adapters. I'm really wondering what's possible with this little thing. Theoretically, 5 Gbit/s could be possible over USB 3.
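    Just to sketch the bonding idea. The interface names eth1/eth2 are placeholders for the USB adapters, and balance-rr only spreads a single stream across links if the switch or peer plays along:

    ```shell
    # Hypothetical: bond two USB NICs with iproute2 (run as root).
    # Check the real interface names with 'ip link' after plugging the adapters in.
    ip link add bond0 type bond mode balance-rr miimon 100
    ip link set eth1 down; ip link set eth1 master bond0
    ip link set eth2 down; ip link set eth2 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0   # example address
    ```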


  • How about network and USB, are they shared or not?

    No. It has native gigabit ethernet.

    Regarding bcache, is there any experience with it?

    Nope.


    One could even try bonding multiple USB Ethernet adapters. I'm really wondering what's possible with this little thing. Theoretically, 5 Gbit/s could be possible over USB 3.

    I think that is going a bit far. If you need that kind of network speed, I would look at the Clearfog or the Macchiato Bin. I think it only has one USB controller, so those four ports are on a hub sharing the bandwidth.


  • Thank you, that gives me a number. It's not about the need; it's just to see what would be possible. I don't even have the network infrastructure for it, and I don't need it to outperform the mentioned setup.
    But I think I'll give bcache a try. A 240 GB IronWolf isn't expensive and can be used elsewhere if the test doesn't satisfy. Now I first have to order the parts; I'll get back when I have the first experiences and results.


  • This sounds like good news. However, the NanoPi is already ordered, and whenever they release such a board I will find something to do with it other than NAS duty. ;-)
    Maybe the backup NAS *lol*


  • Keep an eye on the FriendlyARM web site and releases for the next month. I would not want you to be sorry when you see a new product perfect for your needs.


    Maybe SATA and M.2 PCIe in a smaller package than the SOM-RK3399 dev board. Housing? WiFi? Dual GigE?

    This sounds like it might be very popular around here. I would love to know more :)


  • I said earlier that I'd give SSD caching a try. Well, I can't yet tell whether it's a performance gain or not, or whether performance is even worse than direct access to the hard disk. But there is one point that could be of interest: if it were possible to keep HDD spin-ups as rare as possible, this could be an advantage towards a low-power NAS.
    I have now set up lvmcache with the data on the HDD and the cache and metadata on the SSD. I've also configured an external journal for the ext4 file system, even though I don't know whether that's an advantage. I can clearly see the read cache usage getting higher and higher, which makes the solution look promising. But, and this is my biggest problem at the moment, CacheReadHits keep counting up for hours, which means the HDD will not go to standby.
    Unfortunately there is little documentation about lvmcache, and I don't understand what migration threshold and chunk size mean or what impact they have.
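    What I've pieced together so far, with the VG/LV names vg0/data as placeholders: the chunk size is the granularity (in KiB) at which dm-cache tracks and moves blocks, and it is fixed when the cache pool is created; migration_threshold caps how much migration I/O is in flight at once, in 512-byte sectors, and must be at least 8 chunks or migrations stall.

    ```shell
    # Inspect the current cache policy, settings and chunk size (names are placeholders).
    lvs -o+cache_policy,cache_settings,chunksize vg0/data
    # Raise the migration threshold (value in 512-byte sectors, >= 8x chunk size).
    lvchange --cachesettings 'migration_threshold=2048' vg0/data
    ```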


    I would greatly appreciate it if somebody out there could explain these settings to me, and maybe somebody can tell me how to keep the HDD in standby as much as possible.



    My idea is to read from and write to the cache, write from the cache to the disk only when the cache gets full, and read as much as possible from the cache without touching the HDD.
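    That behaviour roughly corresponds to writeback mode. A sketch of what I plan to try, where vg0/data and /dev/sda are placeholders for my cached LV and the HDD:

    ```shell
    # Writeback keeps writes on the SSD and flushes them to the HDD lazily,
    # which should let the disk sleep longer (at the risk of losing dirty data
    # if the SSD dies). The default lvmcache mode is writethrough.
    lvchange --cachemode writeback vg0/data
    # Let the HDD spin down after 10 minutes idle (-S 120 means 120 * 5 seconds).
    hdparm -S 120 /dev/sda
    ```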


  • Some progress here. I was running watch lvs to monitor the cache activity and didn't notice that lvs makes a disk access every time it runs. Now the disks go to standby. I'm able to browse the file system without spinning up the HDD. At the moment I believe access times to the NAS are faster when the HDD is in standby, but I'm not sure about that. I will need to split the cache away now to compare the behavior.
    I still need more documentation about the configuration of lvmcache.
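    For monitoring without waking the disk, this seems to work: lvs scans the LVM metadata on the physical volumes, while dmsetup only asks the kernel. "vg0-data" is a placeholder for the device-mapper name (VG and LV joined by a dash):

    ```shell
    # Poll the dm-cache status from the kernel; no disk access involved.
    watch -n 5 'dmsetup status vg0-data'
    # Fields after "cache": metadata block size, used/total metadata blocks,
    # cache block size, used/total cache blocks, read hits, read misses,
    # write hits, write misses, demotions, promotions, dirty blocks, ...
    ```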


  • Hi lusteri, I'm looking into SSD caching too. Right now I have OMV 4.x on an old but robust HP DC77000 MT with 6 GB of RAM, a 2.5" 160 GB system disk and a RAID1 of 2x2 TB, and I want to update it to 5.x.


    I have an unused 240 GB SSD, a SATA3 PCIe controller (the HP is SATA2) and a spare Gigabit NIC. I'm thinking of putting them into my existing NAS; the main goal is to speed up my customers' backups, as well as restoring them. With SSD caching and dual Gigabit I think, and hope, to bring my current maximum transfer rate from 120 MB/s to 200-240 MB/s.


    All in all, if it doesn't work well I will at least have a faster reboot (for the few cases where a reboot is needed; right now the cycle takes 2 minutes...) and some SSD space to fiddle with :)


    Tomorrow i'll start, wish me luck ;)

  • @frik I haven't been here for a while. At the moment I'm occupied with one of my other projects: I decided to migrate all services in my network except the NAS into a Docker Swarm. Quite some work... =O My projects are all on ARM-based SBCs; still, I'd like to hear about your progress. :thumbup:

