Why I chose an N100 over a Raspberry PI5

    • Official Post

    I have a server at a relative's house, an old Core i5-2500K whose motherboard has started to fail, so I have to replace the hardware. This server acts as a remote backup of my own server, and also provides a media center and other services in that home. So I looked for alternatives covering these basic needs:


    - Connect at least the 4 existing hard drives from the original server through SATA ports.

    - Preferably, though not essential, reuse the mini-ITX case and power supply that house the original server.

    - Docker: Jellyfin (hardware transcoding), Syncthing, Duplicati.

    - Plugins: mergerfs, compose, wireguard, nut.

    - Scheduled remote synchronization jobs.
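    The last item above, the scheduled remote synchronization, can be sketched as a cron-driven rsync over the WireGuard tunnel. Everything below (the peer address, the paths, the schedule) is a hypothetical placeholder, not the actual configuration:

```shell
# Hypothetical sketch: pull the main server's backup share over the WireGuard
# tunnel. The peer address and both paths are placeholders.
rsync -aH --delete \
  root@10.10.10.1:/srv/backup/ \
  /srv/dev-disk-by-label-data1/mirror/
```

    A cron entry (or an OMV "Scheduled Tasks" job) would then run this nightly, e.g. at 03:00 with `0 3 * * *`.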


    It's not much; any basic hardware would do. After surveying the current market, I settled on this:


    Bought:

    - Asus PRIME N100I-D D4 motherboard with integrated low-power N100 processor -> €121

    - Crucial RAM 16GB DDR4 3200MHz CT16G4SFRA32A -> €35


    Reused:

    - Old mini-ITX case whose model I don't remember.

    - BeQuiet 300W power supply.

    - Siba PCIe x1 to 4 SATA Adapter (reused only for the good of the planet)

    - 4 hard drives.


    This more than covers the needs. In reality this system doesn't require much performance, but the extra headroom is welcome as long as consumption stays low.


    • N100 versus Raspberry PI5.


    The Raspberry PI5 was my first thought; it's starting to look attractive for a NAS now that SATA ports can be added through its PCIe port. But I compared it with the low-power x86 processors on today's market and it came out the loser.

    See the numbers here https://www.cpu-monkey.com/es/…2-vs-intel_processor_n100 and here https://gadgetversus.com/proce…-vs-intel-processor-n100/

    Intel's N100 processor surpasses the Raspberry PI5 across the board. I chose it for the following reasons:


    - Performance: spectacular on paper, judging by comparisons on the internet. Superior to the Core i5-2500K in the current server, which exceeded my expectations, and similar to the G6400 in my main server, which I didn't expect either. Superior to the Raspberry PI5 in all tests, with a GPU capable of transcoding current codecs.


    - Consumption: TDP = 6W. What more could you ask for? The Raspberry PI5 has a TDP of 12W, twice as much, and the Raspberry PI4 a TDP of 7.5W (see the links above). TDP may not match actual consumption, but the figures should still be in the same ballpark. We're not going to argue over 4W up or down at full load when the machine will be idle most of the time.


    - Price: similar to a Raspberry PI5. Realistically, once you've bought everything needed to get the Raspberry running (case, heatsink, HAT, power supply, etc.) you will have spent the same as on a motherboard with an N100, 8GB of RAM, and a PC case with a power supply.

    Raspberry PI5 8GB -> €95.95

    Case -> €11.50 (official Raspberry; the hard drives still need housing).

    Power supply -> €14.95 (official Raspberry; the hard drives still need power).

    PCIe Hat -> €31.95

    PCIe to SATA adapter -> €40


    - AMD64 architecture versus the Raspberry's ARM architecture. This makes it directly compatible with Debian and OMV and everything that comes with them, without complications.


    - The option of buying the N100 integrated into a mini-ITX board with SATA ports, so the server can be built in any case with room for several drives and an ATX power supply. MiniPCs are not a good option unless a single drive is enough; connecting drives over USB is a bad idea in general.


    - Greater connectivity on a mini-ITX board compared to the Raspberry PI5. Note: the Raspberry PI5's PCIe 3.0 x1 port offers a total bandwidth of 985MB/s. With 4 mechanical hard drives that is already becoming a bottleneck, and connecting more drives lowers the simultaneous access speed further. With SSDs there should be no more than two in total unless you don't care about performance.


    - RAM: the N100 supports up to 16GB, compared to 8GB for the Raspberry. The price goes up by €18, but I'll have 16GB of RAM; I don't need it today, but maybe tomorrow I will. Note: unofficially the N100 can handle up to 32GB of RAM, perhaps more, despite the maximum capacity officially supported by motherboard manufacturers.


    - The N100's passive heatsink versus the small, noisy fan the Raspberry PI5 needs. I hate the noise small fans make.


    I honestly can't think of a reason to build a server on a Raspberry PI5 after reviewing the N100. Someone might point to size, since the Raspberry is very small, but if you want to connect several hard drives you end up at the same total volume. If a single drive is enough, you can buy an N100 miniPC for the same price with a bit of searching. I think the Raspberry PI5 has arrived too late to the NAS world: before, I didn't like the Raspberry because it had no SATA ports, and now that SATA is possible and I'm considering it, it turns out there are better options on amd64. Anyone can build an N100 system; you don't even need to get your hands dirty with thermal paste and heatsinks, since the CPU and heatsink come mounted on the motherboard.


    • Choice of motherboard.


    Right now there are three motherboard options on the market with the N100 processor, two from Asrock and one from Asus, all with passive CPU cooling. So there isn't much to choose from, apart from hard-to-find industrial boards; I expect the options will grow over time. I chose the Asus one almost by default:


    Asrock N100M -> Slightly larger than mini-ITX without being micro-ATX (22.6cm x 17.8cm). For other configurations this board is interesting, but I prefer a mini-ITX that fits the case I already have, so it's out of the question.

    Asrock N100DC-ITX -> It is mini-ITX but has no ATX power connector; instead it takes DC input from an external power adapter on the rear panel. It has two SATA ports, but I need to connect 4 drives, and without a connection for a standard ATX power supply I rule it out. Since there is another option, I don't need DIY wiring to power more disks.

    Asus PRIME N100I-D D4 -> The chosen board. It is mini-ITX and has an ATX power supply connector. It has only one SATA port, so an adapter must be added. It offers one PCIe 3.0 x1 slot and one miniPCIe 3.0 x2 slot (with 2 real lanes). This rules out a SAS HBA (which needs a PCIe x4 connection) and forces you to look for other kinds of adapters to make use of those slots. There are options on the market, so it's not a problem; you just have to choose the right ones.


    • PCIe to SATA adapter.


    The keys to choosing a PCIe to SATA adapter are:

    - Never choose an adapter whose chip uses a port multiplier, especially if you are considering a RAID configuration. Most adapters on the market use one.

    - Choose an adapter with a connection that has enough bandwidth for the drives you are going to connect to avoid bottlenecks.

    In this link you can see a detailed explanation of all this https://forums.unraid.net/topi…d-controllers-for-unraid/
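    A quick way to check the first point on hardware you already own: the kernel's libata driver announces port multipliers as "PMP" in the boot log, and lspci identifies the controller chip so you can look it up against lists like the one linked above. This is a generic diagnostic, not specific to any one card:

```shell
# Look for port-multiplier (PMP) announcements from libata and identify the
# SATA controller chip. No PMP lines is the good outcome: direct-attached ports.
dmesg | grep -i pmp
lspci | grep -i sata
```

    Note that grep exits non-zero when it finds nothing, which here is exactly what you want to see.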


    The PCIe to SATA card that I'm going to install deserves to be kicked out the window, but since it was already sitting in a drawer, throwing it away would only add to the waste :) so I'm going to reuse it. It is a Siba-brand PCIe 2.0 x1 to 4-port SATA card with a Marvell chip that uses a port multiplier with FIS. When I bought it years ago I had no idea what I was buying.


    Most, if not all, Marvell chips use port multipliers. This is not ideal for performance: a port multiplier splits one SATA port into several so that multiple hard drives can share the same port, which makes operations slower in general. In this case the adapter supports FIS-based switching, so at least simultaneous operations on the disks are possible. The real problem with port multipliers is putting disks in RAID: they can cause failures during RAID recoveries, strange RAID behaviour, and disks dropping out of the array for no apparent reason. Here I have no intention of using any kind of RAID and performance doesn't worry me much; only data disks will be connected, access will generally be to a single disk to serve a media file, and massive file moves from one disk to another will never happen.


    On the other hand, this card will sit in a PCIe 3.0 x1 slot, but the real speed will be PCIe 2.0, which is what the card supports, so I won't even use the PCIe 3.0 bandwidth the motherboard offers. A PCIe 2.0 x1 link provides 500MB/s of bandwidth, which divided among 4 SATA ports gives 125MB/s of simultaneous bandwidth per port. That is a pittance, less than half of what you would expect, but it is enough in this case; at least it won't be the bottleneck on a gigabit network. One of the drives is a Docker SSD that will connect to the motherboard's SATA III port with its full 750MB/s of bandwidth, so only 3 mechanical drives will go on the adapter. That improves things a bit, leaving about 167MB/s of simultaneous bandwidth per disk. It could still be a slight bottleneck, but it is acceptable for this use case with mergerfs and little to no concurrent access, so let's save the planet and reuse the old hardware.


    If I had not had this adapter, I would have bought a 6-port SATA adapter for the miniPCIe 3.0 x2 slot with an Asmedia ASM1166 chip. This chip does not use a port multiplier, so it is suitable for software RAID if needed, and the PCIe 3.0 x2 link offers 1970MB/s of bandwidth. Connected to the x2 slot (two real lanes) of the Asus board, it could use its full potential: with simultaneous access to all six disks there would be 328MB/s per disk, more than enough for a mechanical drive. And if more SATA ports were needed, a second adapter could still go into the board's PCIe 3.0 x1 slot, whose 985MB/s would be enough for two more disks, or up to four with a possibly acceptable bottleneck, depending on the drives' maximum access speeds. All these calculations would drastically reduce the number of usable ports if we were talking about SSDs with their higher access speeds.
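    The per-disk figures in the last two paragraphs are just the link bandwidth divided by the number of ports sharing it. A trivial shell helper reproduces them (integer division, so 500/3 shows as 166 rather than the rounded 167 above):

```shell
# Per-disk bandwidth when a PCIe link is shared across N SATA ports (MB/s).
per_disk() { echo $(( $1 / $2 )); }

per_disk 500 4    # PCIe 2.0 x1, 4 disks on the Siba card   -> 125
per_disk 500 3    # same card with only 3 mechanical disks  -> 166
per_disk 1970 6   # PCIe 3.0 x2, 6 disks on an ASM1166 card -> 328
per_disk 985 2    # PCIe 3.0 x1, 2 further disks            -> 492
```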


    In terms of connectivity, comparing again with the Raspberry PI5 and its single PCIe x1 port, the N100 board offers many more connections for the same price.


    • Power supply.


    The maximum total consumption of this system, with 3 mechanical disks and an SSD, will never exceed 80W, and will probably be less. A 150W power supply, or 120W if it is a quality unit, would therefore be adequate to power it safely.


    Then the question arises whether to install a picoPSU; there are models on the market with 4 SATA connectors that could work. Another option would be a case with a flexATX power supply, though I have always hated them; I hate the noise small fans make, have I mentioned that...? And the third option is a standard ATX or SFX power supply, although the smallest are usually 300W.


    Power supplies have an efficiency curve whose optimal working point is around 50% of their rated power. Here the maximum power needed is 80W, but the usual working power will be much lower, so a picoPSU would be ideal. The problem with picoPSUs is that they are a niche market, which makes it hard to find quality hardware with real guarantees. So I prefer to waste a little energy running a standard power supply below its optimal efficiency zone rather than install hardware that could give me trouble in the medium term or even take the rest of the hardware down with it.


    That said, since in this case I have the 300W BeQuiet power supply, I'm reusing it; it seems the most reasonable option. At the end of the day, any one of the hard drives it could damage is worth more than the motherboard and RAM combined, and I intend for the drives to last many years.

    • Assembly and final result.


    When I receive the parts I will try to post some photos of the assembly and my impressions of how it actually works.


    ____________________________________________________________________________________


    Edited on November 23, 2023


    It took a while but the motherboard finally arrived :)




    • OMV installation:

    Right now the installation of OMV6 on this motherboard is a bit peculiar. Keep the following points in mind:


    - The current OMV6 ISO cannot start the installer; it hangs on the boot screen. So OMV can only be installed by installing Debian 11 first and then OMV6 with the installation script https://wiki.omv-extras.org/do…6:alternate_amd64_install

    - The BIOS of this motherboard does not allow booting in legacy mode, or at least I haven't found a way, so you must install in EFI mode for boot to work.

    - During installation it asked me for a driver for the network interface; inserting a USB stick with the driver solves it.

    - Debian installs AppArmor by default, and it must be disabled after installing OMV to avoid some problems.

    - To enable hardware decoding on the Alder Lake iGPU you need to install the Proxmox 6.2 kernel using the openmediavault-kernel plugin.
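    For reference, the AppArmor point above boils down to something like this on the freshly installed Debian 11 system (run as root; a sketch of the idea, not necessarily the exact commands used):

```shell
# Disable and remove AppArmor, which Debian 11 enables by default and which
# can interfere with OMV6.
systemctl disable --now apparmor
apt-get -y purge apparmor
```

    The Proxmox 6.2 kernel itself is installed from the OMV web UI through the openmediavault-kernel plugin; after rebooting, `uname -r` should report the 6.2 kernel.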


    After that everything works perfectly: no errors in the boot log or in operation. When the OMV7 ISO based on Debian 12 is available, all of this will be smoother.


    • Consumption:

    Later in this thread macom posted a couple of links with precise consumption figures for this board (thanks macom), so extensive testing isn't necessary. I made a few measurements purely out of curiosity; for each one I only waited 3 or 4 minutes, so the numbers would probably have dropped a little more given time. All measurements were made with the 300W Be Quiet power supply, which according to those links probably adds 3W or 4W. I did not disable audio or anything else in the BIOS. The RAM is a single 16GB Crucial module, and OMV6 was installed on a USB flash drive:


    300W power supply:

    Between 7.70W and 8.00W - Motherboard + OMV pendrive running

    Between 8.00W and 8.30W - Motherboard + OMV pendrive running + Noctua 80mm fan

    Between 12.10W and 12.50W - Motherboard + OMV pendrive running + Noctua 80mm fan + Siba PCIe to SATA card (without disks)

    Around 25.50W - Motherboard + OMV pendrive running + Noctua 80mm fan + Siba PCIe to SATA card + 1 HD 3.5" + 2 HD 2.5" + 1 SSD 2.5"

    Between 22.50W and 23.00W - The same configuration after an hour at rest. This will be the server's actual consumption most of the time. I'm not going to spin the disks down (a topic for another discussion).


    Of all this, what surprised me most was the 4W jump in consumption when the PCIe to SATA adapter card was connected.



    • Operation:

    So far everything is working perfectly, and very smoothly, both in the GUI and in Docker containers. I'm not going to install virtual machines on this server, but from what I've seen so far everything should run quite well.


    Jellyfin is running with hardware graphics acceleration on the Alder Lake iGPU. I tried streaming to 4 clients simultaneously, with the server transcoding all 4 streams, and CPU usage stayed between 9% and 12%, even though the clients were Windows browsers connected directly to the Jellyfin web player. This would probably improve further if the clients ran Kodi (plus the Jellyfin add-on for Kodi). Perfect.
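    For anyone reproducing this, a minimal sketch of a Jellyfin container with the iGPU passed through looks like the following; the paths and port are placeholder choices, not necessarily the exact setup used here:

```shell
# Pass the Alder Lake render node (/dev/dri) into the container so Jellyfin
# can use QSV/VA-API for hardware transcoding. Paths are placeholders.
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -p 8096:8096 \
  -v /srv/appdata/jellyfin:/config \
  -v /srv/media:/media:ro \
  jellyfin/jellyfin
```

    Hardware acceleration then still has to be selected inside Jellyfin itself (Dashboard > Playback).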


    The CPU temperature stays around 40ºC at idle in this mini-ITX case, with the 80mm fan regulated by the BIOS, as read from the OMV dashboard with the omv-cputemp plugin. The ambient temperature is about 20ºC.
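    If you want to cross-check the plugin's reading from the command line, the kernel exposes the same sensors in sysfs (zone naming varies by board; this is a generic Linux snippet, not specific to this motherboard):

```shell
# Print each thermal zone's type and temperature (sysfs reports millidegrees C).
for z in /sys/class/thermal/thermal_zone*/; do
  printf '%s: %d C\n' "$(cat "${z}type")" "$(( $(cat "${z}temp") / 1000 ))"
done
```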


    So for now everything is perfect, I am satisfied with this server.



  • Very interested in seeing how you progress further. A friend of mine and I were facing similar questions concerning the choice of hardware. While he went for an N100 ITX build, I decided to take the much less predictable route of going with an SBC.

    Why? Mostly because I like to tinker. Already running ARM on the desktop, I really wanted to do the same for my home NAS.

    I did not consider the Pi at all, though, but got pretty much fixated on the RK3588 SoC as it provides far better IO: a combo PCIe 2.0 x1 or SATA port, a PCIe 3.0 x4 M.2 socket and a 2.5Gbps NIC. Equipped with those capabilities and performance comparable to the N100, I took the plunge and went for a RADXA Rock 5B.

    Although we are talking about single-figure wattage improvements at idle, they are still pretty substantial: 8W for the N100 at idle (running Arch Linux, no storage) compared to less than 1.5W at idle (running Debian, no storage). For more resource-intensive tasks the N100 easily breaks 30-40W, while the Rock 5B is just fine with a 5V 15W power supply. That said, the power draw does not yet factor in the idle consumption of the PicoPSU that will power the SATA drives (and the SBC), or of the M.2 to SATA adapter in the final setup, but I don't think that will exceed 2W. Even adding those to the bill should still result in around 40kWh a year when purely idling.

    So while the N100's 6W TDP does not even reflect its idle power consumption once you measure the total system (mini-ITX board, RAM, boot medium, NIC and a high-quality PSU), the Rock 5B is almost on par with the performance-per-watt figures of Apple Silicon.


    The setup is much more involved, though. The hardware is still fairly new and much less standardised than the x86(_64) environment; it only just received solid mainline kernel support (6.7-rc1), and while GPU and video-decoder acceleration are great by now, they haven't yet trickled down to all projects. For instance, the Jellyfin Docker image (due to ffmpeg) still needs some modifications.

    • Official Post

    went for a RADXA Rock 5B.

    When I looked at the possibilities I didn't want to deviate too much from standard hardware. I've seen too many queries on the forum about unusual hardware that requires a lot of work to get running properly. I try to follow the KISS principle, so I limited myself to the Raspberry PI5 and the low-power x86 processors; I didn't look at anything else.


    Taking a quick look at the board you mention, it does seem attractive: 8-core CPU (4 of them lower speed), up to 16GB of RAM, 2.5Gbps Ethernet, PCIe 3.0 x4 (2 real lanes, careful with that, it is not a real x4). But the price seems to be well above the Raspberry PI5 and the N100; I don't know whether that is due to novelty or if it is the established price. In any case, even at a comparable price I would not have chosen this board; I prefer to avoid software compatibility problems. I hope you manage to overcome the difficulties.


    In a private conversation with a friend about this thread, the first thing he told me, quite forcefully, was that the consumption numbers I published were not real; that I was comparing TDP instead of actual consumption. So I edited the thread to note that those figures are TDP; I'm not trying to "mislead" anyone. All I did was gather information on Google to compare the two and bring it here so it would be useful to others; I've tried to be impartial, and the links are there. For me, consumption is just one more section of the comparison, not something essential. If there were no low-power CPUs with decent performance, I wouldn't consider it; I would install a conventional desktop CPU.


    I really think CPU consumption is not something to obsess over; once the 4 hard drives are finally connected, the biggest consumer will probably be the disks. Besides, the CPU will be idle most of the time. I wouldn't even mind installing a modern desktop CPU; at idle its consumption would be very contained. But at the current level of development it is easy to get a low-power CPU that performs well enough for many use cases, so it is welcome.


    I see you also put a lot of emphasis on the consumption issue. Although it doesn't worry me too much, I'll do my best to publish real consumption figures for the N100. I'll try to find Linux software for some testing; I wouldn't want to be forced to install Windows for this. I won't have much time, though: my relatives have been waiting for their server for a week now and the parts still haven't arrived :)

    • Official Post

    Here you'll find some power consumption values


    ASRock N100DC-ITX review - Mini-ITX motherboard with an Intel N100 processor
    With the ASRock N100DC-ITX we test the long-awaited successor to the very successful ASRock J5040-ITX mini-ITX motherboard, which we have often recommended..
    www.elefacts.de


    ASUS PRIME N100I-D D4 review - The ideal NAS mini-ITX motherboard?
    With the ASUS PRIME N100I-D D4 we look at the second motherboard with Intel's N100 processor, having previously tested the ASRock N100DC-ITX..
    www.elefacts.de

    • Official Post

    Here you'll find some power consumption values

    Thank you very much macom, I had not read those articles before. After reading them I don't think I need to do any testing; the little I had planned is already done there, including comparing consumption with an external power supply versus an ATX power supply. I have an Antec case with a 90W picoPSU and I was thinking of connecting the board to it to measure consumption, but now all that work is unnecessary :)


    Regarding the 10W TDP of the Asrock boards, I had already read about it and the truth is I don't like it much. It sounds like overclocking the CPU, and I don't need that; in my opinion the N100 already performs well enough, and I'm convinced it will shorten the chip's useful life in some way. The only real difference according to that article is GPU performance, but in a NAS the GPU will only ever be used for transcoding, never for games, so it makes no sense; the transcoding performance will be the same. If I'm wrong, someone correct me.


    And as for the final consumption figures, I can only say... I don't care whether they draw 3W more or less at full load. I was only thinking of doing tests because the topic seems to raise some interest. In my opinion, a NAS's consumption over a year depends mainly on the idle load, since that is its state 95% of the time, and in this case the difference is around 2W. These are such small figures that it seems ridiculous to chase a single watt when the case fan alone consumes that much, without even getting into the hard drives. When we are talking about consumption this low, a Raspberry PI5 matters about as much as anything else. But everyone decides according to their needs.

    • Official Post

    If you use the unofficial nonfree image of Debian 11, you don't need to install a network driver:

    https://cdimage.debian.org/cdi…-11.8.0-amd64-netinst.iso

    I ran into this when I was already mid-install, so I carried on.

    In fact, the Realtek RTL8168 network interface on this motherboard is giving me problems, it is running at 100Mbits and I can't solve it. Apparently there is a conflict between kernel 6.2 and this interface. https://www.mail-archive.com/s…169%5C%29%22&o=newest&f=1

    I guess this is the price to pay for such recent hardware. I hope Debian 12 and OMV7 fix it.

  • With the ISO I linked to, the Asus N100 board only had problems with video hardware acceleration; everything else worked, and the network ran at 1Gbit. I had installed OMV 6 as a test. Video hardware acceleration probably requires a kernel >= 6.2. With the Proxmox 6.2.x kernel, the video hardware acceleration worked and was passed through to the Jellyfin container, but Jellyfin was no longer reachable/usable. I won't do any more experiments with the board before OMV 7. OMV 6 now runs on an Esprimo Q556/2 with an i5-7500T (cheap leasing returns via eBay). Its idle power consumption is no higher than the Asus board's. Under load, of course, things look different, but most of the time my OMV sits idle.

    • Official Post

    I have no choice but to take it to its final location and install it now; there are people waiting for this server. So for now I will have to connect a USB network adapter until the driver problem is resolved, hopefully in OMV7...

    Jellyfin works without problems for me with graphics acceleration. Maybe you didn't configure the acceleration options correctly within Jellyfin itself.

  • I don't think I've misconfigured anything. The video hardware acceleration of the Asus N100 did not run under Debian 11 (kernel 6.1.x). You can't "pass it through" or configure it incorrectly. Under the "old" hardware (i5-7500T, J4105, J5040) the video hardware acceleration in the Jellyfin container works perfectly. In the OMV-VM/Jellyfin container under Proxmox on a Fujitsu Esprimo Q558 (I5-8500T), the video hardware acceleration also works perfectly.

    • Official Post

    The video hardware acceleration of the Asus N100 did not run under Debian 11 (kernel 6.1.x)

    OK, so that was the problem: for it to work you must install kernel 6.2. It didn't work for me with kernel 6.1 either.

    You can't "pass it through" or configure it incorrectly

    I was referring to the acceleration settings within Jellyfin. There are several options that require configuration.

  • That case looks really long for ITX. At first I thought it was a Shuttle, but they didn't use ATX supplies back then. Could it be an old Cooler Master?

    • Official Post

    That case looks really long for ITX. At first I thought it was a Shuttle, but they didn't use ATX supplies back then. Could it be an old Cooler Master?

    This is it; it took me a while to find a link but I found it: Codegen MX31-A11. It's quite old now https://be.hardware.info/behui…odegen-mx31a11-420w.70713

    The good thing about a high quality PC case is that it lasts more years than you expect.

    The bad thing about a low quality PC case like this is that it also lasts the same number of years. ^^



    The 3.5" disk makes too much noise; it has no shock absorbers. The two 2.5" disks sit in a 3.5"-to-2.5" adapter frame with rubber mounts. But in the place where it's installed the noise doesn't bother anyone too much, so...

    • Official Post

    In fact, the Realtek RTL8168 network interface on this motherboard is giving me problems, it is running at 100Mbits and I can't solve it.

    I received an update that installed the Realtek driver package and the problem was resolved; the network interface now works at normal gigabit speed. I can only assume I was making some mistake when trying to install that package manually. Fortunately Debian has solved it for me :)
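    For anyone hitting the same 100Mbit symptom, these are the generic checks I would suggest; the interface name is a placeholder, and firmware-realtek is my best guess at the package the update pulled in:

```shell
# Check the negotiated speed and the driver in use (enp2s0 is a placeholder).
ethtool enp2s0 | grep -i speed      # should show 1000Mb/s once fixed
ethtool -i enp2s0 | grep -i driver  # the in-kernel driver for RTL8168 is r8169

# On Debian 11 the Realtek firmware lives in the non-free firmware-realtek package:
apt-get install firmware-realtek
```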

  • I had to register to thank you chente for the writeup of your journey. It was a real pleasure reading your reasoning and arguments.

    It also convinced me to base my next home server on something like an N100 or a repurposed laptop instead of an RPi5.

    Anyway, thank you :)


    Btw I came here from Google when searching for a comparison between the N100 and the RPi5.

    • Official Post

    I had to register to thank you chente for the writeup of your journey. It was a real pleasure reading your reasoning and arguments.

    It also convinced me to base my next home server on something like an N100 or a repurposed laptop instead of an RPi5.

    Anyway, thank you :)


    Btw I came here from Google when searching for a comparison between the N100 and the RPi5.

    Thank you for the comment and welcome to the forum ;)

    • Official Post

    New board with an N100 processor that looks interesting for a NAS, with 6 SATA ports, 4 2.5G Ethernet ports and 2 M.2 NVMe slots.

    There is also a version with i3-N305 processor and 15W consumption.

    Link -> https://androidpc.es/placa-base-pasiva-intel-n100-i3-n305/

  • At least this i3 board has an open-ended PCIe x1 slot; the Asus board requires adapters for anything more than x1.
