I have a server at a relative's house. It's built around an old Core i5-2500K whose motherboard has started to fail, so I have to replace the hardware. This server acts as a remote backup for my own server and also provides a media center and other services in that home. So I looked for alternatives that cover these basic needs:
- Connect a minimum of 4 existing hard drives on the original server through SATA ports.
- Preferably, although not essential, reuse the mini-ITX case and power supply that house the original server.
- Docker: Jellyfin (hardware transcoding), Syncthing, Duplicati.
- Plugins: mergerfs, compose, wireguard, nut.
- Scheduled remote synchronization jobs.
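The post doesn't say how the scheduled synchronization jobs are implemented; purely as an illustration, here is a minimal sketch assuming rsync over SSH driven from cron. The host and path names are made up, not from the post.

```python
# Sketch of a pull-style sync job, assuming rsync over SSH.
# REMOTE and LOCAL are hypothetical placeholders.
REMOTE = "backup@main-server:/srv/data/"
LOCAL = "/srv/backup/data/"

def build_rsync_cmd(remote: str, local: str, dry_run: bool = False) -> list[str]:
    """Build the argument list for a mirroring rsync run."""
    cmd = ["rsync", "-a", "--delete", "--partial", remote, local]
    if dry_run:
        cmd.append("--dry-run")
    return cmd

# A cron entry such as:
#   0 3 * * *  /usr/bin/python3 /opt/sync.py
# could run subprocess.run(build_rsync_cmd(REMOTE, LOCAL), check=True) nightly.
print(" ".join(build_rsync_cmd(REMOTE, LOCAL)))
```

In practice Syncthing and Duplicati cover continuous sync and backups; a cron-driven rsync like this is just one way to add extra scheduled jobs on top.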
It's not much; any basic hardware would do. After looking at what the current market offers, I arrived at this:
Bought:
- Asus PRIME N100I-D D4 motherboard with integrated low-power N100 processor -> €121
- Crucial RAM 16GB DDR4 3200MHz CT16G4SFRA32A -> €35
Reused:
- Old mini-ITX case whose model I don't remember.
- BeQuiet 300W power supply.
- Siba PCIe x1 to 4 SATA Adapter (reused only for the good of the planet)
- 4 hard drives.
This more than covers the requirements. In reality the system doesn't need much performance, but the extra headroom is welcome as long as power consumption stays low.
- N100 versus Raspberry PI5.
The Raspberry PI5 was my first thought. It's starting to look attractive for a NAS now that SATA ports can be added through its PCIe connector, but I compared it with the low-power processors on the market today and it came out on the losing end.
See the numbers here https://www.cpu-monkey.com/es/…2-vs-intel_processor_n100 and here https://gadgetversus.com/proce…-vs-intel-processor-n100/
Intel's N100 processor surpasses the Raspberry PI5 in everything. I chose it for the following reasons:
- Performance: spectacular on paper, judging by comparisons found online. Superior to the Core i5-2500K in the current server, which exceeded my expectations; similar to the G6400 in my main server, which I didn't expect; superior to the Raspberry PI5 in every test. The GPU can hardware-transcode current codecs.
- Consumption: TDP = 6W. What more could you ask for? The Raspberry PI5 is rated at twice that (TDP = 12W) and the Raspberry PI4 at 7.5W (see the links above). TDP may not match real consumption exactly, but it should be close, and arguing over 4W either way at full load makes little sense when the machine will be idle most of the time.
- Price: similar to a Raspberry PI5. Realistically, by the time you have bought everything needed to get the Raspberry running (case, heatsink, HAT, power supply, etc.), you will have spent about the same as on a motherboard with an N100, 8GB of RAM and a PC case with a power supply.
Raspberry PI5 8GB -> €95.95
Case -> €11.50 (official Raspberry case; the hard drives would still need housing)
Power supply -> €14.95 (official Raspberry supply; the hard drives would still need their own power)
PCIe Hat -> €31.95
PCIe to SATA adapter -> €40
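Adding up the figures above makes the comparison concrete. Prices are the ones listed in the post; note that the N100 side reuses the existing case, power supply and SATA adapter:

```python
# Prices in EUR, as listed in the post.
pi5_parts = {
    "Raspberry PI5 8GB": 95.95,
    "Case": 11.50,
    "Power supply": 14.95,
    "PCIe HAT": 31.95,
    "PCIe to SATA adapter": 40.00,
}
n100_parts = {
    "Asus PRIME N100I-D D4": 121.00,
    "Crucial 16GB DDR4": 35.00,  # case, PSU and adapter are reused
}

pi5_total = sum(pi5_parts.values())
n100_total = sum(n100_parts.values())
print(f"PI5 build:  {pi5_total:.2f} EUR")   # 194.35
print(f"N100 build: {n100_total:.2f} EUR")  # 156.00
```

So a full PI5 NAS build actually comes out somewhat more expensive than the N100 board plus 16GB of RAM when the case and supply can be reused.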
- AMD64 architecture versus the Raspberry's ARM architecture. This makes it directly compatible with Debian, OMV and everything that comes with them, with no complications.
- The option of buying the N100 integrated into a mini-ITX board with SATA ports, so the server can go in any case with room for several drives and an ATX power supply. Mini PCs are not a good option unless a single drive is enough, and connecting drives over USB is a bad idea in general.
- Greater connectivity on a mini-ITX board than on the Raspberry PI5. Note: the Raspberry PI5's PCIe 3.0 x1 connection offers a total bandwidth of 985MB/s. For 4 mechanical hard drives that is already starting to be a bottleneck, and with more drives the simultaneous access speed drops further. With SSDs, there shouldn't be more than two in total unless you don't care about performance.
- RAM: the N100 supports up to 16GB versus 8GB on the Raspberry. The price goes up by €18, but I get 16GB; I don't need it today, but tomorrow maybe I will. Note: unofficially, the N100 can handle up to 32GB of RAM, perhaps more, despite the maximum capacity officially supported by motherboard manufacturers.
- Passive heatsink on the N100 versus the small, noisy fan the Raspberry PI5 needs. I hate the noise small fans make.
Honestly, after reviewing the N100 I can't think of any reason to build a server around a Raspberry PI5. Someone might bring up size, since the Raspberry is very small, but once you connect several hard drives you end up at the same total volume; and if a single drive is enough, you can buy an N100 mini PC for the same price with a bit of searching. I think the Raspberry PI5 has arrived too late to the NAS world: before, I didn't like the Raspberry because it had no SATA ports, and now that SATA can be added and I'm considering it, it turns out there are better options with amd64 architecture. Anyone can build an N100 system; you don't even need to get your hands dirty with thermal paste and heatsinks, since the CPU and its heatsink come already mounted on the motherboard.
- Choice of motherboard.
Right now there are three motherboard options on the market with the N100 processor, two from Asrock and one from Asus, all with passive CPU cooling. So there isn't much to choose from, apart from hard-to-find industrial boards; I guess the options will grow over time. I chose the Asus one almost by default:
Asrock N100M -> Somewhat larger than mini-ITX without being micro-ATX (22.6cm x 17.8cm). For other builds this board is interesting, but I prefer a mini-ITX that fits the case I already have, so it's out of the question.
Asrock N100DC-ITX -> It is mini-ITX, but it has no ATX power connector; instead it takes DC input from an external adapter on the rear panel. It has two SATA ports, but I need to connect 4 drives, and without a connection for a standard ATX power supply I rule it out. Since there is another option, I don't need DIY wiring to power more disks.
Asus PRIME N100I-D D4 -> This is the chosen board. It is mini-ITX and has an ATX power supply connector. It only has one SATA port, so an adapter is needed. It has one PCIe 3.0 x1 slot and one M.2 slot wired with 2 real PCIe 3.0 lanes (x2). That rules out a SAS HBA (which needs a PCIe x4 connection) and forces you to look for other kinds of adapters to make use of those ports. There are options on the market, so it's not a problem; you just have to choose the right ones.
- PCIe to SATA adapter.
The keys to choosing a PCIe to SATA adapter are:
- Never choose an adapter whose chip uses a port multiplier, especially if you are considering a RAID configuration. Most adapters on the market use one.
- Choose an adapter with a connection that has enough bandwidth for the drives you are going to connect to avoid bottlenecks.
In this link you can see a detailed explanation of all this https://forums.unraid.net/topi…d-controllers-for-unraid/
The PCIe to SATA card I'm going to install deserves a kick out the window, but since I already had it in a drawer, replacing it with new hardware would do the planet no favors, so I'm going to reuse it. It is a Siba-brand PCIe 2.0 x1 to 4-port SATA card with a Marvell chip that uses a port multiplier with FIS. When I bought it years ago I had no idea what I was buying.
Most, if not all, Marvell chips use port multipliers. This is not ideal for performance: a port multiplier splits one SATA port into several so that multiple hard drives can hang off the same port, which makes operations slower in general. In this case the adapter supports FIS-based switching, so at least the disks can be accessed simultaneously. The real problem with port multipliers is RAID: they can cause failures during RAID recoveries, strange RAID behavior, and disks dropping out of the array for no apparent reason. Here I have no intention of using any kind of RAID and performance doesn't worry me much; only data disks will be connected, access will generally hit a single disk to serve a media file, and massive file moves from one disk to another will never happen.
On the other hand, this card will sit in a PCIe 3.0 x1 slot, but the real speed will be PCIe 2.0, which is what the card supports, so I won't even use the PCIe 3.0 bandwidth the motherboard offers. A PCIe 2.0 x1 link provides 500MB/s, which divided among 4 SATA ports leaves 125MB/s of simultaneous bandwidth per port. That is a pittance, less than half of what you would expect, but it is enough here; at least it won't be the bottleneck on a gigabit network. One of the drives is the SSD for Docker, which will connect to the motherboard's SATA III port with its full ~600MB/s of bandwidth, so only 3 mechanical drives will hang off the adapter. That improves things a bit, leaving 167MB/s of simultaneous bandwidth per disk. It could still be a mild bottleneck, but it's acceptable for this use case with mergerfs and little to no concurrent access, so let's save the planet and reuse the old hardware.
If I hadn't had this adapter, I would have bought a 6-port SATA adapter with an ASMedia ASM1166 chip. This chip does not use a port multiplier, so it is suitable for software RAID if needed, and its PCIe 3.0 x2 link provides 1970MB/s of bandwidth. Connected to the Asus board's M.2 slot (two real lanes), it would run at full potential: with simultaneous access to six disks there would be 328MB/s per disk, more than enough for a mechanical drive. And if more SATA ports were needed, a second adapter could still go in the board's PCIe 3.0 x1 slot, whose 985MB/s would cover two more disks comfortably, or up to four with a possibly acceptable bottleneck, depending on the maximum access speed of the connected drives. All these calculations would allow drastically fewer ports per link if we were talking about SSDs, which have higher access speeds.
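All the per-disk figures above come from the same simple division; a quick sketch to reproduce them, using the link bandwidths cited in the post (protocol overhead ignored):

```python
def per_disk_bw(link_mb_s: float, n_disks: int) -> float:
    """Worst-case simultaneous bandwidth per disk when n_disks
    share one upstream link (ignores protocol overhead)."""
    return link_mb_s / n_disks

# Link bandwidths in MB/s, as cited in the post.
PCIE2_X1 = 500    # the old Siba card's link
PCIE3_X1 = 985    # the board's PCIe 3.0 x1 slot
PCIE3_X2 = 1970   # the board's x2 slot

print(per_disk_bw(PCIE2_X1, 4))  # 125.0  -> 4 HDDs on the Siba card
print(per_disk_bw(PCIE2_X1, 3))  # ~166.7 -> 3 HDDs, SSD moved to onboard SATA
print(per_disk_bw(PCIE3_X2, 6))  # ~328.3 -> 6 disks on an ASM1166 adapter
```

The same function also shows why SSDs change the math: with drives that can saturate ~550MB/s each, even the x2 link only feeds about three of them at full speed.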
On connectivity, then, comparing again with the Raspberry PI5 and its single PCIe x1 connection, the N100 board gives many more options for the same price.
- Power supply.
The maximum total consumption of this system, with 3 mechanical drives and an SSD, will never exceed 80W, and will probably stay lower. A 150W power supply, or 120W if it's a quality unit, would therefore be enough to power it safely.
The question then is whether to install a picoPSU; there are models on the market with 4 SATA connectors that could work. Another option would be a case with a flexATX power supply, although I have always hated them; I hate the noise small fans make, have I mentioned that already...? The third option is a standard ATX or SFX supply, although the smallest ones are usually 300W.
Power supplies have an efficiency curve whose optimum working point is around 50% of rated power. Here the maximum load is 80W, but the usual load will be much lower, so a picoPSU would be ideal. The problem with picoPSUs is that they are a niche market, which makes it hard to find quality hardware with real guarantees. So I prefer to waste a little energy running a standard supply below its optimal efficiency zone rather than install hardware that could give me problems in the medium term, or even take the rest of the hardware with it when it fails.
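To put a rough number on that trade-off, here is a sketch of the extra wall draw from running a supply below its sweet spot. The load and efficiency figures are illustrative assumptions, not measurements of this hardware:

```python
def wall_draw(load_w: float, efficiency: float) -> float:
    """Power drawn from the wall for a given DC load."""
    return load_w / efficiency

# Assumed mostly-idle load and efficiencies; illustrative only.
idle_load = 30.0      # W at the DC side
eff_low_load = 0.80   # 300W ATX supply far below its ~50% sweet spot
eff_sweet = 0.90      # the same supply running near its optimum

extra_w = wall_draw(idle_load, eff_low_load) - wall_draw(idle_load, eff_sweet)
extra_kwh_year = extra_w * 24 * 365 / 1000
print(f"~{extra_w:.1f} W extra at the wall, ~{extra_kwh_year:.0f} kWh/year")
```

Under these assumptions the penalty is on the order of a few watts, which is the kind of waste being accepted in exchange for proven, standard hardware.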
That said, and since in this case I already have the 300W BeQuiet supply, I'm reusing it; it seems the most reasonable option. At the end of the day, any one of the hard drives that could get damaged is worth more than the motherboard and the RAM combined, and I intend for these drives to last many years.