J5040-ITX build, 8GB RAM, running OMV and 14 containers fluidly at 19W

  • Be careful with that. When the server boots, the disk drives draw high peaks of power and the rest of the system is stressed as well. If you choose four large-capacity disks that each consume close to 8 or 10W, you get dangerously close to the limit of the power supply.

    It's not an issue with four HDDs. The BIOS makes sure that not all drives are powered up at the same time when the system starts. I have been using this setup for many years now. But I agree: with more than four 3.5" HDDs, one should use at least a traditional 300W ATX PSU.
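As a rough illustration of the budget being discussed, here is a back-of-the-envelope check. All wattages below are assumptions for illustration, not measurements from any build in this thread:

```shell
#!/bin/sh
# Back-of-the-envelope spin-up budget (illustrative, assumed numbers).
# A 3.5" HDD can briefly draw ~20-25 W on its 12 V rail during spin-up,
# versus roughly 5-8 W when idle, which is why staggered spin-up matters.
DRIVES=4
SPINUP_W=25   # assumed worst-case spin-up draw per drive
BOARD_W=20    # assumed board + CPU + RAM draw during boot
PICO_W=90     # assumed pico-PSU rating

PEAK=$((DRIVES * SPINUP_W + BOARD_W))
echo "Simultaneous spin-up peak: ${PEAK} W against a ${PICO_W} W supply"
```

With staggered spin-up only one drive surges at a time, so the peak drops to roughly the board draw plus one spin-up surge plus the idle draw of the drives already running.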

  • Ok, I had a very simple solution to the power question.

    I had an old mini-ITX computer stowed away that had been running Windows Server, and I realized that I could temporarily borrow its pico power supply.


    So, now the system (with three SSDs) is running slightly above 9W, at a CPU load below 10%, while qBittorrent is downloading some stuff.

    At maximum load (100% CPU), when starting up 12 Docker containers, I hit 25W. That's consistently 10W below the other PSU, at both low and high CPU load.

    Just mounted it with some strips:


    So, now the system (with three SSDs) is running slightly above 9W, at a CPU load below 10%, while qBittorrent is downloading some stuff.

    Good news. You have simply cut consumption in half by changing the power supply. I celebrate it. :thumbup:


    That 10W was wasted on your old power supply primarily in the form of heat and the supply fan. Keep in mind that now you will probably need another fan.

  • That 10W was wasted on your old power supply primarily in the form of heat and the supply fan. Keep in mind that now you will probably need another fan.

    9W is very little power, and the CPU only accounts for part of it. The cabinet is fairly big, so the heat is easily distributed by convection. I just checked with my FLIR camera, and it's actually hard to distinguish the CPU from the motherboard. Here the CPU is at 27.7 degrees.

    So even at full load (25W) I guess it will be just fine. These CPUs can operate at temperatures approaching 100 degrees Celsius, and I don't think I could get it that hot if I wanted to.

    The only thing I noticed is that the chip in the center of this picture gets fairly hot (around 40 degrees). It's a SATA controller chip, probably busy because of the downloads.
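For readers without a thermal camera, the kernel's own sensors give a similar picture. This sketch assumes a Debian-based system like OMV; package names and the number of thermal zones vary by board:

```shell
# One-time setup: install and probe lm-sensors.
sudo apt install lm-sensors
sudo sensors-detect --auto

# Print CPU and board temperature readings:
sensors

# Or read the raw thermal zones directly; values are in millidegrees Celsius,
# so a reading of 27700 corresponds to 27.7 degrees.
cat /sys/class/thermal/thermal_zone*/temp
```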




    A low-revving fan would provide a bit of airflow; all the components would appreciate it, and it would extend their lifespan. But you could leave it as it is if you wanted to.

  • I have the following setup: ASRock J5040-ITX, 16GB RAM, Fractal Design Node 304 case, Pico PSU-160W with a 150W power adapter, AXAGON PCES-SA2N, OMV6 on a 32GB USB 3 drive, a 256GB SSD for Docker, and 4 HDDs (19.5TB total). Usual power consumption is 35-37W, with peaks around 50W; the highest peak so far was 78W at reboot.

  • I have the following setup: ASRock J5040-ITX, 16GB RAM, Fractal Design Node 304 case, Pico PSU-160W with a 150W power adapter, AXAGON PCES-SA2N, OMV6 on a 32GB USB 3 drive, a 256GB SSD for Docker, and 4 HDDs (19.5TB total). Usual power consumption is 35-37W, with peaks around 50W; the highest peak so far was 78W at reboot.

    I think your spinning drives are probably consuming a lot. If they are configured in a RAID and files are accessed all the time, then all of them run all the time and never really get a chance to go idle. If, for example, you configured them as individual disks and divided the files by type so that only one disk was accessed most of the time, some of the drives would have a chance to spin down and save power. Doubling the RAM compared to mine also consumes a bit of extra power; if you don't need it, you could take out one stick. You must also have a PCIe controller that consumes a bit of power, since you have 5 disks and the motherboard has only 4 SATA ports. When you restart the computer, the HDDs spin up, which can explain the 78W.
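A sketch of the spin-down idea using hdparm. The device name is hypothetical, and whether a drive honors the timeout depends on the model; drives behind some USB bridges or RAID controllers ignore it entirely:

```shell
# Spin down /dev/sdb after 30 minutes of inactivity.
# hdparm's -S encoding: values 1-240 mean n*5 seconds;
# values 241-251 mean (n-240)*30 minutes, so 241 = 30 minutes.
sudo hdparm -S 241 /dev/sdb

# Query the drive's current power state without waking it:
sudo hdparm -C /dev/sdb
```

OMV also exposes the same setting per disk in its web UI (spindown time under disk power management), which survives reboots.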

  • FrederikSchack, thank you for this post.


    After years and years of using my trusty HP N40L MicroServer (gen7), I am now upgrading to this mainboard and going all-SSD. I am also putting in a 5.25" drive bay for six 2.5" SSDs, so another PCIe SATA controller is required to connect all 7 drives (1 OS drive and 6x4TB SSDs).


    This thread was the best resource for figuring out what I need. System performance will probably increase by a factor of 4 (not the plain NAS throughput, but things like decompression).


    Energy was not the main concern, but the next-cheapest option is actually an AMD Ryzen 5 4500, which is far more powerful than I need, and I want to keep the system as cool as I can.

    Everything is possible, sometimes it requires Google to find out how.

  • This is pretty similar to what I'm working on currently. I've got the same ASRock J5040 board and was hoping to build a somewhat custom case around a 4-bay drive cage I salvaged from a server at work.


    My main question to anyone using this board: are you just using the onboard network? I had some issues where the NIC wasn't detected at all after installing to a USB drive, which was my original plan. After a bit of googling, it seems the Realtek controller it uses isn't very well supported overall. That led me to look at a better NIC (and a 2.5G one at that) for the PCIe x1 slot. (Any NIC recommendations, by the way?)


    I did discover that the M.2 Wi-Fi slot runs off one of the PCIe lanes, and I put a 2-port SATA controller in mine, which I was going to use to add a proper boot SSD.


    Mainly curious about the NIC situation with this board, whether I need to do something specific to get it working better, or whether just picking up a PCIe card is the better bet anyway.
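For anyone wanting to check whether the installer simply lacks a driver for the Realtek NIC, a few standard diagnostics run from the installer's console or a live Linux system:

```shell
# Show the Ethernet controller and which kernel driver, if any, claimed it:
lspci -nnk | grep -iA3 ethernet

# List the network interfaces the kernel actually created:
ip link show

# Search the kernel log for Realtek driver messages
# (r8169 is the in-tree driver for most Realtek GbE chips):
dmesg | grep -iE 'r8169|r8168|realtek'
```

If lspci shows the device but no "Kernel driver in use" line, the running kernel has no driver bound to it, which would match the installer hanging at network detection.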

  • I am using the onboard NIC and never had an issue with it. Neither while installing OMV nor while actually running it.

  • I am using the onboard NIC and never had an issue with it. Neither while installing OMV nor while actually running it.

    So far, every time I've tried to install OMV, either the latest or the previous stable release, it gets to the part where it tries to detect the network, goes through IPv6 configuration, and then just sits there and does nothing. I know the NIC works, since it runs fine under Windows, but I can't get it to do anything with OMV.

  • So far, every time I've tried to install OMV, either the latest or the previous stable release, it gets to the part where it tries to detect the network, goes through IPv6 configuration, and then just sits there and does nothing. I know the NIC works, since it runs fine under Windows, but I can't get it to do anything with OMV.

    Linux is not Windows. There may be no kernel driver for your NIC, or something may be different in the OMV install ISO compared to regular Debian (I encountered that issue on one system). I suggest you download a Debian 11 netinstall ISO and install a minimal server, selecting only SSH and standard system utilities, making sure no GUI or other extras are selected. If that installs fine, then run the OMV setup script found here: https://github.com/OpenMediaVa…-Developers/installScript
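In concrete terms, the follow-up steps look roughly like this. The link above is truncated, so the repository URL and script name below are assumptions (they match the OpenMediaVault-Plugin-Developers installScript project); verify the URL from the link before running anything:

```shell
# After the minimal Debian 11 install, fetch and run the OMV install script.
# REPO_URL is assumed from the (truncated) link above -- verify it first.
REPO_URL="https://github.com/OpenMediaVault-Plugin-Developers/installScript"

wget -O install-omv "${REPO_URL}/raw/master/install"
chmod +x install-omv
sudo ./install-omv
```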



  • Linux is not Windows. There may be no kernel driver for your NIC, or something may be different in the OMV install ISO compared to regular Debian (I encountered that issue on one system). I suggest you download a Debian 11 netinstall ISO and install a minimal server, selecting only SSH and standard system utilities, making sure no GUI or other extras are selected. If that installs fine, then run the OMV setup script found here: https://github.com/OpenMediaVa…-Developers/installScript

    That was going to be my next step, and I was looking at how to set up Debian for an OMV install. I did decide to see how TrueNAS worked, and it seemed to do fine, although that was CORE, the non-Debian version, so we'll see, hopefully tomorrow, how Debian does on it.
