Posts by bgravato

    Thank you all for the replies!

    Some may be thinking about their answer to this question. But I think nobody will tell you in writing to install OMV6 in a case like this ... What would you say if someone asked you this?

    I would never put beta software in a Production environment for a business.

    In general I'd agree that for production one should always go with stable.

    That said, I've recently installed Debian testing (bullseye) on a production server. That was just shortly before it became stable. It had been in freeze for several months and I went through the release-critical bugs that were holding back the stable release; none was relevant for my intended setup, so I felt confident installing a "testing" release. That saved me from upgrading it a month later and provided a newer PHP version, which I needed.

    I don't regret that decision.

    In that particular case I knew where to look for RC bugs and what to expect. When it comes to OMV, I'm not so sure... I had a look at the open issues with the OMV6 tag on OMV's github and I didn't see anything that would particularly raise a red flag for me, but I'm not that familiar with OMV, nor am I sure that's the right place to look for this kind of info, or whether the issues reported there tell the whole story... That's why I asked...

    Perhaps my question should have been... What RC bugs are holding OMV6 back from being released as stable?

    And my other doubt is... Is upgrading OMV to a new major version as smooth as upgrading (vanilla) Debian?

    I may soon (within a month or less) need to make a fresh install of OMV for a small company and I'm trying to decide if I should still go with OMV5 and upgrade later to OMV6, or just jump straight to OMV6.

    Hardware setup is just a standard x86 machine (fairly old, so any kernel 5.x will work) with 1 SSD for OS and 2 HDDs for data (software RAID1 or equivalent).

    Software wise, I'd like to add a few extra things such as:

    • wireguard for VPN server
    • possibly openLDAP or similar for managing user accounts and authentication
    • host some php/mysql websites
    • NUT plugin for UPS
    • run some VMs (QEMU/KVM)

    I'm an experienced Debian user, so setting up those things isn't a problem. I actually already have all of them working fine on my home NAS (OMV5), except openldap, which I've never tried.

    I'm just wondering whether it's better to just jump straight to OMV6 (what release critical bugs are preventing the official stable release?), or play safe with OMV5 and go through the upgrade process later...

    I've upgraded many Debian boxes before in my life, but my home NAS is my only experience with OMV and it's still on the initial install of OMV5, so I'm not sure if upgrading to OMV6 would be as smooth as upgrading a normal Debian machine, or if it can cause some stress...

    If OMV6 is close to release and doesn't have any "critical" issues, I'd probably prefer to just start fresh from OMV6 and save the trouble (and time) of upgrading in a few months...

    Any thoughts?


    If power consumption is a concern, then an SBC with an integrated CPU and (low-voltage) SODIMM RAM will probably do better than a more standard motherboard with a socketed CPU.

    In that department, the ASRock mini-ITX boards, such as the ones you mentioned, sound like a good option.

    At the beginning of last year, when I was searching for hardware for my home OMV NAS, the best option I found with 4 SATA ports was the ASRock J5005-ITX (or alternatively the J4105-ITX, if I remember correctly). I didn't find many similar alternatives at the time... AMD's embedded Ryzen R1000 and V1000 CPU series were out, but there weren't any motherboards available with them yet.

    There are newer models now for those ASRock mini-ITX boards, with newer CPUs in that line.

    One good thing about these J3xxx, J4xxx, J5xxx CPUs is that they're generally low power (TDP = 10W), which means they're probably easy to cool passively.

    At the time I was going to buy that ASRock J5005-ITX for my build, but I couldn't find any supplier that had it in stock.

    I only really needed 3 SATA ports (OS SSD drive + 2 HDDs) and I had some old hardware still in good shape, so I ended up reusing an old laptop (mSATA for OS drive + 2 SATA ports for HDDs) for my NAS. It was supposed to be a temporary solution, but it's been running so well and consuming so little energy, that I'm still using it.

    If it fails, I'll probably be looking at those ASRock mini-ITX boards again... Or search for some AMD embedded Ryzen R1000 or V1000 based ones, if they're not too expensive.

    The kernel only gets updated when you reboot the machine, so after you install a new kernel version it's highly recommended that you reboot as soon as possible.

    Some services or libraries may keep running the old code until you restart them (or, in some cases, reboot).

    I recommend installing the needrestart package (it's in Debian's repos). It will check and tell you what needs to be restarted (and it will restart the services for you if you want). It runs automatically after apt upgrade.

    The debian-goodies package has a checkrestart utility that works in a similar way, but you need to run it and restart the services manually.
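    The idea behind these tools can be sketched in a few lines of Python (this is only an illustration of the concept, not how needrestart is actually implemented): a process that still maps a "(deleted)" shared object is running code that was replaced on disk after it started.

```python
# Toy version of the checkrestart/needrestart idea: scan /proc for
# processes still mapping a "(deleted)" shared object, i.e. a library
# that was replaced on disk (e.g. by apt upgrade) after they started.
import glob

candidates = set()
for maps_path in glob.glob("/proc/[0-9]*/maps"):
    try:
        with open(maps_path) as maps:
            if any(".so" in line and "(deleted)" in line for line in maps):
                candidates.add(maps_path.split("/")[2])  # the PID
    except OSError:
        pass  # process exited, or we lack permission; skip it
print(f"{len(candidates)} process(es) look like restart candidates")
```

    The real tools go further (mapping PIDs back to systemd services and offering to restart them), but this is the core check.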

    "typically" is the keyword and I've seen the atypical with OpenCV.

    If the task is heavy in linear computations, ARM might use more power simply by taking longer than an x86/x64. I'm not "in the know" on the technical architectural design, but it might be worth measuring a Pi if you encode video at full/unrestricted settings, use ZFS, AI recognition, etc. A company will design a very specific RISC-based ARM for a very specific task to overcome this, but I'm not sure if the Pi itself is great at any one thing (which I guess is where the RPi "Compute" comes in).

    Yes, also consider that most ARM chips found in Pi-like devices lack some features, such as hardware support for AES encryption, which can make some tasks take much longer to process on such devices.

    But considering that most of the time (as in the scenario proposed by the OP) the device will be idle, in that state I believe ARM-based devices will "typically" consume less power. My main point though was that the gain isn't significant enough to justify it in most scenarios. Running off batteries could be a deciding factor, but that's not the case here. And as you pointed out, in some applications where the CPU might need to be "active" a good amount of time, they might not be the most power-efficient option.

    It seems that we are opening Pandora's box. Now a Pi defender will intervene in the thread... and we already have one. ^^

    Not trying to open any Pandora's boxes... :) I defend that Pi's are great devices for many situations, but they're not the holy grail for ALL situations! Maybe a Pi named 42 could be the answer to all questions though ;)

    RAM and CPU power needed for Nextcloud will depend on what apps you want to install on it. For example, if you want to use an office web-based suite on it you'll definitely need more resources (RAM and CPU).

    For streaming videos/music on your home network, I recommend you have a look at Jellyfin. It's web-based and it also has a native client for Android. It's very easy to install alongside OMV (just add their deb repository and install the Jellyfin server with apt, then configure everything via its web interface).
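    To make the "add their deb repository" step concrete, the apt sources entry looks roughly like this (a sketch from memory; take the current URL, codename and signing-key instructions from Jellyfin's own install docs):

```
# /etc/apt/sources.list.d/jellyfin.list  (sketch)
deb https://repo.jellyfin.org/debian bullseye main
```

    After that it's just apt update && apt install jellyfin, and everything else is done in its web interface.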

    For a NAS I wouldn't recommend using disks connected via USB. USB isn't a very reliable connection, especially for big disks that need to be permanently on. SATA is highly preferred over USB.

    As for hardware, ARM-based ones (such as Pi's) typically use less power, but in general they also have limited CPU processing capabilities. If you want to use only SSDs and not 3.5" HDDs, then you could do fine with an old laptop or mini-PC (such as an Intel NUC, Gigabyte Brix, etc...). These will consume a bit more power than a Pi, but not that much more, especially when idle. If you push them to heavy load they will consume considerably more power, but that's also because they'll be providing a lot more processing power than a Pi.

    As a reference, my home NAS (OMV based) is running on an old ThinkPad X230 (which no longer has a screen); when idle, with HDDs spun down, it consumes less than 10W from the wall. That's not much more than a Pi. The same is true for my (newer) NUC (8th gen i5, which I use as my home desktop computer), which consumes between 5-9W when idle. Even running 24/7, at the end of the year that's about 15 euros on the electricity bill; with a Pi I'd save what? 5-7 euros/year? When put under heavy load (CPU and GPU) the NUC can go up to 40-70W peaks, but that's only for a small amount of time and it's nice to have that processing power available when needed.
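    The back-of-the-envelope arithmetic behind those figures (the 0.21 EUR/kWh tariff and the ~4 W Pi idle draw are my assumptions; plug in your own numbers):

```python
# Yearly electricity cost of an always-on machine at a given idle draw.
# The 0.21 EUR/kWh tariff and the 4 W Pi figure are assumptions.
def yearly_cost_eur(watts, tariff=0.21):
    kwh = watts * 24 * 365 / 1000   # watt-hours over a year -> kWh
    return kwh * tariff

nuc = yearly_cost_eur(8)   # NUC idling around 8 W
pi = yearly_cost_eur(4)    # Pi idling around 4 W (assumed)
print(f"NUC: {nuc:.2f} EUR/yr, Pi: {pi:.2f} EUR/yr, savings: {nuc - pi:.2f}")
```

    At those rates the difference is in the single-digit euros per year, which is the whole point: idle draw dominates, and a few watts barely move the bill.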

    I'm not saying Pi's are a bad option. Pi's are great, but I just wanted to let you know that for your use case there are other options that are neither more expensive nor going to have a significant impact on the electricity bill.

    Pi's are great for many appliances, in particular when you need GPIO ports, but to use solely as a NAS they wouldn't be my first pick, despite how popular they are. Price isn't a deal breaker either... you can buy a used mini-PC or laptop (for example one with a broken screen) for less than a (new) Pi4.

    A mini-PC or laptop will (in most cases) limit you to 2 disks max though (where one will probably be M.2 or mSATA and the other a normal SATA port for 2.5" disks, HDD or SSD; no 3.5" HDDs though, since those need 12V and wouldn't fit in the box anyway). Some laptops might have one M.2/mSATA + two SATA.

    For 4 SATA ports, a mini-ITX SBC (single-board computer) might be the answer (e.g. the ASRock J4125-ITX, but there are many others). These are usually very low power too. You'll need to buy a case, power supply, etc. and assemble it. There are also some pre-built barebones, but then the prices can start to go up.

    As for RAID, it's mostly about data availability, not data safety. With RAID 1, if one disk fails, you can still access your data (from the other disk), without any downtime. Without RAID, data will be inaccessible until you replace the disk and finish restoring data from your backup. Some types of RAID can provide faster performance, but that's most relevant for HDDs, not so much for SSDs.

    Since I last posted I made several changes to this setup...

    I'm using a generic PSU now and I used a couple of buck converters to get 12V from the PSU (20V) to power the HDDs. I'm getting the 5V for the HDDs from one of the USB ports.

    This way I have the same electrical ground for everything and I only need one PSU.

    I don't fully trust these buck converters (I got 5 of them for 8 EUR on Amazon), so I used a separate one for each disk, but so far they have been holding up nicely (for several weeks now). From what I've read, they only tend to fail when drawing more than 1A (because they get too hot). These stay under 0.5A even when the disks are spinning, so I'm hoping they'll be fine... and if one fails I have 3 spares to replace it.
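    The overheating reasoning checks out with some rough arithmetic. Assuming ~85% conversion efficiency (a guess typical of cheap modules, not a measured figure), the heat the module has to shed doubles between 0.5 A and 1 A:

```python
# Heat dissipated in a buck module stepping 20 V down to 12 V, as a
# function of load current. The 85% efficiency is an assumed figure.
def dissipated_watts(v_out, i_out, efficiency=0.85):
    p_out = v_out * i_out        # power delivered to the disk
    p_in = p_out / efficiency    # power drawn from the 20 V brick
    return p_in - p_out          # the difference becomes heat

print(f"at 0.5 A: {dissipated_watts(12, 0.5):.2f} W of heat")
print(f"at 1.0 A: {dissipated_watts(12, 1.0):.2f} W of heat")
```

    About 1 W of heat at 0.5 A is easy for a small bare module to shed; over 2 W at 1 A, much less so, which matches the "fails above 1 A" reports.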

    Meanwhile I have also removed everything from that case and now it's hanging under my desk! So more free space around available, no need for extra cooling fans and it collects very little dust due to gravity!

    Power consumption improved as well... These are the latest numbers I measured from the wall:

    • System shutdown: 1.5-2 W
    • System booting with HDDs spinning up: Highest peak observed 43 W
    • HDDs spun down & CPU idle: 9.5-10 W
    • HDDs spun down & CPU 100% usage: 28.5-29 W
    • HDDs spinning but in idle state & CPU idle: 15-16 W
    • HDDs busy & CPU low usage: 18-23 W
    • HDDs busy & CPU 100% usage: 38-40 W

    Yes, there was a lot of hacking in this and it surely doesn't look like a very robust solution, but it's been working fine for over a year now (well, some parts only for some weeks). I'm not storing mission-critical data on it, so for the intended purpose it's perfectly fine and I'm happy with it, especially with the power consumption and the fact that I spent only about 30 EUR buying the parts I didn't already have...

    After a long time banging my head against the walls (metaphorically) trying to solve this, I finally found the culprit with the help of the fatrace utility (it's in the Debian repos), along with btrace.

    The culprit was udisks2. After stopping udisksd the disks spun down without hiccups after the specified time in hd-idle.

    Kudos to the reddit user that suggested using fatrace for debugging.

    TL;DR version: btrace suggests the culprit is a "[pool]" accessing the disk every 600 seconds. What is it and how do I stop it?

    Longer version:

    I've been using OMV for a while and I had this working before, but at some point it stopped working (probably after some upgrade or installing some software).

    I have 2 WD Red NAS HDDs in RAID1 (using mdadm), which are known for having some issues regarding spin-down, so I installed hd-idle as a workaround (as suggested in an older thread here) and I managed to get it to work for a while... but now it doesn't work unless I set a very low idle time.

    I've increased the SMART check interval in the OMV web interface to a high number. It's currently 86400 (seconds) and the power mode is set to Standby.

    I've set hd-idle to 1800 seconds, but the disks never spin down... If I change that to a very short time (for example 180 seconds) the disks will spin down, but not with 1800.
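    For context, that hd-idle idle time is typically configured in /etc/default/hd-idle, something like the sketch below (check hd-idle's man page for the exact options on your version):

```
# /etc/default/hd-idle  (sketch)
HD_IDLE_OPTS="-i 1800"                 # spin down all disks after 1800 s
# HD_IDLE_OPTS="-i 0 -a sda -i 1800"   # or: disable default, only sda
```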

    I've run btrace and there's a "pool" accessing the disk about every 600 seconds. Example:

    8,0  1  55  4200.006161055  3626119  D  N 0 [pool]
    8,0  1  56  4200.006433144        0  C  N [0]
    8,0  1  57  4200.007231231  3626119  D  R 512 [pool]
    8,0  1  58  4200.007689361        0  C  R [0]


    (there's a few more repeated lines each time and it will repeat every 600 seconds)

    This seems to be the culprit...

    If I set the idle time in hd-idle to, for example, 180 seconds, the disks will spin down and remain like that.

    [pool] entries still show up in btrace (now just once every 600 sec) but they don't "wake up" the disks.

    "ps aux|grep pool" reveals only this:

    www-data   788  0.0  0.0 204488 6088 ?  S  out13  0:00 php-fpm: pool www
    www-data   799  0.0  0.0 204488 6104 ?  S  out13  0:00 php-fpm: pool www

    Is this the pool in btrace? How do I stop it from preventing the disks from spinning down?

    Any insight is most welcome! Thanks.

    Nice project.

    Did you face any issues related to the laptop complaining about missing display and keyboard? Or is that why you used the docking station?

    The main reason I got the docking station was to get the eSATA port, so I can connect two disks (actually three if we count the SSD on the internal mSATA which runs the OS).

    Other options would be running the OS from USB and using an mSATA->SATA adapter, connecting one of the disks on USB or through the ExpressCard slot, but I didn't like much any of those options.

    Fun fact: the SATA controller on the X230 motherboard actually supports up to 6 disks; unfortunately only 2 connections are available: the internal SATA and another one on the dock port (which connects to the eSATA port on some docking stations). Both are SATA 3.

    The mSATA is SATA 2, but that's fine, I can still get good speed on the SSD.

    The power button on the dock also comes in handy to turn it on, otherwise I'd have to use wake-on-lan.

    Apart from that it works fine without a display or keyboard, no complaints.

    With a USB keyboard and mouse and an external monitor (either on the VGA port or the mini-DP port) it can work as a low-power desktop PC.

    Only thing missing is the power button, but if you have it connected to ethernet, you can use wake-on-lan.
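    For the record, wake-on-lan is simple enough that you don't even need extra tools; a magic packet is just 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent over UDP broadcast (the MAC below is a placeholder):

```python
# Build a Wake-on-LAN "magic packet": 6 x 0xFF + target MAC x 16.
import socket

def magic_packet(mac: str) -> bytes:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    assert len(mac_bytes) == 6, "expected a 6-byte MAC address"
    return b"\xff" * 6 + mac_bytes * 16

pkt = magic_packet("aa:bb:cc:dd:ee:ff")   # placeholder MAC
print(len(pkt))  # a magic packet is always 102 bytes

# To actually send it on your LAN (uncomment on a real network):
# s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# s.sendto(pkt, ("255.255.255.255", 9))
```

    The usual wakeonlan/etherwake packages do exactly this, so use those if you'd rather not script it.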

    Some keyboards have a power button, which might work as well, but I don't have any usb keyboard with that key, so I haven't tested that.

    The tricky part is when you first install a system on it or try to run a Linux live-USB, for example... In a graphical environment it usually sets the internal display as the main display (i.e. where the menu/task bar goes). Depending on the desktop environment, it can be a bit tricky to disable the internal display and/or move the menus to the external display. This is usually solved with tricks like pressing Alt-F2 and entering the name of the display-properties program.

    When running on the console it usually mirrors the internal display, so installing OMV isn't a problem.

    Meanwhile I've put everything inside of an old full tower PC case:

    I had to replace the power supply for the disks. The one in the previous pic was struggling to power both disks. I'm temporarily using an old ATX PSU (not very power efficient at all).

    TODO (short-term)

    - replace the PSU for the HDDs with one from my old NAS (has 12V and 5V direct outputs and enough juice to power 2 HDD's)

    - add 1 or 2 fans to the case (USB powered) and close it

    TODO (long-term)

    - somehow use the 20V from the Thinkpad's PSU to power the disks, either through a Pico-PSU (expensive) or making my own regulator for 12V and 5V (cheaper, but more work)


    A couple of days ago both HDDs in my OMV box started making nasty noises and generating quite a few errors in syslog, as if they were about to fail...

    It turned out it was actually the power supply that was failing. After replacing the PSU, all bad noises are gone and no more errors. The disks seem to be working fine.

    I've run extended tests on both disks with smartctl and I also ran badblocks on both; all tests passed and no bad blocks were detected, but OMV still shows me a red light in SMART -> Devices.

    I'm guessing it's because SMART info on the disks still contain error logs for those errors when the PSU was failing.

    Is there any way of resetting the red light flag on OMV? Or clearing the error logs on the disks SMART log?

    I had some spare parts laying around from an old Thinkpad x230 and this was the result:

    I still need to fix everything inside of a case and maybe find a better power source for the disks, but most of the work is done.

    The tricky part was connecting the two 3.5" HDDs to it, but that was sorted out with the help of a docking station with eSATA and some unusual SATA cables.

    Setup / Parts used:

    • (bottom-half of a) Lenovo Thinkpad x230 laptop
    • Dock station (with eSATA connector) + laptop power brick (not in picture)
    • WD Red 4TB connected with SATA-to-eSATA cable to the dock station
    • WD Red 4TB connected with male-to-female SATA cable to internal SATA
    • Old HDD IDE-USB enclosure with external power supply (PSU not in picture) used to power both HDDs (with power splitter)
    • OMV installed on 120GB mSATA SSD (internal mSATA connector)
    • Gigabit Ethernet connection
    • RAID1 setup on the HDDs, some VMs on the SSD (the main reason why I didn't use a USB pen for the OS)

    Total power consumption from the wall:

    • cpu idle and HDDs sleeping: 13W
    • cpu idle and HDDs spinning but idle: 18.5W
    • cpu low load and HDDs busy: 21-29W
    • cpu heavy load and HDDs busy: 39-44W

    I might be able to lower those numbers by 2-3W with a more efficient power source for the HDDs.

    I'm running OMV 5.5.3.

    Adding public keys to users through OMV web interface doesn't seem to produce any effect.

    If I manually add the public key to ~/.ssh/authorized_keys of a user it works as expected (no password asked on ssh login).

    If I add the public key through OMV web interface it has no effect... Password is still asked on ssh login.

    Am I missing something or is this feature broken?




    I recently brought to life an old PC for testing purposes and I've been playing around with OMV and virtualization.

    When installing OMV as a VM, it shows much higher CPU usage in smbd and nfsd when copying files over the (local) network. The test machine:


    - CPU: Intel Core 2 Quad Q9550 (4 cores, supports virtualization)

    - RAM: 4GB

    - 1 SSD for system

    - 2 HDD raid1 (mdadm) for data

    - 1 gigabit ethernet port

    With OMV installed natively, when transferring a large file from another PC on the LAN to a samba share on OMV, the CPU usage for smbd is about 50%. If using NFS, CPU usage for nfsd is close to 50% as well.

    Data transfer rate is about 115-117 MB/s (that's megabytes per second, which is very close to the gigabit ethernet limit).
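    That 115-117 MB/s figure really is about the practical ceiling of gigabit ethernet. The arithmetic, assuming a standard 1500-byte MTU:

```python
# Practical throughput ceiling of gigabit Ethernet for TCP transfers.
raw_mb_s = 1000 / 8              # 1 Gbit/s = 125 MB/s raw
mtu = 1500                       # standard Ethernet MTU
tcp_ip_headers = 40              # TCP (20) + IPv4 (20) header bytes
frame_overhead = 38              # preamble + header + FCS + inter-frame gap
payload = mtu - tcp_ip_headers   # 1460 payload bytes per frame
efficiency = payload / (mtu + frame_overhead)   # roughly 95%
print(f"usable: {raw_mb_s * efficiency:.1f} MB/s")
```

    That lands just under 119 MB/s in theory; real transfers lose a little more to ACKs and protocol chatter, hence the observed 115-117.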

    I then installed Proxmox on the same computer and OMV as a VM. The data HDDs were passed through with virtio-scsi, and the RAID was assembled with mdadm inside OMV.

    All 4 cores and RAM made available for the VM.

    Data transfer rate was the same, but CPU usage for smbd was at 100%. Same result with NFS.

    I also tried using OMV as the host and OMV in a VM (created using cockpit-machines). Similar results.

    My first thought was that CPU virtualization was not being very efficient, but I ran the sysbench CPU benchmark on both host and VM, with similar results.

    Any ideas why this is happening?

    Thanks for all the tips and comments.

    I was able to bring back to life an old computer (CPU is Quad Core Q9550, 4GB RAM, MB has 6 SATA), which I'm using for testing OMV (and other things such as virtualization).

    This is anything but low power (it consumes about 80W when idle and can go up to 150W with the CPU at 100%), but I will use it as a sandbox for testing purposes and from there evaluate how much "CPU power" I need for my final setup running OMV + some extras.


    Yes, my choice will be mostly between those two scenarios.

    I've heard a lot about the Helios4 and I saw they're planning the new Helios64, but it might take a few months until it's available... and I don't really want to wait that long...
    Also, considering shipping and import duties, it will probably cost me well over 300 euros, which is a bit over my desired 200-250 budget...
    In addition, as you mentioned, getting replacement parts for it (if needed) might be an issue... For those reasons I didn't consider it as an option.

    The ASRock J5005's power consumption and price are both lower than what I was expecting for an ITX board. That made me change my mind a bit: I was initially leaning more towards the ARM SBC, and now I'm leaning more to the J5005 side.

    I'll need a case and a PSU, but those shouldn't be hard to find...