Posts by bgravato

    After a long time banging my head against the walls (metaphorically) trying to solve this, I finally found the culprit with the help of the fatrace utility (it's in the Debian repos), along with btrace.
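
    In case it's useful to someone, this is roughly how I used it (a minimal sketch; the -s and -o flags assume a reasonably recent fatrace):

    # log all file accesses system-wide for 60 seconds
    sudo fatrace -s 60 -o /tmp/fatrace.log
    # then see which processes/files show up most often
    sort /tmp/fatrace.log | uniq -c | sort -rn | head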


    The culprit was udisks2. After stopping udisksd, the disks spun down without hiccups after the idle time specified in hd-idle.
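
    In case it helps someone else, the gist of it (a sketch for Debian/OMV; mask only if nothing else on the box needs udisks2):

    # confirm udisksd is the one keeping the disks awake
    systemctl status udisks2
    sudo systemctl stop udisks2
    # if the disks now spin down on schedule, keep it off across reboots
    sudo systemctl mask udisks2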


    Kudos to the reddit user who suggested using fatrace for debugging.

    TL;DR version: btrace suggests the culprit is a "[pool]" accessing the disk every 600 seconds. What is it and how do I stop it?


    Longer version:

    I've been using OMV for a while and I had this working before, but at some point it stopped working (probably after some upgrade or installing some software).


    I have 2 WD Red NAS HDDs in RAID1 (using mdadm), which are known for having some spin-down issues, so I installed hd-idle as a workaround (as suggested in an older thread here) and I managed to get it to work for a while... but now it doesn't work unless I set a very low idle time.


    I've increased the SMART check interval in the OMV web interface to a high number. It's currently 86400 seconds and the power mode is set to Standby.


    I've set hd-idle to 1800 seconds, but the disks never spin down... If I change that to a very short time (for example 180 seconds) the disks will spin down, but not with 1800.
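
    For reference, this is where that value lives (assuming the Debian packaging of hd-idle; flags would differ if built from source):

    # /etc/default/hd-idle
    START_HD_IDLE=true
    # -i sets the default idle time (seconds) for all disks;
    # a per-disk override would look like: -a sda -i 180
    HD_IDLE_OPTS="-i 1800"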


    I've run btrace and there's a "pool" accessing the disk about every 600 seconds. Example:

    8,0 1 55 4200.006161055 3626119 D N 0 [pool]

    8,0 1 56 4200.006433144 0 C N [0]

    8,0 1 57 4200.007231231 3626119 D R 512 [pool]

    8,0 1 58 4200.007689361 0 C R [0]

    (...)

    (there are a few more repeated lines each time, and the pattern repeats every 600 seconds)

    This seems to be the culprit...
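
    For anyone wanting to reproduce the trace (btrace comes with the blktrace package; the device name is just an example):

    # watch block-layer requests on one of the array's member disks
    sudo btrace /dev/sda
    # btrace is roughly shorthand for:
    sudo blktrace -d /dev/sda -o - | blkparse -i -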


    If I set the idle time in hd-idle to 180, for example, the disks will spin down and stay down.

    [pool] entries still show up in btrace (now just once every 600 seconds) but they don't "wake up" the disks.


    "ps aux|grep pool" reveals only this:

    www-data 788 0.0 0.0 204488 6088 ? S out13 0:00 php-fpm: pool www

    www-data 799 0.0 0.0 204488 6104 ? S out13 0:00 php-fpm: pool www


    Is this the pool in btrace? How do I stop it from preventing the disks from spinning down?
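
    For context, the number before [pool] in the btrace output is the PID/TID of whatever issued the request, so it can be mapped back to a process (the PID below is just the one from the trace above):

    ps -p 3626119 -o pid,ppid,comm,args
    # "pool" is a typical name for glib worker threads, so listing
    # thread names may reveal the daemon that owns them:
    ps -eLo pid,tid,comm | grep pool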


    Any insight is most welcome! Thanks.

    Nice project.

    Did you face any issues with the laptop complaining about a missing display and keyboard? Or is that why you used the docking station?

    The main reason I got the docking station was to get the eSATA port, so I can connect two disks (actually three if we count the SSD on the internal mSATA which runs the OS).

    Other options would be running the OS from USB and using an mSATA-to-SATA adapter, or connecting one of the disks over USB or through the ExpressCard slot, but I didn't much like any of those options.


    Fun fact: the SATA controller on the x230 motherboard actually supports up to 6 disks; unfortunately only 2 connections are available: the internal SATA and another one on the dock port (which connects to the eSATA port on some docking stations). Both are SATA 3.

    The mSATA is SATA 2, but that's fine, I can still get good speed on the SSD.


    The power button on the dock also comes in handy to turn it on, otherwise I'd have to use wake-on-lan.


    Apart from that it works fine without a display or keyboard, no complaints.

    With a USB keyboard and mouse and an external monitor (either on the VGA port or the mini-DP port) it can work as a low-power desktop PC.

    The only thing missing is the power button, but if you have it connected to ethernet, you can use wake-on-lan.

    Some keyboards have a power button, which might work as well, but I don't have any USB keyboard with that key, so I haven't tested it.
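
    For reference, a minimal wake-on-lan recipe (the interface name and MAC address are placeholders):

    # on the x230: make sure the NIC has WoL enabled ("g" = magic packet)
    sudo ethtool -s enp0s25 wol g
    # from another machine on the LAN (wakeonlan or etherwake package):
    wakeonlan AA:BB:CC:DD:EE:FF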


    The tricky part is when you first install a system on it or try to run a Linux live-USB, for example... A graphical environment usually sets the internal display as the main display (i.e. where the menu/task bar goes). Depending on the desktop environment, it can be a bit tricky to disable the internal display and/or move the menus to the external display. It's usually solved with tricks like pressing Alt-F2 and entering the name of the display-properties program.
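
    If a terminal is reachable, xrandr can do it too (output names vary per machine, so list them first):

    xrandr -q    # list output names, e.g. LVDS-1, VGA-1, DP-1
    xrandr --output LVDS-1 --off --output VGA-1 --auto --primary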

    When running on the console it usually mirrors the internal display, so installing OMV isn't a problem.

    Meanwhile I've put everything inside an old full-tower PC case:



    I had to replace the power supply for the disks. The one in the previous pic was struggling to power both disks. I'm temporarily using an old ATX PSU (not very power-efficient at all).


    TODO (short-term)

    - replace the PSU for the HDDs with one from my old NAS (it has direct 12V and 5V outputs and enough juice to power 2 HDDs)

    - add 1 or 2 fans to the case (USB powered) and close it


    TODO (long-term)

    - somehow use the 20V from the Thinkpad's PSU to power the disks, either through a Pico-PSU (expensive) or by making my own 12V and 5V regulator (cheaper, but more work)

    Hi,


    A couple of days ago both HDDs in my OMV box started making nasty noises and generating quite a few errors in syslog, as if they were about to fail...

    It turned out it was actually the power supply that was failing. After replacing the PSU, all bad noises are gone and no more errors. The disks seem to be working fine.


    I've run extended tests on both disks with smartctl and I also ran badblocks on both; all tests passed and no bad blocks were detected, but OMV still shows me a red light in SMART -> Devices.
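
    For reference, these are the checks I mean (device names are examples):

    # extended SMART self-test, then review the results and error logs
    sudo smartctl -t long /dev/sda
    sudo smartctl -a /dev/sda    # includes the self-test and error logs
    # non-destructive read-only surface scan
    sudo badblocks -sv /dev/sda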

    I'm guessing it's because the SMART info on the disks still contains error log entries from when the PSU was failing.


    Is there any way to reset the red light flag in OMV? Or to clear the error entries in the disks' SMART logs?

    I had some spare parts lying around from an old Thinkpad x230 and this was the result:


    I still need to fit everything inside a case and maybe find a better power source for the disks, but most of the work is done.


    The tricky part was connecting the two 3.5" HDDs to it, but it was sorted out with the help of a docking station with eSATA and some unusual SATA cables.


    Setup / Parts used:

    • (bottom half of a) Lenovo Thinkpad x230 laptop
    • Docking station (with eSATA connector) + laptop power brick (not in picture)
    • WD Red 4TB connected with a SATA-to-eSATA cable to the docking station
    • WD Red 4TB connected with a male-to-female SATA cable to the internal SATA
    • Old IDE-USB HDD enclosure with external power supply (PSU not in picture) used to power both HDDs (with a power splitter)
    • OMV installed on a 120GB mSATA SSD (internal mSATA connector)
    • Gigabit Ethernet connection
    • RAID1 setup on the HDDs (see the mdadm sketch below), some VMs on the SSD (the main reason why I didn't use a USB pen for the OS)
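
    A minimal sketch of that RAID1 setup (device names are examples):

    # create the mirror from the two WD Reds (whole disks, for brevity)
    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    sudo mkfs.ext4 /dev/md0
    # make the array assemble automatically at boot
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u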


    Total power consumption from the wall:

    • CPU idle and HDDs sleeping: 13W
    • CPU idle and HDDs spinning but idle: 18.5W
    • CPU under low load and HDDs busy: 21-29W
    • CPU under heavy load and HDDs busy: 39-44W


    I might be able to lower those numbers by 2-3W with a more efficient power source for the HDDs.

    I'm running OMV 5.5.3.


    Adding public keys to users through the OMV web interface doesn't seem to have any effect.


    If I manually add the public key to a user's ~/.ssh/authorized_keys it works as expected (no password asked on ssh login).
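
    The manual route, for reference (run as the user in question; the key file name is a placeholder):

    mkdir -p ~/.ssh && chmod 700 ~/.ssh
    cat id_ed25519.pub >> ~/.ssh/authorized_keys
    chmod 600 ~/.ssh/authorized_keys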


    If I add the public key through the OMV web interface it has no effect... The password is still asked on ssh login.


    Am I missing something or is this feature broken?



    Thanks,

    Bruno

    Hi,


    I recently brought an old PC back to life for testing purposes and I've been playing around with OMV and virtualization.


    When OMV is installed as a VM, it shows much higher CPU usage for smbd and nfsd when copying files over the (local) network.


    Hardware:

    - CPU: Intel Core 2 Quad Q9550 (4 cores, supports virtualization)

    - RAM: 4GB

    - 1 SSD for system

    - 2 HDDs in RAID1 (mdadm) for data

    - 1 gigabit ethernet port


    With OMV installed natively, when transferring a large file from another PC on the LAN to a samba share on OMV, the CPU usage for smbd is about 50%. If using NFS, CPU usage for nfsd is close to 50% as well.

    Data transfer rate is about 115-117 MB/s (that's megabytes per second, which is very close to the gigabit ethernet limit).
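
    For reference, the sort of measurements I mean (pidstat is in the sysstat package; the hostname is a placeholder):

    # per-process CPU usage, sampled every second during a transfer
    pidstat -u -C smbd 1
    # raw network throughput, to rule out the NIC/network
    iperf3 -s              # on the OMV box
    iperf3 -c omv.local    # on the client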


    I then installed Proxmox on the same computer and OMV as a VM. The data HDDs are passed through with virtio-scsi and the RAID is assembled with mdadm inside OMV.

    All 4 cores and all the RAM were made available to the VM.
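
    On the Proxmox side the passthrough looks roughly like this (the VM ID and disk ID are placeholders):

    # attach a whole physical disk to VM 100 on the virtio-scsi bus
    qm set 100 --scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-XXXXXXXX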

    The data transfer rate was the same, but CPU usage for smbd was at 100%. Same result with NFS.


    I also tried OMV as the host with OMV in a VM (created using cockpit-machines). Similar results.


    My first thought was that CPU virtualization was not being very efficient, but I ran the sysbench CPU benchmark on both the host and the VM with similar results.
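
    (That benchmark, run identically on host and guest, for reference:)

    # sysbench 1.x; older versions use: sysbench --test=cpu --num-threads=4 run
    sysbench cpu --threads=4 run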


    Any ideas why this is happening?

    Thanks for all the tips and comments.


    I was able to bring an old computer back to life (CPU is a Quad Core Q9550, 4GB RAM, the motherboard has 6 SATA ports), which I'm using for testing OMV (and other things such as virtualization).


    This is anything but low power (it consumes about 80W when idle and can go up to 150W with the CPU at 100%), but I will use it as a sandbox for testing purposes and from there I will evaluate how much "CPU power" I need for my final setup running OMV + some extras.


    Cheers

    Yes, my choice will be mostly between those two scenarios.


    I've heard a lot about the Helios4 and I saw they're planning the new Helios64, but it might take a few months until it's available... and I don't really want to wait that long...
    Also, considering shipping and import duties, it would probably cost me well over 300€, which is a bit over my desired 200-250€ budget...
    In addition, as you mentioned, getting replacement parts for it (if needed) might be an issue... For those reasons I didn't consider it an option.


    The ASRock J5005's power consumption and price are both lower than what I was expecting for an ITX board. That made me change my mind a bit: I was initially leaning more towards the ARM SBCs, and now I'm leaning more to the J5005 side.


    I'll need a case and a PSU, but those shouldn't be hard to find...

    Some more info on the ASRock J5005-ITX:
    - the board doesn't include RAM, but 4GB should cost about 20-25€, so that's about 150€ total. Adding a case and a PSU should keep it under my intended max. budget of 200-250€
    - rated power: 10W, which is not that much more than a "Pi" solution


    Some benchmarks comparing the J5005 vs some ARM boards:
    https://rk.edu.pl/en/are-cheap…er-end-makers-arm-boards/


    Quite a big difference in processing power for a fairly small difference in power consumption.


    After seeing these numbers I'm now leaning more towards the J5005 solution for the NAS and buying a cheaper "Pi" later for playing around with the GPIO.

    Thanks for the feedback.


    I wasn't aware of how many Raspberry Pi-like SBCs there are on the market nowadays, but a quick search revealed a great number of brands: Raspberry Pi, Pine64, Orange Pi, Rock Pi, Firefly, NanoPi, etc...
    Many of them seem to be based on the RK3399 chip, such as the NanoPi M4. Definitely looks like an interesting option.


    On the downside - with the exception of the Raspberry Pi 4 - most brands might not be easy to find in Europe though...


    In that regard the ASRock J5005-ITX seems much easier to purchase around here - it's even available at local stores for as low as 125€.
    Cons: probably higher power consumption, and it doesn't have the GPIO flexibility of the ARM-based SBCs.
    Pros: it already has 4 SATA ports built in and probably more processing power.


    @Agricola what wattage is the PSU that's supplying your disks/NanoPi?

    Hello everyone,


    I've been reading a lot of posts on this and other forums, but I haven't yet reached a decision on what hardware to get for a home NAS to fit the 2 new 3.5" WD Red 4TB HDDs I bought recently, which will replace my very, very old D-Link DNS-323.
    So I'm seeking some advice here.


    Of all the posts I found, none really fully fits my needs, so I'll try to give as much info as possible (sorry for the long post).


    First some background about myself:
    - I'm an electronics engineer and a Debian user for 20+ years, so OMV seems like the best option software-wise.


    Main uses/needs for my home NAS:
    - make daily backups of my other Debian computers (probably using restic)
    - store some media files (photos, audio and video) to be accessed from my LAN (samba, NFS or similar)
    - connect locally over gigabit ethernet
    - run an openvpn client to connect to my personal openvpn server, so I can access it remotely if needed


    Optional uses for the future (nice to have, but not deal breakers):
    - run a LAMP server
    - connect one (or more) surveillance camera(s) and store their images (no immediate plans for that, so no idea yet on the type of connection for the cams)
    - connect some sensors/switches (GPIO?)


    Hardware requirements:
    - low power (under 15W idle would be great)
    - at least 2 SATA connections (3 or 4 better)
    - min. 4GB RAM
    - some USB ports are always handy
    - GPIO optional, but would be a plus
    - SD card slot for running the OS?
    - video out (HDMI?) would be a plus, making it reusable for other purposes in the future
    - hardware that I can buy in Europe (online is fine)


    Budget:
    - under 200-250 euros would be great, but I can be flexible



    Some thoughts on several issues:


    Low power:
    - It will be ON 24/7 and electricity costs me about 1.4€/year per watt, so this is one of the most relevant points (quick math below).
    - I have an old quad-core with a decent ATX motherboard with plenty of SATA connections that I could use, but its power consumption can be close to 100W when idle... that's 140€ at the end of the year... with a low-power SBC, the new NAS would pay for itself in 2 years
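
    (Quick math on that figure, assuming a tariff of roughly 0.16€/kWh: 1W running 24/7 is 24 x 365 = 8760 Wh = 8.76 kWh per year, and 8.76 kWh x 0.16€/kWh ≈ 1.4€ per year.)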


    Hard drives:
    - I already bought 2 WD Red 4TB drives (it was too much of a bargain to pass up), so I'll have to stick with those.
    - I've always used software RAID1 (on linux, mdadm+ext4) in the past with no issues, so I was planning to go with that here as well, though from what I've been reading it seems to be an outdated solution nowadays, so I'm open to alternatives.
    - I'm still deciding whether I should add a small SSD for caching and run the OS from it, or run the OS from an SD card or USB pen (it would be easy to keep 2 SDs/pens and swap in the spare if one fails). In the latter case, I wonder how much difference I'd notice without an SSD for caching.


    Motherboard:
    - Odroid single-disk devices don't really convince me, nor do they sound like the best option for the disks I got... and connecting disks through USB doesn't convince me at all.
    - The Raspberry Pi 4 and Rockpro64 have the advantage of being low power, both can run a Debian-like OS, and both have GPIO (a good plus, but just an optional one); on the other hand they require an expansion card to get SATA.
    - Some ASRock ITX boards (J5005?) seem like a popular option as well.
    - What else would you recommend? (with low power in mind and the HDDs mentioned)


    PSU:
    - efficiency and price run in opposite directions... 80 PLUS Platinum would be great, but the extra cost would take quite a few years of electricity savings to recover... On the other hand, a good PSU also means healthier HDDs... So I'm still trying to figure out where the sweet spot is. Any suggestions?


    Memory/CPU:
    - 4GB RAM and a decent CPU would be nice, so it doesn't run out of steam in a few years and can still be reused for other purposes in the future
    - is ECC really needed? I honestly don't think it is... I understand its advantages, but this won't be used in a life-critical scenario... so if one bit gets corrupted and one file is lost at some point, it's not the end of the world... Of course, if I can get ECC for just a small (up to 10%?) increase in price, then it might be worth looking into.



    I think that sums it all up. Looking forward to hearing (reading) your suggestions.