Posts by bgravato

    Hi,


    I have 1 SSD for OS and 2 HDDs for data in RAID1 (software, mdadm).


    Currently on OMV6, upgraded from OMV5.


    The disk idle spin-down feature in OMV5 never worked with my disks (2x WD 4TB Red NAS), nor does it work in OMV6 (I've tried). This seems to be common for this disk model, but I managed to get them to spin down using hd-idle. Manually issuing hdparm -y /dev/dev-name also worked.
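    For anyone trying hd-idle: a minimal configuration looks something like this (a sketch assuming the Debian hd-idle package, which reads /etc/default/hd-idle; the device names and the 10-minute timeout are illustrative, adjust them to your setup):

```shell
# /etc/default/hd-idle -- illustrative values
START_HD_IDLE=true
# -i 0 disables the default timeout for all disks,
# then -a <disk> -i <seconds> sets a per-disk idle timeout.
HD_IDLE_OPTS="-i 0 -a sda -i 600 -a sdb -i 600"
```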


    So I had everything working fine in OMV5, but since upgrading to OMV6 only one disk spins down when idle.

    The other disk always stays on.


    If I manually put it to standby using hdparm -y /dev/sda it will spin up a few seconds later.


    This happens when, according to btrace /dev/sda, mdadm accesses the disk.

    These mdadm reads from the disk happen every 60 seconds continuously and it only happens to one disk. This never happens to the other disk.


    Seems like mdadm is polling sda every minute for some reason, but not sdb.

    This didn't happen in OMV5.


    Killing the /sbin/mdadm --monitor --scan process doesn't make any difference, so it's not that...


    What else could it be? Any thoughts?

    This could be my problem:


    The question is if there's a way to fix it without a PCI card (I only have PCIe ones).

    Interesting!


    I've never come across anything similar, but this would explain why it worked in older versions... probably the device was named eth0 regardless of the PCI address, and now it's the utterly awkward enp?s?? naming scheme...


    I'm sure there are other ways of working around it... I think there are ways of forcing the old network device names, so you could try that. For example try the grub options mentioned here: https://unix.stackexchange.com…rk-interfaces-in-debian-9
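    For the record, the usual way of forcing the old names (a sketch; back up /etc/default/grub first, and note that biosdevname=0 is only relevant on some hardware) is adding net.ifnames=0 to the kernel command line:

```shell
# In /etc/default/grub (edit the existing line, don't duplicate it):
GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"
# Then regenerate the GRUB config and reboot:
#   update-grub
```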


    There might be other ways... because on my desktop Debian machine, my ethernet device is called eno1, and ip says it's also known as enp0s31f6


    Output of ip a:


    Code
    2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 1c:69:7a:6f:1d:f5 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6


    Not sure where this eno1 comes from though...


    I also remember that on some old computer running Debian, after a release upgrade it kept the eth0 name (that might have been an upgrade from Debian 9 to 10 though, not to Debian 11), while a clean install of the same Debian version (again, possibly Debian 10) assigned the same device an enp?s?? name.


    Yet another possibility: if you have a USB ethernet dongle, you can connect it and it should keep the same name after removing the GPU, so you could then connect to OMV through the USB ethernet and reconfigure the internal interface from there...


    One other could be removing the OMV OS disk from the computer, connecting it to another computer, and editing the configs to change the network device name (you'd need to know the name the device gets after removing the GPU, but that should be possible to find out by reading the logs on the disk).
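    One more option that I believe should work (a sketch; the MAC address below is just the one from my ip output above, substitute your own): a systemd .link file that pins the name to the MAC address, so it no longer depends on the PCI address at all:

```ini
# /etc/systemd/network/10-persistent-eth0.link -- illustrative
[Match]
MACAddress=1c:69:7a:6f:1d:f5

[Link]
Name=eth0
```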


    So I picked the workaround mentioned here, which suggests this:


    Quote

    Workaround

    For anyone else stumbling over this, like I did in search of a fix, I just want to offer my current workaround to the issue:

    1. Create a file called "__custom.css" in the folder "/var/www/openmediavault" with the following contents
      body .omv-login-page .background {-webkit-animation-iteration-count: 5 !important; animation-iteration-count:5 !important;}
    2. Add the following line to "/var/www/openmediavault/index.html" above the line with "</body>"
      <link rel="stylesheet" href="__custom.css">

    This limits the animation to 5 times. You can of course tweak the number to your liking.

    Notice, this is only a workaround and can break, e.g. if OMV is updated.


    And changed the contents of __custom.css to:


    body .omv-login-page .background {display: none !important;}


    This completely disables the background image. No more lag or high CPU usage when logging in.
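    For anyone scripting this, a sketch of the two steps run against a scratch copy of the web root (on a real install you'd point WEBROOT at /var/www/openmediavault instead, and remember an OMV update may undo it):

```shell
# Demo against a scratch directory standing in for /var/www/openmediavault.
WEBROOT="$(mktemp -d)"
printf '<html><body></body></html>\n' > "$WEBROOT/index.html"  # stand-in for OMV's index.html

# Step 1: override stylesheet that hides the animated background entirely.
cat > "$WEBROOT/__custom.css" <<'EOF'
body .omv-login-page .background {display: none !important;}
EOF

# Step 2: link the stylesheet just before the closing </body> tag (idempotent).
grep -q '__custom.css' "$WEBROOT/index.html" || \
  sed -i 's|</body>|<link rel="stylesheet" href="__custom.css"></body>|' "$WEBROOT/index.html"
```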

    Quote

    1. Create a file called "__custom.css" in the folder "/var/www/openmediavault" with the following contents
      body .omv-login-page .background {-webkit-animation-iteration-count: 5 !important; animation-iteration-count:5 !important;}
    2. Add the following line to "/var/www/openmediavault/index.html" above the line with "</body>"
      <link rel="stylesheet" href="__custom.css">

    Thanks for providing that workaround...

    5 iterations were still too many... so I've adapted it a bit...


    So to completely disable it... I've used this on my __custom.css:


    body .omv-login-page .background {display: none !important;}


    If you want to keep the image, but disable the "special effects", you can set __custom.css to something like this:


    CSS
    body .omv-login-page .background { 
     -webkit-animation-name:none !important;
     animation-name:none !important;
     -webkit-animation-duration:0s !important;
     animation-duration:0s !important;
     -webkit-animation-iteration-count:0 !important;
     animation-iteration-count:0 !important;
    }

    I did search the forum (and the web), but I couldn't find any info about it... I guess I didn't use the right keywords...


    Thank you for the quick reply.


    In general I like the look of the new webgui and I think it was a good improvement over OMV5, but this minor "glitch" is quite annoying. I hope it gets changed in future updates or at least that there's a way to disable it...


    In the meantime I'll try to manually edit it... Is it set in some CSS class or JS? Any tips on where I should be looking?

    Put the graphics card in and boot, check the logs from the previous boot (if any).


    If you have systemd's journal set to persistent (it should be by default), you can check the previous boot's logs with journalctl -b -1


    If there are no logs, then it's not even booting, which means the problem could be with the bootloader (GRUB) or a BIOS that fails to boot without a GPU (though it would be weird if it booted fine before and you didn't change anything in the BIOS settings...). Sometimes the BIOS also issues a warning and waits for a key press (usually F1) to resume...

    Ok, so I moved forward with this and tried using those env vars and it seemed to work fine.


    For anyone else who might bump into this, these were the commands I ran (following the instructions in the official docs, running them as root):

    Code
    # omv-env set OMV_APT_KERNEL_BACKPORTS_PINPRIORITY 200
    # monit restart omv-engined
    # omv-salt stage run prepare
    # omv-salt stage run deploy

    I used 200 for pin priority, but anything below 500 will work.


    The omv-salt stage commands can take a few minutes to run... so if it feels like they hung, just wait a bit longer...


    Edit: forgot to say... after running those commands, /etc/apt/preferences.d/openmediavault-kernel-backports.pref got updated with the 200 pin priority (instead of the original 500). So this seems to be the "proper way", but please correct me if I'm wrong.
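    For context, the pin in that file follows the standard apt_preferences format, something like this (the fields below are illustrative, quoted from memory rather than copied from my system, so check the actual file; any priority between 100 and 499 keeps the backports kernel installable without making it the automatic upgrade choice):

```
Package: *
Pin: release a=bullseye-backports
Pin-Priority: 200
```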

    So I just upgraded from OMV5 to OMV6.


    Since it's running on old hardware, I prefer to stay on kernel 5.10 (for stability) rather than installing 5.16 from backports...


    OMV sets priority preference for kernel from backports in /etc/apt/preferences.d/openmediavault-kernel-backports.pref


    So on a "normal" debian system I'd delete or edit that file, but I suspect OMV might just recreate it.


    I found through omv-env, that there are these 2 environment variables (neither is set on my system):

    • OMV_APT_KERNEL_BACKPORTS_PINPRIORITY
    • OMV_APT_USE_KERNEL_BACKPORTS

    So I'm wondering if the proper way of doing this would be to set one of those variables accordingly (my first pick would be changing the pin priority to a lower value).

    Am I thinking correctly on this?

    same problem here, have you found a solution? bgravato

    Yes, just run OMV bare metal instead :-)


    I first thought of running Proxmox bare metal and OMV as a VM, but because of this issue I decided to go with OMV bare metal.

    The main purpose of this machine is to serve as a NAS... I just run a couple of VMs occasionally for some specific tasks and I opted to run them using OMV as host. I didn't really need the Proxmox fancy interface for my needs, so it was a no brainer.

    Thank you all for the replies!

    Some may be thinking about their answer to this question. But I think nobody will tell you in writing to install OMV6 in a case like this... What would you say if someone asked you this?

    I would never put beta software in a Production environment for a business.

    In general I'd agree that for production one should always go with stable.


    That said, I recently installed Debian testing (bullseye) on a production server. That was shortly before it became stable: it had been in freeze for several months, and I went through the release-critical bugs holding back the stable release; none was relevant for my intended setup, so I felt confident installing a "testing" release. That saved me from upgrading it a month later and provided a newer PHP version, which I needed.


    I don't regret that decision.


    In that particular case I knew where to look for RC bugs and what to expect. When it comes to OMV, I'm not so sure... I had a look at the open issues with the OMV6 tag on OMV's GitHub and didn't see anything that would particularly raise a red flag for me, but I'm not that familiar with OMV, nor sure that's the right place to look for this kind of info, or whether the issues reported there tell the whole story... that's why I asked...


    Perhaps my question should have been... What RC bugs are holding OMV6 back from being released as stable?


    And my other doubt is... Is upgrading OMV to a new major version as smooth as upgrading (vanilla) Debian?

    I may soon (within a month or less) need to make a fresh install of OMV for a small company and I'm trying to decide if I should still go with OMV5 and upgrade later to OMV6, or just jump straight to OMV6.


    Hardware setup is just a standard x86 machine (fairly old, so any kernel 5.x will work) with 1 SSD for OS and 2 HDDs for data (software RAID1 or equivalent).


    Software wise, I'd like to add a few extra things such as:

    • wireguard for VPN server
    • possibly openLDAP or similar for managing user accounts and authentication
    • host some php/mysql websites
    • NUT plugin for UPS
    • run some VMs (QEMU/KVM)

    I'm an experienced Debian user, so setting up those things isn't a problem (I actually already have all of those working fine on my home NAS (OMV5), except OpenLDAP, which I've never tried).


    I'm just wondering whether it's better to just jump straight to OMV6 (what release critical bugs are preventing the official stable release?), or play safe with OMV5 and go through the upgrade process later...

    I've upgraded many Debian boxes before in my life, but my home NAS is my only experience with OMV and it's still on its initial OMV5 install, so I'm not sure whether upgrading to OMV6 would be as smooth as upgrading a normal Debian machine, or whether it can cause some stress...

    If OMV6 is close to release and doesn't have any "critical" issues, I'd probably prefer to just start fresh from OMV6 and save the trouble (and time) of upgrading in a few months...


    Any thoughts?


    TIA

    If power consumption is a concern, then an SBC with an integrated CPU and SODIMM (low-voltage) RAM will probably do better than a more standard motherboard with a socketed CPU.


    In that department the ASRock mini-ITX boards, such as the ones you mentioned sound like a good option.


    At the beginning of last year, when I was searching for hardware for my home OMV NAS, the best option I found with 4 SATA ports was the ASRock J5005-ITX (or alternatively the J4105-ITX, if I remember correctly). I didn't find many similar alternatives at the time... AMD's embedded Ryzen R1000 and V1000 CPU series were out, but there weren't any motherboards available with them yet.

    There are newer models now for those ASRock mini-ITX boards, with newer CPUs in that line.

    One good thing about these J3xxx, J4xxx and J5xxx CPUs is that they're generally low power (TDP = 10W), which means they're fairly easy to cool passively.


    At the time I was going to buy that ASRock J5005-ITX for my build, but I couldn't find any supplier that had it in stock.


    I only really needed 3 SATA ports (OS SSD drive + 2 HDDs) and I had some old hardware still in good shape, so I ended up reusing an old laptop (mSATA for OS drive + 2 SATA ports for HDDs) for my NAS. It was supposed to be a temporary solution, but it's been running so well and consuming so little energy, that I'm still using it.


    If it fails, I'll probably be looking at those ASRock mini-ITX boards again... Or search for some AMD embedded ryzen R1000 or V1000 based ones, if they're not too expensive.

    The kernel only gets updated when you reboot the machine, so after installing a new kernel version it's highly recommended that you reboot as soon as possible.


    Some services or libs won't pick up updates until you restart them (or, in some cases, reboot).


    I recommend installing the needrestart package (it's in Debian's repos). It will check and tell you what needs to be restarted (and restart the services for you if you want). It runs automatically after apt upgrade.


    The debian-goodies package has a checkrestart utility that works in a similar way, but you need to run it and restart the services manually.

    "typically" is the keyword and I've seen the atypical with OpenCV.


    If the task is heavy in linear computations, ARM might use more power simply by taking longer than an x86/x64. I'm not "in the know" on the technical architectural design, but it might be worth measuring a Pi if you encode video at full/unrestricted settings, use ZFS, AI recognition, etc. A company will design a very specific RISC-based ARM for a very specific task to overcome this, but I'm not sure if the Pi itself is great at any one thing (which I guess is where the RPi "Compute" comes in).

    Yes, also consider that most ARM chips found in Pi-like devices lack some features, such as hardware support for AES encryption, which can make some tasks take much longer on such devices.


    But considering that most of the time (as in the scenario proposed by the OP) the device will be idle, I believe ARM-based devices will "typically" consume less power in that state. My main point though was that the gain isn't significant enough to justify it in most scenarios. Running off batteries could be a deciding factor, but that's not the case here. And as you pointed out, in some applications where the CPU needs to be "active" a good amount of the time, they might not be the most power-efficient option.



    It seems that we are opening Pandora's box. Now a Pi defender will intervene in the thread... and we already have one. ^^

    Not trying to open any Pandora's boxes... :) I defend that Pi's are great devices for many situations, but they're not the holy grail for ALL situations! Maybe a Pi named 42 could be the answer to all questions though ;)

    RAM and CPU power needed for Nextcloud will depend on what apps you want to install on it. For example, if you want to use an office web-based suite on it you'll definitely need more resources (RAM and CPU).


    For streaming videos/music on your home network, I recommend having a look at Jellyfin. It's web-based and also has a native client for Android. It's very easy to install alongside OMV (just add their deb repository and install the Jellyfin server with apt, then configure everything via its web interface).


    For a NAS I wouldn't recommend using disks connected via USB. USB isn't a very reliable connection, especially for big disks that need to be permanently on. SATA is highly preferred over USB.


    As for hardware, ARM-based ones (such as Pi's) typically use less power, but in general they also have limited CPU processing capabilities. If you want to use only SSDs and not 3.5" HDDs, then you could do fine with an old laptop or mini-PC (such as an Intel NUC, Gigabyte Brix, etc...). These will consume a bit more power than a Pi, but not that much more, especially when idle. If you push them to heavy load they will consume considerably more power, but that's also because they're providing a lot more processing power than a Pi.


    As a reference, my home NAS (OMV-based) is running on an old ThinkPad X230 (which no longer has a screen); when idle, with the HDDs spun down, it consumes less than 10W at the wall. That's not much more than a Pi. The same is true for my (newer) NUC (8th-gen i5, which I use as my home desktop computer), which consumes 5-9W when idle. Even running 24/7, that's about 15 euros a year on the electricity bill... with a Pi I'd save what, 5-7 euros/year? Under heavy load (CPU and GPU) the NUC can peak at 40-70W, but only for short periods, and it's nice to have that processing power available when needed.


    I'm not saying Pi's are a bad option. Pi's are great, but I just wanted to let you know that for your use case there are other options that are neither more expensive nor going to have a significant impact on the electricity bill.


    Pi's are great for many appliances, in particular when you need GPIO ports, but to use solely as a NAS they wouldn't be my first pick, despite how popular they are. Price isn't a deal breaker either... you can buy a used mini-PC or laptop (for example, one with a broken screen) for less than a (new) Pi 4.


    A mini-PC or laptop will (in most cases) limit you to 2 disks max though (one will probably be M.2 or mSATA and the other a normal SATA port for 2.5" disks, HDD or SSD; no 3.5" HDDs, since those need 12V and wouldn't fit in the box anyway). Some laptops might have one M.2/mSATA port + two SATA.


    For 4 SATA ports, a mini-ITX SBC (single-board computer) might be the answer (e.g. the ASRock J4125-ITX, but there are many others). These are usually very low power too. You'll need to buy a case, power supply, etc. and assemble it yourself. There are also some pre-built barebones, but then the prices start to go up.


    As for RAID, it's mostly about data availability, not data safety. With RAID 1, if one disk fails, you can still access your data (from the other disk), without any downtime. Without RAID, data will be inaccessible until you replace the disk and finish restoring data from your backup. Some types of RAID can provide faster performance, but that's most relevant for HDDs, not so much for SSDs.

    Since I last posted I made several changes to this setup...


    I'm using a generic PSU now, with a couple of buck converters stepping the PSU's 20V down to 12V to power the HDDs. I'm getting the 5V for the HDDs from one of the USB ports.

    This way I have the same electrical ground for everything and I only need one PSU.


    I don't fully trust these buck converters (I got 5 of them for 8 EUR on Amazon), so I used a separate one for each disk, but so far they have been holding up nicely (for several weeks now). From what I've read, they only tend to fail when drawing more than 1A (because they get too hot). Mine stay under 0.5A even when the disks are spinning, so I'm hoping they'll be fine... and if one fails I have 3 spares to replace it.


    Meanwhile I have also removed everything from that case and now it's hanging under my desk! So there's more free space available, no need for extra cooling fans, and it collects very little dust, thanks to gravity!



    Power consumption improved as well... These are the latest numbers I measured from the wall:

    • System shutdown: 1.5-2 W
    • System booting with HDDs spinning up: Highest peak observed 43 W
    • HDDs spun down & CPU idle: 9.5-10 W
    • HDDs spun down & CPU 100% usage: 28.5-29 W
    • HDDs spinning but in idle state & CPU idle: 15-16 W
    • HDDs busy & CPU low usage: 18-23 W
    • HDDs busy & CPU 100% usage: 38-40 W

    Yes, there was a lot of hacking involved and it surely doesn't look like a very robust solution, but it's been working fine for over a year now (well, some parts only for a few weeks). I'm not storing mission-critical data on it, so for the intended purpose it's perfectly fine and I'm happy with it, especially the power consumption and the fact that I spent only about 30 EUR on the parts I didn't already have...

    After a long time banging my head against the wall (metaphorically) trying to solve this, I finally found the culprit with the help of the fatrace utility (it's in the Debian repos), along with btrace.


    The culprit was udisks2. After stopping udisksd, the disks spun down without hiccups after the time specified in hd-idle.


    Kudos to the reddit user that suggested using fatrace for debugging.
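    For anyone debugging something similar: fatrace prints one line per file access ("name(pid): TYPES /path"), so summarizing by process name makes a repeat offender stand out. A sketch, using made-up sample lines rather than a real capture from my system:

```shell
# Fabricated fatrace-style output, for illustration only.
cat > /tmp/fatrace-sample.log <<'EOF'
udisksd(1234): R /dev/sda
udisksd(1234): R /dev/sda
smartd(567): R /dev/sdb
EOF

# Count accesses per process name; the top entry is the likely culprit.
awk -F'(' '{print $1}' /tmp/fatrace-sample.log | sort | uniq -c | sort -rn
```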