Guide: How-to setup 'hd-idle' (a HDD spin down SW) together with the OMV plugin 'Autoshutdown' in OMV 5.x.


      Hello, after Guide: How-to setup 'hd-idle' (a HDD spin down SW) together with the OMV plugin 'Autoshutdown', I have now done the same for OMV 5.x. The following guide is not yet 100 % "clean", i.e. 'hd-idle' still shows some seemingly random behaviour, but I hope this will be solved soon.

      P.S. If you can help to improve this guide, please do (thank you macom). If you like this guide, please say so. :)

      *** Install 'OMV-Extras' (via web GUI) ***
      'OMV -> System -> Plugins -> Section: Utilities -> openmediavault-omvextras.org 5.x.x'

      ** Enable 'OMV-Extras' backports (via web GUI) **
      'OMV -> System -> OMV-Extras -> Settings -> Backports -> Enable Backports'

      *** Install 'hd-idle' in OMV (via command prompt) ***
      apt install hd-idle

      ** 'hd-idle' setup **

      * a) disable all HDD spin-down and power management settings in OMV (via web GUI) *
      'OMV -> Storage -> Physical Disks'

      * b) open the 'hd-idle' configuration file via the 'nano' editor (via command prompt) *
      nano /etc/default/hd-idle
      * b.1) insert for each HDD (addressed via its UUID, see 1)) a spin-down entry '-a /dev/<...> -i <timeInSeconds>', e.g. to spin down 3 HDDs after approx. 600 seconds: *
      HD_IDLE_OPTS="-i 0 -a /dev/disk/by-uuid/bacb10a1-6dc5-48b9-a6f4-ed836b7dfa4a -i 600 -a /dev/disk/by-uuid/481b459f-aad7-468e-b0fe-9401c94eb4fc -i 610 -a /dev/disk/by-uuid/3b5e7398-d517-4477-a3b9-aa46c4347184 -i 620"
      * hint: The parameter '-i 0' at the beginning prevents spin-down of HDDs that are not explicitly specified. *
      * b.2) to allow hd-idle to start, insert a second line: *
      START_HD_IDLE=true
      * b.3) save the file (Ctrl + O) and close 'nano' (Ctrl + X) *
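      Before relying on the idle timeout, the configuration can be sanity-checked by hand. A quick test (sketch; 'sda' is a placeholder for one of your data drives, and '-t' is hd-idle's documented "spin down immediately and exit" test option):

      ```shell
      # Spin down one disk immediately using hd-idle's test option:
      hd-idle -t sda
      # Then query the drive's power state; if the spin-down worked,
      # hdparm should report the drive as being in standby:
      hdparm -C /dev/sda
      ```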

      * c) auto-start via OMV the 'hd-idle' service / daemon (via web GUI) *
      'OMV -> System -> Scheduled Jobs'
      * hint: use options 'At reboot', user 'root' and command 'service hd-idle start' *
      This step is not required, see the discussion below.

      * d) Do this also --if-- the OMV plugin 'Autoshutdown' is in use, otherwise 'hd-idle' will stop working after the first wakeup. *
      link to sourceforge.net
      * hint: The command systemctl enable hd-idle-restart-resume.service has to be executed only once at the command line. A scheduled job is not required. *
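      For reference, such a resume unit follows the usual systemd "restart a service after resume" pattern. A minimal sketch of what a unit like this typically looks like (the exact file shipped by the sourceforge patch may differ; treat this as an illustration, not the patch itself):

      ```ini
      # illustrative sketch of a restart-after-resume unit
      [Unit]
      Description=Restart hd-idle after resume
      After=suspend.target hibernate.target hybrid-sleep.target

      [Service]
      Type=oneshot
      ExecStart=/bin/systemctl restart hd-idle.service

      [Install]
      WantedBy=suspend.target hibernate.target hybrid-sleep.target
      ```

      Enabling the unit with systemctl enable hooks it into the wakeup targets, so hd-idle is restarted each time the machine resumes.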

      * e) reboot OMV; now it should work *

      * f) Check that 'hd-idle' is running (via web GUI) *
      'OMV -> Diagnostics -> System Information -> Processes'
      * in the column 'TIME+ COMMAND' there should be an entry like: *
      0:00.xx hd-idle


      +++ 1) get the HDD UUIDs (via web GUI) +++
      'OMV -> Diagnostics -> System Information -> Report -> Block device attributes'
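      The same information is also available on the command line; the by-uuid names are just symlinks to the kernel device nodes (standard Linux tools, not OMV-specific):

      ```shell
      # List the UUID symlinks and the sdX devices they currently point to:
      ls -l /dev/disk/by-uuid/
      # Or print the filesystem UUID per device (run as root):
      blkid
      ```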

      The post was edited 2 times, last by _Michael_: Step c) is superfluous / wrong -> cancelled.

    • Great!!!

      Just a few comments:
      b.1)
      • if you want all spinning drives to spin down at the same time (and you do not have to care about flash drives) you can simply use HD_IDLE_OPTS="-i 3600" to spin down all drives after one hour (3600 seconds)
      • instead of the UUID you can also use
        • the label (e.g. /dev/disk/by-label/data)
        • simply sdb or sdc (unless you want to change the SATA port)
      Odroid HC2 - armbian - OMV5.x | Asrock Q1900DC-ITX - Intenso SSD 120GB - OMV5.x
      :!: Backup - Solutions to common problems - OMV setup videos - OMV5 Documentation - user guide :!:
    • Glad to hear it, macom. FYI: I'm using UUIDs because they are drive (hardware) specific, i.e. not dependent on the SATA port used.

      To the "random behaviour" of 'hd-idle':
      The purpose of step c) is to start the 'hd-idle' service. In OMV 2.x this works well in OMV 5.x not:

      Source Code

      root@NAS-PC:~# systemctl status hd-idle.service
      ● hd-idle.service - hd-idle - spin down idle hard disks
      Loaded: loaded (/lib/systemd/system/hd-idle.service; enabled; vendor preset: enabled)
      Active: inactive (dead) since Mon 2019-12-30 16:05:30 CET; 8s ago
      Docs: man:hd-idle(1)
      Process: 1276 ExecStart=/usr/sbin/hd-idle $HD_IDLE_OPTS (code=exited, status=0/SUCCESS)
      Main PID: 1277 (code=exited, status=0/SUCCESS)
      Dec 30 16:05:30 NAS-PC.local systemd[1]: Starting hd-idle - spin down idle hard disks...
      Dec 30 16:05:30 NAS-PC.local systemd[1]: Started hd-idle - spin down idle hard disks.
      Dec 30 16:05:30 NAS-PC.local systemd[1]: hd-idle.service: Succeeded.
      If I now try to start it manually with sudo service hd-idle start it also has no effect, i.e. it is still Active: inactive (dead) ?( . What can I do? Thanks!
      Addendum: With sudo service hd-idle force-reload I was able to start the service: Active: active (running). Should I use this command in step c)?

      The post was edited 3 times, last by _Michael_: Addendum inserted.

    • Sorry, what do you mean? My file:

      Shell-Script: /etc/default/hd-idle

      # defaults file for hd-idle
      ## Debian specific defaults for hd-idle
      # !!! text removed !!!
      # 1.1) LOG file output not supported, --DEFAULT--
      HD_IDLE_OPTS="-i 0 -a /dev/disk/by-uuid/bacb10a1-6dc5-48b9-a6f4-ed836b7dfa4a -i 200 -a /dev/disk/by-uuid/481b459f-aad7-468e-b0fe-9401c94eb4fc -i 215 -a /dev/disk/by-uuid/3b5e7398-d517-4477-a3b9-aa46c4347184 -i 230 -a /dev/disk/by-uuid/44109cc3-bc24-4e50-b108-56262a6d68b3 -i 245"
      # 1.2) LOG file output, --DEBUG-- option
      #HD_IDLE_OPTS="-i 0 -a /dev/disk/by-uuid/bacb10a1-6dc5-48b9-a6f4-ed836b7dfa4a -i 100 -l /var/log/hd-idle.log -a /dev/disk/by-uuid/481b459f-aad7-468e-b0fe-9401c94eb4fc -i 115 -l /var/log/hd-idle.log -a /dev/disk/by-uuid/3b5e7398-d517-4477-a3b9-aa46c4347184 -i 130 -l /var/log/hd-idle.log -a /dev/disk/by-uuid/44109cc3-bc24-4e50-b108-56262a6d68b3 -i 145 -l /var/log/hd-idle.log"
      START_HD_IDLE=true
      # hd-idle command line options
      # !!! text removed !!!
      #HD_IDLE_OPTS="-i 180 -l /var/log/hd-idle.log"

      The post was edited 2 times, last by _Michael_: typo and correction of "- h".

    • Interesting. Mine is different.
      Maybe because I installed 1.04 from a previously compiled .deb first, and then the update to 1.05 came via apt-get. That is how I noticed that hd-idle is now part of the Debian package.

      In your file I do not understand the usage of -h. Maybe try without -h
    • This is my
      /lib/systemd/system/hd-idle.service

      that is, the file that makes systemd start the service.

      Source Code

      [Unit]
      Description=hd-idle - spin down idle hard disks
      Documentation=man:hd-idle(1)

      [Service]
      Type=forking
      EnvironmentFile=/etc/default/hd-idle
      ExecStart=/usr/sbin/hd-idle $HD_IDLE_OPTS

      [Install]
      WantedBy=multi-user.target
    • I am not sure.

      service originally comes from SysVinit. It still works with systemd, but I do not know if it works the same way.
      With systemd it is systemctl. So to start the service without a reboot it should be systemctl start hd-idle.
    • I searched the internet for an overview of systemd and found this. Now:
      The command systemctl enable hd-idle would enable it to start automatically at bootup (as you already told me for hd-idle-restart-resume.service). So how can I check whether a service, in this case 'hd-idle', is enabled?:
      systemctl is-enabled hd-idle
      I tried it and the result was:
      enabled
      So there is no reason at all for step c)! I will remove the step c) command and check afterwards whether it is still working, i.e. Active: active (running).

      P.S. Who or what ran systemctl enable hd-idle? Is it related to step b.2)?
    • Happy new year!

      The current step d) 'sourceforge.net workaround' does not work (reliably)! The service restarts only randomly. If I try it on the CLI, I can also see that a single command execution often does not work (why?).

      To move forward I disabled hd-idle-restart-resume.service on my system via systemctl disable hd-idle-restart-resume.service and tried my OMV 2.x workaround again:
      The file /etc/pm/sleep.d/autoshutdown-restart is still present, but it seems it is no longer in use?! Can anyone tell me how I can find out if I'm right?
      I have this impression because this, too, i.e. lines 6. to 15.:

      Shell-Script

      1. # starting lines: removed!
      2. thaw|resume)
      3. logger -s -t "$USER autoshutdown [$$]" "thaw/resume: autoshutdown-script restart from /etc/pm/sleep.d/autoshutdown-restart"
      4. systemctl restart autoshutdown.service
      5. for number in {0..9..1} # try 10 times
      6. do
      7. # if active (expected) then stop the service
      8. systemctl is-active hd-idle.service && systemctl stop hd-idle.service
      9. done
      10. for number in {0..9..1} # try 10 times
      11. do
      12. # if inactive (expected) then (re-)start the service
      13. systemctl is-active hd-idle.service || systemctl start hd-idle.service
      14. done
      15. ;;
      16. *)
      17. logger -s -t "$USER autoshutdown [$$]" "other: autoshutdown-script call from /etc/pm/sleep.d/autoshutdown-restart"
      18. ;;
      19. esac
      do not work. But they should, because in my CLI tests I never needed more than 3 tries.
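      For what it's worth, the repeated failures look like a timing issue. A hypothetical variant of the loops above that pauses between attempts (the helper name 'retry' and the 1-second pause are my own choices, not part of the patch or the thread):

      ```shell
      # retry N CMD...: run CMD up to N times, pausing 1 second between
      # failed attempts; returns 0 on the first success, 1 if all fail.
      retry() {
          n=$1; shift
          i=0
          while [ "$i" -lt "$n" ]; do
              "$@" && return 0
              i=$((i + 1))
              sleep 1
          done
          return 1
      }

      # usage (service name as in the thread):
      # retry 10 systemctl stop hd-idle.service
      # retry 10 systemctl start hd-idle.service
      ```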

      The post was edited 1 time, last by _Michael_: text improved and typo.

    • Why do the service deactivation and activation not work as they should?:

      Source Code

      1. root@NAS-PC:~# systemctl is-active hd-idle.service
      2. active
      3. root@NAS-PC:~# systemctl is-active hd-idle.service && systemctl stop hd-idle.service
      4. active
      5. root@NAS-PC:~# systemctl is-active hd-idle.service
      6. inactive
      7. root@NAS-PC:~# systemctl is-active hd-idle.service || systemctl start hd-idle.service
      8. inactive
      9. root@NAS-PC:~# systemctl is-active hd-idle.service
      10. inactive
      11. root@NAS-PC:~# systemctl is-active hd-idle.service || systemctl start hd-idle.service
      12. inactive
      13. root@NAS-PC:~# systemctl is-active hd-idle.service
      14. active
      Rows 1. to 6. belong together, as do rows 7. to 14., in which row 10. should show active. To overcome the non-working "row 7. command" I triggered it again in row 11., and then it was OK.
      In case my previous post was unclear about why I introduced the for loops, now it should be clear. So currently there are two open questions:
      1. Why do the service deactivation and activation not work as they should? But, to solve the base problem, more important:
      2. Is the file autoshutdown-restart still in use? (see the previous post)
        • If yes: What did I do wrong?
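      One way to narrow down question 1 would be to ask systemd itself what happened around a failed start, using standard systemctl/journalctl queries (the unit name matches the thread; this only reads state):

      ```shell
      # Detailed state of the unit, beyond the plain active/inactive summary:
      systemctl show hd-idle.service -p ActiveState -p SubState -p Result
      # Recent log lines for the unit, to see whether a start was attempted at all:
      journalctl -u hd-idle.service -n 20 --no-pager
      ```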
    • My info, as I was struggling with hd-idle on OMV5.x too:

      First of all: I did not use the backports way of installing; I used the "old" wget method from the OP's old post (read about the other option later on).

      I'm currently not using autoshutdown (will maybe try it in the next few days). The issue of HDDs not going to stand-by after suspend doesn't seem to be related to autoshutdown (I believe the OP mentioned this already). I use hd-idle with systemctl enable hd-idle-restart-resume.service with success, consistently now, with manual suspend and WOL.

      @macom Above you mentioned it's also possible to use /dev/disk/by-label or /dev/sdX. I would not recommend that. Definitely not the /dev/sdX way (not sure about labels): I've noticed that on a boot or reboot the /dev/sdX names are all in line with the HDDs connected to the SATA ports: sda (SSD) is on port 0, sdb is on port 1, sdc is on port 2 and so on... When using suspend this is often not the case!!!

      I DO use step c). I think you're probably right that it is not necessary, as systemctl enable hd-idle-restart-resume.service also works without a scheduled job (BUT the GUI sometimes overrides things in config files; I think that doesn't apply in this case, but maybe double-check this?).

      Also, when I first edited /etc/default/hd-idle I saw that START_HD_IDLE=false was not commented out in the first few lines, and I removed that line.

      Oh, and make sure that for all HDDs involved APM is set to 255, not just "Disabled" (check this in /etc/hdparm.conf).
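      Both points can be checked read-only from the shell (this only inspects the current settings; 'sda' is a placeholder for one of your drives):

      ```shell
      # Show the drive's current APM level (255 means APM disabled):
      hdparm -B /dev/sda
      # Show which drives OMV wrote into hdparm.conf and with which apm value:
      grep -B1 'apm' /etc/hdparm.conf
      ```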


      My next step is try auto-shutdown.
      ODROID-HC2 running OMV4
      ASUS F2A85-M LE running OMV5
    • sm0ke wrote:

      The issue of HDDs not going to stand-by after suspend doesn't seem to be related to autoshutdown (I believe the OP mentioned this already).
      In the Addendum of this post I mentioned the system relations, i.e. 'Autoshutdown' is the trigger of the problem (of course any other "go to standby" command / plugin would lead to the same problem) but not its source. The source is the badly programmed hd-idle.

      sm0ke wrote:

      Oh, and make sure that for all HDDs involved APM is set to 255, not just "Disabled" (check this in /etc/hdparm.conf).
      I don't know what you mean:
      1) My hdparm.conf file has exactly one /dev/disk/by-id/... entry. It is for my boot drive and includes an apm = 1 line. I would say this is OK because my other drives are not visible there.
      2) A user modification of this file is not intended: # WARNING: Do not edit this file, your changes will get lost.
    • @_Michael_

      A: We agree on that. Simply put, the solution (patch) is on the hd-idle sourceforge page. That's unrelated to autoshutdown.

      B: Indeed, hdparm.conf is not meant to be changed by hand. But you can read the file to verify. Changes made to APM in the GUI affect this file.

      I had trouble setting up hd-idle: the guide says make sure APM is turned off. The first option in APM is "Disabled", the last one is "255 - Disabled".
      Not all 6 data drives (not all the same drives; Samsung and WD) were spinning down, so I read the hdparm man pages about the 255 option. Once I set "255 - Disabled" for each drive in APM (GUI), they all appeared in hdparm.conf like this:

      /dev/disk/by-id/ata-WDC_WD20EARX-00PASB0_WD-WCAZAH439671 {
      apm = 255
      write_cache = on
      }

      After, all drives were spinning down.

      I haven't read all of your posts, but I thought the more info the better and tried to help by sharing my experience.

      If the write cache is off (it should be for SSDs) and APM is set to "Disabled" (the default, first option), the drive does not appear in hdparm.conf (the same holds for the other unused options: automatic acoustic management and spin-down time).

      I don't know the cause of your hd-idle-restart-resume.service not working stably. But it does in my case (also using autoshutdown now). Maybe APM set to 255 is a solution; maybe don't use backports but install like you did here; or maybe DO use step c). Things to try, I would say...
    • Thank you, now I understand the APM topic. Maybe next week I'll have some time to try the 255 setting and your other proposals.
      Everything is running fine on your machine, interesting. It's a pity that you did not use the steps from my first post, because otherwise I would now know that the OMV 5.x steps can work 100 % ... (unfortunately not on my PC).