HDD Spindown Issues

    • OMV 4.x
    • Resolved
    • HDD Spindown Issues

      Hello. I have been having issues getting my HDDs to spin down properly. I am setting up RAID6 with five 8TB WD Red drives. I have created and initialized the RAID6 array but have not created a file system yet. I believe that before I created the array, the spindown function worked without issues during the few tests I ran. After setting up the array, three of the drives stay active while the other two go into standby. Which drives stay active and which go into standby varies. This was verified with hdparm -C. The three that stay active and do not spin down per my selected setting (10 minutes) will spin down if I issue hdparm -y. Another test I ran was clicking on the Raid Management menu in OMV after running hdparm -y: this brings the three drives that normally will not spin down out of standby, while the ones that normally do go into standby stay in standby. Clicking on Devices under the SMART menu brings all drives back from standby.

      As a reference, my disk properties are APM 127 (it also happens with an APM of 1), AAM Min/Min, spindown time of 10 minutes, and write cache enabled. My OMV system drive is a 32GB USB flash drive.
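
      For anyone following along, these are roughly the hdparm commands involved (/dev/sdX is a placeholder for each drive; the 10-minute spindown setting in the GUI should roughly correspond to -S 120, i.e. 120 x 5 seconds):

      Source Code

      # Query the current power state of a drive (active/idle vs. standby)
      hdparm -C /dev/sdX

      # Force a drive into standby immediately
      hdparm -y /dev/sdX

      # Set the drive's own spindown timer to 10 minutes (120 * 5 seconds)
      hdparm -S 120 /dev/sdX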

      I have read in the forums that some people have found hd-idle to be a better alternative for consistently getting drives to spin down. I went ahead and installed it per this tutorial (except step d), and it clearly works at getting all drives to spin down at the selected time. However, once the drives come back out of standby, they will not spin down again. I believe that either step d of the tutorial or the tutorial's script in the second post solves this, but I am not sure how to proceed as I am not using the Autoshutdown function. If I need to add either of the two scripts presented, I would appreciate some pointers as to where and how.

      Thank you in advance for your assistance.

      Edit: The disk properties referenced above were used with hdparm. When I installed hd-idle, all disk properties were disabled except for write-cache.


    • Hello macom. Thank you so much for the response. The following is the output you requested:


      Source Code

      ● hd-idle.service - LSB: start hd-idle daemon (spin down idle hard disks)
         Loaded: loaded (/etc/init.d/hd-idle; generated; vendor preset: enabled)
         Active: active (running) since Mon 2018-09-17 14:14:37 EDT; 4min 52s ago
           Docs: man:systemd-sysv-generator(8)
        Process: 735 ExecStart=/etc/init.d/hd-idle start (code=exited, status=0/SUCCESS)
          Tasks: 1 (limit: 4915)
         Memory: 1.1M
            CPU: 10ms
         CGroup: /system.slice/hd-idle.service
                 └─774 /usr/sbin/hd-idle -i 0 -a /dev/disk/by-uuid/2dd9df3e-b92c-54ed-a70e-4ab96aa2ca1a -i 600 -a /dev/disk/by-uuid/9e6f8770-e1a2-070b-54f3-cf98f74f5118 -i 600 -a /dev/disk/by-uuid/4fb9aaa0-202a-c183-95b9-7c5632de218a -i 600 -a /dev/disk/by-uuid/904c1a8d-3231-2805-43dc-acf5865f736f -i 600 -a /dev/disk/by-uuid/f9fa1469-8c80-1dba-0ba6-73e283a2560b -i 600

      Sep 17 14:14:36 CEREBRO systemd[1]: Starting LSB: start hd-idle daemon (spin down idle hard disks)...
      Sep 17 14:14:36 CEREBRO hd-idle[735]: Starting the hd-idle daemon: hd-idle.
      Sep 17 14:14:37 CEREBRO systemd[1]: Started LSB: start hd-idle daemon (spin down idle hard disks).
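
      In case it is useful, the UUIDs in that command line can be mapped back to their /dev/sdX device names with something like this (a generic sketch):

      Source Code

      # Show which block device each by-uuid symlink resolves to
      ls -l /dev/disk/by-uuid/

      # Or resolve a single UUID to its device node
      readlink -f /dev/disk/by-uuid/2dd9df3e-b92c-54ed-a70e-4ab96aa2ca1a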
    • Please post

      /etc/default/hd-idle

      and the output of blkid, just to double check the UUIDs.

      Maybe you can also try to put only

      HD_IDLE_OPTS="-i 600"

      to spin down all drives after 10 minutes. The OS drive will not spin down anyway.
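
      After editing /etc/default/hd-idle, the daemon needs a restart to pick up the new options; roughly like this:

      Source Code

      # Edit the defaults file, then restart the generated hd-idle service
      nano /etc/default/hd-idle
      systemctl restart hd-idle
      systemctl status hd-idle    # the new options should show up in the CGroup line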
    • Hello. I decided to do a clean install of OMV and destroy the RAID6 array, so as to eliminate potential culprits. Right now I have not installed hd-idle. Everything referenced in this post is based on the default spindown ability of OMV.

      Here are the results for blkid and hdparm -C:

      Source Code

      root@CEREBRO:~# blkid
      /dev/sdf1: UUID="9162c628-bff8-4b24-bb53-d03cfd3d6abf" TYPE="ext4" PARTUUID="f8cda02a-01"
      /dev/sdf5: UUID="eb6729bf-f667-4267-9069-c248bf02ea99" TYPE="swap" PARTUUID="f8cda02a-05"
      /dev/sda: PTUUID="93eaf458-ed18-46f5-838c-5482c4cec4cb" PTTYPE="gpt"
      /dev/sdc: PTUUID="7255013f-9e40-43b8-9bfe-3c577ec391c9" PTTYPE="gpt"
      /dev/sdd: PTUUID="523e8464-bf48-4bd8-af59-7674b9fc81e1" PTTYPE="gpt"
      /dev/sdb: PTUUID="6c3782f6-6606-450a-b363-3dded23a661d" PTTYPE="gpt"
      /dev/sde: PTUUID="39b35b9b-2d04-45b2-82ae-441575a74afc" PTTYPE="gpt"
      root@CEREBRO:~# hdparm -C /dev/sd[abcde]
      /dev/sda:
       drive state is: standby
      /dev/sdb:
       drive state is: active/idle
      /dev/sdc:
       drive state is: standby
      /dev/sdd:
       drive state is: standby
      /dev/sde:
       drive state is: standby
      As you can see, sdb remains active/idle after the 10 minutes go by. The rest of the disks spin down after that allotted time. This is repeatable upon reboot, and also when the drives are spun up by checking SMART parameters (or by running blkid). All the drives are connected through SATA ports (four to the motherboard's SATA ports and one to an Asmedia SATA controller). The drive that is not spinning down is on a motherboard port. The current settings for each drive are: APM 1, AAM Minimum, spindown time of 10 minutes, and write cache enabled. Please note that the UUIDs are different from the previous post because I did a fresh install.
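
      One rough way to check whether something is still touching sdb while the other drives sleep is to compare two snapshots of the kernel's I/O counters taken a few minutes apart; if the counters change, the drive saw activity in between:

      Source Code

      # Snapshot the I/O counters for sdb, wait five minutes, snapshot again;
      # any difference means the drive received reads or writes in between.
      grep -w sdb /proc/diskstats > /tmp/sdb_before
      sleep 300
      grep -w sdb /proc/diskstats > /tmp/sdb_after
      diff /tmp/sdb_before /tmp/sdb_after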

      At this point I can proceed to disable all drive power management settings and install hd-idle to see what kind of results I get. I will use @macom's suggestion to spin down all drives.
    • I installed hd-idle and set the following for all hard drives: APM - Disabled, AAM - Disabled, Spindown - Disabled, write cache - Disabled.

      The following was set for hd-idle:

      Source Code

      GNU nano 2.7.4                  File: /etc/default/hd-idle

      # defaults file for hd-idle

      # start hd-idle automatically?
      START_HD_IDLE=true

      # hd-idle command line options
      # Options are:
      #  -a <name>       Set device name of disks for subsequent idle-time
      #                  parameters (-i). This parameter is optional in the
      #                  sense that there's a default entry for all disks
      #                  which are not named otherwise by using this
      #                  parameter. This can also be a symlink
      #                  (e.g. /dev/disk/by-uuid/...)
      #  -i <idle_time>  Idle time in seconds.
      #  -l <logfile>    Name of logfile (written only after a disk has spun
      #                  up). Please note that this option might cause the
      #                  disk which holds the logfile to spin up just because
      #                  another disk had some activity. This option should
      #                  not be used on systems with more than one disk
      #                  except for tuning purposes. On single-disk systems,
      #                  this option should not cause any additional spinups.
      #
      # Options not exactly useful here:
      #  -t <disk>       Spin-down the specified disk immediately and exit.
      #  -d              Debug mode. This will prevent hd-idle from
      #                  becoming a daemon and print debugging info to
      #                  stdout/stderr
      #  -h              Print usage information.
      #HD_IDLE_OPTS="-i 180 -l /var/log/hd-idle.log"
      HD_IDLE_OPTS="-i 600"
      After reboot, the process was confirmed to be running. After 10 minutes, all drives spun down. I checked SMART status to get all drives spinning again. However, after another 10 minutes, all drives spun down except for sde (which is on the Asmedia SATA controller). I got the other four drives to spin back up again, waited another 10 minutes, and all five drives went into standby. For my last test, I rebooted the system, and after 10 minutes all drives went into standby. I will continue to test to see if I can recreate sde not spinning down. So far, hd-idle seems to work, but not 100%.
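
      If sde keeps refusing to spin down, hd-idle's optional logfile may help pin down when each drive last spun down; since my OMV system disk is a USB flash drive that never spins down anyway, the caveat in the config comments about the logfile causing spin-ups should not matter here. Something like this in /etc/default/hd-idle (a sketch based on the commented-out example above):

      Source Code

      # Spin all drives down after 10 minutes and log spin-down events;
      # the log lives on the USB system drive, which does not spin down.
      HD_IDLE_OPTS="-i 600 -l /var/log/hd-idle.log"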
    • After further testing, there have been two instances in which the drives would not spin down again after being spun up. For the first, I rebooted the system and the drives went into standby after the 10 minutes. For the second, I went into different areas of the GUI, and after 10 minutes the drives went to standby. I believe both incidents occurred after waking the drives by accessing SMART info in the GUI; this has never happened with blkid during my testing. I will continue to monitor and test the system.

      Soon, I will need to decide on what I will be doing with the drives. It seems RAID6 may have been causing spindown problems before, but some people have had luck with it. While I appreciate RAID's redundancy, something like mergerfs and SnapRAID might be an option in my case.
    • I wanted to provide some closure to this thread in case anyone else has similar issues. After a fresh install, I set the following for all hard drives: APM - Disabled, AAM - Disabled, Spindown - Disabled, write cache - Enabled. Then hd-idle was installed and set to HD_IDLE_OPTS="-i 600". Everything worked fine. I then proceeded to set up mergerfs and SnapRAID. This setup, along with hd-idle, is working flawlessly, spinning my drives down at the 10-minute mark. I still need to tweak a few minor things in terms of automating some tasks for SnapRAID, but everything seems to be working fine. Thank you for the assistance.
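
      For anyone curious about the final layout, the pieces look roughly like this; the mount points and disk names below are made up for illustration, and on OMV this is normally configured through the Union Filesystems and SnapRAID plugins rather than by editing the files directly:

      Source Code

      # Illustrative mergerfs line in /etc/fstab pooling two data disks (example paths only)
      /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0

      # Illustrative /etc/snapraid.conf with one parity disk and two data disks
      parity /srv/parity/snapraid.parity
      content /srv/disk1/snapraid.content
      content /srv/disk2/snapraid.content
      data d1 /srv/disk1/
      data d2 /srv/disk2/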