Posts by _Michael_

The issue of HDDs not going to stand-by after suspend doesn't seem to be related to auto-shutdown (I believe the OP mentioned this already).

In the addendum of this post I described the system relations, i.e. 'Autoshutdown' is the trigger of the problem (of course, any other "go to standby" command / plugin would lead to the same problem) but not its source. The source is the badly programmed hd-idle.

Oh, and make sure that for all HDDs involved APM is set to 255, not just disabled (check this in /etc/hdparm.conf).
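For illustration, such an entry in /etc/hdparm.conf would look roughly like this (a sketch; the by-id path is only a placeholder, not a real drive):

Code
/dev/disk/by-id/ata-EXAMPLE-MODEL_SERIAL {
    apm = 255
}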

I don't know what you mean:
1) My hdparm.conf file has exactly one /dev/disk/by-id/... entry. It is for my boot drive and includes an apm = 1 line. I would say this is OK, because my other drives are not listed there.
2) User modification of this file is not intended: # WARNING: Do not edit this file, your changes will get lost.

Why does the service (deactivation and) activation not work as it should?:

Rows 1 to 6 belong together, as do rows 7 to 14, in which row 10 should show active. To work around the non-working row 7 command I triggered it again in row 11, and then it was OK.
In case my previous post was unclear about why I introduced the for loops, it should be clear now. So currently there are two open questions:

• Why does the service (deactivation and) activation not work as it should? But, more important for solving the base problem:
    • Is the file autoshutdown-restart still in use? (see the previous post)

• If yes: what did I do wrong?

    Happy new year!


The current step d) 'sourceforge.net workaround' is not working (stably)! The service restarts only randomly. If I try it on the CLI I can also see that a single command execution often does not work (why?).


To make progress I disabled hd-idle-restart-resume.service on my system via systemctl disable hd-idle-restart-resume.service and tried my OMV 2.x workaround again:
The file /etc/pm/sleep.d/autoshutdown-restart is still present, but it seems it is no longer in use?! Can anyone tell me how I can find out if I'm right? (One idea is sketched below.)
I have this impression because this part, i.e. lines 6 to 15:

does not work either. But it should, because in my CLI tests I never needed more than 3 tries.
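A sketch of one way to check whether the hook is still executed at all (assuming the file is a plain shell script): append a logging line to it and look for the message in the journal after a suspend/resume cycle.

Code
echo 'logger "autoshutdown-restart hook was executed"' >> /etc/pm/sleep.d/autoshutdown-restart
# after the next resume:
journalctl -b | grep "autoshutdown-restart hook"

If the message never shows up, the pm-utils hook directory is probably not used by systemd-based suspend anymore.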

The manual was of limited help: for "Non-public (Private)" (which I guess means 'Public -> No') it says that "THIS WILL ALLOW EVERY USER TO LOG INTO THE SHARE", which matches my observation. But I could not get more from the text: my impression is that it was written for OMV versions before 5.x (of course) and may require network management knowledge above beginner level.


OK, I then checked the OMV options again and finally found "the" difference:
In 'OMV -> Access Rights Management -> Shared Folders -> ACL -> Extra options -> Others',
'\Data1' and '\Data2' have
Read/Write/Execute
set, while '\Data3' had
None
set. After I changed this everything worked fine, but I would expect such a setup to be "too generous" for an access rights management that should also provide the option to deny access? So I'm confused again:
Is this setup OK, or should I consider changing it to None for all 3 shares? (And start fighting the non-working Windows access again.)
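If I understand it correctly, this 'Others' setting corresponds to the POSIX permission bits for 'other' users, so on the command line the two states would roughly be (a sketch; the paths are placeholders for the real shared folder paths):

Code
# 'Others' = Read/Write/Execute (as on Data1/Data2):
chmod o+rwx /srv/dev-disk-by-uuid-EXAMPLE/Data1
# 'Others' = None (as it was on Data3):
chmod o-rwx /srv/dev-disk-by-uuid-EXAMPLE/Data3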


P.S. I cannot remember ever having entered the ACL dialog box before, i.e. I have no idea why I have the above-mentioned setup.

How to reach the dialog box: 'OMV -> SMB/CIFS -> Shares -> Edit'


If I set No then I have access to my share '\Data3' (without entering any credentials). If I set Only guests (as used for my other 2 shares) then I get an error message on Win10:
    Network Error
    Windows cannot access \\NAS-PC\Data3
    You do not have permission to access ...


I'm confused how it can work so differently for my 3 identical shares … ?( In general I would like to know what No means with respect to share accessibility? Thank you!


    Addendum:
'\Data3' came with my move from OMV 2.x to OMV 5.x, because I required more disk space. This also means that '\Data1' and '\Data2' were introduced a long time ago (2017) and have been in use on my Win10 PC since then without any problem, even after moving to OMV 5.x. This was also expected, because I made (at least I tried to make) exactly the same settings in both OMV versions.

I searched the internet for an overview of systemd and found this. Now:
The command systemctl enable hd-idle would enable it to start automatically at bootup (as you already told me for hd-idle-restart-resume.service). So how can I check whether a service, in this case 'hd-idle', is enabled?:
    systemctl is-enabled hd-idle
    I tried it and the result was:
    enabled
So there is no reason at all for step c)! I will remove the step c) command and check afterwards whether it is still working, i.e. Active: active (running).
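So after removing it, the check boils down to these two commands (a sketch):

Code
systemctl is-enabled hd-idle   # expected: enabled
systemctl status hd-idle       # expected: Active: active (running)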


P.S. Who executed systemctl enable hd-idle? Is it related to step b.2)?

Sorry, what do you mean? My file:

Glad to hear that, macom. FYI: I'm using UUIDs because they are drive (hardware) specific, i.e. not dependent on the SATA port used.


    To the "random behaviour" of 'hd-idle':
    The purpose of step c) is to start the 'hd-idle' service. In OMV 2.x this works well in OMV 5.x not:

If I now try to start it manually with sudo service hd-idle start it also has no effect, i.e. it is still Active: inactive (dead) ?( . What can I do? Thanks!
Addendum: With sudo service hd-idle force-reload I was able to start the service: Active: active (running). Should I use this command in step c)?
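If someone wants to dig into why a plain start has no effect, these standard commands should show more details (a sketch, nothing OMV-specific):

Code
systemctl status hd-idle    # current state plus the last log lines
journalctl -u hd-idle -b    # all messages of the unit since boot
systemctl cat hd-idle       # the unit file that is actually in use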

Hello, after Guide: How-to setup 'hd-idle' (a HDD spin-down SW) together with the OMV plugin 'Autoshutdown', I have now done the same for OMV 5.x. The guide below is not yet 100 % "clean", i.e. there seems to be some random behaviour of 'hd-idle', but I hope this will be solved soon.


P.S. If you can help to improve this guide then please do so (thank you, macom). If you like this guide then please show it. :)


    *** Install 'OMV-Extras' (via web GUI) ***
    'OMV -> System -> Plugins -> Section: Utilities -> openmediavault-omvextras.org 5.x.x'


    ** Enable 'OMV-Extras' backports (via web GUI) **
'OMV -> System -> OMV-Extras -> Settings -> Backports -> Enable Backports'


    *** Install 'hd-idle' in OMV (via command prompt) ***
    apt install hd-idle
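Optionally, one can check afterwards which version was installed and whether it came from backports (a sketch):

Code
apt policy hd-idle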


    ** 'hd-idle' setup **


    * a) disable all HDD spin-down and power management settings in OMV (via web GUI) *
    'OMV -> Storage -> Physical Disks'


    * b) open the 'hd-idle' configuration file via the 'nano' editor (via command prompt) *
    nano /etc/default/hd-idle
* b.1) insert for each HDD (addressed via its UUID, see 1)) a spin-down command '-a /dev/<...> -i <timeInSeconds>', e.g. spin-down of 3 HDDs after approx. 600 seconds: *
    HD_IDLE_OPTS="-i 0 -a /dev/disk/by-uuid/bacb10a1-6dc5-48b9-a6f4-ed836b7dfa4a -i 600 -a /dev/disk/by-uuid/481b459f-aad7-468e-b0fe-9401c94eb4fc -i 610 -a /dev/disk/by-uuid/3b5e7398-d517-4477-a3b9-aa46c4347184 -i 620"
* hint: The parameter '-i 0' at the beginning prevents spin-down of HDDs that are not explicitly specified. *
* b.2) allow hd-idle to start by inserting a 2nd line: *
    START_HD_IDLE=true
    * b.3) save the file (Ctrl + O) and close 'nano' (Ctrl + X) *
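Before rebooting, the configuration can also be tried out in the foreground; a sketch (the -d debug flag and the -t immediate spin-down test exist in the hd-idle versions I know of, but check hd-idle -h for your build):

Code
# run with debug output, same options as in the file; stop with Ctrl + C:
hd-idle -d -i 0 -a /dev/disk/by-uuid/bacb10a1-6dc5-48b9-a6f4-ed836b7dfa4a -i 600
# or spin one disk down immediately to test that spin-down works at all:
hd-idle -t sda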


* c) auto-start the 'hd-idle' service / daemon via OMV (via web GUI) *
    'OMV -> System -> Scheduled Jobs'
    * hint: use options 'At reboot', user 'root' and command 'service hd-idle start' *
This is not required; see the post above about systemctl is-enabled hd-idle.


* d) Also do this --if-- the OMV plugin 'Autoshutdown' is in use; otherwise 'hd-idle' will stop working after the 1st wakeup. *
    link to sourceforge.net
* hint: The command systemctl enable hd-idle-restart-resume.service has to be executed only once at the command line. A scheduled job is not required. *
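For orientation, a minimal unit of this kind could look like the sketch below; this is reconstructed from the status output quoted in a later post, and the actual file from the sourceforge link may differ:

Code
# /etc/systemd/system/hd-idle-restart-resume.service
[Unit]
Description=Restart hd-idle
After=suspend.target

[Service]
Type=oneshot
ExecStart=/bin/systemctl --no-block restart hd-idle

[Install]
WantedBy=suspend.target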


    * e) reboot OMV, now it should work *


    * f) Check that 'hd-idle' is running (via web GUI) *
    'OMV -> Diagnostics -> System Information -> Processes'
* in the column 'TIME+ COMMAND' there should be an entry like this: *
    0:00.xx hd-idle
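The same check is also possible on the command line (a sketch):

Code
pgrep -a hd-idle            # prints PID and command line if the daemon runs
systemctl status hd-idle    # should show: Active: active (running)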



    +++ 1) get the HDD UUIDs (via web GUI) +++
    'OMV -> Diagnostics -> System Information -> Report -> Block device attributes'

1) Thanks for the info; obviously it is not working as it should, see also 3).


2) The patch download provided via sourceforge.net (from January 2019) is a file diff, i.e. nothing that can be used (directly). As you can see in my previous posts, I'm not a Linux expert (this is actually only the 2nd time since 2017 that I have had to work with Linux, due to my OMV 2.x to 5.x update) and cannot do much without help.
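For what it's worth, a diff is normally applied with the patch tool, roughly like this (a sketch; the file name and the -p level are hypothetical, they depend on how the diff was created):

Code
cd /path/the/diff/is/relative/to
patch -p1 < hd-idle-resume.diff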


    3) Observation of my current OMV behaviour:

    • After the 1st WOL hd-idle was NOT working.
    • After the 2nd WOL hd-idle was working.
    • After the 3rd WOL again NOT working.

So I see some unstable / strange behaviour. For today I'll stop debugging. Again, thank you very much, macom!

The 2nd command: not in use until now.


    The 1st command is part of:
    'OMV -> System -> Scheduled Jobs'
    used options: 'At reboot', user 'root' and command systemctl enable hd-idle-restart-resume.service

    The 3rd command results in:

    Code
    hd-idle-restart-resume.service - Restart hd-idle
    Loaded: loaded (/etc/systemd/system/hd-idle-restart-resume.service; enabled; vendor preset: enabled)
    Active: inactive (dead) since Sun 2019-12-29 18:16:55 CET; 26min ago
    Process: 5934 ExecStart=/bin/systemctl --no-block restart hd-idle (code=exited, status=0/SUCCESS)
    Main PID: 5934 (code=exited, status=0/SUCCESS)
    Dec 29 18:16:55 NAS-PC.local systemd[1]: Started Restart hd-idle.
    Dec 29 18:16:55 NAS-PC.local systemd[1]: hd-idle-restart-resume.service: Succeeded.

    So it seems I need a further entry in 'OMV -> System -> Scheduled Jobs' for the 2nd command?!

I tried to find out whether that is the case, but I failed: there is no "resume" entry in the Processes list, and the system log was hard to read, especially since I don't know what I have to look for.


    Addendum: This is the new code / script:

I know the command systemctl status hd-idle.service, but I have no idea how to use it together with the new code.
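One possibility might be to watch both units live over a standby cycle instead of querying the status afterwards (a sketch):

Code
journalctl -f -u hd-idle -u hd-idle-restart-resume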