php7.3-fpm sock failed

    • OMV 5.x (beta)
    • Resolved
    • Upgrade 4.x -> 5.x
    • php7.3-fpm sock failed

      Hello everyone,

      I upgraded from OMV 4 to OMV 5 today. Everything went fine so far, except for a 502 "Bad Gateway" from the OMV web GUI.
      My forum search so far has not gotten me any further.

      Installed OMV packages:

      Source Code

      dpkg -l | grep openme
      ii openmediavault 5.0.14-1 all openmediavault - The open network attached storage solution
      ii openmediavault-backup 5.0 all backup plugin for OpenMediaVault.
      ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      ii openmediavault-omvextrasorg 5.1.4 all OMV-Extras.org Package Repositories for OpenMediaVault





      Nginx:

      Source Code

      systemctl status nginx
      ● nginx.service - A high performance web server and a reverse proxy server
      Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
      Active: active (running) since Mon 2019-11-11 22:19:18 CET; 37s ago
      Docs: man:nginx(8)
      Process: 31365 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
      Process: 31366 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
      Main PID: 31367 (nginx)
      Tasks: 5 (limit: 4915)
      Memory: 5.4M
      CGroup: /system.slice/nginx.service
      ├─31367 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
      ├─31368 nginx: worker process
      ├─31369 nginx: worker process
      ├─31370 nginx: worker process
      └─31371 nginx: worker process
      Nov 11 22:19:18 NAS systemd[1]: Starting A high performance web server and a reverse proxy server...
      Nov 11 22:19:18 NAS systemd[1]: Started A high performance web server and a reverse proxy server.

      php7.3-fpm:

      Source Code

      systemctl status php7.3-fpm
      ● php7.3-fpm.service - The PHP 7.3 FastCGI Process Manager
      Loaded: loaded (/lib/systemd/system/php7.3-fpm.service; enabled; vendor preset: enabled)
      Active: active (running) since Mon 2019-11-11 22:08:58 CET; 11min ago
      Docs: man:php-fpm7.3(8)
      Main PID: 22263 (php-fpm7.3)
      Status: "Processes active: 0, idle: 2, Requests: 0, slow: 0, Traffic: 0req/sec"
      Tasks: 3 (limit: 4915)
      Memory: 13.6M
      CGroup: /system.slice/php7.3-fpm.service
      ├─22263 php-fpm: master process (/etc/php/7.3/fpm/php-fpm.conf)
      ├─22264 php-fpm: pool www
      └─22265 php-fpm: pool www
      Nov 11 22:08:58 NAS systemd[1]: Starting The PHP 7.3 FastCGI Process Manager...
      Nov 11 22:08:58 NAS systemd[1]: Started The PHP 7.3 FastCGI Process Manager.

      openmediavault-webgui_error.log

      Source Code

      2019/11/11 22:23:20 [crit] 1702#1702: *1 connect() to unix:/run/php/php7.3-fpm-openmediavault-webgui.sock failed (2: No such file or directory) while connecting to upstream, client: ::ffff:127.0.0.1, server: openmediavault-webgui, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.3-fpm-openmediavault-webgui.sock:", host: "127.0.0.1"
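
      For reference, one quick way to narrow this down is to check whether the socket nginx is looking for exists at all, and whether there is a PHP-FPM pool definition that would create it. This is only a diagnostic sketch; the exact name of the openmediavault-webgui pool file is an assumption based on the socket name in the log above.

      Source Code

      # Does the socket nginx expects exist?
      ls -l /run/php/

      # Is there a pool definition that would create it? (look for an openmediavault-webgui pool file)
      ls -l /etc/php/7.3/fpm/pool.d/

      # If the pool file exists but the socket is missing, restarting PHP-FPM should recreate it
      systemctl restart php7.3-fpm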

      At some point I came across omv-mkconf nginx and omv-mkconf php-fpm, but those commands cannot be found.

      Source Code

      omv-mkconf nginx
      bash: omv-mkconf: Kommando nicht gefunden.
      omv-mkconf php-fpm
      bash: omv-mkconf: Kommando nicht gefunden.



      I hope someone here can help me. Many thanks in advance!

      Best,
      str0hlke
    • The following command creates ALL configuration files, so it will run for quite a while; don't be surprised.


      Source Code

      $ omv-salt stage run deploy
      The omv-mkconf command no longer exists in OMV 5; use omv-salt instead.
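
      As a rough mapping (a sketch only; the individual component states phpfpm and nginx are the ones used further down in this thread):

      Source Code

      # OMV 4.x (no longer available in OMV 5):
      #   omv-mkconf php-fpm
      #   omv-mkconf nginx
      # OMV 5.x equivalents, deploying single components:
      omv-salt deploy run phpfpm
      omv-salt deploy run nginx
      # or regenerate everything at once:
      omv-salt stage run deploy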
    • Hi votdev,

      thank you very much! I sent it on its way right away, unfortunately not without errors:

      Source Code

      /usr/lib/python3/dist-packages/salt/modules/file.py:32: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
      from collections import Iterable, Mapping, namedtuple
      /usr/lib/python3/dist-packages/salt/utils/jinja.py:638: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
      if isinstance(lst1, collections.Hashable) and isinstance(lst2, collections.Hashable):
      /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
      /usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
      *salt.utils.args.get_function_argspec(original_function)
      NAS:
      ----------
      ID: sync_runners
      Function: salt.runner
      Name: saltutil.sync_runners
      Result: True
      Comment: Runner function 'saltutil.sync_runners' executed.
      Started: 06:44:12.710556
      Duration: 3299.569 ms
      Changes:
      ----------
      return:
      ----------
      ID: sync_modules
      Function: salt.runner
      Name: saltutil.sync_modules
      Result: True
      Comment: Runner function 'saltutil.sync_modules' executed.
      Started: 06:44:16.010306
      Duration: 3252.745 ms
      Changes:
      ----------
      return:
      ----------
      ID: populate_pillar
      Function: salt.runner
      Name: omv.populate_pillar
      Result: True
      Comment: Runner function 'omv.populate_pillar' executed.
      Started: 06:44:19.263329
      Duration: 3595.43 ms
      Changes:
      ----------
      return:
      True
      ----------
      ID: run_state_sync
      Function: salt.state
      Result: True
      Comment: States ran successfully. Updating NAS.
      Started: 06:44:22.859275
      Duration: 178.424 ms
      Changes:
      NAS:
      ----------
      ID: sync_modules
      Function: module.run
      Result: True
      Comment: saltutil.sync_modules: []
      Started: 06:44:22.961476
      Duration: 75.319 ms
      Changes:
      ----------
      saltutil.sync_modules:
      Summary for NAS
      ------------
      Succeeded: 1 (changed=1)
      Failed: 0
      ------------
      Total states run: 1
      Total run time: 75.319 ms
      ----------
      ID: refresh_pillar
      Function: salt.state
      Result: True
      Comment: States ran successfully. No changes made to NAS.
      Started: 06:44:23.037793
      Duration: 99.971 ms
      Changes:
      ----------
      ID: run_state_deploy
      Function: salt.state
      Result: False
      Comment: Run failed on minions: NAS
      Started: 06:44:23.137845
      Duration: 2235.932 ms
      Changes:
      NAS:
      Data failed to compile:
      ----------
      Rendering SLS 'base:omv.deploy.smartmontools.default' failed: while constructing a mapping
      in "<unicode string>", line 27, column 1
      found conflicting ID 'enable_smart_on_/dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E2049465'
      in "<unicode string>", line 83, column 1
      Summary for NAS
      ------------
      Succeeded: 5 (changed=4)
      Failed: 1
      ------------
      Total states run: 6
      Total run time: 12.662 s



      In the end, the error in the *webgui_error.log remains:

      Source Code

      2019/11/12 06:50:55 [crit] 872#872: *5 connect() to unix:/run/php/php7.3-fpm-openmediavault-webgui.sock failed (2: No such file or directory) while connecting to upstream, client: ::ffff:127.0.0.1, server: openmediavault-webgui, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/run/php/php7.3-fpm-openmediavault-webgui.sock:", host: "127.0.0.1"

      Thanks!

      EDIT:


      Source Code

      smartctl -s on /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E2049465
      smartctl 6.6 2017-11-05 r4594 [x86_64-linux-5.2.0-0.bpo.3-amd64] (local build)
      Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org
      === START OF ENABLE/DISABLE COMMANDS SECTION ===
      SMART Enabled.
      However, the error shown above remains when I run "omv-salt stage run deploy" again.


    • Perhaps we should simply start by configuring the individual components:

      Source Code

      # omv-salt deploy run phpfpm
      # omv-salt deploy run nginx
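
      After both states have run, a quick check (just a sketch, nothing OMV-specific) is whether the socket from the error log now exists and whether both services are up:

      Source Code

      # The socket nginx was complaining about should now be present
      ls -l /run/php/php7.3-fpm-openmediavault-webgui.sock

      # Both services should be active
      systemctl status php7.3-fpm nginx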
    • As for the Salt error, your database appears to be corrupt. It should not, and actually cannot, be possible to have a duplicate entry for one and the same device (ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E2049465).
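
      One way to confirm the duplicate (a sketch only; omv-confdbadm read dumps the stored objects as JSON, and /etc/openmediavault/config.xml is the backing database file):

      Source Code

      # List all stored S.M.A.R.T. device entries and check whether the device appears twice
      # omv-confdbadm read conf.service.smartmontools.device

      # Alternatively, count occurrences of the device id directly in the database file
      # grep -c 'WD-WCC4E2049465' /etc/openmediavault/config.xml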
    • Hi votdev,

      Source Code

      # omv-salt deploy run phpfpm
      # omv-salt deploy run nginx
      That at least solved the web GUI problem.


      votdev wrote:

      As for the Salt error, your database appears to be corrupt. It should not, and actually cannot, be possible to have a duplicate entry for one and the same device (ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E2049465).

      What would I need to look at for that? Or rather, how can I fix the DB?
      The drive no longer seems to be in the best condition; in the web GUI under SMART, the Information view shows several prefail and old-age attributes.

      Many thanks!

      Best,
      str0hlke
    • The simplest solution will be to delete all SMART configurations and then create them again.

      Source Code

      # omv-confdbadm delete conf.service.smartmontools.device
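
      After the entries have been removed, the devices can be re-added under S.M.A.R.T. in the web UI and the smartmontools configuration redeployed. The state name below is taken from the failing SLS in the error output (omv.deploy.smartmontools.default) and is meant as a sketch, not a guaranteed command:

      Source Code

      # Regenerate the smartmontools configuration after recreating the device entries
      # omv-salt deploy run smartmontools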