Network Error after change to Static

    • OMV 4.x

      Hi guys,
      after changing the network address of a freshly installed OMV image to a static one, I get this error:

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl start 'networking' 2>&1' with exit code '1': Job for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details.
      Error #0: OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl start 'networking' 2>&1' with exit code '1': Job for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details. in /usr/share/php/openmediavault/system/process.inc:182
      Stack trace:
      #0 /usr/share/php/openmediavault/system/systemctl.inc(86): OMV\System\Process->execute(Array, 1)
      #1 /usr/share/php/openmediavault/system/systemctl.inc(146): OMV\System\SystemCtl->exec('start', NULL, false)
      #2 /usr/share/openmediavault/engined/module/networking.inc(44): OMV\System\SystemCtl->start()
      #3 /usr/share/openmediavault/engined/rpc/config.inc(194): OMVModuleNetworking->startService()
      #4 [internal function]: OMVRpcServiceConfig->applyChanges(Array, Array)
      #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
      #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(565): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatuszp...', '/tmp/bgoutputOB...')
      #8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
      #9 /usr/share/openmediavault/engined/rpc/config.inc(213): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
      #10 [internal function]: OMVRpcServiceConfig->applyChangesBg(Array, Array)
      #11 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #12 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
      #13 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
      #14 {main}



      Is someone able to help?

      DHCP is working fine.




      thanks,
      Eike


    • You can try it directly in /etc/network/interfaces, with nano for example, something like this (and afterwards reload the interface, as sketched below the snippet):
      # The primary network interface
      auto eth0
      iface eth0 inet static
          address 192.168.0.2
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.3
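
      A hand edit only takes effect after the interface is brought down and up again (or after a reboot). A minimal sketch, assuming the interface really is named eth0; check the name first, and note that the connection will drop if you do this over SSH:

      Source Code

      ip link                        # confirm the interface name (eth0, enp3s0, ...)
      ifdown eth0 && ifup eth0       # re-read /etc/network/interfaces for eth0
      systemctl restart networking   # alternatively, restart the whole service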
      OMV 3.0.99 (ERASMUS). Never change a running system. If you do it anyway, better you have a backup. :D
    • Pattex wrote:

      You can try it directly in /etc/network/interfaces, with nano for example, something like this:
      # The primary network interface
      auto eth0
      iface eth0 inet static
          address 192.168.0.2
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 192.168.0.3
      I have tried this... OMV times out in both the web UI and omv-firstaid. It looks like it still writes to /etc/network/interfaces, but I can't confirm the change.
    • Eike wrote:

      Job for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details.
      This message tells you to look into the output of these two commands:

      Source Code

      1. systemctl status networking.service
      2. journalctl -xe
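
      If the -xe output is too noisy, you can narrow the journal to just this unit and the current boot:

      Source Code

      journalctl -u networking.service -b   # only messages from networking.service since the last boot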

      That output contains the information needed to diagnose the problem, although I would assume there is nothing to diagnose, since:

      Eike wrote:

      DHCP is working fine.
      So better stay with DHCP: check whether your router lets you set a static DHCP lease (a reservation) for your machine, and assign it an easy name like omv4. Then you can reach your OMV install by name instead of by IP address.
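
      To check that the reservation sticks, assuming you named the machine omv4 on the router (whether the name resolves depends on your router's DNS):

      Source Code

      ip -4 addr show        # on the OMV box: the address the router handed out
      ping -c 3 omv4         # on a client: does the name resolve and answer?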
    • tkaiser wrote:

      Source Code

      1. systemctl status networking.service
      2. journalctl -xe

      1:
      root@HP-Proliant:~# systemctl status networking.service
      ● networking.service - Raise network interfaces
         Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
         Active: active (exited) since Sun 2019-03-17 20:17:24 CET; 3 days ago
           Docs: man:interfaces(5)
       Main PID: 723 (code=exited, status=0/SUCCESS)
          Tasks: 0 (limit: 9830)
         CGroup: /system.slice/networking.service

      Mär 17 20:17:22 HP-Proliant systemd[1]: Starting Raise network interfaces...
      Mär 17 20:17:24 HP-Proliant systemd[1]: Started Raise network interfaces.
      2:
      -- The start-up result is done.
      Mär 21 00:15:01 HP-Proliant CRON[61302]: pam_unix(cron:session): session opened for user root by (uid=0)
      Mär 21 00:15:01 HP-Proliant CRON[61303]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
      Mär 21 00:15:05 HP-Proliant CRON[61302]: pam_unix(cron:session): session closed for user root
      Mär 21 00:17:01 HP-Proliant CRON[61592]: pam_unix(cron:session): session opened for user root by (uid=0)
      Mär 21 00:17:01 HP-Proliant CRON[61593]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
      Mär 21 00:17:01 HP-Proliant CRON[61592]: pam_unix(cron:session): session closed for user root
      Mär 21 00:17:27 HP-Proliant rrdcached[1138]: flushing old values
      Mär 21 00:17:27 HP-Proliant rrdcached[1138]: rotating journals
      Mär 21 00:17:27 HP-Proliant rrdcached[1138]: started new journal /var/lib/rrdcached/journal/rrd.journal.1553123847.135988
      Mär 21 00:17:27 HP-Proliant rrdcached[1138]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1553116647.135965
      Mär 21 00:24:24 HP-Proliant smartd[21984]: Device: /dev/disk/by-id/ata-WDC_WD40EFRX-68WT0N0_WD-WCC4E6NYDDE8 [SAT], 2 Currently unreadable (pending) sectors
      Mär 21 00:30:01 HP-Proliant CRON[61812]: pam_unix(cron:session): session opened for user root by (uid=0)
      Mär 21 00:30:01 HP-Proliant CRON[61813]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
      Mär 21 00:30:04 HP-Proliant CRON[61812]: pam_unix(cron:session): session closed for user root
      Mär 21 00:39:01 HP-Proliant CRON[62215]: pam_unix(cron:session): session opened for user root by (uid=0)
      Mär 21 00:39:01 HP-Proliant CRON[62216]: (root) CMD ( [ -x /usr/lib/php/sessionclean ] && if [ ! -d /run/systemd/system ]; then /usr/lib/php/sessionclean; fi)
      Mär 21 00:39:01 HP-Proliant CRON[62215]: pam_unix(cron:session): session closed for user root
      Mär 21 00:39:04 HP-Proliant systemd[1]: Starting Clean php session files...
      -- Subject: Unit phpsessionclean.service has begun start-up
      -- Defined-By: systemd
      -- Support: https://www.debian.org/support
      --
      -- Unit phpsessionclean.service has begun starting up.
      Mär 21 00:39:04 HP-Proliant systemd[1]: Started Clean php session files.
      -- Subject: Unit phpsessionclean.service has finished start-up
      -- Defined-By: systemd
      -- Support: https://www.debian.org/support
      --
      -- Unit phpsessionclean.service has finished starting up.
      --
      -- The start-up result is done.
      I know one HDD is failing (the smartd line about pending sectors).

    • Hi, any solution?

      I have the same problem. I have a board with two NICs.
      If I try to change the network settings and apply the change, it ends with an error:

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; systemctl start 'networking' 2>&1' with exit code '1': Job for networking.service failed because the control process exited with error code. See "systemctl status networking.service" and "journalctl -xe" for details.



      Source Code

      root@NAS64:~# systemctl status networking.service
      ● networking.service - Raise network interfaces
         Loaded: loaded (/lib/systemd/system/networking.service; enabled; vendor preset: enabled)
         Active: failed (Result: exit-code) since Tue 2019-04-30 20:42:59 CEST; 9min ago
           Docs: man:interfaces(5)
        Process: 3770 ExecStart=/sbin/ifup -a --read-environment (code=exited, status=1/FAILURE)
        Process: 3765 ExecStartPre=/bin/sh -c [ "$CONFIGURE_INTERFACES" != "no" ] && [ -n "$(ifquery --read-environment --list --exclude=lo)" ] && udevadm settle (
       Main PID: 3770 (code=exited, status=1/FAILURE)
            CPU: 258ms

      Apr 30 20:42:59 NAS64 dhclient[3796]: DHCPREQUEST of 192.168.1.115 on enp3s0 to 255.255.255.255 port 67
      Apr 30 20:42:59 NAS64 dhclient[3796]: DHCPACK of 192.168.1.115 from 192.168.1.1
      Apr 30 20:42:59 NAS64 ifup[3770]: RTNETLINK answers: File exists
      Apr 30 20:42:59 NAS64 ifup[3770]: bound to 192.168.1.115 -- renewal in 360988 seconds.
      Apr 30 20:42:59 NAS64 ifup[3770]: RTNETLINK answers: File exists
      Apr 30 20:42:59 NAS64 ifup[3770]: ifup: failed to bring up enp3s0
      Apr 30 20:42:59 NAS64 systemd[1]: networking.service: Main process exited, code=exited, status=1/FAILURE
      Apr 30 20:42:59 NAS64 systemd[1]: Failed to start Raise network interfaces.
      Apr 30 20:42:59 NAS64 systemd[1]: networking.service: Unit entered failed state.
      Apr 30 20:42:59 NAS64 systemd[1]: networking.service: Failed with result 'exit-code'.


      Source Code

      root@NAS64:~# journalctl -xe
      -- Unit user@0.service has finished starting up.
      --
      -- The start-up result is done.
      Apr 30 20:52:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:53:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:53:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:54:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:54:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:55:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:55:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:56:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:56:11 NAS64 rrdcached[1343]: flushing old values
      Apr 30 20:56:11 NAS64 rrdcached[1343]: rotating journals
      Apr 30 20:56:11 NAS64 rrdcached[1343]: started new journal /var/lib/rrdcached/journal/rrd.journal.1556650571.119071
      Apr 30 20:56:11 NAS64 rrdcached[1343]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1556643371.119228
      Apr 30 20:56:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:57:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:57:34 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:58:04 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:58:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:59:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 20:59:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:00:01 NAS64 CRON[4589]: pam_unix(cron:session): session opened for user root by (uid=0)
      Apr 30 21:00:01 NAS64 CRON[4590]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
      Apr 30 21:00:04 NAS64 CRON[4589]: pam_unix(cron:session): session closed for user root
      Apr 30 21:00:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:00:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:01:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:01:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:02:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:02:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:02:52 NAS64 systemd[1]: Started Run anacron jobs.
      -- Subject: Unit anacron.service has finished start-up
      -- Defined-By: systemd
      -- Support: https://www.debian.org/support
      --
      -- Unit anacron.service has finished starting up.
      --
      -- The start-up result is done.
      Apr 30 21:02:52 NAS64 anacron[4807]: Anacron 2.3 started on 2019-04-30
      Apr 30 21:02:52 NAS64 anacron[4807]: Normal exit (0 jobs run)
      Apr 30 21:02:52 NAS64 systemd[1]: anacron.timer: Adding 1min 58.305074s random time.
      Apr 30 21:03:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:03:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:04:05 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      Apr 30 21:04:35 NAS64 monit[1317]: 'filesystem_srv_dev-disk-by-label-platte' space usage 92.6% matches resource limit [space usage>85.0%]
      regards
      Zeppelin