ahh! that was it, thanks!!
Posts by tincanfury
-
OK, I've got this for my tunnel,
and this for my client,
I was able to load the settings into the Wireguard client on my laptop. I was able to connect, however I was unable to ssh into my NAS or connect to the OMV Dashboard from my browser. So I'm guessing something is not set up correctly, but I'm not sure what I need to change?
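(For what it's worth, the most common cause of "tunnel connects but I can't reach the NAS" is the client's AllowedIPs not covering the NAS's LAN subnet. A minimal client config sketch, with a hypothetical 10.192.1.0/24 tunnel network and 192.168.1.0/24 LAN, placeholder keys and endpoint; substitute your own values:)

```ini
[Interface]
# Client's tunnel address and private key (placeholders)
Address = 10.192.1.2/24
PrivateKey = <client-private-key>

[Peer]
# The NAS's WireGuard public key and public endpoint (placeholders)
PublicKey = <server-public-key>
Endpoint = your.ddns.hostname:51820
# Route both the tunnel subnet AND the NAS's LAN subnet through the tunnel.
# If AllowedIPs lists only the tunnel subnet, SSH and the Dashboard on the
# NAS's LAN IP will not be reachable even though the handshake succeeds.
AllowedIPs = 10.192.1.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```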
thanks!
-
turned off NFS and it is working now. I don't need NFS for the immediate future so this is fine for me until I can figure out why NFS was causing it to fail.
-
This is now holding me up from applying Wireguard settings.
Is there a reason one failure holds up all the other pending changes?
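(As I understand it, OMV applies all pending "dirty" modules in one batch, so one failing module aborts the whole run. A sketch of a workaround, using the same `omv-salt` command that appears in the error output, to deploy the other modules individually from a root shell; the module names here are just the ones listed as pending, adjust as needed:)

```shell
# Deploy pending modules one at a time, skipping the broken nfs module,
# so the other changes (e.g. wireguard/compose) can still be applied.
omv-salt deploy run --no-color compose
omv-salt deploy run --no-color samba
```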
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
----------
ID: configure_default_nfs-common
Function: file.managed
Name: /etc/default/nfs-common
Result: True
Comment: File /etc/default/nfs-common is in the correct state
Started: 10:59:00.661283
Duration: 800.544 ms
Changes:
----------
ID: configure_default_nfs-kernel-server
Function: file.managed
Name: /etc/default/nfs-kernel-server
Result: True
Comment: File /etc/default/nfs-kernel-server is in the correct state
Started: 10:59:01.462407
Duration: 589.883 ms
Changes:
----------
ID: configure_nfsd_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 10:59:02.052820
Duration: 805.689 ms
Changes:
----------
ID: divert_nfsd_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 10:59:02.860214
Duration: 39.18 ms
Changes:
----------
ID: start_rpc_statd_service
Function: service.running
Name: rpc-statd
Result: True
Comment: The service rpc-statd is already running
Started: 10:59:07.382236
Duration: 229.299 ms
Changes:
----------
ID: start_nfs_server_service
Function: service.running
Name: nfs-server
Result: False
Comment: A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
Started: 10:59:07.614488
Duration: 385.209 ms
Changes:

Summary for debian
------------
Succeeded: 5
Failed:    1
------------
Total states run: 6
Total run time: 2.850 s

[ERROR ] Command '/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-ref0b574f108a489d800d1b10cc27a940.scope
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
[ERROR ] retcode: 1
[ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.

OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
----------
ID: configure_default_nfs-common
Function: file.managed
Name: /etc/default/nfs-common
Result: True
Comment: File /etc/default/nfs-common is in the correct state
Started: 10:59:00.661283
Duration: 800.544 ms
Changes:
----------
ID: configure_default_nfs-kernel-server
Function: file.managed
Name: /etc/default/nfs-kernel-server
Result: True
Comment: File /etc/default/nfs-kernel-server is in the correct state
Started: 10:59:01.462407
Duration: 589.883 ms
Changes:
----------
ID: configure_nfsd_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 10:59:02.052820
Duration: 805.689 ms
Changes:
----------
ID: divert_nfsd_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 10:59:02.860214
Duration: 39.18 ms
Changes:
----------
ID: start_rpc_statd_service
Function: service.running
Name: rpc-statd
Result: True
Comment: The service rpc-statd is already running
Started: 10:59:07.382236
Duration: 229.299 ms
Changes:
----------
ID: start_nfs_server_service
Function: service.running
Name: nfs-server
Result: False
Comment: A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
Started: 10:59:07.614488
Duration: 385.209 ms
Changes:

Summary for debian
------------
Succeeded: 5
Failed:    1
------------
Total states run: 6
Total run time: 2.850 s

[ERROR ] Command '/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-ref0b574f108a489d800d1b10cc27a940.scope
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
[ERROR ] retcode: 1
[ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
 in /usr/share/php/openmediavault/system/process.inc:242
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(178): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatus7H...', '/tmp/bgoutputcL...')
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#7 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
#8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
#11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
#12 {main}
Code
$ journalctl -xe
░░ A start job for unit nss-lookup.target has finished successfully.
░░
░░ The job identifier is 157.
Jan 16 12:43:31 home dhclient[1031]: DHCPDISCOVER on enp3s0 to 255.255.255.255 port 67 interval 6
Jan 16 12:43:31 home sh[1031]: DHCPDISCOVER on enp3s0 to 255.255.255.255 port 67 interval 6
Jan 16 12:56:18 home sudo[12543]: pam_unix(sudo:auth): authentication failure; logname=***** uid=1000 euid=0 tty=/dev/>
Jan 16 12:56:22 home sudo[12543]: ***** : TTY=pts/0 ; PWD=/home/***** ; USER=root ; COMMAND=/usr/bin/bash
Jan 16 12:56:22 home sudo[12543]: pam_unix(sudo:session): session opened for user root(uid=0) by *****(uid=1000)
Jan 16 13:08:15 home sudo[12543]: pam_unix(sudo:session): session closed for user root
Jan 16 15:23:14 home sudo[131263]: pam_unix(sudo:auth): conversation failed
Jan 16 15:23:14 home sudo[131263]: pam_unix(sudo:auth): auth could not identify password for [*****]
Jan 16 15:23:14 home postfix/postdrop[131274]: warning: unable to look up public/pickup: No such file or directory
Jan 16 15:23:39 home sudo[131315]: ***** : TTY=pts/0 ; PWD=/home/***** ; USER=root ; COMMAND=/usr/bin/bash
Jan 16 15:23:39 home sudo[131315]: pam_unix(sudo:session): session opened for user root(uid=0) by *****(uid=1000)
Jan 16 16:11:08 home sudo[131315]: pam_unix(sudo:session): session closed for user root
Jan 17 03:04:03 home cron[176417]: postdrop: warning: unable to look up public/pickup: No such file or directory
Jan 17 03:04:03 home postfix/postdrop[176417]: warning: unable to look up public/pickup: No such file or directory
Jan 17 16:43:13 home sudo[232539]: ***** : TTY=pts/0 ; PWD=/home/***** ; USER=root ; COMMAND=/usr/sbin/lsmod
Jan 17 16:43:13 home sudo[232539]: pam_unix(sudo:session): session opened for user root(uid=0) by *****(uid=1000)
Jan 17 16:43:13 home sudo[232539]: pam_unix(sudo:session): session closed for user root
Jan 17 17:59:02 home cron[241137]: postdrop: warning: unable to look up public/pickup: No such file or directory
Jan 17 17:59:02 home postfix/postdrop[241137]: warning: unable to look up public/pickup: No such file or directory
Jan 17 21:04:02 home cron[250023]: postdrop: warning: unable to look up public/pickup: No such file or directory
Jan 17 21:04:02 home postfix/postdrop[250023]: warning: unable to look up public/pickup: No such file or directory
Jan 17 23:04:02 home cron[255036]: postdrop: warning: unable to look up public/pickup: No such file or directory
Jan 17 23:04:02 home postfix/postdrop[255036]: warning: unable to look up public/pickup: No such file or directory
Jan 18 10:45:17 home sudo[287487]: ***** : TTY=pts/0 ; PWD=/home/***** ; USER=root ; COMMAND=/usr/bin/bash
Jan 18 10:45:17 home sudo[287487]: pam_unix(sudo:session): session opened for user root(uid=0) by *****(uid=1000)
-
Been stepping through the wiki,
omv6:omv6_plugins:wireguard [omv-extras.org]
However my setup screen for a tunnel shows "local server" as a field that is not discussed in the wiki,
Does this need to have a local IP address entered or can it be left blank?
I'm also seeing some differences with the Client setup page from what is shown in the Wiki as well.
thanks!
-
Use the openmediavault-wireguard plugin https://wiki.omv-extras.org/do…v6:omv6_plugins:wireguard
Sweet, I'll dig into this! I did have the backup of a VPN into a laptop I can keep at home, but wireguard looks like a cleaner solution. thanks!
-
I'm going to be traveling and am thinking of opening my OMV Dashboard to the internet (on a non-standard port) while I'm away from home, in case I need to manage my NAS. I was wondering if there are any tips for securing the Dashboard (other than a very strong password)?
Thanks!
-
I'm running on an Asustor NAS with,
Debian GNU/Linux, with Linux 6.2.16-20-bpo11-pve
is there a package I need to install?
Code
$ sudo lsmod
Module Size Used by
xt_nat 16384 41
xt_tcpudp 20480 59
veth 40960 0
xt_conntrack 16384 8
nft_chain_nat 16384 19
xt_MASQUERADE 20480 17
nf_nat 61440 3 xt_nat,nft_chain_nat,xt_MASQUERADE
nf_conntrack_netlink 57344 0
nf_conntrack 192512 5 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE
nf_defrag_ipv6 24576 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
xfrm_user 61440 1
xfrm_algo 20480 1 xfrm_user
xt_addrtype 16384 2
nft_compat 20480 127
nf_tables 339968 752 nft_compat,nft_chain_nat
nfnetlink 24576 4 nft_compat,nf_conntrack_netlink,nf_tables
overlay 184320 8
snd_hda_codec_hdmi 94208 1
intel_rapl_msr 20480 0
intel_rapl_common 40960 1 intel_rapl_msr
intel_soc_dts_thermal 20480 0
intel_soc_dts_iosf 20480 1 intel_soc_dts_thermal
intel_powerclamp 24576 0
coretemp 24576 0
snd_hda_codec_realtek 188416 1
snd_hda_codec_generic 114688 1 snd_hda_codec_realtek
ledtrig_audio 16384 1 snd_hda_codec_generic
kvm_intel 438272 0
mei_hdcp 28672 0
mei_pxp 20480 0
kvm 1302528 1 kvm_intel
snd_hda_intel 57344 0
i915 3706880 1
irqbypass 16384 1 kvm
punit_atom_debug 16384 0
snd_intel_dspcfg 36864 1 snd_hda_intel
crct10dif_pclmul 16384 1
snd_intel_sdw_acpi 20480 1 snd_intel_dspcfg
polyval_generic 16384 0
ghash_clmulni_intel 16384 0
cryptd 28672 1 ghash_clmulni_intel
drm_buddy 20480 1 i915
sha512_ssse3 53248 0
snd_hda_codec 200704 4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec_realtek
intel_cstate 24576 0
ttm 102400 1 i915
serio_raw 20480 0
pcspkr 16384 0
snd_hda_core 135168 5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek
snd_hwdep 20480 1 snd_hda_codec
efi_pstore 16384 0
drm_display_helper 204800 1 i915
cec 94208 2 drm_display_helper,i915
rc_core 77824 1 cec
mei_txe 36864 2
cmdlinepart 16384 0
snd_pcm 188416 4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda_core
spi_nor 126976 0
snd_timer 45056 1 snd_pcm
drm_kms_helper 237568 2 drm_display_helper,i915
i2c_algo_bit 16384 1 i915
syscopyarea 16384 1 drm_kms_helper
sysfillrect 20480 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
snd 135168 8 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd_hda_codec_realtek,snd_timer,snd_pcm
mei 159744 5 mei_hdcp,mei_pxp,mei_txe
soundcore 16384 1 snd
dw_dmac_pci 16384 0
at24 24576 0
mtd 98304 3 spi_nor,cmdlinepart
dw_dmac_core 36864 1 dw_dmac_pci
mac_hid 16384 0
zfs 4300800 16
zunicode 352256 1 zfs
zzstd 684032 1 zfs
zlua 204800 1 zfs
zavl 24576 1 zfs
icp 348160 1 zfs
zcommon 118784 2 zfs,icp
znvpair 131072 2 zfs,zcommon
spl 126976 6 zfs,icp,zzstd,znvpair,zcommon,zavl
drm 671744 6 drm_kms_helper,drm_display_helper,drm_buddy,i915,ttm
nfsd 741376 3
auth_rpcgss 163840 1 nfsd
nfs_acl 16384 1 nfsd
lockd 122880 1 nfsd
grace 16384 2 nfsd,lockd
sunrpc 712704 5 nfsd,auth_rpcgss,lockd,nfs_acl
ip_tables 36864 0
x_tables 65536 7 xt_conntrack,nft_compat,xt_tcpudp,xt_addrtype,xt_nat,ip_tables,xt_MASQUERADE
autofs4 53248 2
btrfs 1855488 0
blake2b_generic 20480 0
raid10 69632 0
raid456 184320 0
async_raid6_recov 24576 1 raid456
async_memcpy 20480 2 raid456,async_raid6_recov
async_pq 24576 2 raid456,async_raid6_recov
async_xor 20480 3 async_pq,raid456,async_raid6_recov
async_tx 20480 5 async_pq,async_memcpy,async_xor,raid456,async_raid6_recov
xor 24576 2 async_xor,btrfs
raid6_pq 126976 4 async_pq,btrfs,raid456,async_raid6_recov
libcrc32c 16384 5 nf_conntrack,nf_nat,btrfs,nf_tables,raid456
raid1 57344 0
raid0 24576 0
multipath 20480 0
linear 20480 0
uas 28672 3
usb_storage 81920 1 uas
spi_intel_platform 16384 0
spi_intel 32768 1 spi_intel_platform
crc32_pclmul 16384 0
sdhci_pci 81920 0
i2c_i801 36864 0
cqhci 40960 1 sdhci_pci
xhci_pci 24576 0
psmouse 204800 0
i2c_smbus 20480 1 i2c_i801
xhci_pci_renesas 20480 1 xhci_pci
sdhci 86016 1 sdhci_pci
lpc_ich 28672 0
tg3 217088 0
i2c_designware_pci 16384 0
i2c_ccgx_ucsi 16384 1 i2c_designware_pci
xhci_hcd 368640 1 xhci_pci
ahci 49152 4
libahci 57344 1 ahci
video 69632 1 i915
wmi 40960 1 video
-
Each "service" or container in the compose file will have a separate line on the Services tab. You can click Pull then Up to update each one individually.
Ah, the Services tab! perfect, thanks!
-
Understood!
In the past I was able to update one of the containers on its own. With them being part of the same File, is there a way to do this from the Dashboard interface?
thanks!
-
Ah, ran into something.
My Nextcloud container has,
However, even with the mariadb container installed, I get an error with the Nextcloud container.
Is this not supported?
Should I have both containers in the same File, or is there something else I need to be doing?
thanks!
update: I created a File with Nextcloud, mariadb, and swag, and deployed it that way. This puts them all in the same stack together; not sure if that is why it wasn't working before?
-
The plugin would have to try to use portainer's compose file and environment files in order to manage the containers. And I'm not sure portainer would like that.
I did write a script to *try* to import compose files from portainer. I don't use portainer so it hasn't had a lot of testing. But worst thing you would have to do is delete something that imported wrong.
You can run docker compose commands from the command line against files in the plugin but you need to know the paths/names for the compose file and environment file(s). You cannot edit the files though. The OMV database is the source of truth. See this post for other info - RE: Old compose files after OMV restore
Understood! Easy enough for me to manually recreate, I wanted to make sure I wasn't missing something. So far the OMV Dashboard has worked well and adding container files is easy, so I'm on board with doing the work through it.
Thanks for all the work on this, it's a nice addition to the Dashboard.
-
Code
root@$ journalctl -u nfs-server
-- Journal begins at Tue 2024-01-02 05:09:29 EST, ends at Tue 2024-01-16 15:23:39 EST. --
Jan 16 10:51:36 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 10:51:36 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Jan 16 10:51:36 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 10:51:36 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Jan 16 10:52:28 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 10:52:28 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Jan 16 11:03:59 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 11:03:59 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Jan 16 11:15:56 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 11:15:56 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
Jan 16 11:48:44 home systemd[1]: Dependency failed for NFS server and services.
Jan 16 11:48:44 home systemd[1]: nfs-server.service: Job nfs-server.service/start failed with result 'dependency'.
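("Dependency failed" means some unit that nfs-server.service requires failed to start, not nfs-server itself, and this journal excerpt doesn't show which one. A sketch of commands to find the actual culprit, assuming the standard Debian NFS/systemd unit names; run as root on the NAS:)

```shell
# Show what nfs-server.service depends on, then check the usual suspects;
# whichever unit is in a failed state is the real error to chase.
systemctl list-dependencies nfs-server.service
systemctl status proc-fs-nfsd.mount rpcbind.socket nfs-mountd.service
journalctl -b -u nfs-mountd -u rpcbind --no-pager
```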
-
And why? Occasionally there are updates that require a restart. If you don't reboot, the system may behave strangely.
it's just fun having a long uptime 😜
Can you post the full error in a code box? Use the </> button to create a code box in the forum.
I rebooted, and the same configuration changes message appeared.
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
----------
ID: configure_default_nfs-common
Function: file.managed
Name: /etc/default/nfs-common
Result: True
Comment: File /etc/default/nfs-common is in the correct state
Started: 12:52:58.300193
Duration: 487.302 ms
Changes:
----------
ID: configure_default_nfs-kernel-server
Function: file.managed
Name: /etc/default/nfs-kernel-server
Result: True
Comment: File /etc/default/nfs-kernel-server is in the correct state
Started: 12:52:58.787858
Duration: 522.814 ms
Changes:
----------
ID: configure_nfsd_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 12:52:59.311082
Duration: 713.044 ms
Changes:
----------
ID: divert_nfsd_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 12:53:00.025833
Duration: 39.469 ms
Changes:
----------
ID: start_rpc_statd_service
Function: service.running
Name: rpc-statd
Result: True
Comment: The service rpc-statd is already running
Started: 12:53:04.617110
Duration: 109.274 ms
Changes:
----------
ID: start_nfs_server_service
Function: service.running
Name: nfs-server
Result: False
Comment: A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
Started: 12:53:04.728920
Duration: 212.479 ms
Changes:

Summary for debian
------------
Succeeded: 5
Failed:    1
------------
Total states run: 6
Total run time: 2.084 s

[ERROR ] Command '/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-r9746cbfe457c42d0860d9a2f1c4d3ad4.scope
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
[ERROR ] retcode: 1
[ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.

OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
----------
ID: configure_default_nfs-common
Function: file.managed
Name: /etc/default/nfs-common
Result: True
Comment: File /etc/default/nfs-common is in the correct state
Started: 12:52:58.300193
Duration: 487.302 ms
Changes:
----------
ID: configure_default_nfs-kernel-server
Function: file.managed
Name: /etc/default/nfs-kernel-server
Result: True
Comment: File /etc/default/nfs-kernel-server is in the correct state
Started: 12:52:58.787858
Duration: 522.814 ms
Changes:
----------
ID: configure_nfsd_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 12:52:59.311082
Duration: 713.044 ms
Changes:
----------
ID: divert_nfsd_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 12:53:00.025833
Duration: 39.469 ms
Changes:
----------
ID: start_rpc_statd_service
Function: service.running
Name: rpc-statd
Result: True
Comment: The service rpc-statd is already running
Started: 12:53:04.617110
Duration: 109.274 ms
Changes:
----------
ID: start_nfs_server_service
Function: service.running
Name: nfs-server
Result: False
Comment: A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
Started: 12:53:04.728920
Duration: 212.479 ms
Changes:

Summary for debian
------------
Succeeded: 5
Failed:    1
------------
Total states run: 6
Total run time: 2.084 s

[ERROR ] Command '/bin/systemd-run' failed with return code: 1
[ERROR ] stderr: Running scope as unit: run-r9746cbfe457c42d0860d9a2f1c4d3ad4.scope
A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
[ERROR ] retcode: 1
[ERROR ] A dependency job for nfs-server.service failed. See 'journalctl -xe' for details.
 in /usr/share/php/openmediavault/system/process.inc:242
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(178): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusWu...', '/tmp/bgoutputIA...')
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#7 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
#8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
#11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
#12 {main}
-
Ok, so it sounds like the answer is:
Yes, the containers need to be redeployed via the Dashboard from now on so they can be properly managed there, and the yml information must be entered manually.
-
I also just noticed in my Dashboard notifications,
"System restart required
A reboot is needed to fully apply the changes introduced by package installation or upgrade."
there is no date on when this message appeared, but I'm assuming it has to do with my updating packages.
Hate to lose my "uptime", but I'll do that if I have to and see if it resolves the issue.
-
I had updated updatable packages via the Dashboard, then changed the docker stuff to the new OMV docker compose system, including adding the "Shared" location for the Dashboard's Docker Compose files.
-
In recreating the portainer container with the new system: if I'm understanding the Files section of the new Dashboard Docker Compose system, I could create individual yml files for the containers in my original docker-compose.yml file and manage my container updates that way. I attempted this, but the system does not appear to "link" an existing container to one of these new yml files; the portainer entry I created does not show the container's status.
I assume this means I would need to do similarly for my existing containers as I did for portainer and remove the existing container and have the Dashboard system recreate it?
Is there a way to import my original docker-compose.yml file to create individual yml? I can do it manually, but figured I'd check if this has been "automated".
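(If it helps anyone doing this by hand: the split is mostly mechanical; each service stanza from the combined file becomes its own small compose file. A hypothetical sketch for the portainer entry; the image, ports, and volumes here are examples, so copy the real values from your original docker-compose.yml:)

```yaml
# portainer.yml - one service per compose "File" in the plugin
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data

volumes:
  portainer_data:
```

Note that compose identifies containers by project, so a container created by the old combined file won't show up under the new File; the old stack generally has to come down before the new per-service File is brought Up.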
In doing this, I also assume that if I wanted to update a container I would no longer be able to do so from the command line? Thanks for the help/answers!
-
Pending configuration changes
You must apply these changes in order for them to take effect. The following modules will be updated:
- compose
- nfs
- nginx
- samba
- sharedfolders
- systemd
When I select apply I'm provided the following error, but I'm not seeing anything obvious that needs to be fixed.
500 - Internal Server Error
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
----------
ID: configure_default_nfs-common
Function: file.managed
Name: /etc/default/nfs-common
Result: True
Comment: File /etc/default/nfs-common is in the correct state
Started: 11:15:49.552374
Duration: 558.458 ms
Changes:
----------
ID: configure_default_nfs-kernel-server
Function: file.managed
Name: /etc/default/nfs-kernel-server
Result: True
Comment: File /etc/default/nfs-kernel-server is in the correct state
Started: 11:15:50.111257
Duration: 614.82 ms
Changes:
----------
ID: configure_nfsd_exports
Function: file.managed
Name: /etc/exports
Result: True
Comment: File /etc/exports is in the correct state
Started: 11:15:50.726495
Duration: 979.244 ms
Changes:
----------
ID: divert_nfsd_exports
Function: omv_dpkg.divert_add
Name: /etc/exports
Result: True
Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
Started: 11:15:51.707570
Duration: 38.805 ms
Changes:
----------
ID: start_rpc_statd_service
Function: service.running
Name: rpc-statd
Result: True
Comment: The service rpc-statd is already running
Started: 11:15:55.7...
-
I think you should have only one driver. Either json-file or local.
https://docs.docker.com/config/containers/logging/configure/
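(To illustrate the single-driver setup, a minimal /etc/docker/daemon.json sketch using only the local driver with log rotation; the size and file counts are example values. Restart the docker service afterwards, and note that existing containers keep their old logging config until they are recreated:)

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```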
Check what is filling /var/log and why. The logs should give you a hint, what is going wrong.
Several sites made it sound like both json-file and local could be used together, but removing the local one got it started! This is also after "clearing out" my /var/log location.
as for the size of the logs being large, I agree, and I'll have to start digging into that.
However, I don't see why the size directive is not used in logrotate to catch large log files and rotate them so the log drive does not fill up. I'm pretty sure my other/previous Linux systems did this and it seemed to work fine. Is there something about using logrotate this way that I'm not aware of that informed the decision for OMV?
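(For reference, logrotate does support size-based rotation; a sketch of a drop-in, where the paths and thresholds are just examples. One caveat that may be part of the reasoning: size rules only trigger when logrotate actually runs, typically once a day from cron or a systemd timer, so a runaway log can still fill the disk between runs:)

```conf
# /etc/logrotate.d/bigsyslog (example) - rotate when a log exceeds 100 MB
/var/log/syslog /var/log/daemon.log {
    size 100M
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
```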
Thanks for the quick response on docker, your help is much appreciated in getting it up and running again!