The strangest thing was that I was able to smbget content from the SMB host, but the GUI of the Mac Finder was blank,
so it probably was not an SMB-level problem and more likely a Mac thing.
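For what it's worth, the check was nothing fancier than this (the host, share and user names below are just examples):

# recursively fetch the share's contents over SMB from the command line;
# swap in your own host, share and credentials
smbget -R -U will smb://pi5-vault.local/imageWillsMac2/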
My use case for AFP is an ancient 2008 Mac mini, which is the last place I can run my high-performance flatbed scanner (it needs 10.6.8).
Upgrading the Mac would be the last thing on my mind, but I find that I can get AFP running here with Docker on the OMV hardware -- as you can see, I use a recent Netatalk image.
So what I am trying to say is that the Mac cannot talk anything better than SMB1, and maybe OMV will not talk down to this level of protocol. Any definitive views?
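For anyone curious, the container side is nothing exotic; it is roughly along these lines (the image name, paths and environment variables here are placeholders, so check the image's own documentation for the real ones):

# rough sketch of serving AFP from a container on the OMV box;
# 548 is the standard AFP port, everything else is a placeholder
docker run -d --name netatalk \
  --restart unless-stopped \
  -p 548:548 \
  -v /srv/dev-disk-by-uuid-XXXX/scans:/mnt/afpshare \
  -e AFP_USER=will -e AFP_PASS=changeme \
  netatalk/netatalk:latest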
Thanks Teschbert for picking this up.
I can see Services > SMB > Settings.
If I leave it at SMB2, as shipped in the distro, then as I said in the OP the old Macintosh does not even find the server as an SMB server.
Only by setting it to SMB1 (highly deprecated) can I get the Mac to see the OMV host.
I want to allow a very old Mac (Mac OS X 10.6.8) to access a share on my OMV.
I have to set protocol to SMB1 for the Mac to "see" the server
Then browsing the server from the Mac shows the share names and disk capacities but NO content
I have NO problems accessing the shares and modifying content from more modern clients
Any ideas, anyone? As I am running Debian 12, I cannot run netatalk as an option to use AFP instead.
Version 7.1.1-1 (Sandworm)
Processor Raspberry Pi 5 Model B Rev 1.0
Kernel Linux 6.6.28+rpt-rpi-2712
Samba version 4.17.12-Debian
PID Username Group Machine Protocol Version Encryption Signing
----------------------------------------------------------------------------------------------------------------------------------------
49452 nobody nogroup 192.168.1.156 (ipv4:192.168.1.156:48582) SMB3_11 - -
49531 nobody nogroup 192.168.1.156 (ipv4:192.168.1.156:60588) SMB3_11 - -
49468 nobody nogroup localhost (ipv4:192.168.1.156:46868) SMB3_11 - -
62079 nobody nogroup 192.168.1.45 (ipv4:192.168.1.45:49608) NT1 - -
Service pid Machine Connected at Encryption Signing
---------------------------------------------------------------------------------------------
Website 49531 192.168.1.156 Wed May 22 16:33:45 2024 BST - -
IPC$ 49468 localhost Wed May 22 16:33:38 2024 BST - -
imageWillsMac2 62079 192.168.1.45 Wed May 22 17:12:06 2024 BST - -
imageWillsMac2 49452 192.168.1.156 Wed May 22 16:33:25 2024 BST - -
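The 192.168.1.45 session showing up as NT1 confirms the Mac really is connecting with SMB1. As far as I understand it, the UI toggle corresponds to something like the following in smb.conf (the two auth lines are my guesses for very old clients and are only sensible on a trusted LAN):

# lowest dialect the server will speak; NT1 is what the GUI calls SMB1
server min protocol = NT1
# very old OS X clients may also need the legacy authentication methods
ntlm auth = yes
lanman auth = yes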
Thank you for responding; my wickednesses catch up with me. I am beginning to appreciate the intricacies of OMV, UUIDs and omv-confdbadm,
and thanks to all for making OMV a helpful place.
Then I repeat setting the auto-logout, and it shows pending changes in 'cron', 'monit' and 'nginx'.
The log reports:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color cron 2>&1' with exit code '1': pi5-vault.local:
Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.cron.20userdefined' failed: while constructing a mapping
in "<unicode string>", line 29, column 1
found conflicting ID 'create_cron_userdefined_522621fb-d06b-4f89-9b84-164c42eeb8e6_script'
in "<unicode string>", line 85, column 1
[CRITICAL] Rendering SLS 'base:omv.deploy.cron.20userdefined' failed: while constructing a mapping
in "<unicode string>", line 29, column 1
found conflicting ID 'create_cron_userdefined_522621fb-d06b-4f89-9b84-164c42eeb8e6_script'
in "<unicode string>", line 85, column 1
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color cron 2>&1' with exit code '1': pi5-vault.local:
Data failed to compile:
----------
Rendering SLS 'base:omv.deploy.cron.20userdefined' failed: while constructing a mapping
in "<unicode string>", line 29, column 1
found conflicting ID 'create_cron_userdefined_522621fb-d06b-4f89-9b84-164c42eeb8e6_script'
in "<unicode string>", line 85, column 1
[CRITICAL] Rendering SLS 'base:omv.deploy.cron.20userdefined' failed: while constructing a mapping
in "<unicode string>", line 29, column 1
found conflicting ID 'create_cron_userdefined_522621fb-d06b-4f89-9b84-164c42eeb8e6_script'
in "<unicode string>", line 85, column 1 in /usr/share/php/openmediavault/system/process.inc:247
Stack trace:
#0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()
#1 /usr/share/openmediavault/engined/rpc/config.inc(178): OMV\Engine\Module\ServiceAbstract->deploy()
#2 [internal function]: Engined\Rpc\Config->applyChanges()
#3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod()
#5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(622): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}()
#6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(146): OMV\Rpc\ServiceAbstract->execBgProc()
#7 /usr/share/openmediavault/engined/rpc/config.inc(199): OMV\Rpc\ServiceAbstract->callMethodBg()
#8 [internal function]: Engined\Rpc\Config->applyChangesBg()
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#11 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
#12 {main}
By the way, where are these temporary scripts being built?
Thanks for your time.
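In case anyone else hits the same 'conflicting ID' error, this is how I checked whether the cron job really is duplicated (the salt tree and script locations are my assumptions for OMV 7):

# the conflicting ID contains the job's UUID; count how often it occurs in the database
grep -c '522621fb-d06b-4f89-9b84-164c42eeb8e6' /etc/openmediavault/config.xml
# the SLS named in the error iterates over those database entries
less /srv/salt/omv/deploy/cron/20userdefined.sls
# the per-job scripts themselves appear to be written here
ls -l /var/lib/openmediavault/cron.d/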
Output from "sudo omv-salt deploy run webgui":
pi5-vault.local:
----------
ID: webgui_build_configs
Function: cmd.run
Name: omv-mkworkbench all
Result: True
Comment: Command "omv-mkworkbench all" run
Started: 17:49:23.705714
Duration: 1325.169 ms
Changes:
----------
pid:
3207
retcode:
0
stderr:
stdout:
----------
ID: webgui_document_root_perms_recursive
Function: cmd.run
Name: chown -R 'openmediavault-webgui:openmediavault-webgui' '/var/www/openmediavault'
Result: True
Comment: Command "chown -R 'openmediavault-webgui:openmediavault-webgui' '/var/www/openmediavault'" run
Started: 17:49:25.031072
Duration: 5.084 ms
Changes:
----------
pid:
3208
retcode:
0
stderr:
stdout:
Summary for pi5-vault.local
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
Total run time: 1.330 s
It seems like it has decided that I have to log out roughly every 10 minutes or so.
I can see <timeout>60</timeout> in config.xml.
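For reference, the value can also be read from the shell (I am assuming conf.webadmin is the right datamodel id here):

# the Workbench session settings as stored in the OMV database
sudo omv-confdbadm read conf.webadmin | python3 -m json.tool
# or just the raw value in the config file
grep -o '<timeout>[0-9]*</timeout>' /etc/openmediavault/config.xml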
all best and thanks for OMV
I'd like to check the logs, but the container has disappeared by the time I get to it; maybe there is a way to hang on to it. All I have to go on otherwise is the systemctl status. I'm not so good with podman, or docker for that matter.
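From what I have pieced together so far, these should get at the logs even after the container itself is gone (the container name is my guess, and the journal match only applies if podman is logging to journald):

# list containers, including the ones that have already exited
sudo podman ps -a --pod
# if the container still exists but is stopped, its log is still readable
sudo podman logs photoprism
# podman's journald log driver keeps the output even after the container is removed
sudo journalctl CONTAINER_NAME=photoprism --no-pager | tail -n 50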
Well, all I know is that this box is
Linux pi-vault 5.15.30-v8+ #1536 SMP PREEMPT Mon Mar 28 13:53:14 BST 2022 aarch64 GNU/Linux
and
http://www.phoronix.com/scan.php?page=news_item&px=MTY5ODk
leads me to believe that aarch64 = arm64, and that podman is not a drop-in equivalent of docker.
The Docker Hub version of PhotoPrism runs just fine on this platform under docker; it seems to be the podman layer that is giving me grief.
This is openmediavault-photoprism 6.0.1-3 on
Release: 6.0.20-1
OS is Debian Bullseye on an RPi 4B with 8 GB RAM.
PhotoPrism will not stay alive.
# systemctl status pod-photoprism.service
● pod-photoprism.service - Podman pod-photoprism.service
Loaded: loaded (/etc/systemd/system/pod-photoprism.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2022-04-04 10:30:37 BST; 5s ago
Process: 66166 ExecStartPre=/bin/rm -f /run/pod-photoprism.pid /run/pod-photoprism.pod-id (code=exited, status=0/SUCCESS)
Process: 66167 ExecStartPre=/usr/bin/podman pod create --infra-conmon-pidfile /run/pod-photoprism.pid --pod-id-file /run/pod-photoprism>
Process: 66198 ExecStart=/usr/bin/podman pod start --pod-id-file /run/pod-photoprism.pod-id (code=exited, status=0/SUCCESS)
Main PID: 66305 (conmon)
Tasks: 2 (limit: 8985)
CPU: 768ms
CGroup: /system.slice/pod-photoprism.service
└─66305 /usr/bin/conmon --api-version 1 -c 918c4fc99ba15ee7e68a7dd66008d30e01359597a238019b50d9a90c327899ce -u 918c4fc99ba15ee>
Apr 04 10:30:36 pi-vault systemd[1]: Starting Podman pod-photoprism.service...
Apr 04 10:30:36 pi-vault podman[66167]: 2022-04-04 10:30:36.458569394 +0100 BST m=+0.233314132 container create 918c4fc99ba15ee7e68a7dd6600>
Apr 04 10:30:36 pi-vault podman[66167]: 2022-04-04 10:30:36.468826406 +0100 BST m=+0.243571145 pod create c5bcd8e8d686c23611f7cc82a9eaf944c>
Apr 04 10:30:36 pi-vault podman[66167]: c5bcd8e8d686c23611f7cc82a9eaf944c3f4453f8fa2799072bef1e601ec5f23
Apr 04 10:30:37 pi-vault podman[66198]: 2022-04-04 10:30:37.066169857 +0100 BST m=+0.548978179 container init 918c4fc99ba15ee7e68a7dd66008d>
Apr 04 10:30:37 pi-vault podman[66198]: 2022-04-04 10:30:37.093123517 +0100 BST m=+0.575931859 container start 918c4fc99ba15ee7e68a7dd66008>
Apr 04 10:30:37 pi-vault podman[66198]: 2022-04-04 10:30:37.093861808 +0100 BST m=+0.576670149 pod start c5bcd8e8d686c23611f7cc82a9eaf944c3>
Apr 04 10:30:37 pi-vault podman[66198]: c5bcd8e8d686c23611f7cc82a9eaf944c3f4453f8fa2799072bef1e601ec5f23
Apr 04 10:30:37 pi-vault systemd[1]: Started Podman pod-photoprism.service.
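So the unit reports active for a few seconds and then the container inside the pod dies. The plan is to watch it happen (a sketch; pod and container names may differ on other installs):

# in one shell: watch lifecycle events; the "died" event should name the container that exits
sudo podman events --filter event=died
# in another shell: restart the pod and list its containers afterwards
sudo systemctl restart pod-photoprism.service
sudo podman pod ps
sudo podman ps -a --pod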
While on a crash course to learn systemctl, I find this masked status:
sudo systemctl list-unit-files
docker.service masked
I was successfully running docker on this RPi until a dist-upgrade on the kernel; short of a fresh build of OMV, I have run out of ideas.
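The one thing left that I can think of is to unmask the unit and try a manual start (harmless if it is not actually masked):

# remove the mask and try starting docker by hand
sudo systemctl unmask docker.service docker.socket
sudo systemctl start docker.service
systemctl status docker.service --no-pager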
OMV 5.5.19-1 --- built from Github
Debian Buster 5.10.11-v7+
RPI 3B 1Gb
OMV extras 5.5.3 --- from installsh script
no other plugins
----
docker version "5:20.10.3~3-0~debian-buster"
docker "installed and not running"
---- restart
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; echo "Restarting docker ..." && systemctl restart docker.service ': Restarting docker ...
journalctl -xe
::::::::::::
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has finished with a failure.
--
-- The job identifier is 7768 and the job result is failed.
Feb 23 09:47:17 openmediavault systemd[1]: docker.service: Service RestartSec=10s expired, scheduling restart.
Feb 23 09:47:17 openmediavault systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- Automatic restarting of the unit docker.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Feb 23 09:47:17 openmediavault systemd[1]: Stopped Docker Application Container Engine.
-- Subject: A stop job for unit docker.service has finished
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A stop job for unit docker.service has finished.
--
-- The job identifier is 7859 and the job result is done.
Feb 23 09:47:17 openmediavault systemd[1]: docker.service: Start request repeated too quickly.
Feb 23 09:47:17 openmediavault systemd[1]: docker.service: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit docker.service has entered the 'failed' state with result 'exit-code'.
Feb 23 09:47:17 openmediavault systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: A start job for unit docker.service has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit docker.service has finished with a failure.
--
-- The job identifier is 7859 and the job result is failed.
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2021-02-23 10:06:06 UTC; 22min ago
Docs: https://docs.docker.com
Process: 30561 ExecStart=/usr/bin/dockerd -H fd://* --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 30561 (code=exited, status=1/FAILURE)
Feb 23 10:06:06 openmediavault systemd[1]: docker.service: Service RestartSec=10s expired, scheduling restart.
Feb 23 10:06:06 openmediavault systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Feb 23 10:06:06 openmediavault systemd[1]: Stopped Docker Application Container Engine.
Feb 23 10:06:06 openmediavault systemd[1]: docker.service: Start request repeated too quickly.
Feb 23 10:06:06 openmediavault systemd[1]: docker.service: Failed with result 'exit-code'.
Feb 23 10:06:06 openmediavault systemd[1]: Failed to start Docker Application Container Engine.
Feb 23 09:47:17 openmediavault systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
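The unit hits the start-rate limit before the real error is visible, so the next step is to clear that and run the daemon in the foreground once; on Raspberry Pi kernels a known culprit after upgrades is the memory cgroup being disabled, so /boot/cmdline.txt is worth checking too:

# clear the rate limiter and run dockerd by hand to see the actual failure
sudo systemctl reset-failed docker.service
sudo dockerd --debug
# if the memory cgroup turns out to be the problem, these options belong on the
# single line in /boot/cmdline.txt, followed by a reboot:
# cgroup_enable=memory cgroup_memory=1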