You mean delete the share, the plugin or both?
The share in the remotemount plugin. No need to uninstall the plugin.
ryecoaaron It works. Thank you!
I am getting a similar error on a clean install. I've reinstalled once which hasn't helped.
Can anyone point me in the right direction? I am very new to Linux.
TIA
I think you have a different error than the ones discussed here.
But you will find other threads dealing with such error lately.
W: GPG error: http://packages.openmediavault.org/public shaitan InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7E7A6C592EF35D13 NO_PUBKEY 24863F0C716B980B
E: The repository 'http://packages.openmediavault.org/public shaitan InRelease' is not signed.
W: GPG error: https://openmediavault.github.io/packages shaitan InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 7E7A6C592EF35D13 NO_PUBKEY 24863F0C716B980B
E: The repository 'https://openmediavault.github.io/packages shaitan InRelease' is not signed.
I am getting a similar error on a clean install.
How did you make the clean install?
Which steps did you follow?
Thank you Ryecoaaron and macom. I see it is a key issue now.
Running the install/reinstall command does not work. I receive:
root@openmediavault:~# apt-get install --reinstall --allow-unauthenticated openmediavault-keyring
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Reinstallation of openmediavault-keyring is not possible, it cannot be downloaded.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
root@openmediavault:~#
I ran the command from a command prompt over SSH. Would that change anything?
Run this instead.
wget http://packages.openmediavault.org/public/pool/main/o/openmediavault-keyring/openmediavault-keyring_1.0.2-2_all.deb
sudo dpkg -i openmediavault-keyring_1.0.2-2_all.deb
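Once the keyring package is installed, a quick sanity check (a sketch, assuming a standard Debian/OMV setup) is to confirm the package is present and refresh the package lists:

```shell
# Confirm the keyring package is installed and see which key files it ships
dpkg -s openmediavault-keyring | grep Status
dpkg -L openmediavault-keyring
# Refresh the package lists; the NO_PUBKEY warnings should now be gone
sudo apt-get update
```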
I received a similar error (fresh install of OMV6 + Docker + 2 containers = all OK) while trying to install Pi-hole as the third one:
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; docker compose --file '/appdata/pihole/pihole.yml' --env-file '/appdata/pihole/pihole.env' --env-file '/appdata/global.env' up -d 2>&1': Container pihole Starting
Error response from daemon: driver failed programming external connectivity on endpoint pihole (7e01c6e11f1e5debef4796903a4f798e00817a7943928070c32bcdcbeffde30c): Error starting userland proxy: listen tcp4 0.0.0.0:80: bind: address already in use in /usr/share/openmediavault/engined/rpc/compose.inc:613
The container is created but will not deploy. I think I used to get this type of error (the first one) years ago while installing Pi-hole on ARM (Pi-like) devices, and it was usually related to user settings. But now I cannot find a solution - any advice is most welcome.
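Since the daemon reports that tcp4 0.0.0.0:80 is already in use, a quick way to see which process owns the port (a sketch, assuming `ss` from iproute2 is available):

```shell
# Show which process is currently listening on TCP port 80 --
# on a stock OMV install this is normally nginx serving the Workbench UI
sudo ss -tlnp | grep ':80 '
```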
I received a similar error (fresh install of OMV6 + Docker + 2 containers = all OK) while trying to install Pi-hole as the third one:
The container is created but will not deploy. I think I used to get this type of error (the first one) years ago while installing Pi-hole on ARM (Pi-like) devices, and it was usually related to user settings. But now I cannot find a solution - any advice is most welcome.
Publish your Pi-hole yaml file. Use a code box by clicking the </> button in the forum editor. Omit sensitive data.
Check whether the OMV GUI is set to port 80 and Pi-hole as well.
You need to use a macvlan so that OMV and the container do not collide on port 53. That is explained here. https://wiki.omv-extras.org/do…_the_same_lan_as_the_host
It will be easier to see the errors when you publish the yaml file.
Hi chente - thank you once again for your reply - you already helped me a lot today!!
Well, that was the first idea I had, to re-deploy with macvlan... well, I should have
Yes, port 80 is set for both OMV and Pi-hole - I guess it is easier to change OMV using "omv-firstaid". Below is the yaml file - it is just the standard one without any additional changes:
version: "3"
# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'EUROPE/London'
      # WEBPASSWORD: 'set a secure password here or it will be random'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    restart: unless-stopped
thank you once again for your reply - you already helped me a lot today!!
You are welcome
Well, that was the first idea I had, to re-deploy with macvlan... well, I should have
Yes, port 80 is set for both OMV and Pi-hole - I guess it is easier to change OMV using "omv-firstaid". Below is the yaml file - it is just the standard one without any additional changes:
Change port 80 to whatever you want (that is available) in System>Workbench
Then use the link I gave you to configure a macvlan. That should solve all the Pi-hole problems. The rest of the yaml appears to be correct.
Done! Changed GUI port for OMV and with macvlan for pi-hole! Runs without issues!
Thanks again and... time for a break - that was a long day
(and there is still some work to be done in the next days to move my SSD drives to the new PC.....)
Runs without issues!
I celebrate it
How did you know which of the duplicates to remove?
Here is how you do it:
..
<uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
<fsname>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx|/dev/xxx</fsname>
..
Just remove
<fsname>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|xxxx-xxxx|/dev/xxx</fsname>
and reboot system
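Before editing by hand, it can help to list which `<fsname>` values are actually duplicated. A sketch, assuming the standard OMV database location `/etc/openmediavault/config.xml`:

```shell
# Back up the OMV database first -- a malformed config.xml breaks the web UI
sudo cp /etc/openmediavault/config.xml /etc/openmediavault/config.xml.bak
# Print any <fsname> values that appear more than once
grep -o '<fsname>[^<]*</fsname>' /etc/openmediavault/config.xml | sort | uniq -d
```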
Hello,
I hope someone can help. I have installed OMV in Hyper-V on Windows 11.
Now I want to install Portainer or Nextcloud, but I always get the error code when uploading:
sudo omv-aptclean repos
Fixed my issue - same issue as Tullsta
I am getting this same issue and a Google search brought me here.
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-Storage.mount' with exit code '1':
OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-Storage.mount' with exit code '1': in /usr/share/php/openmediavault/system/process.inc:242
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/mergerfs.inc(203): OMV\System\Process->execute(Array, 1)
#1 [internal function]: OMVRpcServiceMergerfs->restartPool(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('restartPool', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Mergerfs', 'restartPool', Array, Array, 1)
#5 {main}
But my issue is that the merged pool is mounted at startup, yet it does not contain all the directories from all the branches, and it does not seem to be writable.
I have tried some of the updates and multiple restarts but am still having no luck. Any help would be appreciated.
Well, I figured it out, actually.
I ran
root@illmatic:~# systemctl restart srv-mergerfs-Storage.mount
Job failed. See "journalctl -xe" for details.
root@illmatic:~# journalctl -xe
and got
Subject: Mount point is not empty
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The directory /srv/mergerfs/Storage is specified as the mount point (second field in
░░ /etc/fstab or Where= field in systemd unit file) and is not empty.
░░ This does not interfere with mounting, but the pre-exisiting files in
░░ this directory become inaccessible. To see those over-mounted files,
░░ please manually mount the underlying file system to a secondary
░░ location.
Jul 28 09:22:07 illmatic systemd[1]: Mounting MergerFS mount for Storage...
░░ Subject: A start job for unit srv-mergerfs-Storage.mount has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit srv-mergerfs-Storage.mount has begun execution.
░░
░░ The job identifier is 1567.
Jul 28 09:22:07 illmatic mount[38991]: * ERROR: invalid value - dropcacheonclose=
Jul 28 09:22:07 illmatic systemd[1]: srv-mergerfs-Storage.mount: Mount process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An n/a= process belonging to unit srv-mergerfs-Storage.mount has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Jul 28 09:22:07 illmatic systemd[1]: srv-mergerfs-Storage.mount: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit srv-mergerfs-Storage.mount has entered the 'failed' state with result 'exit-code'.
Jul 28 09:22:07 illmatic systemd[1]: Failed to mount MergerFS mount for Storage.
░░ Subject: A start job for unit srv-mergerfs-Storage.mount has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
I removed dropcacheonclose in the UI, restarted the pool, and it works now.
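For anyone hitting the same `ERROR: invalid value - dropcacheonclose=` message, the malformed option (an empty `dropcacheonclose=` with no true/false value) can be spotted in the generated mount configuration before restarting the pool. A sketch, assuming the unit or fstab entry lives in the usual locations:

```shell
# Look for a dangling "dropcacheonclose=" in the mergerfs mount options
grep -r 'dropcacheonclose' /etc/fstab /etc/systemd/system/ 2>/dev/null
```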
Thanks for your help
Hi,
After installing Docker Compose, I set up the paths for data, share and the last one...???
But it cannot work, even though the files are created and can be seen in storage/share...
I do not know what to do; here is a copy of the message:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LC_ALL=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color compose 2>&1' with exit code '1': debiannasgalope:
----------
ID: configure_compose_global_env_file
Function: file.managed
Name: /srv/mergerfs/Merger_Pool1/Compose Share/global.env
Result: False
Comment: Parent directory not present
Started: 09:56:38.564429
Duration: 944.251 ms
Changes:
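The salt state reports `Parent directory not present` for `/srv/mergerfs/Merger_Pool1/Compose Share/global.env`. One way past this (a sketch, assuming the mergerfs pool is actually mounted and the path from the log is correct) is to create the missing directory by hand and re-run the failed deploy:

```shell
# Quotes are needed because the path contains a space
sudo mkdir -p '/srv/mergerfs/Merger_Pool1/Compose Share'
sudo omv-salt deploy run compose
```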
Thanks in advance for your help.
Fred