Yes, the plugin shows them as mounted, but File Systems cannot fetch and display any drives.
Posts by MarcS
-
I deleted and recreated all remote mounts. File Systems works again, but after a reboot the same error comes up.
Every single remote mount created via the plugin throws that error.
Code
Could not fetch a matching mount point from the provided fsname: '/srv/remotemount/T17nnnnfs2'.
OMV\Exception: Could not fetch a matching mount point from the provided fsname: '/srv/remotemount/T17nnnnfs2'. in /usr/share/php/openmediavault/system/filesystem/backend/remoteabstract.inc:220
Stack trace:
#0 /usr/share/php/openmediavault/system/filesystem/backend/remoteabstract.inc(139): OMV\System\Filesystem\Backend\RemoteAbstract::fetchMountPointFromFstabByFsnameAndType('/srv/remotemoun...', 'cifs')
#1 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(811): OMV\System\Filesystem\Backend\RemoteAbstract->getImpl('/srv/remotemoun...')
#2 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(182): OMV\System\Filesystem\Filesystem::getFilesystems()
#3 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->enumerateFilesystems(NULL, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(386): OMV\Rpc\ServiceAbstract->callMethod('enumerateFilesy...', NULL, Array)
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getList(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(619): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusOi...', '/tmp/bgoutputm9...')
#10 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#11 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(525): OMV\Rpc\ServiceAbstract->callMethodBg('getList', Array, Array)
#12 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getListBg(Array, Array)
#13 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#14 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getListBg', Array, Array)
#15 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'getListBg', Array, Array, 1)
#16 {main}
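For anyone hitting the same thing: a rough way to check what the plugin actually wrote to fstab (standard commands; the mount point name is taken from the error above, and omv-salt should exist on OMV 5/6):
Code
# Is there still a cifs line in fstab for the mount point named in the error?
grep -n 'remotemount' /etc/fstab
# List every cifs entry that fstab knows about
findmnt --fstab -t cifs
# If the entry is missing, regenerating fstab from the OMV config database may help
sudo omv-salt deploy run fstab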
-
OK, thanks. How do I delete and recreate it manually?
(I don't want to mess this up.)
-
After a recent upgrade (and reboot), I am getting an error when clicking on Storage >> File Systems in the OMV GUI menu:
Code
Could not fetch a matching mount point from the provided fsname: '/srv/196ac54e-b420-4c53-a4e4-1bebca5e77ce'.
OMV\Exception: Could not fetch a matching mount point from the provided fsname: '/srv/196ac54e-b420-4c53-a4e4-1bebca5e77ce'. in /usr/share/php/openmediavault/system/filesystem/backend/remoteabstract.inc:220
Stack trace:
#0 /usr/share/php/openmediavault/system/filesystem/backend/remoteabstract.inc(139): OMV\System\Filesystem\Backend\RemoteAbstract::fetchMountPointFromFstabByFsnameAndType('/srv/196ac54e-b...', 'cifs')
#1 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(811): OMV\System\Filesystem\Backend\RemoteAbstract->getImpl('/srv/196ac54e-b...')
#2 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(182): OMV\System\Filesystem\Filesystem::getFilesystems()
#3 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->enumerateFilesystems(NULL, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#5 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(386): OMV\Rpc\ServiceAbstract->callMethod('enumerateFilesy...', NULL, Array)
#6 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getList(Array, Array)
#7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#8 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
#9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(619): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusJq...', '/tmp/bgoutputad...')
#10 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
#11 /usr/share/openmediavault/engined/rpc/filesystemmgmt.inc(525): OMV\Rpc\ServiceAbstract->callMethodBg('getList', Array, Array)
#12 [internal function]: Engined\Rpc\OMVRpcServiceFileSystemMgmt->getListBg(Array, Array)
#13 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#14 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getListBg', Array, Array)
#15 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('FileSystemMgmt', 'getListBg', Array, Array, 1)
#16 {main}
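In case it helps with debugging, this is roughly how I would check whether the fsname from the error still matches anything on the system (the UUID is copied from the message above; omv-confdbadm may or may not be available on your OMV version):
Code
# Is there an fstab line that uses this directory as its mount point?
grep -n '196ac54e-b420-4c53-a4e4-1bebca5e77ce' /etc/fstab
# Is anything actually mounted there right now?
findmnt /srv/196ac54e-b420-4c53-a4e4-1bebca5e77ce
# What mount points does the OMV database itself contain? (OMV 5/6, if omv-confdbadm is present)
sudo omv-confdbadm read conf.system.filesystem.mountpoint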
-
journalctl says nothing useful:
Code
May 03 17:59:25 raspberrypi dockerd[502870]: time="2023-05-03T17:59:25.143866506+01:00" level=warning msg="Usage of loopback devices is strongly discouraged for production>
May 03 17:59:25 raspberrypi dockerd[502870]: time="2023-05-03T17:59:25.204437473+01:00" level=warning msg="Base device already exists and has filesystem xfs on it. User sp>
May 03 17:59:25 raspberrypi dockerd[502870]: time="2023-05-03T17:59:25.245742454+01:00" level=error msg="[graphdriver] prior storage driver devicemapper is deprecated and >
May 03 17:59:25 raspberrypi dockerd[502870]: failed to start daemon: error initializing graphdriver: prior storage driver devicemapper is deprecated and will be removed in>
May 03 17:59:25 raspberrypi systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 03 17:59:25 raspberrypi systemd[1]: docker.service: Failed with result 'exit-code'.
May 03 17:59:25 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
May 03 17:59:26 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
May 03 17:59:27 raspberrypi systemd[1]: Starting Docker Application Container Engine...
May 03 17:59:27 raspberrypi dockerd[502888]: time="2023-05-03T17:59:27.140257462+01:00" level=info msg="Starting up"
May 03 17:59:27 raspberrypi dockerd[502888]: time="2023-05-03T17:59:27.175149264+01:00" level=warning msg="Usage of loopback devices is strongly discouraged for production>
May 03 17:59:27 raspberrypi dockerd[502888]: time="2023-05-03T17:59:27.248149615+01:00" level=warning msg="Base device already exists and has filesystem xfs on it. User sp>
May 03 17:59:27 raspberrypi dockerd[502888]: time="2023-05-03T17:59:27.286404551+01:00" level=error msg="[graphdriver] prior storage driver devicemapper is deprecated and >
May 03 17:59:27 raspberrypi dockerd[502888]: failed to start daemon: error initializing graphdriver: prior storage driver devicemapper is deprecated and will be removed in>
May 03 17:59:27 raspberrypi systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
May 03 17:59:27 raspberrypi systemd[1]: docker.service: Failed with result 'exit-code'.
May 03 17:59:27 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
May 03 17:59:29 raspberrypi systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
May 03 17:59:29 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
May 03 17:59:29 raspberrypi systemd[1]: docker.service: Start request repeated too quickly.
May 03 17:59:29 raspberrypi systemd[1]: docker.service: Failed with result 'exit-code'.
May 03 17:59:29 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
-- Boot 8243e0ca90b34d79a8ad98570619563a --
May 03 19:06:09 raspberrypi dockerd[8247]: time="2023-05-03T19:06:09.305114565+01:00" level=error msg="Error unmounting device a9edfadecafbcc2f50ed1fbea52d080558e5860bc671>
May 03 19:06:09 raspberrypi dockerd[8247]: time="2023-05-03T19:06:09.305235508+01:00" level=error msg="error unmounting container" container=fd235587ff6eab24161f1a4c9bb7a0>
May 03 19:06:09 raspberrypi dockerd[8247]: time="2023-05-03T19:06:09.829711464+01:00" level=error msg="Error unmounting device 2597929bb61ab28ed2c8facb1b22b92c92d4cd48d305>
May 03 19:06:09 raspberrypi dockerd[8247]: time="2023-05-03T19:06:09.829825518+01:00" level=error msg="error unmounting container" container=8c12cd1ccc8d8efa2a04ad146a6318>
May 03 19:06:10 raspberrypi dockerd[8247]: time="2023-05-03T19:06:10.079712992+01:00" level=error msg="Error unmounting device 98691df45d4d011084f1476d6c3942c13a633ff64d79>
May 03 19:06:10 raspberrypi dockerd[8247]: time="2023-05-03T19:06:10.079823972+01:00" level=error msg="error unmounting container" container=ac48fb2b4987dec2be4b85d1e95e6e>
May 03 19:06:10 raspberrypi dockerd[8247]: time="2023-05-03T19:06:10.754951305+01:00" level=error msg="Error unmounting device 80112d81395963981aa0bd99b8d45018dad23cc13919>
May 03 19:06:10 raspberrypi dockerd[8247]: time="2023-05-03T19:06:10.756203397+01:00" level=error msg="error unmounting container" container=c0394888e2dead8140d7c330875620>
May 03 19:06:11 raspberrypi dockerd[8247]: time="2023-05-03T19:06:11.455874314+01:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>" modu>
May 03 19:06:11 raspberrypi dockerd[8247]: time="2023-05-03T19:06:11.456615377+01:00" level=info msg="Daemon shutdown complete"
May 03 19:06:11 raspberrypi systemd[1]: docker.service: Succeeded.
May 03 19:06:11 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
May 03 19:06:11 raspberrypi systemd[1]: docker.service: Consumed 21.712s CPU time.
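The dockerd lines do point at the deprecated devicemapper storage driver. A minimal sketch of switching to overlay2, assuming Docker reads /etc/docker/daemon.json on this setup; note that images and containers stored under devicemapper will not carry over, so back them up first:
Code
# Sketch only: tell dockerd to use the overlay2 storage driver instead of devicemapper
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker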
-
...Add apparmor=0 to /boot/cmdline.txt
Tried that; same problem when re-installing Docker 23.
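For reference, this is roughly how I would apply that suggestion; /boot/cmdline.txt is a single line of kernel parameters, and /proc/cmdline shows what the kernel actually booted with:
Code
# Append apparmor=0 to the single parameter line in /boot/cmdline.txt
sudo sed -i '1s/$/ apparmor=0/' /boot/cmdline.txt
sudo reboot
# After the reboot, confirm the parameter made it onto the kernel command line
grep -o 'apparmor=0' /proc/cmdline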
-
Is GRUB really relevant on a Raspberry Pi?
-
I get an error:
bash: "update-grp": command not found
-
Ah, sorry. I just noticed you added a different link. I have not tried that.
-
Yes, as mentioned above, the downgrade works, but that cannot be the solution.
-
After the recent OMV upgrades, Docker still did not launch, and I had to downgrade Docker to make it work.
This downgrade still seems to be the only workaround for the bug.
Is there no solution in sight?
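In case it helps others stuck on the same bug, this is roughly how I keep Docker pinned to the last working build until a fix lands; the version string is a placeholder you would take from the madison output:
Code
# See which docker-ce builds the repository still offers
apt-cache madison docker-ce
# Downgrade to a known-good build (<working-version> is a placeholder from the list above)
sudo apt-get install --allow-downgrades docker-ce=<working-version> docker-ce-cli=<working-version>
# Keep apt from pulling the broken version back in on the next upgrade
sudo apt-mark hold docker-ce docker-ce-cli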
-
Unfortunately it does not finish the task. When I log back in, the backports tick is gone. Is there a way to trigger the OMV commands from the CLI?
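For the record, a sketch of what the backports switch boils down to on a plain Debian 11 (bullseye) base; I am not certain this matches exactly what omv-extras does, and on Raspberry Pi OS the repository details may differ (<package> is a placeholder):
Code
# Add the Debian backports repository (release name assumes a bullseye base)
echo 'deb http://deb.debian.org/debian bullseye-backports main contrib non-free' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
# Packages are only taken from backports when requested explicitly
sudo apt-get -t bullseye-backports install <package>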
-
When I try to switch backports on and confirm the change in OMV, I constantly get a 504 timeout error in the browser.
Any ideas?
-
Ah, thanks for sharing. I will try that. Is there a downside to activating backports? (I am not familiar with the concept.)
-
>>remotemounts in OMV_Extras
-
Also worth mentioning that the mount parameters apparently make a big difference to performance. I have not verified this, but it is probably worth trying out for OMV users.
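As an unverified example of the kind of parameters meant here, a typical tuned NFS client line in /etc/fstab might look like this (server address, path and mount point are placeholders; the options are common suggestions, not a tested recommendation):
Code
# Illustrative fstab entry with commonly tuned NFS client options
192.168.1.10:/  /srv/remotemount/media  nfs4  rsize=1048576,wsize=1048576,noatime,hard,_netdev  0  0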
-
@votdev: This text in the GUI should probably be changed to reflect that, for v4 to work, the share definition MUST NOT include a path (like /export); otherwise the server automatically falls back to v3 and does not even tell anybody about it.
As jeff0001 kindly pointed out above, the definition for v4 must contain only the server IP.
-
Incredible, thanks jeff0001, that fixed it! Thank you very much!
The reason I used the old convention is that the OMV GUI asks for it (including the export). I did not know that v4 required a different mount path, and it seems strange that the server changes the protocol version just because of the way the access path is written, even though I explicitly ask for v4 in the command. Anyway, thanks for pointing this out.
It is probably worth changing the wording in the OMV plugin to correct that; otherwise people will always connect with v3.
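For anyone landing here later, a small sketch of the difference (IP and paths are examples; nfsstat shows which protocol version was actually negotiated):
Code
# NFSv4: the remote path is relative to the server's pseudo-root, so no export prefix
sudo mount -t nfs4 192.168.1.10:/ /mnt/nas
# Old v3-style fsname with the full export path; the server may silently serve this over v3
sudo mount -t nfs 192.168.1.10:/export/data /mnt/nas
# Check which version is actually in use for the mounted share
nfsstat -m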
-
OK, I have now set up all LUKS drives to unlock automatically at boot via /etc/crypttab, so they cannot be the cause of MergerFS failing.
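For reference, the crypttab entries I mean look roughly like this (mapper name, UUID and key file path are placeholders):
Code
# /etc/crypttab: <mapper name> <source device> <key file> <options>
luks-data1  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /root/keys/data1.key  luks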
After booting, the MergerFS pool is still not available, although it shows up in the GUI as defined. If I manually restart the pool (inside the GUI), I get this error:
Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-JPool.mount' with exit code '1': OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; systemctl restart srv-mergerfs-JPool.mount' with exit code '1': in /usr/share/php/openmediavault/system/process.inc:220
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/mergerfs.inc(197): OMV\System\Process->execute(Array, 1)
#1 [internal function]: OMVRpcServiceMergerfs->restartPool(Array, Array)
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('restartPool', Array, Array)
#4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Mergerfs', 'restartPool', Array, Array, 1)
#5 {main}
-
So the systemd command is
"systemctl restart srv-mergerfs-JPool.mount"
Unfortunately it is failing, and the error just says: Job failed. See "journalctl -xe" for details.
I have attached my journalctl output in case anyone can identify the issue. Any assistance is much appreciated.
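If anyone wants to dig in, these are the checks I would run next; the unit name is taken from the error above, and as far as I understand the OMV mergerfs unit needs every branch filesystem mounted before it can start:
Code
# Why did the mount unit fail, exactly?
systemctl status srv-mergerfs-JPool.mount
journalctl -b -u srv-mergerfs-JPool.mount
# Inspect the generated unit (What=, Where=, Options= and its dependencies)
systemctl cat srv-mergerfs-JPool.mount
# Check whether the pool and its branches are actually mounted
findmnt -t fuse.mergerfs
grep mergerfs /proc/mounts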