Posts by ggr

    Thanks. I'll try deleting the volume and retry.

    I can free up 8080 by reconfiguring my sabnzbd container, but it's actually the OpenMediaVault web interface/dashboard that's using port 80. I take it I'd need to move that to a different port using omv-firstaid?

    Hi chente, I've been following your guide to install Nextcloud (option 1) but I think I've messed up.

    I cut and pasted your code and blindly hit up, before realizing that I already had port 80 and 8080 in use. I then chose different ports and tried again.

    I'm now stuck at "Accept the certificate to access, copy the password and follow the instructions to configure Nextcloud AIO"

    I don't get asked for a certificate and have no idea what the password should be.

    How do I start again? There's no Nextcloud data in my data dir.
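    For anyone else stuck here: since there was no Nextcloud data yet, deleting the AIO containers and the master volume lets the setup wizard start from scratch. A rough sketch only, assuming the default Nextcloud AIO container/volume names; verify yours with docker ps -a and docker volume ls before deleting anything:

```shell
#!/bin/sh
# Sketch: reset a Nextcloud AIO install so the setup wizard starts fresh.
# Container/volume names are the AIO defaults -- verify with `docker ps -a`
# and `docker volume ls` before deleting anything.
if command -v docker >/dev/null 2>&1; then
    # Stop and remove the AIO master container (ignore errors if absent).
    docker stop nextcloud-aio-mastercontainer 2>/dev/null || true
    docker rm nextcloud-aio-mastercontainer 2>/dev/null || true
    # Removing the master volume discards the generated admin password,
    # so a fresh one is created on the next start.
    docker volume rm nextcloud_aio_mastercontainer 2>/dev/null || true
    msg="AIO reset done - re-run the compose command from the guide"
else
    msg="docker not found - nothing to reset"
fi
echo "$msg"
```

    After this, bringing the stack up again should show a new password in the AIO interface.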

    Sorry to reply to my own thread, but it looks like my issue was caused by not unmounting the original disk before trying to set up the new one.

    I was replacing an 8 TB drive with a 16 TB one and unplugged the 8 TB without unmounting it first, so the original disk was indeed "missing".

    I put it back in and then unmounted both disks before installing the new one. Mounting the new disk then went smoothly.
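    A quick pre-flight check would have caught this before pulling the drive. A minimal sketch; the device name is a placeholder only, substitute your actual disk:

```shell
#!/bin/sh
# Sketch: before pulling a data drive, confirm nothing is still mounted
# from it. DEV is a placeholder -- substitute your own device, and check
# its partitions (e.g. /dev/sdb1) as well as the whole disk.
DEV=/dev/sdb
if findmnt --source "$DEV" >/dev/null 2>&1; then
    echo "$DEV is still mounted - unmount it in the OMV GUI first"
    mount_state=mounted
else
    echo "$DEV has no active mounts here"
    mount_state=clear
fi
```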

    Hi, I'm running OMV 6.8.0-1 and having trouble adding a new HDD.

    I wiped the disk and am now trying to "create and mount a file system" from the GUI. The process appeared to complete OK, but the file system status is still "missing".

    I'm now trying "mount an existing file system". It finds the disk, but applying the changes gets me to the "Please wait, the configuration changes are being applied..." screen.

    It's been stuck like this for over 10 minutes. Is it normal for it to take so long?

    It's a large disk (16 TB). Should I just continue to wait?


    Edit - It finally finished applying the changes, but the file system is still "missing".

    If I attempt to use the edit button I get this error -

    .0-1 (Shaitan)

    Having the same issue here.

    Running docker ps shows all containers running OK and they are still accessible. It looks like it's just the compose GUI that's confused.


    Edit - Used this as an opportunity to test the Clonezilla backup that I took last night.

    Recovered fine. :)

    The Clonezilla headless backup/recovery is really seamless, thanks.

    I'll wait for ryecoaaron's tested fix before I install the latest Docker updates.

    Thanks.

    I was using DHCP with a reservation on the router for my OMV box.

    Switching to a static IP and DNS fixed it.

    The DHCP reservation had been working up till tonight. I was trying to setup a VPN client when the problems started.

    I'll stick with the static IP for now.

    My OpenMediaVault server hangs while installing updates -


    Reading package lists...
    Building dependency tree...
    Reading state information...
    Calculating upgrade...
    The following packages will be upgraded: openmediavault protonvpn-stable-release salt-common salt-minion
    4 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    Need to get 10.0 MB of archives.
    After this operation, 12.3 kB of additional disk space will be used.
    Err:1 https://repo.protonvpn.com/debian stable/main all protonvpn-stable-release all 1.0.3-2 Could not resolve 'repo.protonvpn.com'
    Ign:2 http://packages.openmediavault.org/public shaitan/main amd64 salt-minion all 3006.0+ds-1+191.1
    Ign:3 http://packages.openmediavault.org/public shaitan/main amd64 salt-common all 3006.0+ds-1+191.1
    Ign:4 http://packages.openmediavault.org/public shaitan/main amd64 openmediavault all 6.8.0-1
    Err:2 https://openmediavault.github.io/packages shaitan/main amd64 salt-minion all 3006.0+ds-1+191.1 Could not resolve 'packages.openmediavault.org'
    Err:3 https://openmediavault.github.io/packages shaitan/main amd64 salt-common all 3006.0+ds-1+191.1 Could not resolve 'packages.openmediavault.org'
    Err:4 https://openmediavault.github.io/packages shaitan/main amd64 openmediavault all 6.8.0-1 Could not resolve 'packages.openmediavault.org'

    ** CONNECTION LOST **


    I have to close the window, but the updates never get applied.

    Can anyone help me troubleshoot?
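    The repeated "Could not resolve" lines point at DNS rather than at apt itself. A quick way to confirm, using the hostnames from the log above:

```shell
#!/bin/sh
# Check whether the repository hostnames from the apt log resolve.
# If these fail, fix DNS (e.g. a static IP with a working DNS server)
# before retrying the updates.
checked=0
for host in packages.openmediavault.org repo.protonvpn.com; do
    if getent hosts "$host" >/dev/null 2>&1; then
        echo "$host: resolves"
    else
        echo "$host: DNS lookup FAILED"
    fi
    checked=$((checked + 1))
done
```

    If both lookups fail, the static IP/DNS fix described above is the likely answer.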

    You need to set the IP range that is allowed to access the share in the NFS share settings page. It seems a * is also allowed.

    Thanks.

    From memory, previous releases allowed you to leave the IP range blank, but since the last OMV upgrade new shares require it. I think I had some of my older shares with the IP range blank. I've now edited all the entries to have an IP range assigned.

    Check out https://github.com/openmediavault/openmediavault/issues/1583. The user who reported the issue did not configure the NFS share correctly.


    Run journalctl -u nfs-server | tee to get more information on why the daemon is not starting.

    I did eventually get it running, but I'm intrigued as to what the


    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate1media (fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure), suggest *(fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure) to avoid warning


    messages are telling me. Should I modify my NFS shares?


    Anyway, here is the first part of the full output of journalctl -u nfs-server | tee in case it helps:


    -- Journal begins at Fri 2022-10-21 21:15:13 BST, ends at Fri 2023-08-18 16:01:54 BST. --

    Oct 21 21:15:14 omvll systemd[1]: Starting NFS server and services...

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/sabnzbd (fsid=33905c64-aa35-4451-bb92-6280e84f4277,rw,subtree_check,insecure), suggest *(fsid=33905c64-aa35-4451-bb92-6280e84f4277,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate1media (fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure), suggest *(fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate2media (fsid=5b5f752d-a686-4b78-a5a6-0c2858e45638,rw,subtree_check,insecure), suggest *(fsid=5b5f752d-a686-4b78-a5a6-0c2858e45638,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate3media (fsid=dbbb977d-4961-4601-9a31-29c25a269ae2,rw,subtree_check,insecure), suggest *(fsid=dbbb977d-4961-4601-9a31-29c25a269ae2,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate4media (fsid=b0bfeb8b-5a1b-400a-9242-28102f57b388,rw,subtree_check,insecure), suggest *(fsid=b0bfeb8b-5a1b-400a-9242-28102f57b388,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate5media (fsid=b8be7d76-4772-4b25-aeef-239487475624,rw,subtree_check,insecure), suggest *(fsid=b8be7d76-4772-4b25-aeef-239487475624,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/wd1media (fsid=74a0d595-be73-4e33-b07c-cb34b4ed19ae,rw,subtree_check,insecure), suggest *(fsid=74a0d595-be73-4e33-b07c-cb34b4ed19ae,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/seagate6media (fsid=427cdd6d-be4e-4d02-9dcb-2e70c04856b3,rw,subtree_check,insecure), suggest *(fsid=427cdd6d-be4e-4d02-9dcb-2e70c04856b3,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export/public (fsid=fa34f3ad-0f0d-4b6b-8299-33065448d719,rw,subtree_check,insecure), suggest *(fsid=fa34f3ad-0f0d-4b6b-8299-33065448d719,rw,subtree_check,insecure) to avoid warning

    Oct 21 21:15:14 omvll exportfs[715]: exportfs: No host name given with /export (ro,fsid=0,root_squash,no_subtree_check,hide), suggest *(ro,fsid=0,root_squash,no_subtree_check,hide) to avoid warning

    Oct 21 21:15:16 omvll systemd[1]: Started NFS server and services.

    Oct 21 21:51:25 omvll systemd[1]: Stopping NFS server and services...

    Oct 21 21:51:25 omvll systemd[1]: nfs-server.service: Succeeded.

    Oct 21 21:51:25 omvll systemd[1]: Stopped NFS server and services.

    Oct 21 21:51:26 omvll systemd[1]: Starting NFS server and services...
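    For the record, the "No host name given" warning means the export line in /etc/exports has an option list but no client specification in front of it. On OMV this file is generated, so the fix is to set the client/IP range in the NFS share settings page rather than editing the file by hand; the generated lines then look something like this (the subnet is an example value only; the fsid is taken from the log above):

```
# /etc/exports -- each export needs a client spec before the options.
# Restricted to a LAN subnet (example value, adjust to your network):
/export/seagate1media 192.168.1.0/24(fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure)
# Or open to any client, as the warning itself suggests:
/export/seagate1media *(fsid=11807640-0262-4435-b9ec-fe635ac42e5e,rw,subtree_check,insecure)
```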

    What is the output of:


    sudo systemctl stop nfs-server

    sudo omv-salt deploy run nfs

    I managed to get nfs working again by a combination of enabling / disabling versions and rebooting.


    Here is the output after it was fixed, just for info -


    root@omvll:~# sudo systemctl stop nfs-server

    root@omvll:~# sudo omv-salt deploy run nfs

    debian:

    ----------

    ID: configure_default_nfs-common

    Function: file.managed

    Name: /etc/default/nfs-common

    Result: True

    Comment: File /etc/default/nfs-common is in the correct state

    Started: 15:52:44.014297

    Duration: 414.699 ms

    Changes:

    ----------

    ID: configure_default_nfs-kernel-server

    Function: file.managed

    Name: /etc/default/nfs-kernel-server

    Result: True

    Comment: File /etc/default/nfs-kernel-server is in the correct state

    Started: 15:52:44.429255

    Duration: 257.307 ms

    Changes:

    ----------

    ID: configure_nfsd_exports

    Function: file.managed

    Name: /etc/exports

    Result: True

    Comment: File /etc/exports is in the correct state

    Started: 15:52:44.686763

    Duration: 366.805 ms

    Changes:

    ----------

    ID: start_rpc_statd_service

    Function: service.running

    Name: rpc-statd

    Result: True

    Comment: The service rpc-statd is already running

    Started: 15:52:55.282375

    Duration: 61.865 ms

    Changes:

    ----------

    ID: start_nfs_server_service

    Function: service.running

    Name: nfs-server

    Result: True

    Comment: Service nfs-server is already enabled, and is running

    Started: 15:52:55.345567

    Duration: 1476.248 ms

    Changes:

    ----------

    nfs-server:

    True


    Summary for debian

    ------------

    Succeeded: 5 (changed=1)

    Failed: 0

    ------------

    Total states run: 5

    Total run time: 2.577 s

    Unfortunately, this didn't work for me.

    When I try to re-enable NFS I get the same 500 internal server error as above, and have to revert.

    :(

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:


    No combination of NFS versions allows me to re-enable NFS without triggering the errors. Looks like I'm stuck with SMB/CIFS for the time being, which is rubbish for streaming 4K to Kodi. :(


    UPDATE - Finally got it working. It was either one of these steps, or a combination of all of them, that got it working again.

    With NFS disabled:

    1) I removed all NFS shares.

    2) Unchecked all NFS versions EXCEPT NFSv3 (don't know if that's significant).

    3) Saved the version changes, but DIDN'T apply the changes.

    4) Rebooted

    5) Enabled NFS and applied the changes. NO ERRORS!

    6) Recreated NFS shares. Applied changes. No Errors!

    7) Re-enabled NFS versions 3, 4, 4.1, 4.2. Applied changes. No errors!

    All good now.

    Can't be sure, but I think step 3 may have been crucial: save the version changes, but don't apply them until after the reboot.
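    After a recovery like this, it's worth confirming the server side really is healthy before pointing clients at it again. A small check, sketch only; it skips itself on machines without nfs-kernel-server installed:

```shell
#!/bin/sh
# Sketch: sanity-check the NFS server after re-enabling it in OMV.
# Skips itself if nfs-kernel-server is not installed on this machine.
if command -v exportfs >/dev/null 2>&1; then
    # Is the service actually running?
    systemctl is-active nfs-server >/dev/null 2>&1 \
        && echo "nfs-server: active" \
        || echo "nfs-server: NOT active"
    # What is really being exported, with the effective options?
    exportfs -v
    nfs_checked=server
else
    echo "exportfs not found - nfs-kernel-server not installed here"
    nfs_checked=skipped
fi
```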

    I'm also having this problem since updating OMV.

    NFS is completely borked.

    I've tried enabling/re-enabling versions, but every time I try to apply changes, I get this error and have to revert -


    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:

    ----------

    ID: configure_default_nfs-common

    Function: file.managed

    Name: /etc/default/nfs-common

    Result: True

    Comment: File /etc/default/nfs-common is in the correct state

    Started: 09:34:08.036469

    Duration: 146.14 ms

    Changes:

    ----------

    ID: configure_default_nfs-kernel-server

    Function: file.managed

    Name: /etc/default/nfs-kernel-server

    Result: True

    Comment: File /etc/default/nfs-kernel-server updated

    Started: 09:34:08.182753

    Duration: 184.916 ms

    Changes:

    ----------

    diff:

    ---

    +++

    @@ -8,7 +8,7 @@

    RPCNFSDPRIORITY="0"


    # Options for rpc.nfsd.

    -RPCNFSDOPTS="--no-nfs-version 4.2 --no-nfs-version 4.1 --no-nfs-version 2 --nfs-version 4 --nfs-version 3 "

    +RPCNFSDOPTS="--no-nfs-version 2 --nfs-version 4.2 --nfs-version 4.1 --nfs-version 4 --nfs-version 3 "


    # Options for rpc.mountd.

    RPCMOUNTDOPTS="--no-nfs-version 2 --nfs-version 3 --nfs-version 4 --manage-gids"

    ----------

    ID: configure_nfsd_exports

    Function: file.managed

    Name: /etc/exports

    Result: True

    Comment: File /etc/exports is in the correct state

    Started: 09:34:08.367808

    Duration: 287.229 ms

    Changes:

    ----------

    ID: start_rpc_statd_service

    Function: service.running

    Name: rpc-statd

    Result: True

    Comment: Service rpc-statd has been enabled, and is running

    Started: 09:34:09.802715

    Duration: 811.798 ms

    Changes:

    ----------

    rpc-statd:

    True

    ----------

    ID: start_nfs_server_service

    Function: service.running

    Name: nfs-server

    Result: False

    Comment: Job for nfs-server.service canceled.

    Started: 09:34:10.805854

    Duration: 171.289 ms

    Changes:


    Summary for debian

    ------------

    Succeeded: 4 (changed=2)

    Failed: 1

    ------------

    Total states run: 5

    Total run time: 1.601 s

    [ERROR ] Command '/bin/systemd-run' failed with return code: 1

    [ERROR ] stderr: Running scope as unit: run-rd4f3e71c1ee449d09c2bcfb268fc85d9.scope

    Job for nfs-server.service canceled.

    [ERROR ] retcode: 1

    [ERROR ] Job for nfs-server.service canceled.

    [ERROR ] Command '/bin/systemd-run' failed with return code: 1

    [ERROR ] stderr: Running scope as unit: run-r7c785c2fa8c84687a6990dc8b6ef7776.scope

    Job for nfs-server.service canceled.

    [ERROR ] retcode: 1

    [ERROR ] Job for nfs-server.service canceled.


    OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:


    [ERROR ] Job for nfs-server.service canceled. in /usr/share/php/openmediavault/system/process.inc:242

    Stack trace:

    #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(62): OMV\System\Process->execute()

    #1 /usr/share/openmediavault/engined/rpc/config.inc(174): OMV\Engine\Module\ServiceAbstract->deploy()

    #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)

    #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)

    #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)

    #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(620): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatusb3...', '/tmp/bgoutputDn...')

    #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))

    #7 /usr/share/openmediavault/engined/rpc/config.inc(195): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)

    #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)

    #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)

    #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)

    #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)

    #12 {main}

    Sorry to resurrect an old thread, but I was trying to do this today and omv-salt deploy run monit brought up this -


    debian:

    Data failed to compile:

    ----------

    Rendering SLS 'base:omv.deploy.monit.default' failed: Jinja error: tls: Invalid value '0', allowed values are 'none, ssl, starttls, auto'.

    Traceback (most recent call last):

    File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 497, in render_jinja_tmpl

    output = template.render(**decoded_context)

    File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render

    return original_render(self, *args, **kwargs)

    File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render

    return self.environment.handle_exception(exc_info, True)

    File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception

    reraise(exc_type, exc_value, tb)

    File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise

    raise value.with_traceback(tb)

    File "<template>", line 37, in top-level template code

    File "/usr/lib/python3/dist-packages/jinja2/sandbox.py", line 438, in call

    return __context.call(__obj, *args, **kwargs)

    File "/usr/lib/python3/dist-packages/salt/loader.py", line 1235, in __call__

    return self.loader.run(run_func, *args, **kwargs)

    File "/usr/lib/python3/dist-packages/salt/loader.py", line 2268, in run

    return self._last_context.run(self._run_as, _func_or_method, *args, **kwargs)

    File "/usr/lib/python3/dist-packages/salt/loader.py", line 2283, in _run_as

    return _func_or_method(*args, **kwargs)

    File "/var/cache/salt/minion/extmods/modules/omv_conf.py", line 39, in get

    objs = db.get(id_, identifier)

    File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 85, in get

    query.execute()

    File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 726, in execute

    self._response = self._elements_to_object(elements)

    File "/usr/lib/python3/dist-packages/openmediavault/config/database.py", line 487, in _elements_to_object

    result.validate()

    File "/usr/lib/python3/dist-packages/openmediavault/config/object.py", line 236, in validate

    self.model.validate(self.get_dict())

    File "/usr/lib/python3/dist-packages/openmediavault/config/datamodel.py", line 202, in validate

    self.schema.validate(data)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 175, in validate

    self._validate_type(value, schema, name)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 229, in _validate_type

    raise last_exception

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 200, in _validate_type

    self._validate_object(value, schema, name)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 305, in _validate_object

    self._check_properties(value, schema, name)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 519, in _check_properties

    self._validate_type(value[propk], propv, path)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 229, in _validate_type

    raise last_exception

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 209, in _validate_type

    self._validate_string(value, schema, name)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 284, in _validate_string

    self._check_enum(value, schema, name)

    File "/usr/lib/python3/dist-packages/openmediavault/json/schema.py", line 475, in _check_enum

    % (valuev, ", ".join(map(str, schema['enum']))),

    openmediavault.json.schema.SchemaValidationException: tls: Invalid value '0', allowed values are 'none, ssl, starttls, auto'.


    ; line 37


    ---

    [...]

    # http://www.debianadmin.com/mon…-servers-using-monit.html

    # http://www.uibk.ac.at/zid/systeme/linux/monit.html

    # http://wiki.ubuntuusers.de/Monit

    # http://viktorpetersson.com/201…-and-postgresql-on-ubuntu


    {% set email_config = salt['omv_conf.get']('conf.system.notification.email') %} <======================


    configure_default_monit:

    file.managed:

    - name: "/etc/default/monit"

    - contents:

    [...]

    ---


    Any ideas?

    I'm not at home for 2 weeks, so I cannot perform any tests for the time being. Did you test on your side?

    If you did, could you please share how to change to an older kernel, and I'll be pleased to do some testing.

    I was having the same issue. Connecting to NFS shares on OMV caused the OMV NAS to reboot after a time.

    Changing the kernel from 5.6.0-0 to 5.5.0-0 fixed it for me too. :)

    I really don't. I have no idea how NFS could make a NAS reboot. It doesn't make sense to me. I use NFS A LOT and have never seen anything like this.

    I've been scratching my head over that for the past couple of weeks. I thought my OSMC box was somehow to blame until I mapped some OMV NFS shares to my Windows 10 box. I found I could also make the OMV NAS crash just by browsing the shares. I found the solution here - OMV spontaneous reboots


    Reverting to the 5.5.0-0 kernel fixes the problem. Looks like the issue is with 5.6.0-0.

    Since reverting to 5.5.0-0, my system has been up for 3 days and 15 hours despite continual button mashing and sleep/wake cycles of the Apple TV while at home on lockdown.

    This is longer than any single uptime while using 5.6.0-0.


    Think I'm going to stay on 5.5.0-0 for the foreseeable future.
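    If you try the same revert, it's easy to confirm which kernel you actually booted into:

```shell
#!/bin/sh
# Confirm which kernel the box booted after selecting the older
# one (e.g. via GRUB or the omv-extras kernel menu).
kver=$(uname -r)
echo "running kernel: $kver"
```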

    Thanks for this.

    The reboots and crashes while connecting to NFS shares have been driving me mad for the past few weeks. After messing around with memory checkers, trawling through logs and changing various NFS parameters (including adding unique fsids to shares) to no avail, reverting to the 5.5.0-0 kernel has finally fixed it for me too.

    Thanks again, it was driving me crazy.