Posts by zevleu

    Hi,

    I migrated from OMV5 to OMV6 a few weeks ago, and I'd like to congratulate the team on the great migration tool; it was totally seamless!

    I'm using the rSnapshot plugin; it's still running fine on OMV6, but I can't get the logs from the GUI.

    In OMV5, they were available in the Journal logs, but they no longer appear there.

    Where can we find them in the GUI?
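    In the meantime, here is a sketch of how the logs might be found from the CLI (the config path and log settings are assumptions about a default rsnapshot install, not something confirmed by the plugin docs):

    ```shell
    # Check whether rsnapshot writes its own log file ("logfile" directive):
    grep -iE '^(logfile|syslog_level)' /etc/rsnapshot.conf

    # If it logs to syslog, filter the journal by the rsnapshot identifier:
    journalctl -t rsnapshot --since "1 week ago"
    ```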

    Regards

    Have you attempted what that other thread states about changing the kernel to the older version and seeing if you can then use NFS?

    I'm not at home for the next 2 weeks, so I cannot perform any tests for the time being. Did you try on your side?

    If you did, could you please share with me how to change to the older kernel, and I'll be pleased to do some testing.

    Same for me, no more reboots: this is clearly caused by NFS.

    On my side, I've kept NFS enabled but I'm not mounting any shares: this implies the issue comes from the mounting process.

    Now the question is: how can we move forward to get this investigated by Debian?

    I use SMB for my Android devices, and NFS between my OMV servers and between the OMV servers and Ubuntu clients.


    The NFS shares are all created in OMV, but I mount all the NFS shares in the network on all the Linux clients and servers using autofs, and on my Ubuntu MATE laptop I also cache NFS using fscache and 128GB of NVMe storage.


    Works fine.
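    For reference, a minimal sketch of an autofs setup like the one described above (the mount point, map file, hostname and export paths are all made-up placeholders):

    ```
    # /etc/auto.master.d/nas.autofs -- register an indirect map for NFS mounts
    /mnt/nas  /etc/auto.nas  --timeout=60

    # /etc/auto.nas -- one line per share; "omv" is a placeholder server name
    media  -fstype=nfs4,rw  omv:/export/media
    data   -fstype=nfs4,rw  omv:/export/data
    ```

    With a timeout like this, autofs unmounts idle shares automatically, which is why shares that were not in use just before a reboot are already unmounted.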

    Hi Adoby,

    I guess that running autofs may hide the issue we are reporting here: NFS mounts can be handled slightly differently than with fstab, and if you didn't use your NFS mount just before rebooting, your shares won't have to be handled during the client reboot because they are already unmounted.

    Just my thoughts...

    PS: using autofs is good practice anyway, and I will deploy it on my clients :thumbup:

    Hi,

    Thx for sharing this thread; it clearly seems to be the same issue, linked with NFS.

    Unfortunately I will not be able to perform any additional tests during the next 2 weeks (and I don't even know how to revert to the previous OS...), but the OS upgrade could definitely be a good explanation.

    How can we raise this issue so it can be investigated by our OMV gurus and Debian OS masters?

    Yes, I have both NFS and Samba shares enabled on OMV.

    I'm mainly using Ubuntu clients, and they were (until now) connected to OMV using NFS. I was using Samba only when I had to use Windows from time to time.

    Now I'm no longer mounting the NFS shares on my Ubuntu client, and I'm using the Samba shares only.
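    For anyone wanting to try the same workaround, a sketch of switching a client from an NFS mount to the Samba share (the server name "omv", share name, mount point and credentials are all placeholders for your own setup):

    ```shell
    # Unmount the NFS share and mount the same data over SMB/CIFS instead.
    sudo umount /mnt/media
    sudo apt-get install -y cifs-utils
    sudo mount -t cifs //omv/media /mnt/media \
        -o username=myuser,password=mypass,vers=3.0,uid=$(id -u),gid=$(id -g)
    ```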

    Since then, OMV does not reboot erratically.

    Yes, I've kept the NFS shares active in OMV, but I don't use them anymore. I'm using Samba instead.

    For 2 days now, no more crashes, so the issue is clearly coming from NFS. Maybe there is a conflict between the NFS client on my Ubuntu 18 and the NFS server in OMV...

    This is what I did today as well, since I don't see any other way to incriminate NFS... I've kept the NFS shares on OMV, but I've removed my NFS mounts on my Ubuntu client and I'm using Samba.

    So far so good; no reboot today. I'll see in the coming days whether this is confirmed.

    Since my comments above were extracted from another thread, I give below the full details of the NFS issues I'm facing...



    After running OMV4 for years, it was time for me to migrate to OMV5 and implement Docker for Plex, JDownloader and Duplicati.

    I started with an upgrade from OMV4 to OMV5, but I was facing many instabilities, so I did a fresh install last weekend.

    My configuration is the following: 1 system disk + 1 RAID5 volume (4 disks of 2TB). I've installed Docker with the 3 containers for Plex, Duplicati and JDownloader. I'm using the backup and RemoteMount plugins (NFS mount of a Synology array to store rsnapshot images of my RAID5 volume).

    I'm using NFS shares extensively on my RAID5 volume; they are mounted by Ubuntu 18.04 clients on my internal network.

    I emphasize that (except Docker) I had exactly the same configuration with OMV4 and it was very stable.

    Since I moved to OMV5, my NAS reboots regularly (2-3 times a day), and I've found out that it happens when my Ubuntu clients resume from hibernation or restart: they mount the NFS shares, and this causes the NAS to reboot.


    1/ Looking at the syslog, I've found that there is an error message caused by the blkmapd service, just when my Ubuntu clients try to mount the NFS shares:

    Code
    Jul 25 21:01:56 Gemenos blkmapd[327]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory

    Then it seems there are attempts to recover the system, and finally the system reboots. You have the syslog extract with a few comments in the file syslog20200725.txt.

    syslog20200725.txt

    2/ I decided to get rid of the blkmapd service (the thread "OMV4->5 residue cleanup blkmapd[275]: open pipe file /run/rpc_pipefs/nfs/blocklayout" relates to this issue and seems to say that this service is not required).

    Since then, I do not get any error message in the syslog; my system just reboots a few minutes after my Ubuntu client tries (unsuccessfully) to mount the NFS shares.
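    For reference, this is roughly how the daemon can be disabled (a sketch; "nfs-blkmap" as the Debian service name is my assumption, and blkmapd is only needed for pNFS block layouts, not for plain NFS shares):

    ```shell
    # Stop and mask the pNFS block-layout mapping daemon so it cannot
    # be started again by dependencies (service name assumed).
    sudo systemctl stop nfs-blkmap.service
    sudo systemctl mask nfs-blkmap.service

    # Verify it is masked and inactive:
    systemctl status nfs-blkmap.service --no-pager
    ```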

    You will find the syslog with my comments in the file syslog20200726.txt.

    syslog20200726.txt


    Honestly, I'm completely stuck, as I can't find any error or issue in the logs.

    I hope someone can quickly guide me to identify the root cause of this issue, as I need to rely on my NAS for all my personal data.


    Thx in advance for your support.


    I confirm it does not change anything. My NAS still reboots from time to time when my Ubuntu laptop resumes from hibernation or restarts.

    At the last crash I had, after my Ubuntu resumed from hibernation, I saw in the syslog that an authenticated mount request was received by rpc.mountd, and after 3 minutes the reboot sequence started (there is nothing in between).

    I'm really stuck; given that this is a fresh install and there is nothing in the logs, I really don't know what to do here...

    Any idea?

    Same for me, all disks are clean.

    I'm currently wondering whether there are side effects with Docker.

    My RAID5 volume contains all the NFS shares. With OMV5, I've created multiple Docker containers which mount volumes within the RAID5; I didn't use Docker with OMV4, and that's the only change in my config since the upgrade. In particular, the config directories for my Docker containers are on my RAID5; I'm currently moving all my config folders outside of the RAID5 to see if it changes anything. I'll keep you posted.