Oh my god...
That was the issue; I can swap the disk now and the array is recovering.
Thank you for your prompt help!
Hi,
I am running OMV6 with a RAID 5 volume of four 2TB disks.
As my RAID volume is 80% full, I'd like to extend its capacity. My plan is to swap each 2TB disk, one at a time, for a new 4TB disk: I expect the RAID to go into degraded mode and then recover onto the new 4TB disk, so it is added to the array; once the RAID has recovered, I swap the next 2TB disk, and so on. Ultimately, mdadm will be managing four 4TB disks, and I will enjoy a larger RAID5 volume.
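In mdadm terms, I believe the cycle per disk would be something like this (a sketch; the device name /dev/sdb is an example, and md127 is my array as shown further below):
Code
# per disk: fail and remove the old 2TB member, then add the new 4TB one
mdadm /dev/md127 --fail /dev/sdb --remove /dev/sdb
mdadm /dev/md127 --add /dev/sdb
# once all four members are 4TB: grow the array, then the filesystem
mdadm --grow /dev/md127 --size=max
resize2fs /dev/md127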
Unfortunately, when I swap the first 2TB disk for the new 4TB disk, the RAID5 is not mounted in OMV (FYI, I had formatted the new 4TB disk as EXT4 and then wiped it prior to this operation, so it's clean). Moreover, I have no way to recover the RAID5; the "Recover" button is greyed out in the OMV interface, under the "RAID" menu.
This is the status of my volume:
Code
mdadm -vQD /dev/md127
/dev/md127:
Version : 1.2
Raid Level : raid0
Total Devices : 3
Persistence : Superblock is persistent
State : inactive
Working Devices : 3
Name : GemenosNAS:Gemenos
UUID : 5e222c44:31a161c4:3899a442:92cbf54f
Events : 2238
Number Major Minor RaidDevice
- 8 32 - /dev/sdc
- 8 48 - /dev/sdd
- 8 16 - /dev/sdb
We can see that 3 of the 4 disks of my RAID5 are still attached, so I don't understand why I can't recover by adding the new 4TB disk.
When I reconnect my old 2TB disk, the RAID5 mounts properly.
I didn't find any similar issue in the forum.
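From what I've read, the CLI route in this situation would be to stop the inactive array, force-assemble it from the three remaining members, and then add the new disk, but I'd like confirmation before trying it (a sketch; /dev/sde for the new 4TB disk is an assumption):
Code
# stop the inactive array, then force-assemble from the three survivors
mdadm --stop /dev/md127
mdadm --assemble --force /dev/md127 /dev/sdb /dev/sdc /dev/sdd
# add the new disk so the rebuild starts
mdadm /dev/md127 --add /dev/sde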
Any help would be appreciated!
Regards
Great stuff! I see the logs now.
Thx a lot!
Hi,
I migrated from OMV5 to OMV6 a few weeks ago, and I'd like to congratulate the team on the great migration tool; it was totally seamless!
I'm using the rSnapShot plugin; it's still running fine on OMV6, but I can't get to its logs from the GUI.
In OMV5, they were available in the Journal logs, but they are no longer there.
Where can we find them in the GUI?
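From the CLI I can still read them like this (assuming the plugin logs through syslog/journald; the identifier and log file path are my guesses):
Code
# rsnapshot messages from the journal (the identifier may differ)
journalctl -t rsnapshot
# or the classic log file, if the plugin configures one
tail -n 50 /var/log/rsnapshot.log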
Regards
This issue with NFS has been fixed with kernel 5.7.
I can also confirm on my side: no more erratic reboots with the latest kernel (5.7).
I guess we can now close this thread, as this is hopefully fixed.
Thx ggr for the update; I've seen in the changelog of this new kernel that there are 2 NFS fixes; hopefully they fix our issue.
I'll test next week.
Could you please describe what exactly happens when you say login fails?
Do you reach the login page? Does the login itself fail? ...
Can you also try clearing your browser cache and retrying?
Hi,
A very basic check, as it has happened to me several times: clear your browser cache, or try a browser you have never used with OMV before, to see if that fixes the issue.
Thx! Will do testing in 2 weeks when I'm back home.
Have you attempted what that other thread states about changing the kernel to the older version and seeing if you can then use NFS?
I'm away from home for 2 weeks, so I cannot perform any tests for the time being. Did you try it on your side?
If you did, could you please share with me how to change to the older kernel, and I'll be happy to do some testing.
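(If it's the standard Debian way, I guess it would be something like this, but please confirm; a sketch where the kernel version string is an assumption, check what dpkg lists on your box:)
Code
# list the kernels that are still installed
dpkg --list | grep linux-image
# let GRUB boot a saved (non-default) entry
sudo sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /etc/default/grub
sudo update-grub
# boot the older kernel once; the title must match your GRUB menu exactly
sudo grub-reboot "Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 5.6.0-2-amd64"
sudo reboot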
Same for me, no more reboots: this is clearly caused by NFS.
On my side, I've kept NFS enabled but I'm not mounting any shares, which implies the issue comes from the mounting process.
Now the question is: how can we move forward to get this investigated by Debian?
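As far as I know, Debian's standard channel is the reportbug tool, filed against the package shipping the NFS server bits (nfs-kernel-server is my assumption; reportbug lets you confirm the package):
Code
# a sketch: file a Debian bug against the NFS server package
sudo apt-get install reportbug
reportbug nfs-kernel-server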
I use SMB for my Android devices. And NFS between my OMV servers and between OMV servers and Ubuntu Clients.
The NFS shares are all created in OMV, but I mount all of them on all the Linux clients and servers on the network using autofs; on my Ubuntu MATE laptop I also cache NFS using fscache and 128GB of NVMe storage.
Works fine.
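For anyone wanting to try the same setup, a minimal autofs sketch (the server name, export path and mount point are examples; adjust to your own layout):
Code
# /etc/auto.master -- hand /mnt/nas over to the map file below
/mnt/nas  /etc/auto.nas  --ghost
# /etc/auto.nas -- one line per share; 'media' becomes /mnt/nas/media
media  -fstype=nfs4,rw  omvserver:/export/media
# reload autofs after editing the maps:
# sudo systemctl restart autofs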
Hi Adoby,
I guess that running autofs may hide the issue we are reporting here: NFS mounts are handled slightly differently than with fstab, and if you didn't use your NFS mount just before the reboot, your shares will not need to be handled during the client reboot, because they are already unmounted.
Just my thoughts...
PS: it's good practice to use autofs anyway, and I will deploy it on my clients.
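To illustrate the fstab side of that comparison: a static entry like this (server and path are examples) stays mounted for the whole session, so the client has to deal with it at shutdown; 'soft' at least keeps an unreachable server from hanging it forever:
Code
# /etc/fstab -- a hedged example, not a recommendation for every workload
omvserver:/export/media  /mnt/media  nfs4  _netdev,soft,timeo=150,retrans=3  0  0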
Hi,
Thx for sharing this thread, as it is clearly the same issue, linked to NFS.
Unfortunately, I will not be able to perform any additional tests during the next 2 weeks (and I don't even know how to revert to the previous OS...), but the OS upgrade could definitely be a good explanation.
How can we raise this issue so it can be investigated by our OMV gurus and Debian OS masters?
Yes, I have both NFS and Samba shares enabled on OMV.
I mainly use Ubuntu clients, and until now they were connected to OMV using NFS. I used Samba only when I had to use Windows from time to time.
Now, I'm no longer mounting the NFS shares on my Ubuntu clients; I'm using the Samba shares only.
Since then, OMV does not reboot erratically.
Yes, I've kept the NFS shares active in OMV, but I don't use them anymore; I'm using Samba instead.
For the past 2 days, no more crashes, so the issue is clearly coming from NFS. Maybe there is a conflict between the NFS client on my Ubuntu 18 and the NFS server in OMV...
That's what I did today as well, as I don't see any other way to incriminate NFS... I've kept the NFS shares on OMV, but I've removed the NFS mounts on my Ubuntu client and I'm using Samba.
So far so good: no reboot today. I'll see in the coming days whether this is confirmed.
Since my comments above have been extracted from another thread, I give below the full details of the NFS issues I'm facing...
After running OMV4 for years, it was time for me to migrate to OMV5 and implement Docker for Plex, Jdownloader and Duplicati.
I started with an upgrade from OMV4 to OMV5, but I was facing many instabilities, so I did a fresh install last weekend.
My configuration is the following: 1 system disk + 1 RAID5 volume (four 2TB disks). I've installed Docker with 3 containers for Plex, duplicati and jdownloader. I'm using the backup and RemoteMount plugins (NFS mount of a Synology array to store rsnapshot images of my RAID5 volume).
I use NFS shares extensively with my RAID5 volume; they are mounted by Ubuntu 18.04 clients on my internal network.
I emphasize that (except for Docker) I had exactly the same configuration with OMV4, and it was very stable.
Since I moved to OMV5, my NAS reboots regularly (2-3 times a day), and I've found that it happens when my Ubuntu clients resume from hibernation or restart: they mount the NFS shares, and that causes the NAS to reboot.
1/ Looking at the syslog, I found an error message caused by the blkmapd service, right when my Ubuntu clients try to mount the NFS shares:
Code
Jul 25 21:01:56 Gemenos blkmapd[327]: open pipe file /run/rpc_pipefs/nfs/blocklayout failed: No such file or directory
Then it seems there are attempts to recover the system, and finally the system reboots. You have the syslog extract, with a few comments, in the file syslog20200725.txt.
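That error suggests the rpc_pipefs pseudo-filesystem was not mounted where blkmapd expects it; a quick way to check (paths taken from the log line above, findmnt usage as I understand it):
Code
# is rpc_pipefs mounted?
findmnt /run/rpc_pipefs
# does the blocklayout pipe directory exist?
ls -l /run/rpc_pipefs/nfs/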
2/ I decided to get rid of the blkmapd service (the thread "OMV4->5 residue cleanup blkmapd[275]: open pipe file /run/rpc_pipefs/nfs/blocklayout" covers this issue and seems to say that this service is not required).
Since then, I no longer get any error message in the syslog; my system just reboots a few minutes after my Ubuntu client tries (unsuccessfully) to mount the NFS shares.
You will find the syslog with my comments in file syslog20200726.txt.
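For anyone wanting to do the same: on Debian the daemon is typically shipped as nfs-blkmap.service (an assumption; check the unit name on your system), and it is only needed for pNFS block layouts, which a plain OMV NFS server does not use:
Code
# confirm the unit name first, then disable and stop it
systemctl list-units --all | grep -i blkmap
sudo systemctl disable --now nfs-blkmap.service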
Honestly, I'm completely stuck as I don't find any error or issue in the logs.
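The only next step I can think of is to make the journal persistent, so the kernel messages from the boot that crashed survive the reboot (a sketch, assuming systemd-journald with default settings):
Code
# keep journals across reboots
sudo mkdir -p /var/log/journal
sudo systemctl restart systemd-journald
# after the next crash: list boots, then read the previous boot's kernel log
journalctl --list-boots
journalctl -k -b -1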
I hope someone can quickly guide me to the root cause of this issue, as I need to be able to rely on my NAS for all my personal data.
Thx in advance for your support.
Hi ryecaaron,
Goodness, I did not see that; thx for moving it. I was not sure what the best place for it was...
Would it be possible for you to also move my message above with the attachments, and delete this thread?
BTW, do you have any idea about this issue? I have no clue how to investigate further, as the logs look clean, and I'm really concerned about the reliability of my NAS now.