Honestly, I don't remember exactly.
I'm sorry.
I don't remember exactly but it has been happening now and then for almost a year...
As I said, it is probably triggered by monit being restarted, so it only happens then (though I'm not sure it happens *every* time monit is reloaded/restarted).
The system has been running for years, through different OMV releases. Between OMV5 and OMV6 I had a catastrophic upgrade process and had to reinstall.
No recent hardware changes.
I will, but it's strange that it happens after a service restart and that a reboot or a manual mount solves it.
Also strange: the filesystem gets mounted and is unmounted immediately afterwards (see syslog above).
And it's not always the same disk...
This morning it happened with another disk too (the snapraid parity disk) and I couldn't remount it. I had to reboot.
This situation is common if the system is installed on a Raspberry Pi and the disks are connected directly to the Pi without a separate power supply for the drives.
If this is not the case, other causes should be sought.
The HDDs are internal SATA mechanical drives, so as far as I know power is not an issue.
Thanks
I have a strange issue with my OMV6 installation.
Sometimes, it seems to me after an 'apply config' in the UI, some filesystems get automatically unmounted, and often the only way I can get them back is a reboot.
In my email I get an alert that says:
"
The system monitoring needs your attention
Host: omv
Date: Mon, 11 Sep 2023 09:30:12
Service: filesystem_srv_dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd
Event: Does not exist
Description: unable to read filesystem '/srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd' state
This triggered the monitoring system to: restart"
Then another email that says:
"The system monitoring needs your attention.
Host: omv
Date: Mon, 11 Sep 2023 09:30:15
Service: mountpoint_srv_dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd
Event: Status failed
Description: status failed (1) -- /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd is not a mountpoint
This triggered the monitoring system to: alert"
I tried to mount it manually via CLI and it gets unmounted immediately.
In the logs I found also this message:
Sep 11 09:30:46 omv monit[1685]: 'mountpoint_srv_dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd' exec: '/usr/bin/mount /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd'
Sep 11 09:30:46 omv monit[1685]: 'mountpoint_srv_dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd' status failed (1) -- /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd is not a mountpoint
Sep 11 09:30:46 omv monit[1685]: 'filesystem_srv_dev-disk-by-uuid-9fb82f8b-c118-45d7-a9fb-f3a0103299cf' space usage 89.6% matches resource limit [space usage > 85.0%]
Sep 11 09:30:46 omv postfix/pipe[15934]: EA37430865CA: to=<openmediavault-notification@localhost.localdomain>, relay=omvnotificationfilter, delay=0.06, delays=0.05/0/0/0.01, dsn=2.0.0, status=sent (delivered via omvnotificationfilter service)
Sep 11 09:30:46 omv kernel: [ 412.911734] EXT4-fs (sdd1): mounted filesystem with ordered data mode. Quota mode: journalled.
Sep 11 09:30:46 omv systemd[1]: srv-dev\x2ddisk\x2dby\x2duuid\x2da9329ad3\x2dff7c\x2d4ed7\x2d902d\x2d5105eea856dd.mount: Unit is bound to inactive unit dev-disk-by\x2duuid-a9329ad3\x2dff7c\x2d4ed7\x2d902d\x2d5105eea856dd.device. Stopping, too.
Sep 11 09:30:46 omv systemd[1]: Unmounting /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd...
Sep 11 09:30:46 omv kernel: [ 412.963764] EXT4-fs (sdd1): unmounting filesystem.
Sep 11 09:30:46 omv systemd[1]: srv-dev\x2ddisk\x2dby\x2duuid\x2da9329ad3\x2dff7c\x2d4ed7\x2d902d\x2d5105eea856dd.mount: Succeeded.
Sep 11 09:30:46 omv systemd[1]: Unmounted /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd.
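The "Unit is bound to inactive unit ... Stopping, too" line suggests systemd thinks the backing .device unit has gone away, so it tears the mount down again right after monit remounts it. A few diagnostic commands I could try (just a sketch, reusing the UUID from the log above; the escaped unit names come straight from the syslog lines):

```
# Is the backing device unit actually active according to systemd?
systemctl status 'dev-disk-by\x2duuid-a9329ad3\x2dff7c\x2d4ed7\x2d902d\x2d5105eea856dd.device'

# What is the mount unit bound to?
systemctl show 'srv-dev\x2ddisk\x2dby\x2duuid\x2da9329ad3\x2dff7c\x2d4ed7\x2d902d\x2d5105eea856dd.mount' \
    -p BindsTo,After

# Re-trigger udev so systemd re-learns the block-device state,
# reload units, then retry the mount:
udevadm trigger --subsystem-match=block
systemctl daemon-reload
mount /srv/dev-disk-by-uuid-a9329ad3-ff7c-4ed7-902d-5105eea856dd
```

If the .device unit shows as inactive while the disk is clearly present, that would point at a udev/systemd state problem rather than a bad filesystem.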
Then I ran fsck.ext4 with the -v and -f options.
root@omv:~# fsck.ext4 -v -f /dev/sdd1
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
52522 inodes used (0.02%, out of 244191232)
186 non-contiguous files (0.4%)
192 non-contiguous directories (0.4%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 47558/4956
1731005380 blocks used (88.61%, out of 1953506385)
0 bad blocks
892 large files
50942 regular files
1571 directories
0 character device files
0 block device files
0 fifos
0 links
0 symbolic links (0 fast symbolic links)
0 sockets
------------
52513 files
After that I mounted it manually and it succeeded.
As far as I understand, the filesystem had no errors, so fsck wasn't actually needed.
So what's happening here?
Note that it's not always the same disk that gets unmounted, and AFAIR it's the monit restart that triggers this behaviour.
Any idea of what is causing this?
Thanks,
Gianpaolo
Maybe https://www.openmediavault.org/?p=3492 is the reason.
I think so.
Quite a bug, though.
Now I'm back with static IP (they were static anyway, but I prefer to manage IP on my DHCP server).
Thanks a lot.
Disabling grains is not a good idea because the hostname is used, e.g., when postfix is deployed. You should do an omv-salt deploy run systemd-networkd hosts. There were some changes in resolving DNS info, so every getaddr call is now much faster, which speeds up the Salt startup process.
I tried your command and my server became unreachable. Luckily I have a KVM interface; I connected through that and saw that my network interfaces were down. I tried to reissue your command and I got this (sorry for the JPEG, but I couldn't copy and paste from the KVM app).
I also tried omv-firstaid to reconfigure at least one of the two network interfaces but I still have the error.
Now I'm stuck...
Just a little update: I rebooted and the first interface I reconfigured with omv-firstaid came up correctly.
I re-ran the omv-salt command you suggested and I still get errors (see the second error screenshot).
Another update: the web interface gave me some pending configuration changes.
So I checked the dirtymodules.json
root@omv:~# cat /var/lib/openmediavault/dirtymodules.json
[
"nut",
"avahi",
"collectd",
"halt",
"hosts",
"issue",
"systemd-networkd",
"task"
]
So I issued a
I had another netplan error and my interface went offline again... I rebooted through KVM and I ran the single modules
I tried and succeeded with nut, avahi, collectd, halt, issue, hosts.
I tried with 'task' and I had ERROR: The state 'task' does not exist
I tried finally with systemd-networkd and it took waaaay longer than the other modules. My network interface went down again and I had to reboot through the KVM console to have my server back online.
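For anyone repeating this, the one-module-at-a-time approach can be scripted so a single failing module is easy to spot. A sketch, assuming jq is installed (and keeping in mind that, as seen above, an entry like 'task' may not map to an actual Salt state):

```
# Deploy each dirty module individually instead of all at once
for m in $(jq -r '.[]' /var/lib/openmediavault/dirtymodules.json); do
    echo "== deploying $m =="
    omv-salt deploy run "$m" || echo "deploy of $m FAILED"
done
```

That way the network-breaking module (here, systemd-networkd) is clearly isolated from the ones that succeed.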
Last resort: I configured my network interface through the web interface and gave it a static address. Every config update that was pending in the interface ran successfully. I even tried to re-run your omv-salt deploy run hosts systemd-networkd: it succeeded and completed in a short amount of time.
Then I configured my bridge interface on the second network adapter, and it seems OK. Let's wait and see.
Is there a reason why configuring the network adapter through omv-firstaid led to a netplan error?
Sorry for the long post but I really would want to fix this.
What kind of system and what kind of storage is the OS on?
omv-upgrade is literally just apt-get commands. If it is slow, you either have a lot of updates or your system is very, very slow. I just updated one of my RPi4 that I forgot about. So, it hadn't been updated for 3 months. It took 5 mins at most to update including a kernel update.
My system has a Xeon processor, 32 GB of RAM, and the OS is on an SSD.
It's not the apt commands or the kernel module building that are very slow, but the various Salt commands at the end of the update process.
To be clearer, it's when I have the "Pending changes" notification in the GUI or, when I use the CLI, the various Salt steps after the packages are installed.
This morning I had two packages to be installed (openmediavault and openmediavault-nut).
I edited /etc/salt/minion file and I added debug log.
I also added another option that helped when I had the same problem in the past which is
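For reference, enabling debug logging in /etc/salt/minion is just a couple of standard Salt minion options (I'm deliberately not guessing at the other option I mentioned, since I didn't write it down here):

```
# /etc/salt/minion — log Salt's own activity at debug level
log_level: debug
log_level_logfile: debug
```

With that in place, /var/log/salt/minion shows where each state run spends its time.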
This time the update took a reasonable amount of time (under a minute).
thanks.
I'm sorry to add to a thread this old, but applying changes, from the GUI or from the CLI with the omv-upgrade command, still takes a *LOT* of time (half an hour or so). I'm open to trying something to debug what's going on on my system, or to opening a new thread if that's better.
Thanks.
Hi, for backup purposes (I need to upload only image files) I would like to export my picture folder without the video files.
Some time ago I achieved this by duplicating my picture share in Samba and using the veto files option to exclude the extensions of video files.
With OMV6, though, it seems I can't give a different name to a share, and this prevents me from exporting the same directory under different names with different options.
I tried to create a new Storage -> Shared Folders entry named PicturesOnly pointing at the same directory containing my pictures, but OMV complains that you can't have two different shared folders on the same directory.
So, is there a way to achieve this? I could edit smb.conf by hand, but it would be overwritten.
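For reference, the veto files approach I used before looked roughly like this in smb.conf (share name and path are examples, not my real ones; in OMV these option lines would have to go into the share's extra-options field, since hand edits to smb.conf get overwritten):

```
[PicturesOnly]
   path = /srv/<your-disk>/Pictures
   read only = yes
   ; patterns are '/'-separated; matching files are hidden and blocked
   veto files = /*.mp4/*.mov/*.avi/*.mkv/*.m4v/
```

The sticking point in OMV6 is not the Samba syntax but getting a second share name onto the same shared folder.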
Sorry for bringing up an old thread but I'm in the same situation...
Any hint? I already tried to lower the log level as found on some other bug tracking system but with no luck.
None of that surprises me. How long did it take to apply changes on that last step once isc-dhcp-client was removed? Did you reboot after removing the client?
It took very little (not measured).
I haven't rebooted yet.
OK. Let's see how it goes and what happens when I'm able to purge isc-dhcp-client.
I'll let you know
OK, today I managed to update the openmediavault package in order to purge isc-dhcp-client.
I ran omv-update from the CLI and the terminal remained stuck for a *long* time at the point where it said "Setting up salt environment...".
In another terminal I ran ps aux | grep salt and saw this process, which had been started 13 minutes earlier:
/usr/bin/python3 /usr/bin/salt-call --local --retcode-passthrough --no-color state.orchestrate omv.stage.prepare
The process eventually ended without errors.
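To see where that orchestrate run spends its time, the same call from the process list can be re-run by hand with debug logging (a sketch; -l/--log-level is the standard salt-call switch):

```
salt-call --local -l debug state.orchestrate omv.stage.prepare
```

The debug output then shows which individual state or DNS/grains lookup is stalling.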
I even managed to purge isc-dhcp-client.
If I check dirtymodules though I get this:
So I checked in the GUI and there were some pending changes.
I applied them and in a bit the process ended and now my dirtymodules.json is empty.
The libvirtd daemon just starts qemu-system. It isn't needed to keep VMs running. If you had rebooted, though, they wouldn't have come back up without the libvirtd daemon.
Ah ok. Now I know why. Thanks!
I only noticed because I tried to connect to my libvirtd remotely with virt-manager and couldn't.
I can confirm that now I can install openmediavault-kvm without issues.
In fact, it had been removed somehow in the past days while I was struggling with my netplan issue, and I found my machine with a virtual machine running but no libvirtd daemon running (?!?).
I did an apt install openmediavault-kvm and got everything working again.
I think your system only has problems when the network module is dirty and runs the saltstack code for it. So, if you don't change any network settings and nothing else marks the network module as dirty, you probably won't see the problem.
OK. Let's see how it goes and what happens when I'm able to purge isc-dhcp-client.
I'll let you know
Hi, just a little update here.
Today I realized that I hadn’t configured samba shares yet.
So I opened web GUI and it had some pending changes.
I enabled samba, created two shares and then applied the changes.
I was ready to lose the network again like in all previous attempts, but applying the changes not only took very little time, I also didn't lose any network connection, and my dirtymodules.json is now empty.
So, problem solved? I'm not sure, since I didn't change anything except installing some updates in the past days.
Let’s wait and see what happens.
PS. Sorry I've cluttered your thread with my sh$%!
No worries. Maybe our problems are somehow related.
Yes. With openmediavault 6.0.41 you should be able to uninstall it.
This is still unreleased, right? I've checked periodically but there's no openmediavault deb package update yet.