In the end the filesystem was corrupt; running fsck -f in recovery fixed it.
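For reference, a minimal sketch of that kind of check from recovery; /dev/sdc1 is an assumption here (it is the drive that came up late in the other posts), so substitute the actual device and make sure it is not mounted:
umount /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466    # skip if it is not mounted in recovery
fsck -f /dev/sdc1                                                    # force a full check even if the filesystem is flagged clean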
Posts by rwijnhov
-
This is the first error it throws during bootup:
monit[1126]: Lookup for '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' filesystem failed -- not found in /proc/self/mounts
About 2 minutes after OMV is up, it will suddenly mount and become available as a file system.
-
No, I have a different issue. Somehow my disk mounts way too late, about 1 minute after booting, so I'm not sure how to fix that.
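To see where the delay comes from, a small sketch of checking the systemd mount unit for that volume; this assumes the mount is handled by the usual OMV-generated /etc/fstab entry and uses the UUID path from these posts:
findmnt /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466                                                    # is it mounted right now?
systemctl status "$(systemd-escape -p --suffix=mount /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466)"     # state of the systemd mount unit
journalctl -b | grep dev-disk-by-uuid-0538fe17                                                                        # when during this boot did it get mounted?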
-
I have a really strange issue. After upgrading to 6.2.0-2, one of my file systems takes a minute to come up. My sda1 and sdb1 are up after booting, but when I log in to OMV I see my sdc1 is online but not up. After waiting for about 1 minute it becomes available, so something is going wrong while booting, I guess. I see this in the syslog:
monit[1273]: 'mountpoint_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' status failed (1) -- /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253
monit[1273]: 'filesystem_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' unable to read filesystem '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' state
5-2-2023 09:41:05
monit[1273]: Filesystem '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' not mounted
5-2-2023 09:40:35
monit[1273]: Lookup for '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' filesystem failed -- not found in /proc/self/mounts
5-2-2023 09:40:35
monit[1273]: 'openmediavault' Monit 5.27.2 started
5-2-2023 09:40:35
monit[1273]: Filesystem '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' not mounted
5-2-2023 09:40:35
monit[1273]: 'filesystem_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' unable to read filesystem '/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' state
5-2-2023 09:40:35
monit[1273]: 'filesystem_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' trying to restart

And then in the end it does mount it:
monit[1273]: 'mountpoint_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' status succeeded (0) -- /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466 is a mountpoint
5-2-2023 09:42:35
monit[1273]: 'mountpoint_srv_dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466' status succeeded (0) -- /srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466 is a mountpoint
-
OK, I enabled AppArmor and Docker is working again. Only one strange error remains: after booting, Portainer will only find the environment from the Portainer install. If I then restart Docker, it will find the environment with all my containers installed, and it will still only show 1 environment.
It looks as if Portainer is up and running before my file system is up.
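If that is the cause, one common approach (not necessarily the only fix) is a systemd drop-in so docker.service waits for the data mount; a minimal sketch, assuming the container data lives on the UUID volume from the other posts:
sudo systemctl edit docker.service
# in the editor that opens, add:
#   [Unit]
#   RequiresMountsFor=/srv/dev-disk-by-uuid-0538fe17-2b2c-41ab-a7fc-eca253859466
sudo systemctl restart docker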
-
I have a strange issue: after I upgraded last night to the latest version, my pools no longer auto-start. After a reboot I manually have to start the pools for my VMs to work. Any ideas?
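Assuming these are libvirt storage pools (the KVM plugin case), a minimal sketch of checking and re-enabling autostart; "default" is only a placeholder pool name:
virsh pool-list --all          # shows each pool and whether Autostart is yes or no
virsh pool-autostart default   # re-enable autostart for that pool
virsh pool-start default       # start it now without rebooting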
-
I followed the manual exactly, but when I add br0 as the NIC, the VM boots fine, it just won't get an IP address. Not sure what could be wrong.
If I add bridge br0, I don't get an IP.
If I add br0 under macvtap, I get an IP but can't connect to the host PC.
So the bridge is working, but won't give an IP. Any ideas?
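A small sketch of checking whether the VM is actually attached to the bridge; "myvm" is just a placeholder for the VM name:
ip -br link show type bridge     # br0 should be listed and UP
bridge link show                 # the VM's vnet interface should be enslaved to br0
virsh domiflist myvm             # the NIC should show type 'bridge' with source 'br0'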
-
I also tried sudo quotaoff -v /srv/*, still the same error. I run version 5.5.21-1 (Usul).
-
Hi, I have the issue that I can't delete an NFS share in OMV 5. I have searched a lot but can't find the error. I get:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run --no-color quota 2>&1' with exit code '1':
Openmediavault.local:
----------
          ID: quota_off_0538fe17-2b2c-41ab-a7fc-eca253859466
    Function: cmd.run
        Name: quotaoff --group --user /dev/disk/by-label/Data
              true
      Result: True
     Comment: Command "quotaoff --group --user /dev/disk/by-label/Data
              true" run
     Started: 14:38:34.282483
    Duration: 108.97 ms
     Changes:
              ----------
              pid: 23068
              retcode: 0
              stderr:
              stdout:
----------
          ID: quota_check_0538fe17-2b2c-41ab-a7fc-eca253859466
    Function: cmd.run
        Name: quotacheck --user --group --create-files --try-remount --use-first-dquot --verbose /dev/disk/by-label/Data
      Result: True
     Comment: Command "quotacheck --user --group --create-files --try-remount --use-first-dquot --verbose /dev/disk/by-label/Data" run
     Started: 14:38:34.391695
    Duration: 4484.861 ms
     Changes:
              ----------
              pid: 23070
              retcode: 0
              stderr: quotacheck: Scanning /dev/sdb1 [/srv/dev-disk-by-label-Data]
                      quotacheck: Checked 6367 directories and 108127 files
              stdout:
Please, how do I fix this? I need to delete the NFS share so I can remove the disk.
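To narrow this down, a small sketch of re-running the failing step by hand and checking the quota state on that disk; the first command is taken from the error above, and the aquota files may or may not exist on the volume:
sudo omv-salt deploy run quota                  # re-run the failing deployment and read the full output
sudo quotaon -pa                                # show which filesystems have quotas enabled right now
ls -l /srv/dev-disk-by-label-Data/aquota.*      # the files quotacheck creates on that volume, if any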