Thank you!!!
I have already found the error, or so I think: it seems that one of the disks is failing.
Thank you very much for the help
Hello people!!!
I am having a problem with my btrfs volume. After an indeterminate amount of time, for example 24 hours, I receive a monit alert email with the following content:
[NAS.local] Monitoring alert -- Filesystem flags changed filesystem_srv_dev-disk-by-label-poolFS
Host: \NAS.local
Date: Mon, 17 Aug 2020 11:51:17
Service: filesystem_srv_dev-disk-by-label-poolFS
Event: Filesystem flags changed
Description: filesystem flags changed to ro,noatime,compress=zstd:3,space_cache,subvolid=5,subvol=/
This triggered the monitoring system to: alert
From that moment on, the filesystem appears as read-only and the containers I have on this volume begin to fail. When I reboot, the system returns to normal.
I ran a scrub on the volume and got the following output:
root@NAS:/srv/dev-disk-by-label-poolFS/@homeUsers/mlillo# btrfs scrub status /srv/dev-disk-by-label-poolFS/
scrub status for 0e696343-a445-481b-9f9f-0449d8ad65c3
scrub started at Sun Aug 16 13:31:35 2020 and was aborted after 22:19:19
total bytes scrubbed: 6.98TiB with 2 errors
error details: super=1 csum=1
corrected errors: 1, uncorrectable errors: 0, unverified errors: 0
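For context on output like the above: when btrfs hits a serious error it typically remounts the volume read-only, and the kernel log usually records why. A few diagnostic commands that could help here (a sketch; the mount path is the one from this thread, and the commands need root):

```shell
# Show recent btrfs messages from the kernel log
# (often explains why the volume was forced read-only)
dmesg | grep -i btrfs | tail -n 50

# Per-device error counters for the pool
btrfs device stats /srv/dev-disk-by-label-poolFS/

# Re-run a scrub in the foreground (-B) with
# separate statistics per device (-d)
btrfs scrub start -Bd /srv/dev-disk-by-label-poolFS/
```

The `super=1` in the error details refers to a superblock checksum mismatch, which scrub reports as corrected here; the per-device stats can help narrow a problem down to one disk.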
Can anybody help me?
This has me a little desperate.
Thank you
Brilliant!!!!!!
Thank you. It is working properly now.
Hi macom,
Thanks for answering. Monit has stopped giving the error, but I still have the problem with the Docker containers: the Docker service continues to start before the filesystem is available, and I really don't know how I can fix it.
Thank you
Hello colleagues,
I have reinstalled version 5.3.9 and the problem persists. I have observed that mounting the btrfs pool takes a few seconds, and I think this is the problem. Is it possible to make any changes to allow for this mounting time, so that monit does not report these errors?
Hello colleagues,
I have been having problems with my OMV 5.4.2-1 since the last update. When the system starts, I receive monit alerts about the availability of two services:
The system monitoring needs your attention.
Host: \NAS.local
Date: Mon, 04 May 2020 08:16:58
Service: filesystem_srv_dev-disk-by-label-poolFS
Event: Does not exist
Description: unable to read filesystem '/srv/dev-disk-by-label-poolFS' state
This triggered the monitoring system to: restart
The system monitoring needs your attention.
Host: \NAS.local
Date: Mon, 04 May 2020 08:16:57
Service: proftpd
Event: Does not exist
Description: process is not running
This triggered the monitoring system to: restart
A minute or so later, I receive follow-up alerts saying the filesystem is available again:
The system monitoring needs your attention.
Host: \NAS.local
Date: Mon, 04 May 2020 08:17:58
Service: mountpoint_srv_dev-disk-by-label-poolFS
Event: Status succeeded
Description: status succeeded (0) -- /srv/dev-disk-by-label-poolFS is a mountpoint
This triggered the monitoring system to: alert
The system monitoring needs your attention.
Host: \NAS.local
Date: Mon, 04 May 2020 08:17:28
Service: filesystem_srv_dev-disk-by-label-poolFS
Event: Exists
Description: succeeded getting filesystem statistics for '/srv/dev-disk-by-label-poolFS'
This triggered the monitoring system to: alert
The system monitoring needs your attention.
Host: \NAS.local
Date: Mon, 04 May 2020 08:16:57
Service: proftpd
Event: Does not exist
Description: process is not running
This triggered the monitoring system to: restart
I understand this is due to the time that elapses between the mount being launched and the filesystem being ready.
The one that worries me is the filesystem alert, because it causes my Docker containers to become corrupted: when they start up, they do not yet have access to the volumes located on the poolFS, which is btrfs in RAID1.
I'm pretty lost with this, can anyone help me?
Here is some information I have collected:
blkid
/dev/sda: LABEL="poolFS" UUID="0e696343-a445-481b-9f9f-0449d8ad65c3" UUID_SUB="de0668a0-116a-420e-ba18-120e256be965" TYPE="btrfs"
/dev/sdc: LABEL="poolFS" UUID="0e696343-a445-481b-9f9f-0449d8ad65c3" UUID_SUB="514ce6bf-77c8-466c-803e-a00406203080" TYPE="btrfs"
/dev/sdb1: UUID="440A-716A" TYPE="vfat" PARTUUID="8416d791-32ac-4e22-97ac-87c9359f1d24"
/dev/sdb2: UUID="720c65b3-b6ad-4fd8-a29d-7a71ea7bf347" TYPE="ext4" PARTUUID="c4da7b72-06da-4f84-b08f-682cf65648fd"
/dev/sdb3: UUID="804397ea-1c5c-48ea-bf6d-8cf1a4e62040" TYPE="swap" PARTUUID="22a373c9-efa1-4f47-9ad7-c47987444487"
/dev/sdd: LABEL="poolFS" UUID="0e696343-a445-481b-9f9f-0449d8ad65c3" UUID_SUB="19f398c6-8a42-4d2c-83be-55f7b74bc192" TYPE="btrfs"
btrfs check /dev/sda
Opening filesystem to check...
Checking filesystem on /dev/sda
UUID: 0e696343-a445-481b-9f9f-0449d8ad65c3
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 3290363412480 bytes used, no error found
total csum bytes: 3195120756
total tree bytes: 6469697536
total fs tree bytes: 2712141824
total extent tree bytes: 218742784
btree space waste bytes: 745331284
file data blocks allocated: 18503100166144
referenced 4940372434944
btrfs device stats /srv/dev-disk-by-label-poolFS/
[/dev/sdd].write_io_errs 0
[/dev/sdd].read_io_errs 0
[/dev/sdd].flush_io_errs 0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sda].write_io_errs 0
[/dev/sda].read_io_errs 0
[/dev/sda].flush_io_errs 0
[/dev/sda].corruption_errs 0
[/dev/sda].generation_errs 0
cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdb2 during installation
UUID=720c65b3-b6ad-4fd8-a29d-7a71ea7bf347 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sdb1 during installation
UUID=440A-716A /boot/efi vfat umask=0077 0 1
# swap was on /dev/sdb3 during installation
UUID=804397ea-1c5c-48ea-bf6d-8cf1a4e62040 none swap sw 0 0
# >>> [openmediavault]
/dev/disk/by-label/poolFS /srv/dev-disk-by-label-poolFS btrfs noatime,compress,nofail 0 2
# <<< [openmediavault]
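If the delay is in device discovery rather than in the mount itself, one option (an assumption-laden sketch, not advice from this thread) would be to extend the btrfs line with systemd mount options that lengthen the wait before the mount is considered failed:

```
/dev/disk/by-label/poolFS /srv/dev-disk-by-label-poolFS btrfs noatime,compress,nofail,x-systemd.device-timeout=90s,x-systemd.mount-timeout=90s 0 2
```

`x-systemd.device-timeout` and `x-systemd.mount-timeout` are standard systemd.mount options. Note that the `>>> [openmediavault]` markers indicate OMV manages this block, so manual edits there may be overwritten by the OMV configuration system.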
Thanks in advance, and sorry for my English.