Unable to Read Filesystem Error

  • I'm investigating this now, but I can't reproduce the error. It may simply be an issue with the names you used for the VG or LV. Please tell me the names you used for both.


    Did the problem begin right when you created the VG and LV? Or did it happen later?


    I'll keep using LVM in my VM for a while. Please give some feedback on the questions above.


    Keep names simple... I used VG1 and LV1.

  • I have one VG (vg1) and three LVs: nas, own, and test.


    nas and own were created before my upgrade to 0.5; test was created after, and I don't seem to get any alerts on that filesystem. These are all on a RAID 5 volume made of four disks.
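As a sanity check, a layout like this can be confirmed from the LVM report tools. A small sketch (the `lv_summary` helper is just an illustration, not part of OMV; it assumes the two-column `lvs` report shown in the comment):

```shell
# Summarise how many LVs each VG holds, from `lvs` report text.
# Reads `lvs --noheadings -o vg_name,lv_name` output on stdin, so it
# can also be fed a saved copy of the report.
lv_summary() {
    awk '{ count[$1]++ } END { for (vg in count) print vg, count[vg] }'
}

# Typical use (as root):
#   lvs --noheadings -o vg_name,lv_name | lv_summary
```

For the setup described above this should report a single line, `vg1 3`.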


    Here's the error:


    One thing I was wondering is how these filesystems make it into the monit configuration: the test LV/filesystem does not show up in monitrc, but the other two do:


    # Alert if filesystem is missing or disk space gets low
    check filesystem fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227 with path /dev/disk/by-uuid/a6bbdfec-a0e2-4543-a002-9ff460b15227
    if space usage > 80% for 5 times within 15 cycles
    then alert else if succeeded for 10 cycles then alert
    #Check requires monit 5.4 (included in Wheezy).
    #check program mp_media_a6bbdfec-a0e2-4543-a002-9ff460b15227 with path "mountpoint -q '/media/a6bbdfec-a0e2-4543-a002-9ff460b15227'"
    # if status == 1 then alert


    # Alert if filesystem is missing or disk space gets low
    check filesystem fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8 with path /dev/disk/by-uuid/804353ef-0266-4da3-bdf9-09af5da2c9c8
    if space usage > 80% for 5 times within 15 cycles
    then alert else if succeeded for 10 cycles then alert
    #Check requires monit 5.4 (included in Wheezy).
    #check program mp_media_804353ef-0266-4da3-bdf9-09af5da2c9c8 with path "mountpoint -q '/media/804353ef-0266-4da3-bdf9-09af5da2c9c8'"
    # if status == 1 then alert
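For what it's worth, the checks OMV generates all follow the naming scheme above, so the monitored device paths can be pulled out of monitrc and compared against what actually exists under /dev/disk/by-uuid. A sketch (the `extract_fs_paths` helper is hypothetical and assumes exactly the "with path" layout shown above):

```shell
# Print the /dev/disk/by-uuid paths from OMV-style monit filesystem
# checks. Reads monitrc-style text on stdin.
extract_fs_paths() {
    sed -n 's|^[[:space:]]*check filesystem .* with path \(/dev/disk/by-uuid/[0-9a-f-]*\)$|\1|p'
}

# Typical use (as root), comparing monitrc against the real devices:
#   extract_fs_paths < /etc/monit/monitrc
#   ls /dev/disk/by-uuid/
```

A filesystem that is mounted but missing from the first listing (like test here) is one that the config generator skipped.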




    Please let me know what other info you'd like me to gather. I'm trying to dig into it as time allows, but I'm not familiar with monit and haven't had much time to really look into it.


    I also wondered if it was related to this bug from the other thread linked earlier: http://bugtracker.openmediavault.org/view.php?id=818 However, that seems to be a slightly different issue, or at least a different error message.

  • Quote from "votdev"

    Will be fixed in openmediavault 0.5.9.


    Awesome! Thank you for the update; glad it wasn't something I screwed up myself. I'm just getting into OMV and loving it. Thanks for all your hard work.

  • I just applied 0.5.9 and rebooted, but the alerts persist.


    tail of syslog:
    Sep 11 21:50:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:50:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:50:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
    Sep 11 21:50:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:50:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:50:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
    Sep 11 21:51:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:51:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:51:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
    Sep 11 21:51:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:51:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:51:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
    Sep 11 21:52:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:52:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:52:09 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
    Sep 11 21:52:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_a6bbdfec-a0e2-4543-a002-9ff460b15227' unable to read filesystem /dev/dm-0 state
    Sep 11 21:52:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_804353ef-0266-4da3-bdf9-09af5da2c9c8' unable to read filesystem /dev/dm-1 state
    Sep 11 21:52:39 fileyx64 monit[1463]: 'fs_dev_disk_by-uuid_1561dec2-2421-4437-889a-d03e5eb1ca07' unable to read filesystem /dev/dm-2 state
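The dm-0/dm-1/dm-2 names in those messages are the kernel's device-mapper nodes for the three LVs; the mapping back to VG/LV names can be read from sysfs. A minimal sketch (the `dm_to_lv` helper is just an illustration; the optional second argument exists only so it can be pointed at a test tree instead of the real /sys):

```shell
# Resolve a dm-N node (e.g. dm-0) to its device-mapper name
# (e.g. vg1-nas) by reading the kernel's dm sysfs attribute.
# $1 = dm node name, $2 = sysfs root (defaults to /sys).
dm_to_lv() {
    cat "${2:-/sys}/block/$1/dm/name"
}

# Typical use (as root):
#   dm_to_lv dm-0
# `ls -l /dev/mapper/` shows the same mapping as symlinks.
```

That makes it easy to tell which of the three LVs each "unable to read filesystem" line is actually complaining about.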



    Happy to provide any details, configs, logs, or whatever else would help.

  • I had filed a bug report about that at the end of August; see http://bugtracker.openmediavault.org/view.php?id=818
    I also told votdev that the 0.5.9 fix wasn't enough. He has taken care of everything, and it will be fixed in the next release.
    If you've got questions about that bug, feel free to ask me :)


    Pierre


    PS: By the way, I'm amazed by how quickly you react, votdev! Well done!

  • Hi,


    I just applied 0.5.10 and rebooted, but I still get the same errors...


    Sep 20 08:50:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:51:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:51:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:52:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:52:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:53:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:53:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:54:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:54:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:55:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:55:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:56:23 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state
    Sep 20 08:56:53 NAS monit[1281]: 'fs_dev_disk_by-uuid_ac4e3432-9df6-44d0-9e67-032635890830' unable to read filesystem /dev/dm-0 state


    My boot partition is located on an external USB hard disk; I don't know if that's related.
    Thanks for any help!

  • Just upgrading to OMV 0.5.10 does not do the trick.
    You have to re-generate the monit configuration.
    You can do that by unmounting and remounting a filesystem and then applying the configuration changes.
    You can also do it by running monit's mkconf script and restarting monit, like this:

    Code
    root@izanami:~# ls -l /etc/monit/monitrc 
    -rw------- 1 root root 5003 Sep 20 10:08 /etc/monit/monitrc
    root@izanami:~# /usr/share/openmediavault/mkconf/monit
    root@izanami:~# ls -l /etc/monit/monitrc 
    -rw------- 1 root root 5003 Sep 20 10:09 /etc/monit/monitrc
    root@izanami:~# service monit restart
    Stopping daemon monitor: monit.
    Starting daemon monitor: monit.
    root@izanami:~#


    Note:

    • the two ls commands are there to show that monitrc has been updated (note the changed timestamp);
    • applying configuration changes calls the mkconf scripts and reloads the services, so the two approaches are equivalent.


  • That solved my problem!
    Thank you very much!
