System Logs - monit, unable to read file system - repeats every few seconds



    • System Logs - monit, unable to read file system - repeats every few seconds

      OMV 3.0.81

      Earlier today, I used the GUI to remove a folder that I was sharing via Samba. Then, through the command line, I destroyed the zpool and created a new one. This worked, and I proceeded to create a new folder, which I shared through Samba.

      I looked at the System Logs. Every few seconds there is an alert from monit about the old zpool tank0 (which was destroyed) and the old filesystem - vol0 which was also destroyed.

      monit[2729]: Device /tank0/vol0 not found in /etc/mtab
      monit[2729]: 'fs_tank0_vol0' unable to read filesystem '/tank0/vol0' state
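      A quick sanity check is to confirm that the kernel's mount table really has no trace of the old pool; if grep comes up empty there, the stale reference lives in OMV's monitoring configuration rather than in the system itself. A minimal sketch, using a temp file in place of the real /etc/mtab:

```shell
# Simulate checking the mount table for the destroyed pool.
# On the real system:  grep 'tank0/vol0' /etc/mtab /proc/mounts
mtab=$(mktemp)
cat > "$mtab" <<'EOF'
/dev/sda1 / ext4 rw,relatime,errors=remount-ro 0 0
tank1/vol1 /tank1/vol1 zfs rw,xattr,noacl 0 0
EOF
result=""
if ! grep -q 'tank0/vol0' "$mtab"; then
    result="tank0/vol0 not mounted"   # monit's target really is gone
fi
echo "$result"
rm -f "$mtab"
```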

      How do I correct this so that the alerts stop and also to update the system to remove these old, no-longer-applicable references?
      Thanks.
    • It is a bad idea to administer things like this via the CLI. You need to remove the previous file system via the UI so that the database and the monitoring backend are updated and the messages stop. File systems in particular must be managed via the UI, because the monitoring services (and others) are driven by database settings that are updated when a file system is created or deleted.
      Absolutely no support through PM!

      I must not fear.
      Fear is the mind-killer.
      Fear is the little-death that brings total obliteration.
      I will face my fear.
      I will permit it to pass over me and through me.
      And when it has gone past I will turn the inner eye to see its path.
      Where the fear has gone there will be nothing.
      Only I will remain.

      Litany against fear by Bene Gesserit
    • votdev wrote:

      It is a bad idea to administrate various issues via CLI. You need to remove the previous file system via UI to update the database and the monitoring backend to get rid of the messages. Especially filesystems must be managed via UI because all monitoring services and others are based on the database settings that are triggered when a file system is created/deleted.
      @votdev
      I have a similar issue, but how do I fix it? I did remove it via the UI, but the system crashed. When I restarted, the entry was no longer in the UI, nor was it in config.xml or fstab.
    • daveinfla wrote:

      I have the same issue, how did you fix it?
      The "fix" might involve looking for old ZFS entries in /etc/mtab and removing them. Then it would mean looking for the dead ZFS array name, the pool and filesystem UUIDs, and for ZFS filesystem <fsname>, directory <dir>, and other entries in OMV's config file (/etc/openmediavault/config.xml) that reference the dead array.
      An easier path might be to remove the ZFS plugin in the GUI, then reinstall it. (See what happens.)

      Bottom line: fixing the configuration by manual edits is not something that could be considered "easy".
      If your current array has data on it, I'd consider exporting it, rebuilding OMV, importing the array, and calling it a lesson learned.
      ___________________________________________________

      There's a reason why OMV stops users from simply deleting a file system or array, with shared folders and Samba shares still active. With shared folders or Samba shares still configured, why would a user delete the entire file system? Denying the action until the process is completed in correct order makes sense, and prevents numerous "mouse click" errors.

      To keep the OMV database in a consistent state:
      The create process must be done in reverse order down to the level needed - in the GUI.

      While it doesn't cover all scenarios, the following generally applies:

      1. Delete SMB/CIFS shares
      2. Delete Shared folders
      3. Unmount and Delete the file system
      4. Delete the RAID array (3 & 4 are the same with ZFS)
      4.1 Delete LVM (if used)
      ____________________________
      5. Under Storage, S.M.A.R.T, Devices, turn monitoring off for the physical disk(s) you want to delete. Delete schedule tests, if any.
      6. Wipe and/or remove the physical disks
      ________________________________________________

      If the above removal process is skipped straight down to 3 or 4, or any part of it is done on the command line, OMV has no way to keep track of the changes. Error messages of some type are inevitable and it's possible that the result may not be fixable.
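      One way to check whether OMV's database still references a dead array is a plain grep for the pool's name. A sketch, with a heredoc standing in for the real /etc/openmediavault/config.xml:

```shell
# Count references to a destroyed pool in an OMV-style config.
# On the real system:  grep -n 'tank0' /etc/openmediavault/config.xml
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<config>
  <storage>
    <mntent>
      <fsname>tank0/vol0</fsname>
      <dir>/tank0/vol0</dir>
    </mntent>
  </storage>
</config>
EOF
hits=$(grep -c 'tank0' "$cfg")   # any hits mean stale database entries
echo "stale references: $hits"
rm -f "$cfg"
```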

      ((Since mistakes can be made, this is but one of a good number of reasons why users should consider cloning their boot drive.))
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.99 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.99 Erasmus - Rsync'ed Backup
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119


    • This was a standalone SATA drive with an NTFS partition on it that I temporarily connected to copy (rsync) data off of and onto my ZFS pool.

      The above steps were completed, however the system is still looking for it.

      Here's the exact errors:

      monit[2551]: 'fs_srv_dev-disk-by-label-Backup' unable to read filesystem '/srv/dev-disk-by-label-Backup' state

      monit[2551]: Device /srv/dev-disk-by-label-Backup not found in /etc/mtab
    • The thread made it seem as if you destroyed a ZFS pool on the command line.


      If setup in OMV, an Rsync job usually sits on top of a share.
      In your case reversing the process would have been;

      1. Delete Rsync job(s)
      2. Delete the SMB/CIFS share (if used)
      3. Delete the shared folder (associated with this particular drive)
      4. Unmount the file system
      (After the drive's file system is unmounted, OMV won't look for it anymore. At this point, you could delete the file system, wipe the drive, and use it again.)
      5. Remove the physical disk.
      _________________________________________________

      Your issue might be fixable.

      If you have WinSCP installed on a Windows PC, you can edit /etc/mtab in place. (Right click on mtab, select Edit and Notepad.)



      When Notepad is open, go into Format and make sure Word Wrap is not checked. (You don't want the lines to auto-wrap.) Stretch the window width so lines are not cut off. Then do Edit, Find, and paste in your string: /srv/dev-disk-by-label-Backup

      Find and remove the entire line.

      [Screenshot: an example from my mtab file showing one of my device/drive entries.]
      Repeat the above in /etc/fstab as well, to see if this drive (/srv/dev-disk-by-label-Backup) has an entry there. If you're not going to use this drive with OMV again (with this label and file system), it doesn't need a line in fstab either.
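      The same line removal can be scripted; sed deletes any line mentioning the dead mount point. A sketch on a throwaway copy (back up the real files before editing them in place):

```shell
# Delete every line referencing the dead drive from a mount-table file.
tab=$(mktemp)
cat > "$tab" <<'EOF'
/dev/sda1 / ext4 errors=remount-ro 0 1
/dev/disk/by-label/Backup /srv/dev-disk-by-label-Backup ntfs defaults,nofail 0 2
EOF
# \|...| uses | as the pattern delimiter, since the path contains slashes.
sed -i '\|/srv/dev-disk-by-label-Backup|d' "$tab"
remaining=$(wc -l < "$tab")
echo "lines left: $remaining"
rm -f "$tab"
```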
    • No luck, I putty'd into the server and used nano.

      The referenced device does not show up in either file. If you look at the second line, it states it can't find the device in mtab, so something else must be referencing it.

      Unfortunately I can't simply re-install the drive and back track, it had bad sectors and I tossed it!

      Based on the messages above it's looking for a file system (NTFS) that no longer exists.

      Any other files to check?

      BTW, I tried disabling SMART and SMB, rebooted, then re-enabled them. The errors returned.
    • Before starting down this road:

      Depending on how far you're into your configuration, importing your new pool into a fresh OMV build is a nice clean option.
      Let me put that out there. :)

      ______________________________________________________________________

      The main config file for OMV is: /etc/openmediavault/config.xml

      It's obviously a script/DB of defined parameters, so it's important to differentiate between headings (which must remain) and the entries under them. [Note: depending on how you set up your ZFS pool, there may be numerous drive/filesystem entries.]

      ***Before deleting entries or doing any editing, save a backup copy to config.xml.bck or something.***
      While I haven't deleted drive entries from config.xml (I haven't had your issue), the following is what I might try to patch things up.

      ___________________________________________________________

      In config.xml there are physical drive entries under:

      <storage>
      <mntent>

      I think you'll be looking for an entry that starts and ends like the following. ((The UUID shown is for one of my drives. Yours will be different, and you should write it down for reference. Also, I don't know whether ntfs would show up in the "type" field; my drive is formatted EXT4. Other items may vary.))

      <mntent>
      <uuid>044759e3-a412-484e-a2b6-2e7ab81445ec</uuid>
      <fsname>/dev-disk-by-label-Backup</fsname>
      <dir>/srv/dev-disk-by-label-Backup</dir>
      <type>ntfs</type>
      <opts>defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl</opts>
      <freq>0</freq>
      <passno>2</passno>
      <hidden>0</hidden>
      </mntent>
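      If such a block does turn up, one way to drop only the <mntent> block containing the dead drive's UUID, leaving the others intact, is an awk filter. A sketch on sample data (keep that backup copy of config.xml; the UUIDs here are made up):

```shell
# Remove the <mntent> block whose UUID matches the dead drive; keep the rest.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
<mntent>
<uuid>aaaa-1111</uuid>
<dir>/srv/dev-disk-by-label-Keep</dir>
</mntent>
<mntent>
<uuid>bbbb-2222</uuid>
<dir>/srv/dev-disk-by-label-Backup</dir>
</mntent>
EOF
filtered=$(awk -v uuid='bbbb-2222' '
  /<mntent>/ { buf = ""; inblk = 1 }        # start buffering a block
  inblk      { buf = buf $0 "\n"            # collect the block...
               if (/<\/mntent>/) {          # ...until it closes, then
                 if (buf !~ uuid) printf "%s", buf   # emit only non-matching blocks
                 inblk = 0
               }
               next }
  { print }' "$cfg")                        # lines outside blocks pass through
echo "$filtered"
rm -f "$cfg"
```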

      Using your UUID noted above, search for another entry as follows. If it exists, it needs to go.
      (I don't think you'd see this unless it's a ZFS filesystem, but that's a guess.)

      <notification>
      <uuid>044759e3-a412-484e-a2b6-2e7ab81445ec</uuid>
      <id>monitfilesystems</id>
      <enable>0</enable>
      </notification>

      Finally, and I don't know if this is strictly necessary when editing the file directly, on the command line run
      omv-mkconf. This command regenerates the configuration. Then I'd give it a few minutes and reboot.

      ________________________________________________________

      For the process, while it's your call, WinSCP and using Notepad to search for strings is a heck of a lot easier than using nano. WinSCP is really useful for managing any SSH box, and I've found, using WINE, that it runs on Linux desktops as well.
    • flmaxey wrote:

      Depending on how far you're into your configuration, importing your new pool into a fresh OMV build is a nice clean option.
      Let me put that out there.
      SO, next!

      I searched high and low, nothing referencing the NTFS drive at all.

      How do I do a fresh load and import my existing ZFS pool without losing my data?

      You may have mentioned this, but what utility do you use to duplicate your flash drive? I have a second one waiting to be copied to; I guess I should have done that BEFORE adding and/or removing the drive...
    • Of the drive-related files, you've looked at the 3 main config files.

      To continue along the original line, I'd look closely at log files (syslog) for something that would indicate what the source of the error might be, but I couldn't tell you what exactly to look for in the file or where that might lead. You can look in the GUI under Diagnostics, System Logs.

      While your pool is still intact, do you have a backup?
      As it seems, you pitched the old NTFS drive that had your data on it, so the question is: if something goes wrong, do you still have another source for your data? If not, before changing anything, maybe you should write your pool's data to another location.
      __________________________

      The one experience I had with a pool import was in my ZFS testing phase, and it was in a VM:

      First, I didn't export the pool.
      I shut down and disconnected the drives, rebuilt OMV from scratch, and reinstalled ZFS via the plugin. Then I shut down again and reconnected the drives. When the boot-up completed, the pool imported automatically. This was a basic pool with regular sub-directories, not child filesystems, but I don't believe that detail would matter.

      The utility I use is Win32DiskImager. It will read flash media (USB and other) to a *.img file and write from a *.img file to flash media. (There are read and write buttons; take care which one you're using.) After reading or writing, I use the verify option to confirm that the source and destination are the same.
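      On a Linux box, the same read/write/verify cycle can be done with dd and cmp. A sketch using ordinary files in place of the source and spare sticks (on real hardware you would use the actual /dev device paths, double-checked):

```shell
# Simulate cloning a boot stick: read to an image, write to a spare, verify.
src=$(mktemp); img=$(mktemp); dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=1024 count=64 2>/dev/null   # fake boot stick
dd if="$src" of="$img" bs=1024 2>/dev/null                  # read stick to image
dd if="$img" of="$dst" bs=1024 2>/dev/null                  # write image to spare
if cmp -s "$src" "$dst"; then verified=yes; else verified=no; fi
echo "clone verified: $verified"
rm -f "$src" "$img" "$dst"
```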
    • What are you using as a boot drive? (Hopefully it's a USB drive.)

      If you have a spare, you could set aside your current boot drive, do a clean new build on the spare and see if your pool imports cleanly into the spare.
      __________________________________________________________
      On the other hand:

      With nothing else found in OMV's config files (the important ones), this seems to be an issue with monit. (If you removed the drive correctly, perhaps some monit config file didn't update?) Based on the syslog, I looked at monit and its config files. It seems to start at /etc/monit/monitrc, but from there monit's configuration branches out to several locations, and it appears to have been modified for OMV. As it seems, it wouldn't be easy to run down the cause.

      But there may be another path. It appears that, for some reason, monit is continuing to monitor a nonexistent file system. It also appears that the collection of items monit monitors is in its state file -> /var/lib/monit/state. I looked at it, but the format is not recognizable, so direct editing is probably not a good idea.

      So... monit has a web GUI, and some of the generic items it collects on can be turned off there.
      Go here -> howtoforge.com/tutorial/how-to…figure-monit-on-debian-9/

      Since monit is already installed, pick it up at step #3 and configure monit to serve its web page.
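      For reference, the stanza being edited in /etc/monit/monitrc looks roughly like this (a sketch based on monit's stock examples; OMV generates its own version, so the exact lines may differ):

```
set httpd port 2812 and
    use address localhost    # comment this out to accept remote connections
    allow localhost          # allow connections from localhost
    allow admin:monit        # web login user:password -- change these
```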

      ((In my case, after I restarted the service and used netstat -ant | grep :2812 to see if the port was listening, I got nothing. A reboot fixed it.))

      These are screens from the monit web page.

      [Screenshot: the top-level monit page, where you'll see that file systems are monitored.]

      [Screenshot: a single file system's page; at the bottom, you'll see that monitoring can be turned OFF.]
      With any luck, your NTFS file system will be among those monitored. (And can be turned off.)
    • FIXED!!!

      You're not going to believe this...

      First, let me tell you I wouldn't have been able to fix this without your help and troubleshooting everything else first to get to this point, but in the end it was as simple as....

      Turning Monitoring off under the Monitoring menu, saving it, re-enabling it and saving it again. I no longer get any errors related to the ghosted filesystem.

      I stumbled upon this fix by accident. I followed the monit article you listed above, but I wasn't paying attention; based on the article, I thought the only lines I had to deal with were "set httpd" and "allow admin". However, the default on OMV has everything except the "allow admin" line uncommented, which means "use address localhost" is active, allowing only localhost connectivity. Well, about the time I figured that out, I decided to turn monitoring off and on, and it fixed the issue. Not to mention it automatically rewrote the monitrc file back to its defaults. So if you screw up the file, turning monitoring off and back on will fix that too!

      Off to clone my thumb drive and test it!

      Next, I'll be looking at adding a second NIC and enabling a Team...

      Thanks for everything
    • Well, this development vindicates you because,, umm,, I thought you were guilty.... :D
      Since I've seen this sort of thing before when others modify part of the OMV config on the command line... ;) (Not this exact error.)

      So, really, it boiled down to a monit config change that didn't take place, and a reset updated the config. I'll tell you, as I was tracing through the paths, scripts, and configs associated with monit in the OMV system, I was amazed. It would take hours, and a whiteboard flow chart, to begin to understand it. (On the other hand, I'm a monit noob.) As it seems, monit has been heavily extended in the OMV setup, to where it's like an octopus with a tendril in everything.

      Hmm, weird monit errors? Off and on again - easy enough.
      ___________________________________________________

      The multiple-boot-drive approach is, in my opinion, a very good idea. I use three (3): two working drives and a master. With two in a working rotation and one in a drawer, and a bit of thought given to the process, you'll be able to gracefully recover from most software issues, including corruption.