I am seeing the following error in my syslog for every filesystem I have set up:
monit[1548]: 'fs_dev_disk_by-uuid unable to read filesystem /dev/dm-0 state
Any ideas on how to fix that?
A couple of questions to help solve this.
What kind of drive is being used as the OMV system drive?
What version of OMV is being used?
I am running 0.5.6 and the drives are 4x Seagate ST2000DM001 plus a couple of random disks in USB enclosures.
Got the version.
Still need to know what type of drive OMV is installed on.
When you installed OMV 0.5.6, was it an upgrade from a previous version or was it a fresh install from an ISO?
If it was a fresh install, the only HD/SSD drive that should have been connected to the motherboard should have been the one OMV was being installed to.
It was an upgrade from a previous version. The drives are just your typical Seagate SATA drives in a RAID 5 array.
Does that help?
Dave wants to know what kind of drive you installed OMV on: 2.5" HDD, 3.5" HDD, SSD, USB stick...?
Greetings
David
Ahh, I installed OMV on a SATA SSD drive.
Good, OMV is on an SSD drive. Some have installed OMV to thumb sticks and they WILL cause problems. See my signature.
Sometimes we ask questions which may not seem to have anything to do with the problem, but they may. Thank you for understanding.
Since it was an upgrade, did you use the upgrade script before you upgraded? And if so, do you recall which version?
How long has OMV run on the SSD drive? Months, year or more?
How old are the 2 TB drives that have the read error? If they are more than 2 years old (Seagate's warranty period) they may unfortunately have reached the end of their life.
This is more advanced.
smartctl will give the power-on hours of each drive, if they are SMART capable, along with a lot of other data. To use it, SSH into your OMV box and enter:
smartctl --all /dev/sda
Change the sda to the proper device name for each of your drives; these can be found with:
blkid
Then review the responses.
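If you want to check all of your drives in one go, here is a rough sketch (the drive names sda through sdd are only examples; adjust the list to whatever blkid shows on your system):

# print the power-on hours attribute for each example drive
for d in sda sdb sdc sdd; do
    echo "=== /dev/$d ==="
    smartctl --all /dev/$d | grep -i "Power_On_Hours"
done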
What is the file size of your syslog?
Syslog is 3.6 MB.
I think if the syslog file is too big it will not display. I would clear the log and then see if it displays properly. Look for what is filling it up; that is what is causing this error. Try to figure out what is reporting too much and fix it.
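A quick way to see what is flooding the log is to count the most frequent reporting processes; something like this should work (the awk field number assumes the standard syslog line format and may need adjusting):

# show the ten processes that appear most often in syslog
awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head -10

# empty the log (back it up first if you want to keep it), then watch what comes back
truncate -s 0 /var/log/syslog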
Kosiak, your links do not work.
Have you taken hard drives, USB sticks, or external hard drives out of the system without unmounting them first? Give more info on your setup and on whether you have removed any devices. Is this a VM setup? Is this LVM? These filesystems are not showing up as normal devices; dm = device mapper. You need to explain more, guys.
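For what it's worth, you can check what /dev/dm-0 actually maps to from the shell; a quick sketch (output will of course depend on your setup):

# list device-mapper devices; dm-0 corresponds to the device with minor number 0
dmsetup ls

# ask LVM directly which logical volumes exist and which volume group they belong to
lvs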
>>Have you taken hard drives, usb sticks or external hard drives out of the system without unmounting them first?
No. The NAS is a black box installed in the storage room and only accessible over the network. But the errors could have been caused by a power outage; I don't know.
>>Give more info
For example?
Hardware:
AMD A4-3400 / ASRock A75 PRO4 / 4 GB RAM / 7 physical SATA disks (5x 2 TB HDD / 1x 3 TB HDD / 1x 300 GB HDD) - LVM2
>>Is this a VM setup?
No
>>Is this LVM?
Yes.
Did you import a volume group from another system, or did you create the volume group under OMV? If you created it under OMV, we may have to file a bug report.
There is some information on this thread regarding LVM and monit:
>>Did you import a volume group from another system or did you create the volume group under OMV?
I created the volume group under OMV.
Also having this problem on my newly set up system. I was running the latest 0.4 for several days and then upgraded to 0.5 when it came out. OMV is installed to a regular IDE drive; the dm devices that give me these warnings are LVM volumes on a 4-disk RAID 5 array. I created the array and set up LVM in 0.4 through OMV. I created a new LVM volume after the 0.5 upgrade and that one does not seem to have this problem; it also does not show up in the monit configuration the way the older volumes do. The volumes are working and accessible just fine, I just get these warnings. The other interesting thing I have noticed is that I am unable to unmount the older volumes (the button is grayed out), whereas the new volume I just created does not behave this way.
I noticed this issue after the 0.5 upgrade, but I don't know for sure that it was not there before the upgrade. I could restore the Clonezilla image I made before the upgrade to test that, though.
Also possibly worth noting: I had a power outage the day before I did my upgrade to 0.5 and had not yet set up my UPS, so the system did lose power while it was running. I suspected these errors could have come from that rather than from the upgrade, or from some other misconfiguration. I have not started migrating data to my system en masse yet because of this issue, so I can do some troubleshooting as needed if it would help track this behavior down. It seems like there are at least a few of us having the same problem at this point.
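If it helps the troubleshooting, monit's own view of the filesystem checks can be inspected from the shell; a rough sketch (the exact filename OMV writes its monit filesystem checks to may differ between versions):

# show the state of every service monit is watching, including the fs_... checks
monit status

# find the filesystem check stanzas OMV has generated for monit
grep -r "check filesystem" /etc/monit/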