Unable to Read Filesystem Error

  • A couple of questions to help solve this.


    What kind of drive is being used as the OMV system drive?


    What version of OMV is being used?

  • Got the version.


Still need to know what type of drive OMV is installed on.


    When you installed OMV 0.5.6, was it an upgrade from a previous version or was it a fresh install from an ISO?


If it was a fresh install, the only HD/SSD connected to the motherboard should have been the one OMV was being installed to.

  • Dave wants to know what kind of drive you installed OMV on. 2,5" HDD, 3,5" HDD, SSD, USB stick... ?


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Good, OMV is on an SSD drive. Some have installed OMV to thumb drives, and they WILL cause problems. See my signature.


    Sometimes we ask questions which may not seem to have anything to do with the problem, but they may. Thank you for understanding.


    Since it was an upgrade, did you use the upgrade script before you upgraded? If so, do you recall which version?


    How long has OMV run on the SSD? Months, a year, or more?


    How old are the 2TB drives that have the read error? If they are more than 2 yrs old (Seagate's warranty period), they may unfortunately have reached the end of their life.


    This is more advanced.


    smartctl will give the power-on hours of each drive, if they are SMART capable, along with a lot of other data. To use it, SSH into your OMV box and enter:


    smartctl --all /dev/sda


    Change sda to the proper device name for each of your drives; the device names can be found with:


    blkid


    and review the responses.
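    Putting the two commands above together, here is a minimal sketch of checking every SATA drive in one go. It assumes smartctl's usual attribute-table layout, where the raw value of the Power_On_Hours attribute is the tenth column; verify against your own smartctl output before relying on it.

    ```shell
    # For each SATA drive, pull the raw Power_On_Hours value from the
    # SMART attribute table. Run as root; drives without SMART support
    # will simply report "n/a".
    for dev in /dev/sd?; do
        hours=$(smartctl --all "$dev" | awk '/Power_On_Hours/ {print $10}')
        echo "$dev: ${hours:-n/a} power-on hours"
    done
    ```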

  • I think if the syslog file is too big it will not display. I would clear the log and then see if it displays properly. Look for what is filling it up - that is what is causing this error. Try to figure out what is reporting too much and fix it.
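    One way to see what is flooding the log before clearing it - a sketch assuming the Debian default path /var/log/syslog; adjust if yours differs:

    ```shell
    # Check how big syslog has grown.
    ls -lh /var/log/syslog

    # A typical syslog line looks like "Mon DD HH:MM:SS host process[pid]: msg",
    # so field 5 is the reporting process; tally lines per process and show
    # the top offenders.
    awk '{print $5}' /var/log/syslog | sort | uniq -c | sort -rn | head

    # Once the noisy service is fixed, empty the log in place (as root)
    # rather than deleting the file, so the logger keeps its file handle.
    truncate -s 0 /var/log/syslog
    ```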

  • Kosiak, your links do not work.

    Homebox: Bitfenix Prodigy Case, ASUS E45M1-I DELUXE ITX, 8GB RAM, 5x 4TB HGST Raid-5 Data, 1x 320GB 2,5" WD Bootdrive via eSATA from the backside
    Companybox 1: Standard Midi-Tower, Intel S3420 MoBo, Xeon 3450 CPU, 16GB RAM, 5x 2TB Seagate Data, 1x 80GB Samsung Bootdrive - testing for iSCSI to ESXi-Hosts
    Companybox 2: 19" Rackservercase 4HE, Intel S975XBX2 MoBo, C2D@2200MHz, 8GB RAM, HP P212 Raidcontroller, 4x 1TB WD Raid-0 Data, 80GB Samsung Bootdrive, Intel 1000Pro DualPort (Bonded in a VLAN) - Temp-NFS-storage for ESXi-Hosts

  • Have you taken hard drives, USB sticks, or external hard drives out of the system without unmounting them first? Give more info on your setup and whether you have removed any devices. Is this a VM setup? Is this LVM? These filesystems are not showing as normal devices - dm = device mapper. You need to explain more, guys.
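  • Since dm-* devices came up: to see how they map back to physical disks, lsblk (from util-linux) shows the whole block-device tree. A quick sketch; dmsetup needs root and may not be installed on every system:

    ```shell
    # Show the block-device tree, including LVM / device-mapper volumes,
    # so each dm-* entry can be traced to its underlying disk.
    lsblk -o NAME,TYPE,SIZE,MOUNTPOINT

    # List active device-mapper targets by name (run as root).
    dmsetup ls
    ```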

  • >>Have you taken hard drives, usb sticks or external hard drives out of the system without unmounting them first?
    No. The NAS is a black box installed in the storage room and only reachable over the network. But errors could have occurred because of a power outage. I don't know.


    >>Give more info
    For example?


    Hardware:
    AMD A4-3400 / ASRock A75 PRO4 / 4GB RAM / 7 physical SATA disks (5× 2.0TB HDD / 1× 3.0TB HDD / 1× 300GB HDD) - LVM2

  • Also having this problem on my newly set up system. I was running the latest 0.4 for several days and then upgraded to 0.5 when it came out. OMV is installed to a regular IDE drive; the dm devices that give me these warnings are LVM volumes on a 4-disk RAID 5 array. I created the array and set up LVM in 0.4 through OMV. I created a new LVM volume after the 0.5 upgrade, and that one does not seem to have this problem; it also does not show up in the monit configuration the way the older volumes do. These volumes are working and accessible just fine - I just get these warnings. The other interesting thing I have noticed is that I am unable to unmount the older volumes (the button is grayed out), whereas the new volume I just created does not do this.


    I noticed this issue after the 0.5 upgrade, but I don't know for sure that it was not there before the upgrade. I could restore the Clonezilla image I made before the upgrade to test that, though.


    Also possibly worth noting: I had a power outage the day before I upgraded to 0.5 and had not yet set up my UPS, so the system did lose power while running. I suspected these errors could have come from that rather than the upgrade, or from some other misconfiguration. I have not started migrating data to my system en masse yet because of this issue, so I can do some troubleshooting as needed if it would help track this behavior down. It seems like there are at least a few of us having the same problem at this point.
