OMV3 to OMV4 upgrade + kernel upgrade => EXT4 issue

  • Hi all,

    First of all, as this is my first post, it gives me the opportunity to thank the project leader and the community for the great work and enthusiasm around OMV. :thumbsup:

    Now I've just come across a big headache that I apparently managed to solve, and I'd like to share it with you.

    My system had been running Debian 8 + OMV 3 very nicely for a while, and I prepared to upgrade to OMV 4 + Debian 9.
    16 drives in software RAID6 + LVM + ext4 + NFS presented to an ESXi cluster.

    Here is what I did:

    • Make sure that the system is up-to-date

    apt update
    apt upgrade
    apt dist-upgrade

    So far so good.

    • Upgrade OMV and the distribution, then reboot
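
    The upgrade command itself isn't shown above; a minimal sketch, assuming OMV's release-upgrade helper is used for the 3.x to 4.x jump:

    ```shell
    # Assumption: using OMV's release-upgrade script for the major-version jump
    omv-release-upgrade

    # Reboot into the upgraded system
    reboot
    ```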


    So far so good. At this point the system booted and was running nicely, although I didn't check that all my ext4 filesystems were mounted. I only checked that my RAID6 was online.

    • Make sure (again) that the system is up-to-date

    apt update
    apt upgrade

    This upgraded the kernel from 4.9.0-0.bpo.6-amd64 to 4.9.0-7-amd64.
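
    To see which kernel actually booted after such an upgrade, something like this (standard Debian tooling) can help:

    ```shell
    # Show the kernel you are currently running
    uname -r

    # List kernel images known to dpkg (Debian/Ubuntu)
    dpkg -l 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2 "  " $3}'
    ```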

    • Rebooted and discovered that one of my ext4 filesystems would not mount due to what appeared to be filesystem errors:

    Jul 22 13:10:03 OPENMEDIAVAULT kernel: [ 91.478165] EXT4-fs (dm-3): ext4_check_descriptors: Block bitmap for group 0 overlaps block group descriptors
    Jul 22 13:10:03 OPENMEDIAVAULT kernel: [ 91.478249] EXT4-fs (dm-3): group descriptors corrupted!

    Of course, big stress: was my data lost?

    I looked around and ran e2fsck, which corrected a minor error, but the volume still would not mount.
    Mounting the FS read-only, I could check that the folders and files were there, so there was no apparent damage.
    mount --readonly /dev/disk/by-id/dm-name-VG1-esx_nfs3
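
    For reference, that check-then-inspect sequence can be sketched like this (the device path and /mnt mountpoint are assumptions; adjust to your own volume group):

    ```shell
    # Read-only check first: report problems without touching the disk
    e2fsck -n /dev/mapper/VG1-esx_nfs3

    # Mount read-only on a temporary mountpoint to verify the data is intact
    mount -o ro /dev/mapper/VG1-esx_nfs3 /mnt
    ls /mnt
    umount /mnt
    ```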

    After a few hours of searching I came across a post about an ext4 bug in the latest kernel versions, which pointed me to rolling back to the backports kernel and... that was it! The FS mounted properly.
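
    Rolling back here simply meant booting the already-installed backports kernel again; if it had been removed, it could be reinstalled roughly like this (assumes the stretch-backports repository is configured in APT):

    ```shell
    # Reinstall the backports kernel metapackage
    # (hypothetical invocation: adjust the suite/architecture to your setup)
    apt -t stretch-backports install linux-image-amd64
    ```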

    My guess is that the kernel version could/should be pinned in APT to prevent upgrading one step too many.
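
    One way to do that pinning is an APT hold on the kernel metapackage; a minimal sketch:

    ```shell
    # Prevent apt upgrade / dist-upgrade from replacing the current kernel
    apt-mark hold linux-image-amd64

    # Verify which packages are held
    apt-mark showhold
    ```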

    In my configuration, only the 13+ TB FS would not mount. The others were fine:
    LV       VG  Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
    esx_nfs1 VG1 -wi-ao---- 304,65g
    esx_nfs3 VG1 -wi-ao---- 13,11t
    esx_nfs5 VG1 -wi-ao---- 301,48g
    esx_nfs6 VG1 -wi-ao---- 6,13t
    esx_nfs7 VG1 -wi-ao---- 4,10t

    To set the default kernel on boot, I edited /etc/default/grub.

    Then ran:
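
    For completeness, my understanding of that step (the exact menu-entry string is an assumption and must match the entries in your own grub.cfg):

    ```shell
    # In /etc/default/grub, point GRUB_DEFAULT at the backports kernel entry,
    # e.g. (hypothetical entry name):
    #   GRUB_DEFAULT="Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 4.9.0-0.bpo.6-amd64"

    # Then regenerate the GRUB configuration
    update-grub
    ```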

    Am I the only one with such an issue?
    Maybe this helps someone else ;)

    Thank you for reading :)

  • Hello all, and happy new year!!

    Just a small update on my issue.
    A few weeks ago I migrated my aging physical Intel Q6600 OMV build to a virtual one (VMware ESXi 6.7 on a Ryzen 2600 build).

    Basically, here are my steps:
    - Fresh Debian 9 stable install (VM)
    - Install the backports kernel
    - Install OMV 4 from the repository as instructed in the OMV docs + OMV init
    - Install the LVM plugin
    - Pass through my M1015 controller and 16 disks (md RAID6) to the VM

    At that point the RAID and LVs were detected and up to date in OMV.
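
    A quick way to confirm that from the shell (standard mdadm/LVM tooling; the array name is an assumption):

    ```shell
    cat /proc/mdstat          # md RAID arrays and sync state
    mdadm --detail /dev/md0   # assumes the array is md0
    pvs && vgs && lvs         # physical volumes, volume groups, logical volumes
    ```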

    I had to manually mount the filesystems and set up the shares + NFS again, and that's it. The backports kernel works like a charm.
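
    A sketch of that remount-and-reexport step (the mountpoint and export paths are assumptions; adjust to your share names):

    ```shell
    # Remount one of the logical volumes (hypothetical mountpoint)
    mount /dev/mapper/VG1-esx_nfs3 /srv/esx_nfs3

    # After recreating the shares in OMV, re-read the NFS export table
    exportfs -ra
    exportfs -v   # list active exports
    ```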

    Thanks for reading.
