Force mount FS in GUI with specific fstype?

  • About 3 years ago I migrated from FreeNAS to OMV 2.0 and, as part of that migration, shifted my data from ZFS to EXT4 via a long and arduous format-and-copy pattern using an N+1 drive setup.


    Fast forward to last week, when we had a power outage and the USB stick running my N54L decided to call it quits and didn't come back on restart. No matter, I had a backup.


    But I decided to take the opportunity to migrate to OMV 4, so I downloaded the 4.1.11 ISO earlier tonight and installed it on a new USB stick. After having to do the /dev/sda and /dev/sdb GRUB shuffle, I am successfully in the GUI.


    However, while the GUI sees all 4 drives in the NAS and happily mounts one of the partitions (the one on the newest drive), the other 3 are nowhere to be seen.


    fdisk -l reports Linux filesystems on sdb1, sdc1, and sdd1, none of which show up in OMV.


    Manually mounting them in the console gives the error 'more filesystems detected', but they mount fine with -t ext4.
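
    For reference, the manual mount looks roughly like this (device and mount point are just examples from my box):

        # mount refuses to guess because it sees more than one filesystem signature on the partition
        mount /dev/sdb1 /mnt/data1
        # telling it the type explicitly works, since the actual data on disk is ext4
        mount -t ext4 /dev/sdb1 /mnt/data1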


    Using blkid, I can see that the filesystem type is still listed as "zfs_member" even though the partition is formatted as ext4. I suspect that when the drives were migrated across from the FreeNAS machine and formatted as EXT4, the OMV 2.0 install for some reason failed to overwrite the old ZFS signatures properly, leaving them still identified as GPT ZFS drives.
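
    To be concrete, this is roughly what I'm looking at (device names are examples, UUIDs omitted):

        blkid /dev/sdb1
        # reports TYPE="zfs_member" rather than TYPE="ext4"
        lsblk -f /dev/sdb
        # the FSTYPE column agrees with blkid, which I assume is why OMV never offers the partition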


    Notably, the only drive that doesn't do this is the newest 4TB drive I bought to do the N+1 transfer. It was partitioned and formatted from scratch, hence no ZFS/GPT leftovers.


    Given that I can happily mount them using '-t ext4', is there any way to force OMV 4 to recognise the partitions as EXT4 in the GUI? Or am I going to have to find another 4TB drive to do N+1 transfers so I can wipe the partition tables?

    • Official Post

    This seems to be a common problem (although I can't reproduce it) when a previously ZFS-formatted drive wasn't properly wiped before being re-formatted with something other than ZFS and is then moved to OMV 4.x. Take a look at this thread on how to fix it - Cant mount XFS


  • This seems to be a common problem (although I can't reproduce it) when a previously ZFS-formatted drive wasn't properly wiped before being re-formatted with something other than ZFS and is then moved to OMV 4.x. Take a look at this thread on how to fix it - Cant mount XFS

    That does sound like the issue, and it also sounds like a huge pain of the "pray this isn't accidentally destructive" variety.

    • Official Post

    wipefs is pretty good when you specify an exact signature to remove, but you should have backups :)
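
    As a rough sketch of the targeted approach (the device is an example; use the offsets your own listing reports):

        # list every signature wipefs can find on the partition
        wipefs /dev/sdb1
        # erase only the zfs_member signature at the offset reported above
        # (ZFS leaves several labels on a disk, so there may be more than one offset to clear)
        wipefs --offset <offset-from-listing> /dev/sdb1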


    • Official Post

    Yeah, data is backed up, but offshore; and recovering 4TB over an Australian DSL link is less than ideal...

    wipefs has the -n flag, which does everything except the final write to disk, so you can make sure it is doing the right thing first. I really see this as low risk.
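
    Something like this, substituting your actual device and the offset from the wipefs listing:

        # -n does everything except the final write, so nothing on disk changes yet
        wipefs -n --offset <offset-of-zfs_member-signature> /dev/sdb1
        # once the dry run shows only the zfs_member signature being removed, run it again without -n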


  • Well, the final outcome of this (now that I'm back from holidays) is N+1. One of the older 3TB drives spat a few SMART errors, so I'm using it as an opportunity (albeit a long, drawn-out one) to migrate all the data across to a new 4TB drive, drive by drive.
