Force mount FS in GUI with specific fstype?

    • About 3 years ago I migrated from FreeNAS to OMV 2.0 and, as part of that migration, shifted my data from ZFS to EXT4 via a long and arduous format-and-copy process on an N+1 drive setup.

      Fast forward to last week, when we had a power outage and the USB stick running my N54L decided to call it quits and didn't come back on restart. No matter; I had a backup.

      I decided to take the opportunity to migrate to OMV 4, so I downloaded the 4.1.11 ISO earlier tonight and installed it on a new USB stick. After the obligatory /dev/sda and sdb GRUB shuffle, I am successfully in the GUI.

      However, while the GUI sees all four drives in the NAS and happily mounts one of the partitions (the newest), the other three are nowhere to be seen.

      fdisk -l reports Linux filesystems on sdb1, sdc1, and sdd1, none of which show up in OMV.
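
      (For the record, that check was nothing fancy; note that fdisk reads the partition table rather than the superblocks, which turns out to matter below:)

          fdisk -l /dev/sdb /dev/sdc /dev/sdd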

      Manually mounting them in the console fails with the error 'more filesystems detected', but they mount fine with -t ext4.
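
      Concretely, what I ran was roughly this (/mnt/data standing in for my real mount point):

          # Fails: libblkid finds both the old zfs_member signature and the
          # new ext4 one, and refuses to guess ('more filesystems detected'):
          mount /dev/sdb1 /mnt/data

          # Forcing the filesystem type skips the autodetection and works:
          mount -t ext4 /dev/sdb1 /mnt/data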

      Using blkid I can see that the filesystem type is still listed as "zfs_member", even though the partition is happily formatted as ext4. I suspect that when the drives were migrated across from the FreeNAS machine and reformatted as EXT4, the OMV 2.0 install for some reason failed to overwrite the old signatures properly, leaving them still recognisable as GPT ZFS drives.
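
      If I understand libblkid correctly, a low-level probe should make the ambiguity explicit (sdb1 again as the example):

          # The cached lookup reports the stale signature from the old pool:
          blkid /dev/sdb1          # ... TYPE="zfs_member"

          # Bypassing the cache and probing the superblocks directly should
          # instead report an ambivalent result, since two signatures coexist:
          blkid -p /dev/sdb1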

      Notably, the only drive that doesn't do this is the newest 4 TB one, bought to do the N+1 transfer. It was partitioned and formatted from scratch, hence no ZFS/GPT leftovers.

      Given that I can happily mount them using '-t ext4', is there any way to force OMV 4 to recognise the partitions as EXT4 in the GUI? Or am I going to have to find another 4 TB drive to do N+1 transfers around on so I can wipe the partition tables?
    • This seems to be a common problem (although I can't reproduce it) when a drive was previously formatted with ZFS, wasn't properly wiped before being re-formatted with something other than ZFS, and is then moved to OMV 4.x. Take a look at this thread on how to fix it - Cant mount XFS
    • wipefs is pretty good when you specify an exact signature to remove, but you should have backups :)
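
      Something along these lines (sdb1 is just the example device; repeat for each affected partition):

          # List every signature wipefs can see, with offsets (read-only):
          wipefs /dev/sdb1

          # Erase only the stale zfs_member signatures, leaving ext4 intact;
          # --backup writes a copy of each erased signature to $HOME first:
          wipefs --all --backup --types zfs_member /dev/sdb1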
    • takai wrote:

      Yeah, the data is backed up, but offshore; and recovering 4 TB over an Australian DSL link is less than ideal...
      wipefs has a -n (no-act) flag that does everything except the final write to disk, so you can make sure it is doing the right thing before committing. I really see this as low risk.
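
      In other words, something like this (same example device as above):

          # Dry run: show exactly what would be erased, without touching the disk:
          wipefs --no-act --all --types zfs_member /dev/sdb1

          # If that lists only the stale zfs_member signatures, re-run the same
          # command without --no-act to actually remove them.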