OMV doesn't recognize existing partitions on HDDs

  • Hi there,


    I want to upgrade my OMV 3 system to a current OMV 5 system (although it's still in the beta phase).
    Therefore I removed my system SSD (with OMV 3 installed) and added a new drive for OMV 5. I plugged in all the data drives and installed the new version, OMV 5. Everything worked well, except that OMV doesn't recognize some of my data partitions anymore.


    I found out that the partitions are not mounted and tried to add the fstab entries from the old system manually, but that doesn't seem to help either.
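
    For illustration only (placeholder UUID and mount point, not my real ones), the kind of entry I copied over looks roughly like this:

        # hypothetical ext4 data-drive line carried over from the old fstab
        UUID=1234abcd-5678-90ef-1234-567890abcdef  /srv/data1  ext4  defaults,nofail  0  2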


    When I switch back to the old SSD with OMV 3, everything works well again.


    Here is some additional info:


    Maybe you have a hint for me about what I am missing?


    HDDs:

    lsblk:


    blkid:
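
    (For reference, the output above came from commands along these lines; the lsblk column list is just my preference:)

        lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
        blkid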

    • Official Post

    Yes, you missed that there is no upgrade path from 3 to 5. You would need to upgrade from 3 to 4 first, and I'm not sure there is a supported upgrade path from 4 to 5 yet.


    Regardless, upgrading from 3 to 4 at this point is a pain due to closed repositories, etc. (search the forum). If you really want 5, disconnect your data drives and do a clean install of OMV 5.

    • Official Post

    I just realized you already installed OMV 5...


    Are these by chance USB drives that are configured in a RAID? That could be done in OMV 3, but using RAID on USB drives was specifically removed from OMV 4, and I'm assuming from 5 as well.

  • Thank you for the replies. I agree the word "upgrade" is a bit confusing. Indeed, I did a fresh install of OMV 5 to a new SSD; my old OMV 3 was on another SSD. No RAID is involved, I only used the SnapRAID plugin for the data drives. All data drives are formatted ext4.


    I totally understand that I have to reconfigure everything, but I fail at mounting my data drives in the UI. As shown in the first post, the new OMV 5 installation recognizes all HDDs (sda-sdi; sde is the system SSD), but the blkid command only shows three of my data partitions (sdc1, sdd1 and sdg1).


    I can mount all the other filesystems manually via the console (checked with lsblk after mounting them), so they didn't get destroyed; they still exist on the drives. But for some reason, the OMV 5 UI (and the blkid command) doesn't recognize the remaining five data partitions, so I cannot select them in the UI to mount them.
    I can also swap the new OMV 5 SSD back for my old OMV 3 SSD, boot the old system, and everything still works fine. I kept the old system as a backup!
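
    For example, this works from the console (device and mount point are just placeholders, not my exact setup):

        # manually mount one of the partitions that the UI does not offer
        mkdir -p /mnt/test
        mount -t ext4 /dev/sda1 /mnt/test
        lsblk             # the partition now shows up with a mountpoint
        umount /mnt/test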

    • Official Post

    But for some reason, the OMV 5 UI (and the blkid command) doesn't recognize the remaining five data partitions.

    That didn't make sense to me; looking at the two images, I'm assuming OMV sees them under Storage -> Disks.


    Let's look at one drive that stands out, /dev/sdi. What's the output of wipefs -n /dev/sdi?

  • To be honest, I simplified a bit:



    Here is exactly what my drives are actually supposed to do:


    Disk       Partition(s)     Usage                                              Behaviour in OMV 5
    /dev/sda   /dev/sda1        Data (ext4)                                        not shown in blkid and UI
    /dev/sdb   /dev/sdb1        Data (ext4)                                        not shown in blkid and UI
    /dev/sdc   /dev/sdc1        Data (ext4)                                        shown in blkid and UI
    /dev/sdd   /dev/sdd1        Data (ext4)                                        shown in blkid and UI
    /dev/sde   /dev/sde1,2,5    System partitions (swap, boot, system, etc.)       shown in blkid and UI
    /dev/sdf   /dev/sdf1        Data (ext4)                                        not shown in blkid and UI
    /dev/sdg   /dev/sdg1        Data (ext4)                                        shown in blkid and UI
    /dev/sdh   unclear          Spare disk (FS unclear)                            not shown in blkid and UI
    /dev/sdi   unclear          Spare disk (FS unclear, maybe btrfs for testing)   not shown in blkid and UI




    All of these filesystems are recognized and mounted in my OMV 3 installation; only the ones marked as "shown" above appear in the UI of OMV 5 (but, as geaves mentions, all disks are listed under Storage --> Disks).
    So to me sda1, sdb1 and sdf1 seem like the outstanding candidates (maybe also sdh and sdi, but I have to double-check their status).



    This evening, when I am at home, I will check and post the output of wipefs -n /dev/sdx for all drives.
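
    Something like this should cover all of them in one pass (just a sketch; -n only reports and changes nothing):

        # check every disk for leftover filesystem/RAID signatures
        for d in /dev/sd[a-i]; do
            echo "== $d =="
            wipefs -n "$d"
        done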

  • So here is the output of wipefs -n /dev/sdx. It seems we're getting closer...



    Some of the HDDs were used in a FreeNAS ZFS environment before they were put into my OMV 3 system.


    The disappeared disks sda, sdb, sdf and sdi are the ones with ZFS residue. sdh is empty, so it's no wonder that it doesn't show up. sdc, sdd and sdg don't have any ZFS residue and work fine.


    Now my questions:
    Why does it work in OMV 3?
    Can I fix that without formatting and moving all my data?


    • Official Post

    Can I fix that without formatting and moving all my data?

    Yes, but there isn't a way to do this in one go; you will have to remove each offset one at a time and then use wipefs -n to check.


    So for /dev/sda, the first drive you posted: wipefs --offset 0x57541e3f000 /dev/sda, then check whether the signature has been removed.


    Wash, rinse, repeat :) Once those are removed, the drive should be usable :thumbup:
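
    In other words, per drive it goes roughly like this (the offset below is the one from your /dev/sda output; substitute whatever wipefs -n reports for each drive):

        wipefs -n /dev/sda                       # list the remaining signatures and their offsets
        wipefs --offset 0x57541e3f000 /dev/sda   # remove the signature at one offset
        wipefs -n /dev/sda                       # confirm it is gone, then repeat for the next offset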

  • Thanks for the help. In the wipefs man pages I found the option to erase all signatures at once with
    wipefs --all --backup /dev/sdx
    and afterwards restore the two GPT headers and the protective MBR entry with
    dd if=~/wipefs-sdb-0x00000438.bak of=/dev/sdb seek=$((0x00000438)) bs=1 conv=notrunc

    But I feel like this option is much riskier.
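
    Spelled out as a sketch (the backup file names depend on what wipefs actually erases; the offset below is just the example from the man page):

        # erase all signatures, writing a backup of every erased block into $HOME
        wipefs --all --backup /dev/sdb

        # restore one erased block from its backup file (the offset is encoded in the file name)
        dd if=~/wipefs-sdb-0x00000438.bak of=/dev/sdb seek=$((0x00000438)) bs=1 conv=notrunc

        # verify which signatures remain
        wipefs -n /dev/sdb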

    • Official Post

    Thanks for the help. In the wipefs man pages I found the option to erase all signatures at once with

    :thumbup:


    Removing the offsets one by one would surely be enough, and I would do that first, but you seem to know what you're doing anyway.


    EDIT: I see now what you are doing with the dd option. TBH, I've only ever suggested removing the signatures one by one; having also read the man pages, I can see your thinking... would I do it? No, I would do it the boring way :)


    Why does it work in OMV 3?

    No idea, but I have seen this come up so many times recently where users have upgraded from 3 to 4. One user had his RAID come up as active (read-only); further investigation showed residual ZFS signatures, and removing them one by one got the RAID back up and running with no data loss.


    This is your system, so it's really your choice; go with what you feel confident doing. As I said, I would go with what I know works.

  • Let me put it this way: I am a noob who knows Google, YouTube and man pages.
    I wouldn't have found the solution without your help.


    Thank you a lot. Now it works well and the UI recognizes all drives.


    EDIT: I changed the thread title to something more informative. Maybe it's useful for someone else.

  • It says somewhere that FreeNAS ZFS arrays cannot be imported into an OMV build.


    The best way is to go into the drives tab and wipe them, then re-pool the drives using the OMV ZFS plugin.


    (If you watched while the OMV ZFS modules were compiling, it said that FreeBSD ZFS and OMV ZFS are under two different open-source licenses; that is why the pools are not importable, and never will be unless the licenses are matched.)


    Worse comes to worst, get a Darik's Boot and Nuke USB stick and run a quick wipe, writing zeros across all the drives.
