Import OMV 2 raid1 in OMV4: can't mount filesystem

  • Installed OMV 4 (on top of Debian 9) today on my new rig and wanted to import my RAID 1 array from my OMV 2 install. OMV is installed on a USB drive and I have the flash memory plugin running.


    I can see the two disks (/dev/sda & /dev/sdb) and S.M.A.R.T. is OK. I can also see the array (naserwin:data) as /dev/md127, state: clean, level: mirror, with both /dev/sda and /dev/sdb listed as devices.


    However, I cannot see the file system when I open the file systems tab. Any ideas? I'm no Linux pro and have been trying to fix this for 3 hours now.


    When I manually mount the array I get the following error:


    Code
    mount /dev/md127 /mnt
    mount: /dev/md127: more filesystems detected. This should not happen,
           use -t <type> to explicitly specify the filesystem type or
           use wipefs(8) to clean up the device.
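    (Aside: the wipefs(8) hint in that message can also be used read-only; run with no options it just lists the signatures mount is complaining about. A minimal sketch, assuming /dev/md127 as above; nothing here deletes anything:)

    Code
    wipefs /dev/md127      # list every filesystem/raid signature on the device and its offset (read-only)
    blkid -p /dev/md127    # low-level superblock probe; complains if several conflicting signatures exist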

    I've tried:

    Code
    root@vault:~# mdadm --readwrite /dev/md127
    root@vault:~# omv-mkconf mdadm
    update-initramfs: Generating /boot/initrd.img-4.9.0-6-amd64
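    (For context, a rough aside: the array shows up as active (auto-read-only) in /proc/mdstat further down, which is the state mdadm --readwrite clears, while omv-mkconf mdadm rewrites /etc/mdadm/mdadm.conf and the initramfs. Something like the following can confirm the array state; exact output will differ per system:)

    Code
    mdadm --detail /dev/md127    # shows State, Level, UUID and the member devices
    cat /proc/mdstat             # the auto-read-only flag disappears after --readwrite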

    fdisk -l



    blkid



    cat /proc/mdstat


    Code
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdb[1] sda[0]
          3906887360 blocks super 1.2 [2/2] [UU]


    cat /etc/fstab


  • Update:


    I can mount the array using mount /dev/md127 /raid -t ext4 and am able to view the files on it with ls /raid.


    However, the array doesn't show up as mounted in the OMV web GUI (I tried omv-mkconf mdadm).
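    (A hedged note on why the web GUI may ignore a hand-mounted array: as far as I understand it, OMV only shows filesystems that have an mntent entry in /etc/openmediavault/config.xml, and it regenerates /etc/fstab from those entries. Assuming the omv-mkconf helpers present in OMV 4, something like this rebuilds fstab once such an entry exists:)

    Code
    omv-mkconf fstab    # regenerate /etc/fstab from the mntent entries in config.xml
    mount -a            # then mount everything listed there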


    Update 2:


    cat /etc/mdadm/mdadm.conf  shows ARRAY /dev/md/naserwin:data metadata=1.2 name=naserwin:data UUID=9213aed7:c464cfd9:ed54dc39:4e35e717
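    (Generic aside: that ARRAY line is the kind of output mdadm --detail --scan prints, so the UUID can be cross-checked against the running array; these are standard mdadm commands, nothing OMV-specific:)

    Code
    mdadm --detail --scan                    # prints an ARRAY line for every assembled array
    mdadm --examine /dev/sda | head -n 20    # per-member superblock; should show the same UUID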


    Update 3:
    I mounted the array at /vaultdata and then added the following to /etc/openmediavault/config.xml:


    Code
    <mntent>
            <uuid>9213aed7:c464cfd9:ed54dc39:4e35e717</uuid>
            <fsname>naserwin:data</fsname>
            <dir>/vaultdata</dir>
            <type>ext4</type>
            <opts>defaults,nofail</opts>
            <freq>0</freq>
            <passno>2</passno>
            <hidden>0</hidden>
    </mntent>


    I also added the following line to /etc/fstab:

    Code
    UUID=9213aed7-c464cfd9-ed54dc39-4e35e717 /vaultdata ext4 defaults,nofail 0 2
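    (One hedged observation: as far as I can tell, the uuid OMV expects in an mntent entry, and the UUID= in fstab, is the filesystem UUID that blkid reports for /dev/md127, not the mdadm array UUID from mdadm.conf that was used above, which may be why the GUI treats the entry as n/a. A quick way to look it up, assuming the array is assembled:)

    Code
    blkid -o value -s UUID /dev/md127    # prints the ext4 filesystem UUID to use in fstab/config.xml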


    Now I can list the contents (ls /vaultdata). In the OMV web GUI, under file systems, a new file system shows up, but it's not available (all options say n/a).

  • After some help on IRC, I think it might have to do with the following:

    There are two filesystems on the disk, or at least two descriptions of a filesystem. Any idea how to fix this?

    • Official Post

    Please search the forum for posts with the same issue. There seems to be a problem in Debian 9 with mdadm devices and filesystems not being detected. This may be related to the kernel or to the userland tools. Currently I do not know how to fix that other than going back to the version where it works, backing up the data to another device, and reinstalling OMV 4.

  • Please search the forum for posts with the same issue. There seems to be a problem in Debian 9 with mdadm devices and filesystems not being detected. This may be related to the kernel or to the userland tools. Currently I do not know how to fix that other than going back to the version where it works, backing up the data to another device, and reinstalling OMV 4.

    Yes, I saw that there are quite a few posts about it. With help from fromport on IRC it was fixed in the following way (a rough command sketch follows after the list):

    • Remove /dev/sdb from the RAID 1 array
    • Wipe /dev/sdb
    • Partition /dev/sdb
    • Make a new RAID 1 array with one disk missing and add /dev/sdb to it
    • Copy all the contents of /dev/sda to /dev/sdb
    • Remove /dev/sda from its RAID array
    • Wipe /dev/sda
    • Partition /dev/sda
    • Add /dev/sda to the RAID array which has /dev/sdb
    • Sync/recover the RAID array

    Disclaimer: I may have got a step wrong, since fromport did all the work.
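    For anyone who wants the gist of it as commands, here is a rough sketch of the list above. The device names, array names, mount points and partitioning choices are assumptions on my part, not the exact commands fromport ran, and the wipe steps destroy whatever is on the disk they touch, so adapt before using:

    Code
    # Drop one mirror half from the old whole-disk array
    mdadm /dev/md127 --fail /dev/sdb --remove /dev/sdb
    # Clear old signatures and create a single RAID partition on it
    wipefs --all /dev/sdb
    parted -s /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 raid on
    # New degraded RAID 1 on the partition, plus a fresh ext4 filesystem
    mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
    mkfs.ext4 /dev/md0
    # Copy everything across
    mkdir -p /mnt/old /mnt/new
    mount -t ext4 /dev/md127 /mnt/old && mount /dev/md0 /mnt/new
    rsync -aHAX /mnt/old/ /mnt/new/
    # Retire the old array, re-partition the remaining disk and let it resync
    umount /mnt/old && mdadm --stop /dev/md127
    wipefs --all /dev/sda
    parted -s /dev/sda mklabel gpt mkpart primary 0% 100% set 1 raid on
    mdadm /dev/md0 --add /dev/sda1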

    • Official Post

    I added a mirrored RAID array to an OMV 2.x VM and then upgraded it to 3.x, then 4.x. I couldn't get the array to stop working. I really have no idea what is wrong.


    • Official Post

    You mean start working?

    No. I mean it was always working. Nothing I did broke it.


  • It seems that the OMV 2 RAID setup was done as a 1:1 mirror of sda/sdb, in his case with an ext4 filesystem directly on the md device.
    The conversion basically meant creating a RAID partition on the drives.
    After that, OMV 4 was happy again.
    nettozzie did a great job describing what I did on his system.
    There were some complications because the drives had both ext4 & ZFS signatures on them; we had to remove all of those.
    I hope it helps other people who are struggling with this issue.
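    (For reference, a generic sketch of that signature-cleanup step, assuming wipefs from util-linux; run it only on a disk whose data has already been copied off, since --all is destructive:)

    Code
    wipefs /dev/sda          # list the leftover ext4/zfs/raid signatures and their offsets (read-only)
    wipefs --all /dev/sda    # erase every listed signature from the already-emptied disk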

    • Official Post

    OMV 2 makes a WHOLE disk raid, while OMV4 uses a linux raid PARTITION. That's why OMV4 doesn't understand the raid array made in OMV2.

    No. That is incorrect. ALL OMV versions use the entire disk when you create an array from the OMV web interface. Also, ALL OMV versions will "understand" an array using the entire disk or raid partitions.
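    (If anyone wants to check which layout their own array uses, here is a generic sketch, not taken from this thread's output:)

    Code
    cat /proc/mdstat                               # members listed as sda/sdb = whole-disk array, sda1/sdb1 = partition-based
    lsblk -o NAME,TYPE,FSTYPE /dev/sda /dev/sdb    # shows whether a partition table sits under the md member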

