Import OMV 2 raid1 in OMV4: can't mount filesystem

    • OMV 4.x
    • Resolved


    • Import OMV 2 raid1 in OMV4: can't mount filesystem

      Installed OMV 4 (on top of Debian 9) today on my new rig and wanted to import my raid 1 array from my OMV 2 install. OMV is installed on a USB drive and I have the flash memory plugin running.

      I can see the two disks (/dev/sda & /dev/sdb) and S.M.A.R.T. is OK. I can also see the array (naserwin:data) as /dev/md127, state: clean, level: mirror, with both /dev/sda and /dev/sdb listed as devices.

      However, I cannot see the file system when I open the File Systems tab. Any ideas? I'm no Linux pro and have been trying to fix this for three hours now.

      When I manually mount the array I get the following error:

      Source Code

      mount /dev/md127 /mnt
      mount: /dev/md127: more filesystems detected. This should not happen,
             use -t <type> to explicitly specify the filesystem type or
             use wipefs(8) to clean up the device.
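
      For what it's worth, the conflicting signatures the error is complaining about can be inspected without writing anything to the array; a small read-only diagnostic sketch:

      Source Code

      wipefs --no-act /dev/md127   # list every filesystem/raid signature found, change nothing
      blkid -p /dev/md127          # low-level probe of the device, bypassing the blkid cache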
      I've tried:

      Source Code

      root@vault:~# mdadm --readwrite /dev/md127
      root@vault:~# omv-mkconf mdadm
      update-initramfs: Generating /boot/initrd.img-4.9.0-6-amd64
      fdisk -l

      Source Code

      Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/md127: 3.7 TiB, 4000652656640 bytes, 7813774720 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes

      Disk /dev/sdc: 14.9 GiB, 16004415488 bytes, 31258624 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: gpt
      Disk identifier: 0B6085E9-3C5D-4EA6-806D-77B8DC090390

      Device        Start      End  Sectors  Size Type
      /dev/sdc1      2048  1050623  1048576  512M EFI System
      /dev/sdc2   1050624 15208447 14157824  6.8G Linux filesystem
      /dev/sdc3  15208448 31256575 16048128  7.7G Linux swap

      blkid

      Source Code

      root@vault:~# blkid
      /dev/sda: UUID="9213aed7-c464-cfd9-ed54-dc394e35e717" UUID_SUB="9b869989-36cf-bea2-1090-f83460b01d79" LABEL="naserwin:data" TYPE="linux_raid_member"
      /dev/sdb: UUID="9213aed7-c464-cfd9-ed54-dc394e35e717" UUID_SUB="6842526b-0f3c-b42d-fe85-3e3e637e737c" LABEL="naserwin:data" TYPE="linux_raid_member"
      /dev/sdc1: LABEL="MYLINUXLIVE" UUID="7A07-9138" TYPE="vfat" PARTUUID="9edba208-b524-4fc4-85f3-7d10bb62af34"
      /dev/sdc2: UUID="0d5571e3-a708-4941-9d0f-1bdb00bf2b58" TYPE="ext4" PARTUUID="769c70a4-243f-4b0a-a1a2-b5d1f5ed9bf0"
      /dev/sdc3: UUID="2d301b23-51dd-4df2-91ad-6dd77957866e" TYPE="swap" PARTUUID="db22d9c6-6907-4dcc-b300-457561692284"

      cat /proc/mdstat

      Source Code

      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active (auto-read-only) raid1 sdb[1] sda[0]
            3906887360 blocks super 1.2 [2/2] [UU]
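
      The "auto-read-only" flag only means nothing has been written to the array since it was assembled. For reference, a read-only sketch of the usual follow-up checks on the array and its member superblocks:

      Source Code

      mdadm --detail /dev/md127    # array state, UUID and member devices
      mdadm --examine /dev/sda     # per-member superblock; repeat for /dev/sdb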

      cat /etc/fstab

      Source Code

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      #
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sdb2 during installation
      UUID=0d5571e3-a708-4941-9d0f-1bdb00bf2b58 / ext4 noatime,nodiratime,errors=remount-ro 0 1
      # /boot/efi was on /dev/sdb1 during installation
      UUID=7A07-9138 /boot/efi vfat umask=0077 0 1
      # swap was on /dev/sdb3 during installation
      #UUID=2d301b23-51dd-4df2-91ad-6dd77957866e none swap sw 0 0
      tmpfs /tmp tmpfs defaults 0 0
      # >>> [openmediavault]
      # <<< [openmediavault]


    • Update:

      I can mount the array using mount /dev/md127 /raid -t ext4 and am able to view the files on it with ls /raid.

      However, the array doesn't show up as mounted in the OMV web GUI (I tried omv-mkconf mdadm).

      Update 2:

      cat /etc/mdadm/mdadm.conf shows ARRAY /dev/md/naserwin:data metadata=1.2 name=naserwin:data UUID=9213aed7:c464cfd9:ed54dc39:4e35e717
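
      As a side note, if that ARRAY line ever has to be rebuilt by hand instead of via omv-mkconf mdadm, mdadm can emit it itself; a sketch only, check for duplicate entries before appending:

      Source Code

      mdadm --detail --scan                            # print ARRAY lines for the assembled arrays
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # append them (watch for duplicates)
      update-initramfs -u                              # rebuild the initramfs with the new config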

      Update 3:
      I mounted the array at /vaultdata and then added the following to /etc/openmediavault/config.xml:

      Source Code

      <mntent>
        <uuid>9213aed7:c464cfd9:ed54dc39:4e35e717</uuid>
        <fsname>naserwin:data</fsname>
        <dir>/vaultdata</dir>
        <type>ext4</type>
        <opts>defaults,nofail</opts>
        <freq>0</freq>
        <passno>2</passno>
        <hidden>0</hidden>
      </mntent>

      I also added the following to /etc/fstab: UUID=9213aed7-c464cfd9-ed54dc39-4e35e717 /vaultdata ext4 defaults,nofail 0 2

      Now I can list the contents (ls /vaultdata). In the OMV web GUI a new file system shows up under File Systems, but it's not available (all options say n/a).
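
      One possible reason the entry shows up as n/a is that /etc/fstab was edited directly instead of letting OMV regenerate it from config.xml. A sketch of the re-sync, assuming the OMV 4 omv-mkconf tooling is available:

      Source Code

      omv-mkconf fstab   # regenerate the openmediavault-managed section of /etc/fstab from config.xml
      mount -a           # mount anything in fstab that is not mounted yet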


    • After some help on IRC, I think it might have to do with the following:

      Source Code

      wipefs /dev/md127
      offset               type
      ----------------------------------------------------------------
      0x3a37977f000        zfs_member   [filesystem]

      0x438                ext4   [filesystem]
                           LABEL: data
                           UUID:  dbb0fa51-526d-4b60-a37c-4edb12a05f34
      There are two filesystems on the device, or at least two filesystem signatures. Any idea how to fix this?
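
      In principle wipefs can erase just the stale zfs_member magic at its reported offset while keeping a backup, leaving the ext4 superblock at 0x438 untouched. This is a sketch only: the offset must match your own wipefs output, and whether this alone makes OMV detect the filesystem is untested here.

      Source Code

      # back up and erase only the signature at the given offset
      wipefs --backup --offset 0x3a37977f000 /dev/md127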
      Please search the forum for posts with the same issue. There seems to be a problem in Debian 9 with mdadm devices whose filesystems are not detected. This may be related to the kernel or to the userland tools. Currently I do not know how to fix that other than going back to the version where it works, backing up the data to another device and reinstalling OMV 4.
    • votdev wrote:

      Please search the forum for posts with the same issue. There seems to be a problem in Debian 9 with mdadm devices whose filesystems are not detected. This may be related to the kernel or to the userland tools. Currently I do not know how to fix that other than going back to the version where it works, backing up the data to another device and reinstalling OMV 4.
      Yes, I saw that there are quite a few posts about it. With help from fromport on IRC it was fixed in the following way (a rough command sketch follows below the list):
      • Remove /dev/sdb from the raid1
      • wipe /dev/sdb
      • Partition /dev/sdb
      • Make a new raid 1 array with one disk missing and add /dev/sdb to it
      • Copy all the contents of /dev/sda to /dev/sdb
      • Remove /dev/sda from its raid array
      • Wipe /dev/sda
      • Partition /dev/sda
      • Add /dev/sda to the raid array which has /dev/sdb
      • Sync/recover the raid array
      Disclaimer: I may have got a step wrong, since fromport did all the work.
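
      For anyone wanting to retrace those steps, here is a minimal command sketch of that procedure. The new array name /dev/md0, the GPT partitioning and the temporary mount points are assumptions, and the copy step depends on your data, so treat it as an outline rather than a recipe:

      Source Code

      # 1. Drop sdb out of the old whole-disk array
      mdadm /dev/md127 --fail /dev/sdb --remove /dev/sdb

      # 2. Wipe sdb and create a single Linux RAID partition on it
      wipefs --all /dev/sdb
      parted -s /dev/sdb mklabel gpt mkpart primary 0% 100% set 1 raid on

      # 3. Build a new, degraded raid 1 from that partition
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

      # 4. Create a filesystem and copy the data over from the old (now single-disk) array
      mkfs.ext4 /dev/md0
      mkdir -p /mnt/old /mnt/new
      mount -t ext4 /dev/md127 /mnt/old && mount /dev/md0 /mnt/new
      rsync -aHAX /mnt/old/ /mnt/new/

      # 5. Stop the old array, wipe and partition sda the same way, then add it
      umount /mnt/old && mdadm --stop /dev/md127
      wipefs --all /dev/sda
      parted -s /dev/sda mklabel gpt mkpart primary 0% 100% set 1 raid on
      mdadm /dev/md0 --add /dev/sda1

      # 6. Watch the rebuild/resync
      cat /proc/mdstat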


    • I added a mirrored raid array to an OMV 2.x VM and then upgraded it to 3.x and then 4.x. I couldn't get the array to stop working. I really have no idea what is wrong.
    • nettozzie wrote:

      You mean start working?
      No. I mean it was always working. Nothing I did broke it.
    • It seems that the OMV 2 raid setup was done as a 1:1 mirror of sda/sdb, in his case with an ext4 filesystem directly on the md device.
      The conversion basically meant creating a raid partition on the drives.
      After that, OMV 4 was happy again.
      nettozzie did a great job describing what I did on his system.
      There were some complications because the drives had both ext4 & zfs signatures on them; all of those had to be removed.
      Hope it helps other people who are struggling with this issue.
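
      If you rebuilt yours the same way, a quick sanity check of the new layout (device names /dev/sda1, /dev/sdb1 and /dev/md0 are assumptions matching the sketch above):

      Source Code

      lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sda /dev/sdb   # sda1/sdb1 should show linux_raid_member
      wipefs --no-act /dev/md0                           # should now report only the ext4 signature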
    • nettozzie wrote:

      OMV 2 makes a WHOLE disk raid, while OMV4 uses a linux raid PARTITION. That's why OMV4 doesn't understand the raid array made in OMV2.
      No. That is incorrect. ALL OMV versions use the entire disk when you create an array from the OMV web interface. Also, ALL OMV versions will "understand" an array using the entire disk or raid partitions.