OMV doesn't recognize existing partitions on HDDs

    • OMV 5.x (beta)
    • Resolved
    • Update

      Hi there,

      I want to upgrade my OMV 3 system to a current OMV 5 system (even though it is still in beta).
      To do this, I removed my system SSD (with OMV 3 installed) and added a new drive for OMV 5. I plugged in all data drives and installed the new OMV 5. Everything worked well, except that OMV doesn't recognize some of my data partitions anymore.

      I found out that the partitions are not mounted, and I tried to add the fstab entries from the old system manually, but that doesn't seem to help either.
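
      For reference, this is only a minimal sketch of the kind of fstab line involved, not the exact entries from this system: an ext4 data partition mounted by UUID, where the UUID and the mount point /mnt/data are placeholders (the real UUID would come from blkid).

      Source Code

      # example /etc/fstab entry for one ext4 data partition (UUID and mount point are placeholders)
      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/data  ext4  defaults,nofail  0  2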

      When I switch back to the old SSD with OMV 3, everything works fine again.

      Here is some additional info:

      Maybe you have a hint as to what I am missing?

      HDDs:

      lsblk:


      blkid:
    • Yes, you missed that there is no upgrade path from 3 to 5. You would need to upgrade from 3 to 4 first. I'm not sure there is a supported upgrade path from 4 to 5 yet.

      Regardless, upgrading from 3 to 4 at this point is a pain due to closed repositories, etc. (search the forum). If you really want 5, disconnect your data drives and do a clean install of OMV 5.
      Air Conditioners are a lot like PC's... They work great until you open Windows.

    • Thank you for the replies. I agree the word "Update" is a bit confusing. Indeed, I did a fresh install of OMV 5 to a new SSD; my old OMV 3 was also on another SSD. No RAID is involved, I only used the SnapRAID plugin for the data drives. All data drives are formatted with ext4.

      I totally understand that I have to reconfigure everything, but I fail at mounting my data drives in the UI. As shown in the starting post, the new OMV 5 installation recognizes all HDDs (sda-sdi; sde is the system SSD), but only 3 of my data partitions (sdc1, sdd1 and sdg1) show up via the blkid command.

      I can mount all the other filesystems manually via the console (as checked via lsblk after manually mounting them), so they didn't get destroyed; they still exist on the drives. But for some reason the OMV 5 UI (and the blkid command) don't recognize the remaining 5 data partitions, so I cannot select them in the UI to mount them.
      I can also switch my new OMV 5 SSD back to my old OMV 3 SSD, boot the old system, and everything still works fine. I kept the old system as a backup!
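
      For illustration, a minimal version of that manual console check might look like this; the device name and the mount point /mnt/test are only examples:

      Source Code

      # probe one partition directly (prints nothing for the affected drives)
      blkid /dev/sda1

      # mount it by hand and confirm via lsblk that the filesystem is intact
      mkdir -p /mnt/test
      mount /dev/sda1 /mnt/test
      lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sda
      umount /mnt/test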


    • To be honest, I simplified a bit:


      Here is exactly what my drives are actually supposed to do:

      Disk       Partitions       Usage                                              Behaviour in OMV 5
      /dev/sda   /dev/sda1        Data (ext4)                                        not shown in blkid and UI
      /dev/sdb   /dev/sdb1        Data (ext4)                                        not shown in blkid and UI
      /dev/sdc   /dev/sdc1        Data (ext4)                                        shown in blkid and UI
      /dev/sdd   /dev/sdd1        Data (ext4)                                        shown in blkid and UI
      /dev/sde   /dev/sde1,2,5    System partitions (swap, boot, system, etc.)       shown in blkid and UI
      /dev/sdf   /dev/sdf1        Data (ext4)                                        not shown in blkid and UI
      /dev/sdg   /dev/sdg1        Data (ext4)                                        shown in blkid and UI
      /dev/sdh   unclear          Spare disk (FS unclear)                            not shown in blkid and UI
      /dev/sdi   unclear          Spare disk (FS unclear, maybe btrfs for testing)   not shown in blkid and UI




      All the filesystems are recognized and mounted in my OMV 3 installation. Only the ones marked above as "shown" appear in the UI of OMV 5 (but, as geaves mentions, all disks are listed under Storage --> Disks).
      So to me sda1, sdb1 and sdf1 seem like the outstanding candidates (maybe also sdh1 and sdi1, but I have to double-check their status).


      I will check and post the output of wipefs -n /dev/sdx for all drives when I am at home this evening.
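
      A small loop like the one below could collect that output for all drives in one go; the drive letters are simply taken from the table above:

      Source Code

      # read-only scan: -n (--no-act) only reports signatures, nothing is erased
      for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sdg /dev/sdh /dev/sdi; do
          echo "== $d =="
          wipefs -n "$d"
      done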
    • So here is the output of wipefs -n /dev/sdx. It seems we're getting closer...


      Some of the HDDs were used in a FreeNAS ZFS environment before they were put into my OMV 3 system.

      Source Code

      root@OMV:~# wipefs -n /dev/sda
      DEVICE OFFSET TYPE UUID LABEL
      sda 0x57541e3f000 zfs_member
      sda 0x57541e3e000 zfs_member
      sda 0x57541e3d000 zfs_member
      sda 0x57541e3c000 zfs_member
      sda 0x57541e3b000 zfs_member
      sda 0x57541e3a000 zfs_member
      sda 0x57541e39000 zfs_member
      sda 0x57541e38000 zfs_member
      sda 0x57541e37000 zfs_member
      sda 0x57541e36000 zfs_member
      sda 0x57541e35000 zfs_member
      sda 0x57541e34000 zfs_member
      sda 0x57541e33000 zfs_member
      sda 0x57541e7f000 zfs_member
      sda 0x57541e7e000 zfs_member
      sda 0x57541e7d000 zfs_member
      sda 0x57541e7c000 zfs_member
      sda 0x57541e7b000 zfs_member
      sda 0x57541e7a000 zfs_member
      sda 0x57541e79000 zfs_member
      sda 0x57541e78000 zfs_member
      sda 0x57541e77000 zfs_member
      sda 0x57541e76000 zfs_member
      sda 0x57541e75000 zfs_member
      sda 0x57541e74000 zfs_member
      sda 0x57541e73000 zfs_member
      sda 0x57541e72000 zfs_member
      sda 0x57541e71000 zfs_member
      sda 0x57541e70000 zfs_member
      sda 0x200 gpt
      sda 0x57541e95e00 gpt
      sda 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdb
      DEVICE OFFSET TYPE UUID LABEL
      sdb 0x57541e3f000 zfs_member
      sdb 0x57541e3e000 zfs_member
      sdb 0x57541e3d000 zfs_member
      sdb 0x57541e3c000 zfs_member
      sdb 0x57541e3b000 zfs_member
      sdb 0x57541e3a000 zfs_member
      sdb 0x57541e39000 zfs_member
      sdb 0x57541e38000 zfs_member
      sdb 0x57541e37000 zfs_member
      sdb 0x57541e36000 zfs_member
      sdb 0x57541e35000 zfs_member
      sdb 0x57541e34000 zfs_member
      sdb 0x57541e33000 zfs_member
      sdb 0x57541e7f000 zfs_member
      sdb 0x57541e7e000 zfs_member
      sdb 0x57541e7d000 zfs_member
      sdb 0x57541e7c000 zfs_member
      sdb 0x57541e7b000 zfs_member
      sdb 0x57541e7a000 zfs_member
      sdb 0x57541e79000 zfs_member
      sdb 0x57541e78000 zfs_member
      sdb 0x57541e77000 zfs_member
      sdb 0x57541e76000 zfs_member
      sdb 0x57541e75000 zfs_member
      sdb 0x57541e74000 zfs_member
      sdb 0x57541e73000 zfs_member
      sdb 0x57541e72000 zfs_member
      sdb 0x57541e71000 zfs_member
      sdb 0x57541e70000 zfs_member
      sdb 0x200 gpt
      sdb 0x57541e95e00 gpt
      sdb 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdc
      DEVICE OFFSET TYPE UUID LABEL
      sdc 0x200 gpt
      sdc 0x57541e95e00 gpt
      sdc 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdd
      DEVICE OFFSET TYPE UUID LABEL
      sdd 0x200 gpt
      sdd 0x57541e95e00 gpt
      sdd 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdf
      DEVICE OFFSET TYPE UUID LABEL
      sdf 0x57541e3f000 zfs_member
      sdf 0x57541e3e000 zfs_member
      sdf 0x57541e3d000 zfs_member
      sdf 0x57541e3c000 zfs_member
      sdf 0x57541e3b000 zfs_member
      sdf 0x57541e3a000 zfs_member
      sdf 0x57541e39000 zfs_member
      sdf 0x57541e38000 zfs_member
      sdf 0x57541e37000 zfs_member
      sdf 0x57541e36000 zfs_member
      sdf 0x57541e35000 zfs_member
      sdf 0x57541e34000 zfs_member
      sdf 0x57541e33000 zfs_member
      sdf 0x57541e7f000 zfs_member
      sdf 0x57541e7e000 zfs_member
      sdf 0x57541e7d000 zfs_member
      sdf 0x57541e7c000 zfs_member
      sdf 0x57541e7b000 zfs_member
      sdf 0x57541e7a000 zfs_member
      sdf 0x57541e79000 zfs_member
      sdf 0x57541e78000 zfs_member
      sdf 0x57541e77000 zfs_member
      sdf 0x57541e76000 zfs_member
      sdf 0x57541e75000 zfs_member
      sdf 0x57541e74000 zfs_member
      sdf 0x57541e73000 zfs_member
      sdf 0x57541e72000 zfs_member
      sdf 0x57541e71000 zfs_member
      sdf 0x57541e70000 zfs_member
      sdf 0x200 gpt
      sdf 0x57541e95e00 gpt
      sdf 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdg
      DEVICE OFFSET TYPE UUID LABEL
      sdg 0x200 gpt
      sdg 0x57541e95e00 gpt
      sdg 0x1fe PMBR
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdh
      root@OMV:~#
      root@OMV:~# wipefs -n /dev/sdi
      DEVICE OFFSET TYPE UUID LABEL
      sdi 0x57541e3f000 zfs_member
      sdi 0x57541e3e000 zfs_member
      sdi 0x57541e3d000 zfs_member
      sdi 0x57541e3c000 zfs_member
      sdi 0x57541e3b000 zfs_member
      sdi 0x57541e3a000 zfs_member
      sdi 0x57541e39000 zfs_member
      sdi 0x57541e38000 zfs_member
      sdi 0x57541e37000 zfs_member
      sdi 0x57541e36000 zfs_member
      sdi 0x57541e35000 zfs_member
      sdi 0x57541e34000 zfs_member
      sdi 0x57541e33000 zfs_member
      sdi 0x57541e7f000 zfs_member
      sdi 0x57541e7e000 zfs_member
      sdi 0x57541e7d000 zfs_member
      sdi 0x57541e7c000 zfs_member
      sdi 0x57541e7b000 zfs_member
      sdi 0x57541e7a000 zfs_member
      sdi 0x57541e79000 zfs_member
      sdi 0x57541e78000 zfs_member
      sdi 0x57541e77000 zfs_member
      sdi 0x57541e76000 zfs_member
      sdi 0x57541e75000 zfs_member
      sdi 0x57541e74000 zfs_member
      sdi 0x57541e73000 zfs_member
      sdi 0x57541e72000 zfs_member
      sdi 0x57541e71000 zfs_member
      sdi 0x57541e70000 zfs_member
      root@OMV:~#


      The disks that disappeared (sda, sdb, sdf and sdi) are the ones with ZFS residues. sdh is empty, so no wonder it doesn't show up. sdc, sdd and sdg don't have any ZFS residues and work fine.

      Now my Questions:
      Why does it work in OMV3?
      Can I fix that without formatting and moving all my data?


    • KJaneway wrote:

      Can I fix that without formatting and moving all my data?
      Yes, but there isn't a way to do this in one go; you will have to remove each offset one at a time, then use wipefs -n to check.

      So for /dev/sda, the first drive you've posted: wipefs --offset 0x57541e3f000 /dev/sda, then check whether it has been removed.

      Wash, rinse, repeat :) Once those are removed, the drive should be usable :thumbup:
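
      As a rough sketch of that per-offset approach (the two offsets below are just the first ones reported for sda above; every drive has its own list from wipefs -n):

      Source Code

      # erase one reported signature at a time, then re-check with the read-only scan
      wipefs --offset 0x57541e3f000 /dev/sda
      wipefs --offset 0x57541e3e000 /dev/sda
      wipefs -n /dev/sda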
      Raid is not a backup! Would you go skydiving without a parachute?
    • Thanks for the help. I found in the man pages of wipefs the solution to erase all signatures with
      wipefs --all --backup /dev/sdx
      and after that restore the two GPT entries and the PMBR entry
      with dd if=~/wipefs-sdb-0x00000438.bak of=/dev/sdb seek=$((0x00000438)) bs=1 conv=notrunc

      But I feel like this option is much riskier.
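
      Sketched out, that man-page approach looks roughly like this for one drive; the backup file name and offset are taken from the man-page example, not from this system (wipefs names each backup after the offset it actually erases, so they will differ per drive):

      Source Code

      # erase every signature, writing a backup of each erased block to ~/wipefs-sdb-<offset>.bak
      wipefs --all --backup /dev/sdb

      # restore one erased signature from its backup file; the seek offset must match the file name
      dd if=~/wipefs-sdb-0x00000438.bak of=/dev/sdb seek=$((0x00000438)) bs=1 conv=notrunc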
    • KJaneway wrote:

      Thanks for the help. I found in the man pages of wipefs the solution to erase all signatures with
      :thumbup:

      Removing the offsets would surely be enough; I would do that first, but you seem to know what you're doing anyway.

      EDIT: I see now what you are doing with the dd option. TBH, I've only ever suggested removing the signatures one by one; having also read the man pages, I can see your thinking. Would I do it? No, I would do it the boring way :)

      KJaneway wrote:

      Why does it work in OMV3?
      No idea, but I have seen this come up so many times recently where users have upgraded from 3 to 4. One user had his RAID come up as active (read-only); further investigation showed residual ZFS signatures. Removing them one by one got the RAID back up and running with no data loss.

      This is your system, so it's really your choice and a question of what you feel confident with; as I said, I would go with what I know works.
      Raid is not a backup! Would you go skydiving without a parachute?


    • Let me put it this way: I am a noob who knows Google, YouTube and man pages.
      I wouldn't have found the solution without your help.

      Thank you a lot. Now everything works and the UI recognizes all drives.

      EDIT: I changed the thread title to something more informative; maybe it's useful for someone else.