Possible to create an LVM across 2 x RAID5 and retain the info?

    • OMV 0.5
    • Possible to create an LVM across 2 x RAID5 and retain the info?

      Hi guys

      I already have 2 x RAID 5 arrays configured on the same OMV machine. Does anyone know if it is possible to create an LVM volume spanning the two (to make one big drive) while retaining the information already on the two arrays?

      Thanks
    • Yes, but use AuFS instead of LVM; it can pool the two arrays without touching the data already on them.

      Greetings
      David
      "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"

      Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.


      Upload Logfile via WebGUI/CLI
      #openmediavault on freenode IRC | German & English | GMT+1
      Absolutely no Support via PM!

      I host parts of the omv-extras.org Repository, the OpenMediaVault Live Demo and the pre-built PXE Images. If you want you can take part and help covering the costs by having a look at my profile page.
    • OK, installed it and read everything; I kind of understand how it works now.

      The part I am struggling with is that I already have directories (i.e. SERIES, MOVIES, PERSONAL, etc.) on the RAID, so how do I set up the branches? Must I move all the current folders into a new folder called d1 and then use that as branch 1 on the one RAID, and do the same on the other RAID as branch 2?
    • Create a shared folder at the root of each array. Use those as the branches.
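      Outside the OMV GUI, that branch setup boils down to a single aufs mount. A minimal sketch, assuming /media/raid1 and /media/raid2 are the mount points of the two arrays and /media/pool is the merged view (all three paths are assumptions, not from this thread):

      ```shell
      #!/bin/sh
      # Sketch: pool two mounted RAID arrays with aufs.
      # BR1/BR2/POOL are assumed paths; substitute your real mount points.
      BR1=/media/raid1
      BR2=/media/raid2
      POOL=/media/pool

      # aufs branch spec: both branches read-write; existing files on each
      # array stay where they are and appear merged under the pool.
      OPTS="br=${BR1}=rw:${BR2}=rw"

      echo "mount -t aufs -o ${OPTS} none ${POOL}"
      # To actually mount (requires aufs support in the kernel):
      # mount -t aufs -o "${OPTS}" none "${POOL}"
      ```

      By default new writes land on the first branch listed; aufs create policies (e.g. most-free-space) can change that.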
      omv 4.1.13 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.13
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • No need to move anything. Basically, it is just telling aufs which drive to use.
    • any bright ideas why i would get this message:

      Failed to execute command 'omv-mkraid /dev/md8 -l raid5 -n 3 -N RAID2 /dev/sdc /dev/sde /dev/sdf 2>&1':
      mdadm: layout defaults to left-symmetric
      mdadm: chunk size defaults to 512K
      mdadm: layout defaults to left-symmetric
      mdadm: layout defaults to left-symmetric
      mdadm: layout defaults to left-symmetric
      mdadm: super1.x cannot open /dev/sde: Device or resource busy
      mdadm: /dev/sde is not suitable for this array.
      mdadm: layout defaults to left-symmetric
      mdadm: super1.x cannot open /dev/sdf: Device or resource busy
      mdadm: /dev/sdf is not suitable for this array.
      mdadm: create aborted
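      "Device or resource busy" from mdadm usually means something else (an old md array or a device-mapper/LVM mapping) still claims those disks. A few read-only checks can show what is holding them; this is a sketch, with device names taken from the error above:

      ```shell
      #!/bin/sh
      # Read-only diagnostics for disks mdadm reports as busy.
      for DEV in /dev/sde /dev/sdf; do
          echo "--- ${DEV} ---"
          # Already part of an existing md array?
          grep "$(basename "${DEV}")" /proc/mdstat 2>/dev/null || true
          # Claimed by device-mapper / LVM? (pvs usually needs root)
          pvs 2>/dev/null | grep "${DEV}" || true
      done
      ```

      If either check prints the device, the old array or LVM volume has to be stopped/removed before mdadm can create a new array on it.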
    • That array looks ok. What about: fdisk -l
      I take it there is nothing on the second array?
    • Yup, that is my current working array, no problems there.

      Yup, I am creating another new array from scratch, here are the results of fdisk -l:

      Source Code

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda doesn't contain a valid partition table

      Disk /dev/sdb: 2000.4 GB, 2000397852160 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x60d30435

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               1      243201  1953512001    5  Extended
      /dev/sdb5               1      243201  1953511969+  8e  Linux LVM

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x346bd15a

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdd1               1      243201  1953512001    5  Extended
      /dev/sdd5               1      243201  1953511969+  8e  Linux LVM

      Disk /dev/sdf: 1000.2 GB, 1000203804160 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdf doesn't contain a valid partition table

      Disk /dev/sdc: 1000.2 GB, 1000203804160 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sde: 1000.2 GB, 1000203804160 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sde doesn't contain a valid partition table

      Disk /dev/sdg: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00080f7b

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdg1   *           1       29478   236775424   83  Linux
      /dev/sdg2           29478       30402     7420929    5  Extended
      /dev/sdg5           29478       30402     7420928   82  Linux swap / Solaris

      Disk /dev/md127: 4000.8 GB, 4000792444928 bytes
      2 heads, 4 sectors/track, 976755968 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
      Disk identifier: 0x00000000

      Disk /dev/md127 doesn't contain a valid partition table

      Disk /dev/dm-0: 2000.4 GB, 2000406183936 bytes
      255 heads, 63 sectors/track, 243202 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/dm-0 doesn't contain a valid partition table



      I see there is still a dm-0 device from my old LVM setup that I thought I had removed completely. Do you think that might be causing the problem?
    • I have never used LVM, but I would guess that is the problem. Everything else looks right.
    • Try: dmsetup remove dm-0

      If that name isn't right, run dmsetup ls to get the right name.
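      If the mapping comes from old LVM metadata still written on the disks, removing the mapping alone may not be enough; the leftover signatures can also need wiping before mdadm will accept the drives. A destructive sketch (the actual commands are commented out for safety, and the device names are assumptions carried over from the error earlier in the thread):

      ```shell
      #!/bin/sh
      # DESTRUCTIVE sketch: clear leftover LVM/filesystem signatures after
      # 'dmsetup remove'. Only uncomment on disks whose data you intend to lose.
      for DEV in /dev/sde /dev/sdf; do
          echo "would wipe signatures on ${DEV}"
          # pvremove -ff "${DEV}"   # drop the old LVM physical-volume label
          # wipefs -a "${DEV}"      # erase any remaining on-disk signatures
      done
      ```

      After that, re-running the RAID creation from the OMV GUI should no longer hit "Device or resource busy".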