OMV create raid5 without partitions - bad or good practice ?

    • OMV create raid5 without partitions - bad or good practice ?

      When I started working with Linux RAID, the recommendation at the time was to create equal-sized partitions on all disks that will participate in a software RAID (1, 5, 6 ...) and then assemble the array with a command like the one below:

      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
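
      The equal partitions themselves were created beforehand with something like this (a rough sketch; the device names and the 2999GiB end point are only examples, the point is that the end is identical on every disk):

      for d in sda sdb sdc sdd; do
          parted -s /dev/$d mklabel gpt
          parted -s /dev/$d mkpart primary 1MiB 2999GiB   # same fixed end on every disk -> equal partitions
          parted -s /dev/$d set 1 raid on                 # mark the partition as Linux RAID
      done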

      On OMV I can see that the RAID is created without partitions, i.e. on whole disks, so it uses a command similar to the one below.


      mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

      The reason for my question: if a failed disk is replaced with one from a different manufacturer (for example replacing a WD Red with a Seagate), the sticker says it is the "same size", but in the real world (I am a former HPE engineer) I have seen that the sizes are often not identical and can differ by a couple of bytes ...

      Because of that, it is possible that there will be an issue re-adding the replacement disk to the array if the disk sizes differ.
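
      One way to see how much replacement disks really differ is to compare the exact byte count of each member and check how much of each device the array actually uses (device and array names are just examples):

      for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do echo "$d $(blockdev --getsize64 $d)"; done
      mdadm --detail /dev/md0 | grep 'Used Dev Size'

      As far as I know, mdadm also has a --size option (in KiB) at creation time, so you can deliberately use a bit less than the full device and leave headroom for a slightly smaller replacement.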

      So my question is: have you had any similar issues, and how did you resolve them?
      omv 5.0.14 (Usul) | 64 bit | omvextrasorg 5.1.4 | Docker 5:19.03.4~3-0~debian-buster
      HP Microserver Gen8 | Xeon E3-1260L | 8 GB memory | 4x 3TB WD Red | Samsung 840 120GB EVO (OS) | Intel NVMe 256GB SSD (Docker)


    • Hello,

      I am not an expert in creating / managing RAID, but I am currently in the same configuration as you (a RAID5 on 4 disks, built on partitions).

      I ordered new disks to rebuild my RAID 5 and I will do it with LVM on top (so without partitions), because it is really much more flexible when you want to add space to your VG and your LV.
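
      Roughly what growing it looks like, assuming ext4 and reusing the vg1000 / lv names from my lsblk output below (the new device name is only a placeholder):

      pvcreate /dev/sdX1                     # new disk or partition, placeholder name
      vgextend vg1000 /dev/sdX1              # add it to the volume group
      lvextend -l +100%FREE /dev/vg1000/lv   # grow the logical volume into the new free space
      resize2fs /dev/vg1000/lv               # grow the filesystem online (assuming ext4)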

      Currently I have this configuration, but since I have a project to redo everything under VMware, I am still thinking about how to do it.

      Source Code

      root@openmediavault:~# lsblk
      NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
      sda               8:0    0   931G  0 disk
      └─sda1            8:1    0   931G  0 part  /srv/dev-disk-by-label-SavDD
      sdb               8:16   0   1,8T  0 disk
      ├─sdb1            8:17   0   2,4G  0 part
      ├─sdb2            8:18   0     2G  0 part
      ├─sdb3            8:19   0     1K  0 part
      └─sdb5            8:21   0   1,8T  0 part
        └─md2           9:2    0   5,5T  0 raid5
          └─vg1000-lv 253:0    0   5,5T  0 lvm   /srv/dev-disk-by-label-1.42.6-5644
      sdc               8:32   0   1,8T  0 disk
      ├─sdc1            8:33   0   2,4G  0 part
      ├─sdc2            8:34   0     2G  0 part
      ├─sdc3            8:35   0     1K  0 part
      └─sdc5            8:37   0   1,8T  0 part
        └─md2           9:2    0   5,5T  0 raid5
          └─vg1000-lv 253:0    0   5,5T  0 lvm   /srv/dev-disk-by-label-1.42.6-5644
      sdd               8:48   0   1,8T  0 disk
      ├─sdd1            8:49   0   2,4G  0 part
      ├─sdd2            8:50   0     2G  0 part
      ├─sdd3            8:51   0     1K  0 part
      └─sdd5            8:53   0   1,8T  0 part
        └─md2           9:2    0   5,5T  0 raid5
          └─vg1000-lv 253:0    0   5,5T  0 lvm   /srv/dev-disk-by-label-1.42.6-5644
      sde               8:64   0   1,8T  0 disk
      ├─sde1            8:65   0   2,4G  0 part
      ├─sde2            8:66   0     2G  0 part
      ├─sde3            8:67   0     1K  0 part
      └─sde5            8:69   0   1,8T  0 part
        └─md2           9:2    0   5,5T  0 raid5
          └─vg1000-lv 253:0    0   5,5T  0 lvm   /srv/dev-disk-by-label-1.42.6-5644
      sdf               8:80   1 931,5G  0 disk
      ├─sdf1            8:81   1 255,2G  0 part  /
      ├─sdf2            8:82   1  16,2G  0 part  [SWAP]
      └─sdf3            8:83   1   660G  0 part  /srv/dev-disk-by-label-VM
      sdg               8:96   0 119,2G  0 disk
      └─sdg1            8:97   0 119,2G  0 part  /srv/dev-disk-by-label-SSD
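
      For what it's worth, you can see which partitions md2 is built from and the per-device size it uses with:

      cat /proc/mdstat
      mdadm --detail /dev/md2
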
      AMD Ryzen 5 2400G on Asus TUF B450M-PLUS - 8Gb RAM - 3 * 3To RAID5 on LSI Megaraid SAS 9260-8i and 3 SSD in Fractal Design Node 804 Black
      OS: OMV 4.1.26-1
    • diego wrote:

      @Methy, I don't really see the point of using LVM+RAID+{your filesystem of choice} instead of ZFS. Have you considered it?
      The tree above is quite old and this RAID came from a former Synology NAS.
      LVM is quite convenient: it lets you easily extend a Volume Group and then a Logical Volume inside a VM when your filesystem is full.

      I have not tested ZFS yet, so I do not know how its performance compares to a hardware RAID5 on a RAID card.
      But I have a friend who uses it and is happy with it. I will try it one day :)
      I still have the old disks from my software RAID available, so I will build a ZFS RAID with them to compare.
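
      Something like this should be enough for a quick raidz1 comparison on the spare disks (pool name and device names are only examples):

      zpool create tank raidz1 /dev/sdx /dev/sdy /dev/sdz
      zfs create tank/data
      zpool status tank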
      AMD Ryzen 5 2400G on Asus TUF B450M-PLUS - 8Gb RAM - 3 * 3To RAID5 on LSI Megaraid SAS 9260-8i and 3 SSD in Fractal Design Node 804 Black
      OS: OMV 4.1.26-1