RAID5 with 3 hard drives (12TB) creates 4 partitions...!?

    • OMV 4.x
    • Resolved
    • ryecoaaron wrote:

      piet wrote:

      wipefs -a /dev/sdb
      wipefs -a /dev/sdc
      wipefs -a /dev/sdd
      You could've used the actual command I posted :)
      The output of wipefs indicated it didn't do anything. So, that is probably why it didn't fix anything. What is the output of:

      parted /dev/sdb mklabel gpt
      parted /dev/sdb print

      Repeat for each drive.
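      If it helps, the same pair of commands can be run over all three drives with a small loop (just a sketch, using the sdb/sdc/sdd names from this thread; parted's -s flag skips the interactive confirmation):

      Shell-Script

      # sketch: write a fresh GPT label to each drive and print the (empty) table
      for d in sdb sdc sdd; do
          parted -s /dev/$d mklabel gpt
          parted /dev/$d print
      done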
      Hello,

      OK, I did what you asked (with no RAID on the disks at the moment).

      Source Code

      root@OMV:~# parted /dev/sdb mklabel gpt
      Information: You may need to update /etc/fstab.
      root@OMV:~# parted /dev/sdb print
      Model: ATA ST12000NE0007-2G (scsi)
      Disk /dev/sdb: 12.0TB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt
      Disk Flags:
      Number  Start  End  Size  File system  Name  Flags
      root@OMV:~# parted /dev/sdc mklabel gpt
      Information: You may need to update /etc/fstab.
      root@OMV:~# parted /dev/sdc print
      Model: ATA ST12000NE0007-2G (scsi)
      Disk /dev/sdc: 12.0TB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt
      Disk Flags:
      Number  Start  End  Size  File system  Name  Flags
      root@OMV:~# parted /dev/sdc prin^C
      root@OMV:~# parted /dev/sdd mklabel gpt
      Information: You may need to update /etc/fstab.
      root@OMV:~# parted /dev/sdd print
      Model: ATA ST12000NE0007-2G (scsi)
      Disk /dev/sdd: 12.0TB
      Sector size (logical/physical): 512B/4096B
      Partition Table: gpt
      Disk Flags:
      Number  Start  End  Size  File system  Name  Flags
      root@OMV:~#



      The results are strange, aren't they?

      Regards,
      Piet
    • piet wrote:

      The results are strange, aren't they?
      Nope. That is exactly what I was expecting. There are no partitions on the drives now. If cat /proc/mdstat shows any arrays, reboot. Then go to the mdadm tab and create an array. It should not use partitions. If the drives in the list have numbers or letters after /dev/sdX, then don't create the array.
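      A quick way to double-check this from the shell before creating the array (just a sketch, using the device names from this thread):

      Shell-Script

      # should not list any md arrays before you start
      cat /proc/mdstat
      # each disk should show type "disk" with no child partitions underneath
      lsblk -o NAME,SIZE,TYPE /dev/sdb /dev/sdc /dev/sdd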
      omv 4.1.19 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Hello,

      Sorry, but I don't understand everything...

      So I ran the command cat /proc/mdstat:





      But after that I don't know how to create an array with the mdadm tab... Do you have documentation about this? Sorry.

      Thank you.



      Regards,

      Piet


      Update: I'll follow digitalocean.com/community/tut…ith-mdadm-on-ubuntu-16-04



      and when it asked whether I wanted to create the array, I said yes. Is that right?

      Source Code

      root@OMV:~# sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
      mdadm: layout defaults to left-symmetric
      mdadm: layout defaults to left-symmetric
      mdadm: chunk size defaults to 512K
      mdadm: /dev/sdb appears to be part of a raid array:
          level=raid0 devices=0 ctime=Thu Jan 1 01:00:00 1970
      mdadm: partition table exists on /dev/sdb but will be lost or
          meaningless after creating array
      mdadm: /dev/sdc appears to be part of a raid array:
          level=raid0 devices=0 ctime=Thu Jan 1 01:00:00 1970
      mdadm: partition table exists on /dev/sdc but will be lost or
          meaningless after creating array
      mdadm: /dev/sdd appears to be part of a raid array:
          level=raid0 devices=0 ctime=Thu Jan 1 01:00:00 1970
      mdadm: partition table exists on /dev/sdd but will be lost or
          meaningless after creating array
      mdadm: size set to 11718754304K
      mdadm: automatically enabling write-intent bitmap on large array
      Continue creating array?
      Continue creating array? (y/n) y
      mdadm: Defaulting to version 1.2 metadata
      mdadm: array /dev/md0 started.
      root@OMV:~#

      Oh no, it is still the same problem...




      But before that, it looked right, didn't it?
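      One way to see what was actually created would be something like this (a sketch, using the device names from this thread):

      Shell-Script

      # show level, member disks and sync status of the new array
      mdadm --detail /dev/md0
      # show resync progress
      cat /proc/mdstat
      # list every block device the kernel currently knows about
      cat /proc/partitions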


    • piet wrote:

      But after that I don't know how to create an array with the mdadm tab... Do you have documentation about this? Sorry.
      Sorry, it is the raid management tab. Didn't you create the arrays with that earlier?

      piet wrote:

      Oh no, it is still the same problem...
      I didn't notice until just now that the first array is correct and the three additional ones are marked as false. This means OMV is incorrectly detecting these because they exist in /proc/partitions. This shouldn't be causing a problem, and you should be able to create a filesystem on the newly created array /dev/md0.

      What is the output of: cat /proc/partitions
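      For reference, creating and mounting a filesystem on the new array from the command line would look roughly like this (a sketch only; in OMV this is normally done from the File Systems tab, and the label and mount point below are just example names):

      Shell-Script

      # put an ext4 filesystem on the array and mount it
      # ("data" and /srv/data are example names, not OMV defaults)
      mkfs.ext4 -L data /dev/md0
      mkdir -p /srv/data
      mount /dev/md0 /srv/data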
      omv 4.1.19 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • This one is really interesting, thanks.

      It shows approx. 50 times longer rebuild times with a busy array.

      Let's take the numbers @piet has provided via screenshot (array creation showing /proc/mdstat). His array is 22888192 MB in size and a rebuild runs at slightly less than 150 MB/s. 22888192 / 149 --> 153612 seconds, or roughly 42.7 hours. Let's say rebuilding a totally idle RAID 5 like his takes 42 hours. If the array later contains data and is accessed, we're talking about 42*50 hours: 2100 hours, or 87.5 days, or 12.5 weeks, or almost 3 months. Within these 3 months there's no redundancy at all, and the first URE on one of the two remaining disks will stop the rebuild anyway and the whole array is lost.
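      The same arithmetic as a quick shell sketch (the array size and resync speed are taken from the screenshot above; the 50x factor for a busy array comes from the linked comparison):

      Shell-Script

      # rough rebuild-time estimate
      ARRAY_MB=22888192                    # usable array size in MB
      SPEED_MBS=149                        # observed resync speed in MB/s
      IDLE_S=$((ARRAY_MB / SPEED_MBS))     # ~153612 seconds
      echo "idle rebuild: $((IDLE_S / 3600)) hours"            # ~42 hours
      echo "busy rebuild: $((IDLE_S * 50 / 3600 / 24)) days"   # ~88 days, almost 3 months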

      I've never seen a more insane waste of a disk than this.

      piet wrote:

      backup
      RAID is not backup. I still don't get what you're trying to achieve...
    • first URE on one of the two remaining disks will stop the rebuild anyway and the whole array is lost.
      why is this the case, by the way?
      I think the rebuild should continue, but notify about the problem.

      @piet: I will not help you, because I would find it unethical. At the first problem with this RAID you will be in deep trouble or may even lose data.
    • piet wrote:

      I post this thread for a solution, not to have some opinions
      With this attitude you might be better off buying a commercial NAS box and then calling their customer support, who will help you lose data. This is a community forum where users try to take care of each other. As such we try to protect other users from silly stuff like storage setups that are broken by design (RAID5 in this decade with today's drive sizes).

      henfri wrote:

      why is this the case by the way?
      I neither know nor care (I'm not crazy, so I don't use mdraid with large drives in the 2nd most useless mode, called RAID5).

      But you (and @piet) might want to check the mdraid wiki, starting e.g. from here and following all the links there. TL;DR: a RAID5 with drive sizes measured in TB will only work until it's needed; once a drive fails, the whole array will soon be gone anyway. It's just a waste of disks and resources.
      Did you try to put a file system on your /dev/md0 as ryecoaaron suggested in post no. 24? If you insist on using this setup, you could try this first and, if it works, ignore the other partitions.

      In your post no. 23 you show us a screenshot where sdb to sdd have no partitions at all. If the mdadm command creates these weird additional partitions every time, it seems mdadm may have a problem or a bug with disks of that size.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
      OK, mdadm may not have a problem with disks of that size.

      And now it is working. How?

      As Ryecoaaron said, the fault was a partition or something else left on the disks.
      So to be sure, I just used the dd command:

      Shell-Script

      # dd if=/dev/zero of=/dev/sdb
      # dd if=/dev/zero of=/dev/sdc
      # dd if=/dev/zero of=/dev/sdd
      I stopped before the end, just after roughly 250GB had been written.
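      Zeroing that much of each drive works but takes a long time; partition tables and RAID superblocks normally live only at the very beginning and very end of a disk, so a shorter wipe along these lines would likely have been enough (a sketch, not what was actually run here):

      Shell-Script

      # wipe only the regions where partition tables and md superblocks live
      for d in sdb sdc sdd; do
          mdadm --zero-superblock /dev/$d 2>/dev/null          # remove md metadata if present
          dd if=/dev/zero of=/dev/$d bs=1M count=100           # first 100 MiB
          dd if=/dev/zero of=/dev/$d bs=1M count=100 \
             seek=$(( $(blockdev --getsz /dev/$d) / 2048 - 100 ))   # last 100 MiB
      done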



      And restarted the creation of the RAID 5.





      So it is working now. Just 981 minutes for resyncing, about 15 hours, which is reasonable. YES!


      Regards.

      Piet
    • piet wrote:

      Just 981 minutes for resyncing, about 15 hours, which is reasonable. YES!
      OMG!

      Do you know how HDDs are constructed? They're faster at the outer tracks and get slower once they're filled. The 15 hours you're talking about are a joke. Do you get that you're affected by the 'classical' RAID5 write hole as well? Do you get that resync times will differ dramatically once you start to use the array? Do you get that you're affected by silent data corruption with your anachronistic storage setup?
    • piet wrote:

      @ness1602: yes, thank you. I think the dd command has deleted all partitions or other strange bad things from the HDD. One HDD came from a Synology NAS.

      Then check what I wrote in post no. 19:

      cabrio_leo wrote:

      Assumption: Some preexisting metadata must be on the drive.
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304
    • piet wrote:

      Yes, but now there is no data at all, is there?
      After your dd command, of course!

      I would assume that the wipefs command or a wipe in the WebUI removes all partition information from the drive. Did you use the former Synology drive in an SHR setup? Maybe Synology writes special configuration information to the drive which is not recognized by wipefs and therefore not deleted.

      But dd overwrites everything regardless, including partition and file system information.
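      To see what is actually left on such a drive before deciding how to wipe it, something like this could be used (a sketch; /dev/sdb just as an example):

      Shell-Script

      # inspect leftover signatures/metadata before wiping
      wipefs /dev/sdb            # without -a: only lists the signatures wipefs recognizes
      mdadm --examine /dev/sdb   # shows any md superblock (Synology SHR is built on md/LVM)
      blkid -p /dev/sdb          # low-level probe for filesystem/RAID signatures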
      OMV 3.0.90 (Gray style)
      ASRock Rack C2550D4I - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304