raid5 with 3 hard drives (12 TB) creates 4 partitions...!?

  • You could've used the actual command I posted :)
    The output of wipefs indicated it didn't do anything. So, that is probably why it didn't fix anything. What is the output of:


    parted /dev/sdb mklabel gpt
    parted /dev/sdb print


    Repeat for each drive.
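
    If it helps, the same thing can be done in one small loop instead of typing the commands per drive. A minimal sketch, assuming the three data disks are /dev/sdb, /dev/sdc and /dev/sdd (--script just suppresses the confirmation prompts):


    Bash
    for d in /dev/sdb /dev/sdc /dev/sdd; do
        parted --script "$d" mklabel gpt   # write a fresh, empty GPT label
        parted --script "$d" print         # confirm that no partitions are listed
    done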

    Hello,


OK, I did what you asked (with no RAID on the drives at the moment).




    The results are strange, aren't they?


    Regards.
    Piet

    • Official post

    The results are strange, aren't they?

    Nope. That is exactly what I was expecting. There are no partitions on the drives now. If cat /proc/mdstat says there are any arrays, reboot. Then go to the mdadm tab and create an array. It should not use partitions: if the drives in the list have numbers or letters after /dev/sdX, then don't create the array.
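
    For reference, if you ever want to do the same from the command line instead of the web UI, a minimal sketch would be (assuming the three whole disks are /dev/sdb, /dev/sdc and /dev/sdd and the new array should be /dev/md0):


    Bash
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    cat /proc/mdstat   # shows the new array and the initial sync progress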

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hello,


    sorry, but I don't understand everything...


    So I ran the command cat /proc/mdstat:





    But after that I don't know how to create an array with the mdadm tab... Do you have any documentation about this? Sorry.


    Thank you.




    Regards.


    Piet



    Update: I'll follow https://www.digitalocean.com/c…ith-mdadm-on-ubuntu-16-04




    and when it asked whether I wanted to create the array, I said yes. Is that right?


    Oh no, it is still the same problem...




    But before that it was fine, no?


    • Official post

    But after that I don't know how to create an array with the mdadm tab... Do you have any documentation about this? Sorry.

    Sorry, it is the raid management tab. Didn't you create the arrays with that earlier?


    Oh no, it is still the same problem...

    I didn't notice until just now that the first array is correct and the three additional ones are marked as false. This means OMV is incorrectly detecting these because they exist in /proc/partitions. This shouldn't cause a problem, and you should be able to create a filesystem on the newly created array /dev/md0.
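
    If you want to do that from the command line rather than the web UI, a minimal sketch (assuming ext4 on /dev/md0):


    Bash
    mkfs.ext4 /dev/md0   # create the filesystem on the array device
    blkid /dev/md0       # verify the new filesystem signature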


    What is the output of: cat /proc/partitions


  • This one is really interesting, thanks.


    It shows approx. 50 times longer rebuild times with a busy array.


    Let's take the numbers @piet has provided via screenshot (array creation showing /proc/mdstat). His array is 22888192 MB in size and the rebuild runs at slightly less than 150 MB/s: 22888192 / 149 ≈ 153612 seconds, or a bit more than 42.5 hours. Let's say rebuilding a totally idle RAID 5 like his takes 42 hours. If the array later contains data and is accessed, we're talking about 42 * 50 hours: 2100 hours, or 87.5 days, or 12.5 weeks, or almost 3 months. Within these 3 months there's no redundancy at all, and the first URE on one of the two remaining disks will stop the rebuild anyway and the whole array is lost.
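
    For anyone who wants to redo the arithmetic, the same back-of-the-envelope estimate as a one-liner (array size and rebuild speed taken from the screenshot, idle hours rounded down to 42 as above):


    Bash
    awk 'BEGIN { sec = 22888192 / 149;       # array size in MB / rebuild speed in MB/s
                 h   = int(sec / 3600);      # ~42 hours for an idle rebuild
                 printf "idle rebuild: %d s (~%d h)\n", sec, h
                 printf "busy rebuild (x50): %d h = %.1f days\n", h * 50, h * 50 / 24 }'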


    I've never seen a more insane waste of a disk than this.

    backup

    RAID is not backup. I still don't get what you're trying to achieve...

  • If I want to do this, I'll do it. Sorry, but I posted this thread for a solution, not to get opinions.


    So, to be constructive: does someone have a solution?


    Maybe boot from Ubuntu and format the 3 disks with GParted?

  • Quote

    first URE on one of the two remaining disks will stop the rebuild anyway and the whole array is lost.

    Why is this the case, by the way?
    I think the rebuild should continue but notify about the problem.


    piet: I will not help you, because I would find it unethical. With the first problem with this RAID you will be in deep trouble or may even lose data.

  • I posted this thread for a solution, not to get opinions

    With this attitude you might be better off buying a commercial NAS box and then calling their customer support, who will help you lose data. This is a community forum where users try to take care of each other. As such we try to protect other users from silly stuff like storage setups that are broken by design (RAID5 in this decade with today's drive sizes).

    why is this the case by the way?

    I neither know nor care (I'm not crazy enough to use mdraid with large drives in the second most useless mode, RAID5).


    But you (and @piet) might want to check the mdraid wiki, starting e.g. from here and following all the links there. TL;DR: a RAID5 with drive sizes measured in TB will only work until it's actually needed; once a drive fails, the whole array will soon be gone anyway. It's just a waste of disks and resources.

  • Earlier I also let the RAID 5 build completely.


    Reboot


    And still the same problem: the 3 bad devices are still there...



    I don't understand. And is there a way to format the drives and remove all data, partitions, and everything else?

  • Did you try to put a file system on your /dev/md0, as ryecoaaron suggested in post no. 24? If you insist on using this setup, you could try this first and, if it works, ignore the other partitions.


    In your post no. 23 you show us a screenshot where sdb to sdd have no partitions at all. If the mdadm command creates these weird additional partitions every time, it seems mdadm may have a problem or a bug with disks of that size.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • OK, mdadm may not have a problem with disks of that size.


    And now it is working. How?


    As ryecoaaron said, it was a partition or something else on the disks that was at fault.
    So to be sure I just used the dd command:


    Bash
    # dd if=/dev/zero of=/dev/sdb
    # dd if=/dev/zero of=/dev/sdc
    # dd if=/dev/zero of=/dev/sdd
    I stopped before the end, just after about 250 GB had been written.
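
    A quick way to confirm the old metadata really is gone before recreating the array (a sketch, to be repeated for sdc and sdd):


    Bash
    wipefs /dev/sdb            # should print nothing: no known signatures left
    mdadm --examine /dev/sdb   # should report that no md superblock is detected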




    And restarted the creation of the RAID 5.





    So it is working now. Just 981 minutes for resyncing, about 16 hours, which is reasonable. YES!



    Regards.


    Piet

  • Just 981 minutes for resyncing, about 16 hours, which is reasonable. YES!

    OMG!


    Do you know how HDDs are constructed? They're faster at the outer tracks and get slower once they're filled. The 16 hours you're talking about are a joke. Do you get that you're affected by the 'classical' RAID5 write hole as well? Do you get that resync times will differ dramatically once you start to use the array? Do you get that you're affected by silent data corruption with your anachronistic storage setup?

  • ness1602: yes, thank you. I think the dd command has deleted all partitions or other strange things from the HDDs. One HDD came from a Synology NAS.


    Then check what I have written in post no. 19:

    Assumption: Some preexisting metadata must be on the drive.


  • Yes, but now there is no data at all, is there?

    After your dd command, of course!


    I would assume that the wipefs command or a wipe in the WebUI does remove all partition information from the drive. Did you use the former Synology drive in an SHR setup? Maybe Synology writes special configuration information to the drive which is not recognized by wipefs and therefore not deleted.


    But dd overwrites everything regardless, including partition and file system information.
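
    If zeroing a whole 12 TB disk is too slow, a targeted wipe of the areas where such metadata usually lives should achieve the same; a sketch, assuming the disk is /dev/sdb:


    Bash
    mdadm --zero-superblock /dev/sdb   # remove any old md RAID superblock (errors if none is present)
    wipefs -a /dev/sdb                 # remove every signature wipefs does recognise
    # zero the first and last 100 MiB, where partition tables and most RAID metadata live
    size_mib=$(( $(blockdev --getsz /dev/sdb) / 2048 ))
    dd if=/dev/zero of=/dev/sdb bs=1M count=100
    dd if=/dev/zero of=/dev/sdb bs=1M count=100 seek=$(( size_mib - 100 ))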


  • I encountered the same problem.


    I used 4 x 10 TB hard disks to build a RAID5. Two false partitions appeared.


    Following the tips in this post, I used dd to clear the hard disk information. The difference is that for each hard disk I only let the dd command run for about 5 minutes before cancelling it.
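
    Instead of cancelling dd by hand, a bounded run writes a fixed amount and then stops on its own; a sketch with hypothetical device names (adjust them to your disks):


    Bash
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
        dd if=/dev/zero of="$d" bs=1M count=10240 status=progress   # zero the first 10 GiB of each disk
    done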
