How to create normal RAID5?

  • Hi all.
    I have an HP MicroServer N54L with 3x 3TB WD Caviar Green HDDs. Could somebody help me with partitioning these drives?
    I need to know how to correctly create a GPT and how to correctly format the disks before building the RAID5.
    I did some steps already, but I'm not sure they are correct.
    First of all, I deleted all partitions from the HDDs with the gdisk utility and created a new empty GPT on each. After that I created the RAID5 in the WebGUI.
    Now I checked my disks and I see this:


    Is this normal?
    My RAID5 status:


    P.S. Sorry for my English.

    • Official Post

    While everything looks normal, why didn't you just create it all in the web interface? The Wipe button in the physical devices tab will clear the drive and make it ready for the raid array. Then just create the array in the Raid tab.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Quote from "ryecoaaron"

    While everything looks normal, why didn't you just create it all in the web interface? The Wipe button in the physical devices tab will clear the drive and make it ready for the raid array. Then just create the array in the Raid tab.


    Does the WebGUI create the GPT correctly? What is the better way to create the RAID: through the WebGUI or the shell?

    • Official Post

    I would assume it creates everything correctly because it works. The idea of OMV is to not have to use the shell. The web interface is running shell commands so the end result should be the same.


  • Quote from "ryecoaaron"

    I would assume it creates everything correctly because it works. The idea of OMV is to not have to use the shell. The web interface is running shell commands so the end result should be the same.


    I understand.
    So, here is what I did step by step with the 3TB disks (Advanced Format):


    1. Create GPT



    I did the same with the other 2 HDDs.
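
    The screenshot of the gdisk step is missing above. As a non-interactive sketch of the same step, sgdisk (the scriptable companion to gdisk) can create the empty GPT and a whole-disk RAID partition; the device name below is an example, and the commands are destructive, so double-check the target first:

```shell
# Hypothetical sketch of step 1 using sgdisk instead of interactive gdisk.
# /dev/sdb is an example target; repeat for /dev/sdc and /dev/sdd.
DEV=/dev/sdb
sgdisk --zap-all "$DEV"                      # destroy existing MBR/GPT structures
sgdisk --new=1:0:0 --typecode=1:fd00 "$DEV"  # one whole-disk partition, type "Linux RAID"
sgdisk --print "$DEV"                        # verify the new table
```

    sgdisk aligns partitions to 2048 sectors by default, which is safe for Advanced Format (4K-sector) drives.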


    2. Create RAID 5 array


    Code
    root@beast:~# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1


    After that I found a problem with the status of my array:

    Code
    root@beast:~# mdadm --detail /dev/md0
    ........
        State : clean, degraded
        Number   Major   Minor   RaidDevice State
           0       8       17        0      active sync   /dev/sdb1
           1       8       33        1      active sync   /dev/sdc1
           2       0        0        2      removed
           3       8       49        -      spare   /dev/sdd1


    The third disk, /dev/sdd1, was removed and marked as a spare, so I fixed it like this:

    Code
    root@beast:~# mdadm /dev/md0 --fail /dev/sdd1
    root@beast:~# mdadm /dev/md0 --remove /dev/sdd1
    root@beast:~# mdadm /dev/md0 --add /dev/sdd1


    Check status again:

    Code
    root@beast:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : active raid5 sdd1[3] sdc1[1] sdb1[0]
          5860527104 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
          [>....................]  recovery =  0.0% (2295012/2930263552) finish=1113.6min speed=43819K/sec


    3. Increase performance


    Code
    root@beast:~# echo 8192 > /sys/block/md0/md/stripe_cache_size


    Made the setting persistent via /etc/rc.local:

    Code
    echo "/bin/echo 16384 > /sys/block/md0/md/stripe_cache_size" >> /etc/rc.local
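
    One caution about appending to /etc/rc.local: on Debian the stock file ends with an `exit 0` line, and anything appended after it never runs. A sketch of what the file should look like, with the line placed before the exit:

```shell
#!/bin/sh -e
# /etc/rc.local -- sketch; the stripe_cache_size line must come BEFORE
# the final "exit 0", not be appended after it.
echo 16384 > /sys/block/md0/md/stripe_cache_size
exit 0
```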


    4. Optimal format


    Quote

    1. chunk size = 512 kB (see chunk size advice above)
    2. block size = 4 kB (recommended for large files, and most of the time)
    3. stride = chunk / block; in this example 512 kB / 4 kB = 128
    4. stripe-width = stride * ((n disks in RAID5) - 1); in this example: 128 * ((3) - 1) = 256


    Code
    mkfs.ext4 -b 4096 -E stride=128,stripe-width=256 /dev/md0
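
    The stride/stripe-width arithmetic from the quote can be double-checked in the shell before running mkfs.ext4:

```shell
# Recompute the mkfs.ext4 geometry from the quoted rules.
chunk_kb=512                               # mdadm default chunk size
block_kb=4                                 # ext4 block size (4096 bytes)
ndisks=3                                   # disks in the RAID5 array
stride=$((chunk_kb / block_kb))            # 512 / 4 = 128
stripe_width=$((stride * (ndisks - 1)))    # 128 * (3 - 1) = 256
echo "stride=$stride stripe-width=$stripe_width"
```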


    After that, I mounted the array through the WebGUI. That's all.


    5. Final test performance


    Write test

    Code
    root@beast:~# dd if=/dev/zero of=/storage/test.hdd bs=1G count=40
    40+0 records in
    40+0 records out
    42949672960 bytes (43 GB) copied, 286.039 s, 150 MB/s


    Read test

    Code
    root@beast:/storage# dd of=/dev/zero if=/storage/test.hdd bs=1G count=40
    40+0 records in
    40+0 records out
    42949672960 bytes (43 GB) copied, 198.99 s, 216 MB/s
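
    One caveat about the read test above: if the file is still in the page cache, dd measures RAM rather than the disks (and `/dev/null` is the usual sink instead of `/dev/zero`). A sketch of a cache-cold read test, assuming the array is mounted at /storage:

```shell
# Flush dirty pages and drop the page cache so the read actually hits the disks.
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=/storage/test.hdd of=/dev/null bs=1G count=40
```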
    • Official Post

    No need to make partitions for raid. mdadm can use the entire drive. Your format options would be the one thing not available on the web interface.


  • Quote from "ryecoaaron"

    No need to make partitions for raid. mdadm can use the entire drive. Your format options would be the one thing not available on the web interface.


    I have some experience with FreeBSD, and for me it is easier to do disk operations in the shell. So I think this way is the right one for this type of HDD.
