How do I format with Advanced Format (4K sectors)?

  • Hello,
    so, I have bought 4 new disks, 3 TB each, to build a new RAID 5. Overnight the new RAID was built and now it's time to format it. But I cannot find any option to format the RAID with 4K sectors. Will this happen automatically, or do I have to format manually, and if so, how?
    mike

    Zotac H55ITX-C-E Mainboard,
    8 GB RAM,
    i3-540 CPU,
    64GB Samsung 830 SSD,
    4x WD-GP 3TB (WD30EURS) Raid5,
    Chenbro ES34069 Server Chassis
    OMV 3.x

    • Official post

    OMV is supposed to do this automatically.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hmmh,
    are you really sure?
    I formatted with ext4 and now have very bad write performance. With my "old" RAID 5 consisting of 4x 1.5 TB disks I had 80 - 90 MB/s, and now I only get 25 - 30 MB/s :shock: . I know that 4K disks have very poor write performance when they are not formatted correctly.


    So, the question a second time:
    Does OMV recognize the 4K disks automatically and format them the right way, or do I have to format them manually, e.g. with

    Code
    sfdisk -uS -f /dev/sdx << EOF
    2048,,L,-
    EOF


    Perhaps Volker can give the definitive answer?


    mike


    • Official post

    Yep, pretty sure. Volker went through this on his blog long before he released OMV. All disks are formatted for 4K sectors because it doesn't hurt older 512-byte drives. This is a quote from Volker's blog:


    Quote

    The partitions are optimally aligned and the block size of the filesystem is set to 4096 bytes (redundant in most cases because the manpage says 4k is the default).


    For example, it can be adjusted if necessary via /etc/defaults/openmediavault:


    OMV_INITFS_OPTIONS_EXT3="-b 4096"
    OMV_INITFS_OPTIONS_EXT4="-b 4096"
    OMV_INITFS_OPTIONS_JFS="-q"
    OMV_INITFS_OPTIONS_XFS="-b size=4096 -f"
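
    If you want to double-check what block size an existing filesystem actually got, here is a quick sketch (assuming the filesystem sits directly on /dev/md0; substitute your own device):

    Code
    # print the filesystem block size; 4096 means 4K blocks
    tune2fs -l /dev/md0 | grep "Block size"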


  • Ah ok,
    that's reassuring :D . So I have to look elsewhere for the cause of the bad performance.
    Thanks ryecoaaron


    • Official post

    If you are still worried that OMV is setting it up differently, you could always set up the RAID with a bootable distro like SystemRescueCD and then just mount it in OMV. What OS/distro were you running when you were getting 80-90 MB/s? That is what I am getting on my N40L. Maybe the OMV kernel is too old for your hardware and you were running something newer before?
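
    For comparable numbers, a simple sequential write test can be run directly on the NAS; a minimal sketch, assuming the RAID is mounted at /media/storage (adjust the path to your share):

    Code
    # write a 4 GiB test file and force it to disk before reporting the speed
    dd if=/dev/zero of=/media/storage/testfile bs=1M count=4096 conv=fdatasync
    rm /media/storage/testfile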


  • You've misunderstood.
    I had the 80 - 90 MB/s with the same server, just with the "older" HDDs. Before I had 1.5 TB WD drives, now I have the 3 TB WD drives. So I assumed there could be a problem with the 4K format, which is now ruled out. I will test more this weekend and report back.


  • I think you are mixing up a few topics.


    1. The issue with the write performance (and, to a limited extent, the read performance) is not the block size, but the file system alignment.
    2. The setting in /etc/defaults is the block size of the filesystem.


    What you should check with fdisk -l is the starting sector of your partition. If it is sector 63 and the disk is one of those "Advanced Format" disks, then you are in trouble.
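
    A quick sketch of that check (assuming the first RAID member is /dev/sdb; a start sector divisible by 8 is aligned to 4K physical sectors):

    Code
    # list partitions with the start position shown in 512-byte sectors
    fdisk -lu /dev/sdb
    # parted can also report whether partition 1 is optimally aligned
    parted /dev/sdb align-check optimal 1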


    Additionally, the block size of the file system and the RAID stripe size should be aligned somehow. Writing 4k blocks to a RAID volume with a 64k*3 stripe size will also result in multiple writes ... That is the same issue with all filesystems that are not aware of the underlying RAID layout. This is not a real issue with large files and the corresponding writes, as the cache then handles it correctly.


    The last thing is that LVM itself writes a 192k header at the beginning of the disk/partition/md device, which is aligned to your 4K sector size but maybe not to your RAID stripe size. Therefore you may write to unaligned blocks on your RAID.
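
    If you want to see where LVM actually starts its data area, or force an alignment that matches your stripe size, a sketch (the 384k value assumes a 128k chunk on 3 data disks; adjust to your own layout):

    Code
    # show the offset of the physical extent data area for each PV
    pvs -o +pe_start
    # when creating the PV by hand, the data area can be aligned to a full stripe
    pvcreate --dataalignment 384k /dev/md0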


    I recommend reading this article: http://askubuntu.com/questions…-partition-table-properly .
    At the moment I don't have time to work out the correct settings ... I need to figure it out. I will post it once I have found it myself.

    Everything is possible, sometimes it requires Google to find out how.

    • Official post

    The file system should be aligned according to my quote from Volker's blog. I don't remember if OMV uses parted or not but it defaults to an optimal starting sector.


  • Okay, I researched the whole subject and came to the following conclusions:


    1. The RAID setup with the mdadm tools OMV uses is fine out of the box, and the RAID data area starts at 2 MB. The only thing I don't like is the default chunk size of 512k. If you want to create the array on your own, use the following commands:


    (Assume you have a 4-disk RAID 5 on /dev/sdb to /dev/sde and you want to name it "storage".)

    Code
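    # -e 1.2: metadata version, -n 4: total devices, -c 128k: chunk size,
    # -l 5: RAID level, -N storage: array name; "missing" leaves one slot empty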
    mdadm --create /dev/md0 -e 1.2 -n 4 -c 128k -l 5 -N storage /dev/sdb /dev/sdc /dev/sdd missing


    This creates a degraded array with a 128k chunk size, which should be a good compromise between small and large files. It gives 90+ MB/s throughput end to end. If you use 512k it will still work, but may be slower with smaller files.


    After you have created the degraded array, you then need to add the last disk to build the fully protected array:

    Code
    mdadm /dev/md0 --add /dev/sde


    At this point the RAID array will resync the parity information. You can monitor the status of the rebuild with cat /proc/mdstat.
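
    If you want to keep an eye on the rebuild, or confirm the chunk size afterwards, a small sketch (assuming the array is /dev/md0 as above):

    Code
    # refresh the rebuild progress every 5 seconds
    watch -n 5 cat /proc/mdstat
    # show array details including the chunk size and state
    mdadm --detail /dev/md0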


    Then create an LVM physical volume (which also uses a 1 MB offset for its data area) on the md device from the WebGUI.
    Create one logical volume containing the whole space.
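
    If you prefer the command line over the WebGUI, the equivalent steps look roughly like this (the volume group and volume names are only examples):

    Code
    # initialise the md device as an LVM physical volume
    pvcreate /dev/md0
    # create a volume group named "storage" on it
    vgcreate storage /dev/md0
    # create one logical volume "data" spanning all free space
    lvcreate -l 100%FREE -n data storage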


    Afterwards create an ext4 file system also consuming the whole space (you can also choose to use less, whatever is applicable and appropriate for your situation). We now need to tell ext4 about the underlying RAID layout so that it can optimize its writes. This is an important tuning step.


    We will tune two options:

    • stride
    • stripe-width


    Stride tells ext4 how many filesystem blocks (of 4096 bytes each) fit into one chunk, so stride = chunk size in KB / 4. In our example that is 128/4 = 32.
    The stripe-width tells ext4 how many blocks fit into one full stripe of the RAID array, i.e. how many blocks ext4 has to write to put one chunk on every active data disk. For a RAID 5 array we multiply the stride value by the number of data disks, which is the number of disks in the array minus 1, so 3 in our example. The stripe-width is therefore 32*3 = 96.
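
    If you create the filesystem by hand instead of from the WebGUI, these values can also be passed directly at creation time; a sketch, assuming the logical volume from the example above ends up at /dev/storage/data:

    Code
    # create ext4 with stride/stripe-width matching a 128k chunk and 3 data disks
    mkfs.ext4 -b 4096 -E stride=32,stripe-width=96 /dev/storage/data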


    The following command sets these parameters on an existing filesystem:


    Code
    tune2fs -E stride=32,stripe-width=96 -Odir_index /dev/mapper/UUIDofext4fspartition


    If you used the default 512k chunk size, then the following command line will tune your filesystem correctly:


    Code
    tune2fs -E stride=128,stripe-width=384 -Odir_index /dev/mapper/UUIDofext4fspartition
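
    To confirm the values were applied, you can inspect the superblock; a quick sketch (same placeholder device as above):

    Code
    # the output should show "RAID stride" and "RAID stripe width"
    tune2fs -l /dev/mapper/UUIDofext4fspartition | grep -i "stride\|stripe"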


    Okay, and now let's tune the mount options of your FS.


    Open /etc/fstab with whatever editor you like (e.g. nano) and add the following to the mount options:


    data=writeback,noatime,nouser_xattr


    These options are a good fit for home users. They avoid journaling the file data (only metadata is journaled), avoid writing metadata on every read of a file (noatime), and disable extended attributes, which you will most likely never use anyway. If you want absolutely rock-solid data integrity, you should not enable the data=writeback part. If you use the box at home as your home NAS, the worst that can happen is that the last files written are corrupted in case of a power failure. The filesystem itself remains intact, but that data may be corrupted. This is normally not an issue for home users, since the files being written at the moment of a power failure can usually be recovered from other sources.
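
    For illustration, an fstab entry with those options might look like this (the UUID and mount point are placeholders; keep whatever OMV already generated and only extend the options column):

    Code
    # /etc/fstab
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/storage ext4 defaults,data=writeback,noatime,nouser_xattr 0 2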


    After all that, do a final reboot and your performance should be good :)

