Okay, I researched the whole thing and came to the following conclusions:
1. The RAID setup created by the mdadm tools is fine by default, and the RAID data area starts at 2MB. I just don't like the default chunk size of 512k. If you want to create the array yourself, use the following commands:
(Assume you have a 4-disk RAID5 across /dev/sdb to /dev/sde and you want to name it "storage".)
This creates a degraded array with a 128k chunk size, which should be a good compromise between small files and large files. It achieves 90+ MB/s throughput end to end. If you use 512k it will still work, but it may be slower with smaller files.
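The original command was not included in the post; a minimal sketch, assuming the 4-disk layout above and md device /dev/md0 (adjust names to your setup):

```shell
# Create a degraded 4-disk RAID5 with a 128k chunk size.
# "missing" leaves the fourth slot empty, so the array starts degraded.
mdadm --create /dev/md0 --level=5 --raid-devices=4 --chunk=128 \
    --name=storage /dev/sdb /dev/sdc /dev/sdd missing
```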
After you have created the degraded array, you then need to add the last disk to make it a protected array:
At this point the RAID array will resync the parity information. You can monitor the status of the reconstruction with cat /proc/mdstat.
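Again the command itself was missing from the post; a sketch, assuming the same /dev/md0 device as above:

```shell
# Add the fourth disk; mdadm starts rebuilding parity onto it.
mdadm --add /dev/md0 /dev/sde
```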
/Edit: See another post about tuning the RAID stripe_cache_size: http://forums.openmediavault.org/viewtopic.php?f=11&t=1417
Then create an LVM physical volume (which also uses a 1MB offset for the data area) on the RAID device from the WebGUI. You need to install the LVM2 plugin first.
Create one logical volume containing the whole space.
Afterwards create an ext4 file system, also consuming the whole space (you can also choose to use less, whatever is applicable and appropriate to your situation). For each ext4 filesystem we now need to inform ext4 about the underlying RAID architecture so that it can optimize its writes. This is an important tuning step.
We will tune two options:
- stride
- stripe-width
Stride tells ext4 how many filesystem blocks (of 4096 bytes each) fit into one chunk. So stride = chunk size in KB / 4. In our example that is 128/4 = 32.
The stripe-width tells ext4 how many strides fit into one full stripe of the RAID array, i.e. how many blocks ext4 needs to write to put one chunk on every active data disk. So in a RAID5 array we multiply the stride value by the number of data disks, which is the number of disks in the RAID minus 1, i.e. 3 in our example. The stripe-width is therefore 32*3 = 96.
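The arithmetic above can be checked with plain shell arithmetic (values from our example: 128k chunk, 4k blocks, 3 data disks):

```shell
chunk_kb=128; block_kb=4; data_disks=3
stride=$((chunk_kb / block_kb))           # 128/4 = 32
stripe_width=$((stride * data_disks))     # 32*3 = 96
echo "stride=$stride stripe-width=$stripe_width"
```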
The following command will set the parameters to the filesystem:
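The command itself was lost from the post; a sketch using tune2fs, with /dev/vg0/lv0 as a placeholder for your actual logical volume:

```shell
# Set stride/stripe-width for a 128k chunk, 4-disk RAID5 (3 data disks).
# Replace /dev/vg0/lv0 with the path of your logical volume.
tune2fs -E stride=32,stripe-width=96 /dev/vg0/lv0
```

(If you are creating the filesystem fresh, the same -E options can be passed to mkfs.ext4 instead.)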
If you used the default 512k chunk size, then the following command line will tune your filesystem correctly:
Okay, now let's tune the mount options of your filesystem.
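For 512k chunks the same formulas give stride = 512/4 = 128 and stripe-width = 128*3 = 384. A sketch, again with /dev/vg0/lv0 as a placeholder for your logical volume:

```shell
# 512k chunk, 4-disk RAID5 (3 data disks): stride=128, stripe-width=384.
tune2fs -E stride=128,stripe-width=384 /dev/vg0/lv0
```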
Open /etc/fstab with whatever editor you want to use (nano) and add to the mount options the following:
data=writeback,noatime,nouser_xattr
These options are suitable for home users. They avoid journaling the file data (only metadata is journaled), avoid writing metadata on every read of a file (noatime), and disable extended attributes; most likely you will never use the latter anyway. If you want absolutely rock-solid data integrity, you should not enable the data=writeback option. If you are using the box at home as your NAS, the worst that can happen is that the last files written before a power failure are corrupted. The filesystem itself stays intact, but that data may be corrupted. That is normally not an issue for home users, since files being written during a power failure can usually be recovered from other sources.
After all that, do a final reboot, and your performance should be good.
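A resulting /etc/fstab entry could look like this (device path and mount point are placeholders; keep whatever your existing entry uses and just extend the options field):

```
/dev/vg0/lv0  /media/storage  ext4  defaults,data=writeback,noatime,nouser_xattr  0  2
```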