RAID5 with SSD Cache

  • Hi everyone,


    I got myself 4 new HDDs (4 TB WD Red Plus) and want to use them in a RAID 5 setup.

    Additionally, there is a 250 GB SSD in the system that I don't use for anything, so my idea was to use this SSD as a cache for the RAID 5.


    Previously I used mdadm for the RAID, but if I read it right on the internet, mdadm does not support this feature.

    So far I have found out that LVM supports an SSD cache via dm-cache.
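
    From what I have read so far, the setup would look roughly like this (all names are just placeholders I made up: vg0 = volume group on top of the RAID, lv_data = data volume, /dev/sdf = the 250 GB SSD):

    Code
    pvcreate /dev/sdf
    vgextend vg0 /dev/sdf                                          # add the SSD to the existing volume group
    lvcreate --type cache-pool -l 90%FREE -n cpool vg0 /dev/sdf    # leave headroom for the cache-pool metadata
    lvconvert --type cache --cachepool vg0/cpool vg0/lv_data       # attach the cache to the data volume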


    Has anyone already used this setup in production and can give me some advice?

    Btrfs is not an option because there is no stable support for RAID5.


    Cheers Robert

    OMV 5.x, always up to date.
    Modded Dell T20 in a 19" rack case with a Pearl LCD display (status display!)

    Xeon E3-1225v3 / 32 GB RAM / 1x 500 GB WD Blue SSD (OS) / 1x 250 GB SSD (not used) / 1x 1 TB Toshiba HDD (MultiDisk) / 4x 4 TB WD40EFRX (RAID5)

  • Just for your interest: as I learned here, the RAID 5 write hole on btrfs is not btrfs-specific but exists in every RAID solution. It is just that the devs of other solutions no longer warn about it.

  • HannesJo, the write hole is not a big problem for me.

    In the past I used mdadm without any major problems, and I don't even have a UPS running.

    But on btrfs, RAID5 is also explicitly marked as unstable. That is the reason why I don't want to use it.


    As far as I have read on the internet, LVM RAID is not a well-performing solution.

    There is also some information saying that LVM RAID uses the md (mdadm) kernel code under the hood; see the sketch below.
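
    For what it's worth, an LVM-native RAID5 would be created roughly like this (the disk names sdb..sde and the volume group vg_data are placeholders); the lvs output then shows the raid5 segment type and the rimage/rmeta sub-LVs handled by the md-backed dm-raid target:

    Code
    pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
    vgcreate vg_data /dev/sdb /dev/sdc /dev/sdd /dev/sde
    lvcreate --type raid5 --stripes 3 -L 10T -n lv_data vg_data    # placeholder size
    lvs -a -o name,segtype,devices vg_data                         # shows the raid5 segment and its sub-LVs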

    I found some benchmarks by a German guy, but the main problem there is that you don't know when the tests were done.

    So with further code changes it may well be that performance is no longer a problem.

    Who knows?


    I also found some setups where people first create the mdadm RAID and then put LVM on top of it.

    But there an SSD RAID is also used for the caching.


    At the moment I think I will not use the SSD cache option and will just start with an mdadm RAID5, roughly as sketched below.
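
    What I have in mind, assuming the four data disks show up as sdb..sde (placeholder names) on a Debian/OMV box:

    Code
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    cat /proc/mdstat                                   # watch the initial resync
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # persist the array definition
    update-initramfs -u                                # so the array is assembled at boot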

    But does anyone have a suggestion for the filesystem? Ext4 or something else?
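
    If it ends up being ext4, one thing I would probably try is aligning it to the array geometry; a sketch assuming the default mdadm chunk size of 512 KiB, 4 KiB blocks and 3 data disks (all of that would have to be checked against the actual array):

    Code
    # stride = chunk / block = 512 KiB / 4 KiB = 128; stripe-width = stride * 3 data disks = 384
    mkfs.ext4 -E stride=128,stripe-width=384 -L data /dev/md0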


    Cheers Robert


  • Quote

    the write hole is not a big problem for me.

    In the past I used mdadm without any major problems, and I don't even have a UPS running.

    But on btrfs, RAID5 is also explicitly marked as unstable. That is the reason why I don't want to use it.



    But as far as I can see*, the unstable flag for btrfs RAID5 is due to the write hole only.


    * https://btrfs.wiki.kernel.org/index.php/Status

    Quote


    [...] The write hole is the last missing part [...]

  • Quote

    But as far as I can see*, the unstable flag for btrfs RAID5 is due to the write hole only.

    * https://btrfs.wiki.kernel.org/index.php/Status

    There is another point that makes me a little bit confused:

    Code
    Some fixes went to 4.12, namely scrub and auto-repair fixes. Feature marked as mostly OK for now.


  • To do it easily in OMV:

    Create the RAID with mdadm, then put LVM on top of it: build your physical volume, volume group and logical volume(s) without the cache.

    Afterwards, add the SSD to LVM as a physical volume and extend your volume group with it; a rough sketch follows.
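
    All names below are placeholders (/dev/md0 = the mdadm RAID5, /dev/sdf = the 250 GB SSD, vg0/lv_data = volume group and data volume), so adapt them to your system:

    Code
    pvcreate /dev/md0
    vgcreate vg0 /dev/md0
    lvcreate -l 100%FREE -n lv_data vg0                # data volume, still without the cache
    pvcreate /dev/sdf
    vgextend vg0 /dev/sdf                              # add the SSD to the volume group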

    Then, on the terminal, create the cache (note that a percentage has to be given with -l/--extents, not --size):

    lvcreate --type cache --cachemode writethrough -l 100%FREE -n cache vg0/lv_data /dev/sdf
