Slow performance on ZFS vs mdadm on the same hardware, two different RAIDs

  • Hello, people. I have a PowerEdge T320 with two RAIDs on it:

    One mirror, 2x2TB

    One raidz1, 3x4TB.


    I've noticed that when I write something to ZFS I can't get more than 85 MB/s, with iowait at 10-30% and a CPU load of 1+.

    When I write something to the mdadm RAID I get 110-130 MB/s, with no load or iowait.


    It's not bad SATA cables or a defective motherboard, as a few have suggested; everything is on Dell's backplane.

    I've swapped the drives between slots, though. Same result.


    E5-2403 0 @ 1.80GHz

    36GB RAM

    Disks are WD Red NAS drives; all are EFRX (CMR), no EFAX.


    There's data on it; otherwise I could reformat the 3x4TB as mdadm/XFS for testing.

    But any suggestions before I resort to that are welcome.


    I've also tried rsync --progress between the RAIDs and over the network. Same result, so it's not a network issue.


    Anything written to mdadm/XFS gets 120+ MB/s.


    Anything written to ZFS is ~85 MB/s max. Sometimes it's worse, bouncing around 40-80 MB/s, but it never exceeds 85-86 MB/s.


    PVE kernel: 5.4.78-2-pve


    Am I missing something here?
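
    For anyone who wants to reproduce the comparison, a plain sequential-write test along these lines would do. The mount points and flags are assumptions on my part (bs=1M count=20000 matches the 20971520000-byte transfers quoted later in the thread), and if compression is enabled on the dataset, writing /dev/zero will overstate the real throughput:

    # sequential write to the ZFS pool (default mountpoint /vault assumed)
    dd if=/dev/zero of=/vault/ddtest.bin bs=1M count=20000 conv=fdatasync status=progress

    # same test against the mdadm/XFS array (mountpoint is a placeholder)
    dd if=/dev/zero of=/srv/mdraid/ddtest.bin bs=1M count=20000 conv=fdatasync status=progress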


    root@vault:~# zpool get all
    NAME   PROPERTY                       VALUE                 SOURCE
    vault  size                           10.9T                 -
    vault  capacity                       33%                   -
    vault  altroot                        -                     default
    vault  health                         ONLINE                -
    vault  guid                           2601074807953605617   -
    vault  version                        -                     default
    vault  bootfs                         -                     default
    vault  delegation                     on                    default
    vault  autoreplace                    off                   default
    vault  cachefile                      -                     default
    vault  failmode                       wait                  default
    vault  listsnapshots                  off                   default
    vault  autoexpand                     off                   default
    vault  dedupditto                     0                     default
    vault  dedupratio                     1.00x                 -
    vault  free                           7.22T                 -
    vault  allocated                      3.69T                 -
    vault  readonly                       off                   -
    vault  ashift                         0                     default
    vault  comment                        -                     default
    vault  expandsize                     -                     -
    vault  freeing                        0                     -
    vault  fragmentation                  0%                    -
    vault  leaked                         0                     -
    vault  multihost                      off                   default
    vault  checkpoint                     -                     -
    vault  load_guid                      2237394498164054881   -
    vault  autotrim                       off                   default
    vault  feature@async_destroy          enabled               local
    vault  feature@empty_bpobj            enabled               local
    vault  feature@lz4_compress           active                local
    vault  feature@multi_vdev_crash_dump  enabled               local
    vault  feature@spacemap_histogram     active                local
    vault  feature@enabled_txg            active                local
    vault  feature@hole_birth             active                local
    vault  feature@extensible_dataset     active                local
    vault  feature@embedded_data          active                local
    vault  feature@bookmarks              enabled               local
    vault  feature@filesystem_limits      enabled               local
    vault  feature@large_blocks           enabled               local
    vault  feature@large_dnode            enabled               local
    vault  feature@sha512                 enabled               local
    vault  feature@skein                  enabled               local
    vault  feature@edonr                  enabled               local
    vault  feature@userobj_accounting     active                local
    vault  feature@encryption             enabled               local
    vault  feature@project_quota          active                local
    vault  feature@device_removal         enabled               local
    vault  feature@obsolete_counts        enabled               local
    vault  feature@zpool_checkpoint       enabled               local
    vault  feature@spacemap_v2            active                local
    vault  feature@allocation_classes     enabled               local
    vault  feature@resilver_defer         enabled               local
    vault  feature@bookmark_v2            enabled               local
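
    One detail in the output above: the pool-level ashift property reads 0, which only means "auto-detect at vdev creation"; the value actually baked into the vdev (4K-sector WD Reds want ashift=12) can be read with zdb. Standard command, shown only as a sketch and assuming the default cache file is in place:

    # dump the cached pool config and pull out the per-vdev ashift values
    zdb -C vault | grep ashift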

    • Official post

    I've noticed that when I write something to ZFS I can't get more than 85 MB/s, with iowait at 10-30% and a CPU load of 1+.
    When I write something to the mdadm RAID I get 110-130 MB/s.

    1. A COW filesystem is going to be inherently slower than a plain (mdadm) RAID. 25 to even 45% slower doesn't seem unreasonable to me when one considers the metadata attached to all files, which is what makes hard links and snapshots possible.

    2. Also, you have encryption enabled. Are you comparing encrypted ZFS to unencrypted mdadm?
    (While it's a personal opinion, I've never understood the fascination with drive encryption, especially since most users/admins care about I/O performance and it's highly unlikely that a burglar would break in and steal a server or its drives.)
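
    If I'm reading the flag states right, feature@encryption shows "enabled" rather than "active" in the zpool output above, which would normally mean no dataset is actually encrypted; the per-dataset properties settle it either way. Standard command, shown as a sketch:

    # confirm whether any dataset in the pool actually has encryption (or compression) turned on
    zfs get -r encryption,compression vault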


    Since ZFS has a lot more going on than mdadm RAID, you might consider setting up an SSD as a ZIL (SLOG) drive. (With a ZIL drive, you'll need to be on a UPS.) Also, since you have plenty of RAM, I believe it's possible to set up a "RAM" ZIL drive. There's also the possibility of using an SSD as L2ARC. There are references and how-to's for doing these things on the net. Here's an example.
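
    For reference, attaching an SSD as a log (SLOG) or cache (L2ARC) device is a one-liner each; the device paths below are placeholders, not real disks on this system:

    # add an SSD (or a partition on one) as a dedicated ZIL/SLOG device (path is a placeholder)
    zpool add vault log /dev/disk/by-id/ata-SOME_SSD-part1

    # optionally, add another SSD as an L2ARC read cache (path is a placeholder)
    zpool add vault cache /dev/disk/by-id/ata-ANOTHER_SSD-part1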

    Otherwise, if absolute top speed is what you're after, ZFS may not be for you. As you suggested, mdadm + XFS with a good drive controller and high-RPM drives or (better yet) SSDs will give top I/O performance.

    _____________________________________________________________


    Since the network (1 Gbit) is almost always the bottleneck for a NAS server, what is the need for top speed? What's the use case for this server?
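
    As a rough sanity check on that 1 Gbit figure: 1 Gbit/s divided by 8 is 125 MB/s on the wire, and after Ethernet/IP/TCP framing overhead a single stream usually tops out around 110-118 MB/s, so the 110-130 MB/s mdadm numbers above are already at the ceiling of what a gigabit client would ever see.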

  • Are you using ZFS on top of a hardware RAID? That is strongly discouraged, since you might lose data, and ZFS does not need any hardware RAID.

    Peter

    Proxmox PVE 7.2-11 + OMV 5.6.26-1 + extras
    Mainboard Fujitsu D3417-B; CPU Xeon E3-1245v5; RAM 32GB
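
    For what it's worth, zpool status answers that directly: it lists the vdevs and the underlying devices, so a pool built on a hardware-RAID virtual disk shows up as a single device instead of the individual drives. Standard command, listed only as a pointer:

    # show the vdev layout and the devices ZFS actually sees
    zpool status vault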

  • Flashed controller: LSI firmware on the PERC. It's raidz1-0, no hardware RAID!


    Since the T320 has dual PSUs, each on a different UPS,

    I attached a tiny 80GB Intel SSD as a log device. It seems to have taken off!


    From

    20971520000 bytes (21 GB, 20 GiB) copied, 189.293 s, 111 MB/s


    to

    20971520000 bytes (21 GB, 20 GiB) copied, 71.7563 s, 292 MB/s
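
    If anyone wants to verify the new log device is doing work, per-vdev statistics will show it listed under its own "logs" section; zpool iostat is a standard command, and the 5-second interval is arbitrary:

    # per-vdev I/O, refreshed every 5 seconds; the log device appears under "logs"
    zpool iostat -v vault 5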

