Samsung PM871 slow write speeds

  • Hello there,


    I have an issue with a set of Samsung PM871 SSDs in Openmediavault. The system is a dual Xeon E5-2680 v2 build with 8×16 GB RAM, using two Broadcom 2308 SAS controllers in JBOD mode and two Supermicro BPN-SAS2-826EL1 expander backplanes.
    I am successfully running 12 HGST Ultrastar 7K6000 drives in RAID-Z2 using ZoL, which gives better than expected performance.
    However, I wanted to add a 6-disk mdadm RAID 0 of SSDs for fast operations, to do some quick multi-node MPI work where data consistency is less of a concern. As the clients are all connected via FDR InfiniBand, I can make use of the higher speeds, although the SAS controller is becoming a bottleneck.
    Unfortunately, the SSDs are slower than the spinning disks in this particular system, which seems to be a software issue. Even with large block sizes, the maximum write speed is consistently 82 MB/s, while reads behave as expected. Even with small block sizes the spinning disks are faster. I don't have this issue with a set of Samsung 850 Pros I tested, nor with some cheaper Intenso SSDs. Testing the drives in another system gives the expected write speeds, and booting an Arch Linux from a USB stick on the Openmediavault system I get much faster writes too.
    I am kind of clueless at the moment. Any idea what to do about that?
    I am running kernel 4.19.16-1; the Arch stick I tested was also built with a 4.19.x kernel. The system was set up using a basic Debian installation and the openmediavault repo.
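    For reference, the write numbers above come from plain dd runs. A minimal, non-destructive version of such a sequential-write test against a scratch file (the path is a placeholder) could look like this:

    ```shell
    # Minimal sequential-write test. TARGET is a placeholder scratch file;
    # to benchmark the raw SSD instead, point it at /dev/sdX -- which
    # destroys all data on that disk, so double-check the device name.
    TARGET=/tmp/write-test.bin
    # conv=fsync forces a flush at the end so the page cache
    # cannot inflate the reported speed.
    dd if=/dev/zero of="$TARGET" bs=1M count=256 conv=fsync
    rm -f "$TARGET"
    ```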


    Thank you in advance!

    Edited once, most recently by getName() (), reason: minor correction

    • Official Post

    I don't think this is a software issue. I have seen plenty of HP RAID controllers that are slower with SSDs because write and/or read cache was enabled. Look at your RAID controller settings.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Well, the very same setup is fast using another OS, as I tested by running Arch with a very similar kernel. There are others having this issue with Samsung SSDs and Debian.
    Also, every other SSD I place in the very same slots is fast.
    Edit: no caching active in the controller, just checked.
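    Besides the controller's own setting, the kernel's view of each disk's volatile write cache can be read from sysfs on reasonably recent kernels (the attribute may be absent on older ones); a sketch:

    ```shell
    # Print whether the kernel treats each SCSI disk's write cache as
    # "write back" or "write through". Prints nothing if no sd* devices
    # exist or the kernel does not expose the attribute.
    for f in /sys/block/sd*/queue/write_cache; do
        [ -e "$f" ] || continue
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done
    ```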

    • Official Post

    Well, the very same setup is fast using another OS, as I tested by running Arch with a very similar kernel. There are others having this issue with Samsung SSDs and Debian.
    Also, every other SSD I place in the very same slots is fast.

    You can try the Proxmox (Ubuntu 18.04 LTS) kernel. omv-extras has an install button on the kernel tab.


    I still don't get what Debian would do wrong to make it slow. Arch doesn't do magic things with the kernel, and I don't think a more optimized compile would cause this big of a speed difference.

    Also, every other SSD I place in the very same slots is fast.

    This makes it seem more like a Samsung issue. Have you updated the firmware on them? I use lots of Samsung SSDs on RAID controllers with Debian, though.
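    To check which firmware the drives currently report without extra tooling, the kernel exposes the model and revision from the drives' INQUIRY data in sysfs (`smartctl -i /dev/sdX` shows the same and more); a sketch:

    ```shell
    # List each SCSI disk with the model and firmware revision the
    # kernel read from it. Prints nothing if no sd* devices exist.
    for d in /sys/block/sd*; do
        [ -e "$d/device/model" ] || continue
        printf '%s  %s  %s\n' "${d##*/}" \
            "$(cat "$d/device/model")" "$(cat "$d/device/rev")"
    done
    ```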


  • I agree that it is surprising to see that different behaviour between Arch and Debian here, and I agree that compilation should not cause this.
    There are no firmware updates available for those drives.
    It is totally weird, and only this particular combination of SSD, system and OS shows it. If I change any one of the three, it works just as expected.
    It's also not about mount options, as I run dd directly on the device.

    • Official Post

    If I change any one of the three, it works just as expected.

    I would love to see if the proxmox kernel fixes this.


    • Official Post

    I am not sure about the ZoL kernel module dependency here and whether it will be broken.

    Not sure what you mean? ZoL is built into the Proxmox kernel and works well.


  • The Proxmox kernel is indeed faster, but still very, very slow: 87 MB/s instead of 82 MB/s.


    I am absolutely helpless at the moment. Maybe I will need to trace dd and look for some strange bottlenecks.


    • Official Post

    I am absolutely helpless at the moment

    Maybe it is the scheduler? cat /sys/block/*/queue/scheduler


    Looks like Debian is using cfq while Ubuntu is using deadline. The Arch wiki seems to recommend noop for SSDs.


    echo noop > /sys/block/sda/queue/scheduler
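    That echo only lasts until reboot. If noop does help, a udev rule is one way to make it persistent; a sketch (the filename is arbitrary, and on newer multi-queue kernels the equivalent scheduler is called none):

    ```shell
    # Write /etc/udev/rules.d/60-ssd-scheduler.rules so that noop is
    # applied to every non-rotational sd* device at boot/hotplug time.
    # Path and rule are a sketch -- adjust to taste before using.
    cat <<'EOF' | sudo tee /etc/udev/rules.d/60-ssd-scheduler.rules
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
    EOF
    ```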


    • Official Post

    Tested all three; the differences in write speed are within statistical error.

    Have you diff'd the kernel config between Arch and Debian? This makes no sense to me how it could be that different.
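    One way to do that diff: Debian ships its config under /boot, while Arch exposes the running kernel's config at /proc/config.gz (when CONFIG_IKCONFIG_PROC is enabled). The paths below are the usual locations, not verified against this system:

    ```shell
    # On the Arch live system, save the running kernel's config:
    zcat /proc/config.gz | sort > /tmp/arch.config
    # On Debian/OMV, sort the shipped config the same way:
    sort "/boot/config-$(uname -r)" > /tmp/debian.config
    # Show only the differing options:
    diff /tmp/arch.config /tmp/debian.config
    ```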


  • I have a lot of meetings out of the office today, so I will be able to create the diff in the evening.
    I could use a custom kernel in the OMV installation if that would solve the issue, but I am not sure the answer is within the kernel config.



    This makes no sense to me how it could be that different.

    I absolutely agree, and I surely hope it is just a small config option I somehow missed.

  • Both kernels use the same mpt3sas module.
    I did find others having this issue with some SSDs and this controller, but no solution yet.
    I can't find any newer firmware for this controller.
    Again, read speeds are absolutely fine.

    Code
    dd if=/dev/sdb of=/dev/zero bs=1G count=1
    1+0 records in
    1+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.14369 s, 501 MB/s


    • Official Post

    Again, read speeds are absolutely fine.

    The read speeds are using OMV?

    I can't find any newer firmware for this controller.

    Actually, some RAID controllers (like the LSI 9211-8i) work better on Linux with older firmware.


    • Official Post

    What do you mean by using OMV?

    I wasn't sure if that was from Arch or Debian/OMV. Do you have write cache enabled in the Physical Disks tab under properties?


  • It is disabled, but I tried both.
    Interpreting my post as Brainfuck is a funny decision by the highlighting engine, by the way. I am not sure whether to take it as an insult or not, as Brainfuck is actually rather strange and therefore not easy to learn.

    • Official Post

    It is disabled, but I tried both.
    Interpreting my post as Brainfuck is a funny decision by the highlighting engine, by the way. I am not sure whether to take it as an insult or not, as Brainfuck is actually rather strange and therefore not easy to learn.

    Not sure why the board labels it like that.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I have decided to leave this mystery unresolved; it is starting to eat up too much time. Factoring in working hours, I could already have easily purchased a new set of SSDs, and that is exactly what I am going to do now. I will find a use for the old PM871s at some point.
    Thank you for your help!
