Samsung PM871 slow write speeds


    • Samsung PM871 slow write speeds

      Hello there,

      I have an issue using a set of Samsung PM871 SSDs in OpenMediaVault. The system is a dual Xeon 2680v2 build with 8*16 GB RAM, using two Broadcom 2308 SAS controllers in JBOD mode and two Supermicro BPN-SAS2-826EL1 expander backplanes.
      I am successfully running 12 HGST Ultrastar 7K6000 drives in RAID-Z2 using ZoL, which gives better than expected performance.
      However, I wanted to add a 6-disk mdadm RAID 0 of SSDs for fast operations, to do some quick multi-node MPI work where data consistency is less of a concern. As the clients are all connected via FDR InfiniBand I can make use of the higher speeds, although the SAS controller is becoming a bottleneck.
      Unfortunately the SSDs are slower than the spinning disks in this particular system, which seems to be a software issue. Even with large block sizes the maximum write speed is consistently 82 MB/s, while reads behave as expected. Even with small block sizes the spinning disks are faster. I don't have this issue with a set of Samsung 850 Pros I tested, nor with some cheaper Intenso SSDs. Testing the drives in another system gives the expected write speeds, and booting Arch Linux from a USB stick on the OpenMediaVault system I get much faster writes too.
      I am kind of clueless at the moment. Any idea what to do about that?
      I am running kernel 4.19.16-1; the Arch stick I tested was also built with a 4.19.x kernel. The system was set up using a basic Debian installation and the OpenMediaVault repo.
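      (For reference, the SSD array in question is a plain striped md device; creating it looks roughly like this, with the device names only placeholders for the six PM871s.)

      Source Code

      # assemble the six SSDs into a striped (RAID 0) array
      mdadm --create /dev/md0 --level=0 --raid-devices=6 /dev/sd[b-g]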

      Thank you in advance!


      I don't think this is a software issue. I have seen plenty of HP RAID controllers that are slower with SSDs because of write and/or read cache being enabled. Look at your RAID controller settings.
      omv 4.1.22 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.15
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • getName() wrote:

      Well, the very same setup is fast using another OS, as I tested running Arch with quite a close kernel. There are others having this issue with Samsung SSDs and Debian.
      Also, every other SSD I place in the very same spots is fast.
      You can try the Proxmox (Ubuntu 18.04 LTS) kernel. omv-extras has an install button on the Kernel tab.

      I still don't get what Debian would do wrong to make it this slow. Arch doesn't do magic things with the kernel, and I don't think a more optimized compile would cause this big of a speed difference.

      getName() wrote:

      Also, every other SSD I place in the very same spots is fast.
      That makes it seem more like a Samsung issue. Have you updated the firmware on them? I use lots of Samsung SSDs on RAID controllers with Debian, though.
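      (To double-check what the drives currently run, smartctl from smartmontools prints the firmware revision; /dev/sdX is a placeholder, and behind the SAS HBA the -d sat option may be needed.)

      Source Code

      # identity info, including the firmware revision
      smartctl -i /dev/sdX
      # force SATA passthrough behind a SAS HBA if plain -i fails
      smartctl -d sat -i /dev/sdX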
      I agree that it is surprising to see that different behaviour between Arch and Debian here, and I agree that the compilation should not cause this.
      There are no firmware updates for those drives available.
      It is totally weird, and only this particular combination of SSD, system and OS shows this. If I change any one of the three, it works just as expected.
      It's also not about mount options, as I use dd directly on the device.
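      (For context, such a raw-device write test looks roughly like the following; /dev/sdX stands for one of the PM871s, the command destroys whatever is on the disk, and oflag=direct bypasses the page cache so filesystems and mount options are out of the picture.)

      Source Code

      # DESTRUCTIVE: overwrites the first 4 GiB of /dev/sdX
      dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct status=progress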
    • getName() wrote:

      If I change any one of the three, it works just as expected.
      I would love to see if the Proxmox kernel fixes this.
    • getName() wrote:

      I am not sure about the ZoL kernel module dependency here and whether it will break.
      Not sure what you mean? ZoL is built into the Proxmox kernel and works well.
      The Proxmox kernel is indeed faster, but still very, very slow: 87 MB/s instead of 82 MB/s.

      I am absolutely helpless at the moment. Maybe I will need to trace dd and have a look for some strange bottlenecks.
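      (A low-effort way to look for such a bottleneck is to watch the device statistics while a test run is in progress; iostat comes from the sysstat package and only observes, it changes nothing.)

      Source Code

      # extended per-device statistics in MB, refreshed every second;
      # watch the %util and await columns for the SSDs while dd is running
      iostat -xm 1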

      Source Code

      ATA device, with non-removable media
      Model Number: SAMSUNG SSD PM871 2.5 7mm 128GB
      Serial Number: S1ZUNXAG863897
      Firmware Revision: EMT02D0Q
      Transport: Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
      Standards:
      Used: unknown (minor revision code 0x0039)
      Supported: 9 8 7 6 5
      Likely used: 9
      Configuration:
      Logical max current
      cylinders 16383 16383
      heads 16 16
      sectors/track 63 63
      --
      CHS current addressable sectors: 16514064
      LBA user addressable sectors: 250069680
      LBA48 user addressable sectors: 250069680
      Logical Sector size: 512 bytes
      Physical Sector size: 512 bytes
      Logical Sector-0 offset: 0 bytes
      device size with M = 1024*1024: 122104 MBytes
      device size with M = 1000*1000: 128035 MBytes (128 GB)
      cache/buffer size = unknown
      Form Factor: 2.5 inch
      Nominal Media Rotation Rate: Solid State Device
      Capabilities:
      LBA, IORDY(can be disabled)
      Queue depth: 32
      Standby timer values: spec'd by Standard, no device specific minimum
      R/W multiple sector transfer: Max = 1 Current = 1
      DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
      Cycle time: min=120ns recommended=120ns
      PIO: pio0 pio1 pio2 pio3 pio4
      Cycle time: no flow control=120ns IORDY flow control=120ns
      Commands/features:
      Enabled Supported:
      * SMART feature set
      Security Mode feature set
      * Power Management feature set
      Write cache
      * Look-ahead
      * Host Protected Area feature set
      * WRITE_BUFFER command
      * READ_BUFFER command
      * NOP cmd
      * DOWNLOAD_MICROCODE
      SET_MAX security extension
      * 48-bit Address feature set
      * Device Configuration Overlay feature set
      * Mandatory FLUSH_CACHE
      * FLUSH_CACHE_EXT
      * SMART error logging
      * SMART self-test
      * General Purpose Logging feature set
      * WRITE_{DMA|MULTIPLE}_FUA_EXT
      * 64-bit World wide name
      Write-Read-Verify feature set
      * WRITE_UNCORRECTABLE_EXT command
      * {READ,WRITE}_DMA_EXT_GPL commands
      * Segmented DOWNLOAD_MICROCODE
      * Gen1 signaling speed (1.5Gb/s)
      * Gen2 signaling speed (3.0Gb/s)
      * Gen3 signaling speed (6.0Gb/s)
      * Native Command Queueing (NCQ)
      * Host-initiated interface power management
      * Phy event counters
      * READ_LOG_DMA_EXT equivalent to READ_LOG_EXT
      * DMA Setup Auto-Activate optimization
      Device-initiated interface power management
      Asynchronous notification (eg. media change)
      * Software settings preservation
      Device Sleep (DEVSLP)
      * SMART Command Transport (SCT) feature set
      * SCT Write Same (AC2)
      * SCT Error Recovery Control (AC3)
      * SCT Features Control (AC4)
      * SCT Data Tables (AC5)
      * DOWNLOAD MICROCODE DMA command
      * SET MAX SETPASSWORD/UNLOCK DMA commands
      * WRITE BUFFER DMA command
      * READ BUFFER DMA command
      * Data Set Management TRIM supported (limit 8 blocks)
      Security:
      Master password revision code = 65534
      supported
      not enabled
      not locked
      not frozen
      not expired: security count
      supported: enhanced erase
      2min for SECURITY ERASE UNIT. 8min for ENHANCED SECURITY ERASE UNIT.
      Logical Unit WWN Device Identifier: 5002538d40412638
      NAA : 5
      IEEE OUI : 002538
      Unique ID : d40412638
      Device Sleep:
      DEVSLP Exit Timeout (DETO): 50 ms (drive)
      Minimum DEVSLP Assertion Time (MDAT): 30 ms (drive)
      Checksum: correct
    • getName() wrote:

      I am absolutely helpless at the moment
      Maybe it is the scheduler? cat /sys/block/*/queue/scheduler

      It looks like Debian is using cfq while Ubuntu is using deadline. The Arch wiki seems to recommend noop for SSDs.

      echo noop > /sys/block/sda/queue/scheduler
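      (To check and switch all six SSDs in one go, something along these lines works; sdb through sdg are only placeholders for the actual array members, and the change does not survive a reboot.)

      Source Code

      # show the current scheduler for every disk
      cat /sys/block/sd*/queue/scheduler
      # switch the SSDs to noop
      for d in sdb sdc sdd sde sdf sdg; do
          echo noop > /sys/block/$d/queue/scheduler
      done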
    • getName() wrote:

      Tested all three; the differences in write speed are within statistical error.
      Have you diff'd the kernel config between Arch and Debian? This makes no sense to me how it could be that different.
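      (One way to compare them, assuming the usual locations: Debian ships the running kernel's config in /boot, and the Arch kernel exposes its config via /proc/config.gz if IKCONFIG_PROC is enabled.)

      Source Code

      # on the Arch live system: dump the running kernel's config
      zcat /proc/config.gz > arch-config
      # on the Debian/OMV box, after copying arch-config over:
      diff /boot/config-$(uname -r) arch-config | less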
      I have a lot of meetings out of the office today, so I will be able to create the diff in the evening.
      I could use a custom kernel in the OMV installation if that would solve the issue, but I am not sure whether the answer lies in the kernel config.


      ryecoaaron wrote:

      This makes no sense to me how it could be that different.
      I absolutely agree and surely hope it is just a small config option I somehow missed.
    • Both kernels use the same mpt3sas module.
      I did find others having this issue with some SSDs and this controller, but no solution yet.
      I can't find any newer firmware for this controller.
      Again, read speeds are absolutely fine.

      Source Code

      dd if=/dev/sdb of=/dev/zero bs=1G count=1
      1+0 records in
      1+0 records out
      1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.14369 s, 501 MB/s
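      (To double-check that the mpt3sas module really is configured identically under both kernels, its version and runtime parameters can be compared; these are standard modinfo/sysfs locations, nothing specific to this box.)

      Source Code

      # version of the loaded mpt3sas driver
      modinfo -F version mpt3sas
      # runtime parameters of the loaded module
      grep -H . /sys/module/mpt3sas/parameters/*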


      Brainfuck Source Code

      LSI Corporation SAS2 IR Configuration Utility.
      Version 16.00.00.00 (2013.03.01)
      Copyright (c) 2009-2013 LSI Corporation. All rights reserved.
      Read configuration has been initiated for controller 0
      ------------------------------------------------------------------------
      Controller information
      ------------------------------------------------------------------------
      Controller type : SAS2308_1
      BIOS version : 7.39.02.00
      Firmware version : 20.00.07.00
      Channel description : 1 Serial Attached SCSI
      Initiator ID : 0
      Maximum physical devices : 255
      Concurrent commands supported : 8192
      Slot : 5
      Segment : 0
      Bus : 3
      Device : 0
      Function : 0
      RAID Support : No
    • getName() wrote:

      Again, read speeds are absolutely fine.
      Are those read speeds measured on OMV?

      getName() wrote:

      I can't find any newer firmware for this controller.
      Actually, some RAID controllers (like the LSI 9211-8i) work better on Linux with older firmware.
    • getName() wrote:

      What do you mean by using OMV?
      I wasn't sure if that was from Arch or Debian/OMV. Do you have write cache enabled in the Physical Disks tab under properties?
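      (To confirm what the drive itself reports, independent of the OMV toggle, the volatile write cache can be read and set from the command line; /dev/sdX is a placeholder, and behind a SAS HBA hdparm relies on SAT passthrough working.)

      Source Code

      # report the drive's current write-cache setting
      hdparm -W /dev/sdX
      # enable (-W1) or disable (-W0) the volatile write cache
      hdparm -W1 /dev/sdX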
    • getName() wrote:

      It is disabled, but I tried both.
      Interpreting my post as Brainfuck is a funny decision by the highlighting engine, btw. I am not sure whether it is kind of an insult or not, as Brainfuck is actually rather strange and therefore not easy to learn.
      Not sure why the board labels it like that.