Which RAID or HBA card


    • Sc0rp wrote:

      Further, the ASM1062 does not perform well and has some problems in some setups
      Well, I believed this until recently too. Then I tested it myself and found the ASM1062 performs identically to the 88SE9215 used on your SI-PEX40064: forum.armbian.com/index.php?/t…findComment&comment=37740

      And when used with spinning rust instead of SSDs, I would expect the bottleneck to be the host itself, so even an ASM1062 combined with at least two JMB575 (which support FIS-based switching, unlike the older JMicron port multipliers) should 'perform' identically to an 8-port Marvell with NAS use cases in mind?
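
      If you first want a quick per-disk baseline before any controller comparison, something like this is enough. A minimal sketch; the device names are placeholders, adjust to your setup:

          for dev in /dev/sd{b,c,d,e}; do   # placeholder devices
              echo "== $dev =="
              hdparm -t "$dev"              # simple buffered sequential read timing
          done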
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • New

      Re,

      since I moderate the German Technikaffe.de forum, I did a lot of reading while searching for the best solution to increase the port count in NAS builds ... and what I found via GIDF ('Google is your friend') regarding the ASM1062 was even worse: roughly 30% of use cases had problems with it, mostly when the chip was glued onto the mainboard, but others reported problems with dedicated controller cards too.

      I didn't read all of it in full depth ... but I easily found a better solution with Syba, and so did some members of the forum ...

      Syba even has a PCIe x1 controller with 8 ports, so you can use the "BTC boards" to build boxes with 7x8 extra SATA ports!
      (or you can lower the energy consumption by using only one controller for 8 ports ...)

      And besides my reading, my (cheap) ASM1062 controller card really does not perform well at all: AHCI doesn't work well with different drives, which is really weird, and I couldn't figure out why ... I don't have unlimited time :D

      Sc0rp

      ps: "not performs well" means here not only the port speed or featuritis - the "easy integration" for normal home users is my focus
    • New

      Sc0rp wrote:

      what I found via GIDF regarding the ASM1062 was even worse
      Well, that's 'Hörensagen' (hearsay) in other words :)

      Since I also found a lot of 'reports' about bad ASM1062 performance and problems with multiple disks at a queue depth greater than 1, I tested it myself and could not confirm them. Of course I used a pretty recent kernel (4.11 or 4.12) and not the old stuff people run on some distros. Following your Google suggestion and checking the first three top results:

      1) Obviously an NCQ-related problem with a 3.9 (really!) kernel
      2) An update from a 2.6 kernel (really!) to 3.10 'fixed it'
      3) Just wrong jumper settings, with others stating the ASM106x works great

      No, I won't check the other hits since it's obvious that this strategy is useless. I also don't understand why/how problems of the past (no/broken ASM106x driver in the 2.6 kernel) should affect my systems in 2017 :)

      I had problems with multiple disks on an ASM1062 last year, but that was with kernel 4.4; now, after switching to a mainline kernel, the problems are gone, so most probably a driver fix was applied in between.
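
      Checking which kernel you run and which driver is bound to the controller takes a minute and rules most of these old reports out. A minimal sketch:

          uname -r                              # kernel version
          lspci -nnk | grep -iA3 'sata\|ahci'   # which driver is bound to the controller
          dmesg | grep -iE 'ahci|ata[0-9]+\.' | tail -n 40   # recent AHCI/link messages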

      BTW: I don't really recommend the ASM106x, I'm just not able to confirm the problems so many people claim (which are obviously very often related to outdated drivers / kernel versions). And as already said: in my testing the performance of a 'good' 4-port Marvell and the 'bad' ASM1062 is identical in the same setup (PCIe 2.0 x1 -- might look different with x2 though)
    • New

      I think especially with rust drives it's not much of an issue.
      My data grave setup, for example: mergerfs on top of ext4 on passed-through SATA drives in a KVM guest ;)
      Concurrent users: max. 3 (if I count automatic sync jobs)
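
      Roughly like this; the paths and the create policy are just an example, not my exact config:

          # pool several ext4 data disks into one mount point with mergerfs
          mergerfs -o allow_other,use_ino,category.create=mfs \
              /mnt/disk1:/mnt/disk2:/mnt/disk3 /srv/pool
          # or the equivalent /etc/fstab line:
          # /mnt/disk1:/mnt/disk2:/mnt/disk3  /srv/pool  fuse.mergerfs  allow_other,use_ino,category.create=mfs  0  0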

      With these drives even a PCIe 3.0 x1 slot has enough bandwidth to host the ASM1062 with 5 drives.

      When powersave/spin-down works, my server uses about 70 W. In this setup the 10-15 W for a RAID card makes a difference. I assume a stupid HBA will not use more than 5 W.
    • New

      Re,

      you're right about the time issue, I looked into this 'Hörensagen' (hearsay) :D and remember that this site: illumos.org/issues/3797 was the most important one for me. All of the bugs were cleared up in 2014, yes ... but reading that is horrible for normal home users, even if it seems to be over ... I don't read strange things about the Marvells.

      And besides the "Hörensagen" ... the Marvel 88SE9215 is a true 4-Port Chip, while ASM1062 has only two ports.


      Sc0rp
    • New

      Re,

      drdownload wrote:

      With these drives even a PCIe 3.0 x1 slot has enough bandwidth to host the ASM1062 with 5 drives.
      The Marvell and the ASMedia only have a PCIe 2.0 interface ... but bandwidth is not an issue here, and (of course) the actual SATA speed is the same on both.
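
      Back-of-envelope with rounded numbers (my own estimates): in theory five spinning drives streaming at once could outrun the x1 link, but a Gigabit Ethernet client never gets anywhere near that, so in practice the link speed doesn't matter:

          pcie2_x1=500   # usable payload of a PCIe 2.0 x1 link in MB/s, roughly
          hdd=180        # one modern 3.5" HDD on its outer tracks in MB/s, roughly
          gbe=115        # Gigabit Ethernet payload in MB/s
          echo "5 HDDs sequential peak: $((5 * hdd)) MB/s vs. ${pcie2_x1} MB/s x1 link"
          echo "but GbE caps any client at ~${gbe} MB/s anyway"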

      drdownload wrote:

      When powersave/spin-down works, my server uses about 70 W. In this setup the 10-15 W for a RAID card makes a difference. I assume a stupid HBA will not use more than 5 W.
      Same on my NAS box (8 HDDs in 2x RAID5) ... the Marvell 88SE9215 is specified at around 1 W ... a second advantage of HBAs is no wait time while booting (no RAID controller firmware) and better S2D/S2R (ACPI S3/S4) conformity.

      Sc0rp
    • New

      drdownload wrote:

      I assume a stupid HBA will not use more than 5 W.
      The kernel 4.4 link above was posted intentionally since it contains some consumption numbers. My system now (an ARM based Clearfog Pro) with 7 SATA ports in total and only one small but nasty (overheating) SSD connected consumes ~8.5 W idle with the 88SE9215 in the mPCIe slot and 6.3 W without it. You should take into account that each disk added to a SATA port adds ~1 W (there's a SATA PHY involved that has to maintain the link state), so with 5 disks you end up with approx. 5 W for the drives alone, and then the controller adds to that. The ASM1062 is fine with less than 1 W (but useless for you since it doesn't have enough ports); my 88SE9215 wants 2.2 W, so maybe an 8-port Marvell is also fine with 2.2 W? No idea.
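
      In numbers, using the same figures (the 88SE9215 value is my measurement, the rest are rough estimates):

          per_disk_mw=1000     # ~1 W per connected drive (SATA PHY keeping the link alive)
          controller_mw=2200   # measured ~2.2 W for my 4-port 88SE9215
          disks=5
          total_mw=$(( disks * per_disk_mw + controller_mw ))
          echo "~${total_mw} mW (= $((total_mw / 1000)).$((total_mw % 1000 / 100)) W) on top of the base system"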

      But if you plan to add 5 disks I would really skip port multiplier experiments (and therefore ASM106x too).
    • New

      Right now I have a Dell PERC 6i and a 3ware 9690SA in my case. Including onboard, I use 15 SATA ports. If I remember correctly from various build stages, both controllers use 20-30 W together (depending on how many drives I have connected).

      My usage scenario is about 10 of the drives connected for my OMV box with mergerfs on top, and the rest are my Proxmox drives.

      Most of the time everything is idling so hard I could turn it off ;) Disk access is normally 2-3 PCs connected to SMB shares plus some automatic stuff running periodically, like CrashPlan sync, incoming Duplicati sync, etc.

      Since my board has 3 x16 and 2 x1 slots I'm tempted to switch to HBAs (maybe order them from Amazon and see how it goes).

      Two of them would be €60 and only use my x1 slots: amazon.de/Syba-PCI-Express-Con…-Slotblech/dp/B00AZ9T3OU/

      Or the ASMedia-based amazon.de/Delock-89384-Innenra…d-Netzteil/dp/B00T2FMEEE/ with 10 ports for €90.
    • New

      Re,

      why not the SD-PEX40104? (Amazon link) It comes with 8 SATA ports on a PCIe x1 v2.0 interface.
      (But beware, it is built with the Marvell 88SM9705 port multiplier ...)

      As I wrote before, I have the SI-PEX40064 in my 24/7 NAS box (4x 6TB WD Red in RAID5) and the "older" SI-PEX40071 in my test NAS box in various configurations. Both are working flawlessly.


      Sc0rp


    • New

      drdownload wrote:

      If I compare the SD-PEX40104 and the SI-PEX40064 it is suspicious that the 8-port card no longer supports port multipliers; I thought that in 2017 that would be a thing of the past

      The 8-port card most probably supports external port multipliers on 3 of the 8 SATA ports, since it's a 4-port SATA controller combined with a 1:5 PM (88SM9705): 3 real SATA ports, the 4th connected to the internal PM, and you end up with 8 SATA ports, 5 of them already behind a PM.
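
      Easy to verify from Linux once such a card is in the box; the exact wording of the messages varies with kernel version, but something along these lines shows the PM and whether FIS-based switching is active:

          dmesg | grep -iE 'port multiplier|pmp|fbs'   # libata reports the PM and the FBS capability
          ls /sys/class/ata_link/                      # extra linkN.M entries appear for PM ports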
    • New

      Sc0rp wrote:

      but don't cry about the performance then

      To be honest: this 'problem' is already present on controllers like the 8-port 88SE9215/88SM9705 combo above, since 5 disks sit behind one real SATA port. Fortunately all those Marvell PMs support FIS-based switching (so random I/O is not totally trashed the way it would be with command-based switching), and also fortunately normal 'NAS use cases' aren't affected (that much), but this is something users should keep in mind if concurrent disk access happens regularly and performance could be an issue (as already said: with usual 'NAS use cases' most probably no difference, since everything is bottlenecked by Gigabit Ethernet anyway)
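
      If someone wants to measure the effect: hammer two drives behind the same PM port with random reads at once and compare against a run on a single drive. A minimal fio sketch; device names are placeholders and the test is read-only:

          fio --direct=1 --rw=randread --bs=4k --iodepth=16 \
              --runtime=60 --time_based --group_reporting \
              --name=diskX --filename=/dev/sdX \
              --name=diskY --filename=/dev/sdY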