An 8 port SATA-III PCIe HBA controller that works with OMV6

My LSI 9201-16i decided to take a dirt nap, necessitating a drive controller purchase.

I went on Amazon and found this 8-port drive controller for $35: PUSOKEI PCI-E to SATA 3.0 Card, 8-Port SATA3.0 Interface Expansion Card.

It works right out of the box, and the boot ROM properly detected all my drives at POST. Debian instantly recognized the controller and all my ZFS drives. I'm running OMV6 in a Proxmox 7.2-3 VM; Proxmox runs on Debian, so that was the first test the controller passed. With that working, I passed the entire HBA controller through to the OMV instance using Proxmox PCIe passthrough (see the sketch below), and that also works perfectly, with ZFS reading and importing all the drives.
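
    For anyone wanting to reproduce the passthrough, here is a minimal sketch of the Proxmox side, assuming an Intel CPU with GRUB; the VM ID (100), PCI address (0000:05:00.0), and pool name (tank) are placeholders for your own values:

    Code
    # 1) Enable the IOMMU on the Proxmox host, then run update-grub and reboot.
    #    In /etc/default/grub: GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    # 2) Find the PCI address of the SATA controller.
    lspci -nn | grep -i sata

    # 3) Pass the whole controller through to the OMV VM (VM ID 100 is an example).
    qm set 100 --hostpci0 0000:05:00.0

    # 4) Inside the OMV guest the disks show up natively, so the pool imports as usual.
    zpool import -f tank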

I haven't run any speed tests on it just yet, but it seems okay so far, with about 6 hours of use. I'm moving a bunch of files around and will test its speed later. And if you don't care about speed, it's simply a very reasonably priced controller. I also found a 10-drive controller for almost double the price, but I think I'd rather purchase two of these 8-port controllers if I really needed more than 8 storage drives.

I have no financial interest in the sale of these cards. The card worked for me on an old dual-Xeon X58 BIOS mobo, and I figure this info might be useful to some of you.


  • ala.frosty

Changed the title of the thread from "An 8 SATA port HBA controller that works with OMV6" to "An 8 port SATA-III PCIe HBA controller that works with OMV6".
  • Speed update: It's slow!


I've got six SATA-III drives connected to it in a ZFS pool, and the scrub is running at 35.2 MB/s.

For contrast, my SAS drives (6 Gb/s) are also doing a scrub, but they're seeing throughput of 126 MB/s, which is roughly 3.5 times faster. So the card works, but I may send it back anyway and use my SAS controller to access the SATA drives. For similarly sized ZFS pools, the difference in scrub time is 8 hours vs. around 40 hours!
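
    If you want to compare scrub throughput on your own pools, the rate on the "scan:" line of zpool status is the number to watch. A quick sketch, with tank standing in for your pool name:

    Code
    # start a scrub, then read the rate ZFS reports on the "scan:" line
    zpool scrub tank
    zpool status tank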

I ran some fio tests on the SAS ZFS pool and the SATA ZFS pool, and the SAS pool is about twice as fast. This kinda makes sense, as SAS is full-duplex (bidirectional) while SATA is half-duplex.


    Code
    fio --loops=5 --size=1000m --filename=/mnt/test_drive/fiotest.tmp --stonewall \
        --ioengine=libaio --direct=1 \
        --name=Seqread --bs=1m --rw=read \
        --name=Seqwrite --bs=1m --rw=write \
        --name=512Kread --bs=512k --rw=randread \
        --name=512Kwrite --bs=512k --rw=randwrite \
        --name=4kQD32read --bs=4k --iodepth=32 --rw=randread \
        --name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite \
        > ~/test.Sata-controller.txt &

Note that the above test measures read and write speed against the file fiotest.tmp at whatever location you select. So mount your filesystem under /mnt/, set the --filename path appropriately, and then run the command to see how your system is doing.
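
    Once the run finishes, the aggregate bandwidth for each phase is in the output file; pulling out the summary lines is enough for a quick comparison between controllers (the file name simply matches the redirect above):

    Code
    # aggregate read/write bandwidth from the fio run status summary
    grep -E 'READ:|WRITE:' ~/test.Sata-controller.txt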

Curious: why did you choose to pass through the HBA card vs. having the ZFS pool managed by Proxmox and assigning virtual drives to OMV?


    (I am learning, trying to weigh my options)

Portability! This practice makes the drives and the HBA portable between computers. I can pick up the HBA and/or the drive set, drop them into a bare-metal server running Debian, and read everything on the drives. Or I can boot the same box from a Debian USB stick and read everything for troubleshooting. And yes, I've done this on multiple occasions when I've upgraded my hardware, OS drives, etc. Maybe it's possible to configure the same sort of thing with virtual drives, but I don't know how to do it reliably. The move itself is just a ZFS export and import, as sketched below.
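
    To be concrete, moving between machines is just an export on the old box and an import on the new one; tank stands in for your pool name:

    Code
    # on the old machine, before pulling the HBA and/or drives
    zpool export tank

    # on the new machine, or a Debian live USB with ZFS installed
    zpool import          # lists pools found on the attached disks
    zpool import tank     # brings the pool back with all data intact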
