I'm interested in using 2.5" SAS drives in a RAID 5 configuration. Has anyone had success with this type of setup?
SAS RAID 5
-
- OMV 3.x
- KeithGresham
-
-
As long as your Mobo supports SAS, I can't see a problem. Generally SAS drives are faster but have smaller capacity than SATA. However, if you have them lying around doing nothing and you need a little extra performance, why not use them?
-
if you have them lying around doing nothing
That's the important part
If the disks aren't already bought I would strongly recommend outlining first why/how the combination of 2.5", SAS and RAID5 should make any sense. There are a couple of more satisfying ways to burn money.
-
I have 15 low-mileage 2.5" SAS 600 GB 10K drives that were pulls from a Dot Hill SAN. I also have 6 x 600 GB SAS 15K drives, but they are 3.5". I'm more concerned with overall cost and especially idle power consumption, so I'm wondering if one of the new ARM boards like the ESPRESSObin, which has mini PCIe, would work. I can't seem to find a SAS controller that would work with the mini PCIe slot. A mobo with built-in SAS like the Supermicro is too expensive by the time you add CPU, memory, power supply, etc.
-
A mobo with built in SAS like the Supermicro is too expensive by the time you add cpu, memory, power supply, etc.
And as soon as you add a bunch of 2.5" SAS drives in a RAID5 topology you've destroyed everything they're made for (low seek times, low latency, high IOPS, being great at random rather than sequential IO). The only reasonable choices for a bunch of such power-hungry drives are RAID0, RAID10 or, since it's 2017 in the meantime, ZFS with mirrored vdevs.
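For anyone unfamiliar with the mirrored-vdev layout mentioned above, it could be sketched roughly like this. This is only an illustration: the pool name "tank" and the /dev/sdX device names are placeholders, not anything from this thread, and you would extend the pattern to however many disk pairs you have.

```shell
# Hypothetical sketch: a ZFS pool built from two-way mirror vdevs.
# Writes and reads stripe across all mirrors in the pool, which is
# where the high random-IO performance comes from.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd \
  mirror /dev/sde /dev/sdf

# Verify the layout: each "mirror-N" entry is one vdev in the stripe.
zpool status tank
```

Adding more `mirror diskA diskB` groups to the same pool grows both capacity and aggregate IOPS, unlike RAIDZ2, where random IO stays roughly at single-vdev level.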
-
Yeah, RAID 10 would reduce the capacity too much and RAID 0 would scream until one of the drives goes south along with all the data.
Thank you for your insight. I suppose once the m.2 drives drop in price and go up in capacity that will be the way to go. Might just make a wind chime with the SAS platters now.
I'm on the waiting list for the Helios. Seems like the best way forward for my needs. Enjoying the forum very much. Thanks again.
Keith
-
RAID 10 would reduce the capacity too much
And it is also not the topology to choose. Using ZFS with mirrored vdevs with your 21 disks you would end up with 5.5 TB of usable storage showing both very high sequential performance and amazingly high random IO performance, as long as you throw all vdevs into a single pool. Most probably only two things are missing: appropriate SAS HBA(s) and a use case.
(our use case for such setups is virtualization; compared to RAID6/RAIDZ2, VMs on such storage are flying)
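The 5.5 TB figure quoted above can be checked with a little arithmetic. Assumptions not stated in the thread: a vendor "600 GB" drive holds 600 × 10⁹ bytes, and with 21 disks the odd disk out is kept as a spare, leaving 10 two-way mirrors.

```python
# Sketch: usable capacity of 21 x 600 GB disks as two-way mirror vdevs.
# Assumption: "600 GB" means 600e9 bytes; the 21st disk is a hot spare.
DISK_BYTES = 600 * 10**9
TIB = 2**40

disks = 21
mirrors = disks // 2                    # 10 mirror vdevs, 1 disk left over
usable_tib = mirrors * DISK_BYTES / TIB # each mirror contributes one disk's worth

print(f"{mirrors} mirror vdevs, ~{usable_tib:.1f} TiB usable")
# → 10 mirror vdevs, ~5.5 TiB usable
```

The binary/decimal unit gap is why ten "600 GB" disks of raw mirror capacity show up as roughly 5.5 TiB rather than 6 TB.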