Planning New OMV Server build - Question on Drives: Mix SSD & HDD in RAID 0?


    • Planning New OMV Server build - Question on Drives: Mix SSD & HDD in RAID 0?

      So, like the title mentions, I plan on building a couple of new, identical OMV servers (one main and one backup). I'm thinking 18TB total capacity in each, and I'll use 10Gb Ethernet with Mellanox ConnectX-3 cards connected directly to my main PC (for fast file transfers from the PC to either server, and also server-to-server). I'm trying to get a nice blend of capacity and performance without spending a ton of $.

      I was thinking each server would have (3) 6TB Seagate IronWolf drives (each server will be housed in a Cooler Master Elite 110 case, so there's not a lot of room for HDDs, 3 max - I want a small server).

      I'm thinking, if I'm lucky, I can maybe get 450-500MB/s transfer rates across this 10GbE network.

      But what if I decided to splurge a little (a lot, actually) and replaced one of the 6TB IronWolfs with 6TB of SSD storage? Like (3) Samsung 2TB QVO SSDs in each server (or a faster SSD?).

      *** Is there any way in OMV to do this for performance, like a RAID 0, without losing 4TB of HDD capacity, since RAID 0 total capacity is based on the smallest drive in the array? So I'd be looking at somehow combining (if possible) two RAID 0 sets: a RAID 0 set of (3) 2TB SSDs plus a RAID 0 set of (2) 6TB IronWolfs. I'd be looking to do this on both servers.
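      For reference, here's roughly the layout I have in mind, sketched with plain mdadm + LVM (device names are made up, and I have no idea yet whether OMV's UI would tolerate arrays built outside it):

```shell
# Sketch only - /dev/sdX names are placeholders, nothing here is tested in OMV.
# RAID 0 set #1: three 2TB SSDs
mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
# RAID 0 set #2: two 6TB IronWolf HDDs
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde

# Join the two stripes into one ~18TB volume with LVM (linear concatenation,
# so the SSD stripe and HDD stripe keep their own performance profiles)
pvcreate /dev/md0 /dev/md1
vgcreate vg_storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n lv_data vg_storage
mkfs.ext4 /dev/vg_storage/lv_data
```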

      Thanks!
    • I haven't tested it, but I assume you could use bcache with OMV. It would require you to partition and format volumes outside OMV, but then OMV should be able to use them. Then you could have an NVMe drive that works as a cache for the SATA hard drives. Possibly you could also pool the hard drives using mergerfs, instead of using RAID.
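      A rough sketch of the bcache setup (assumes the bcache-tools package and empty devices; the device names and UUID are placeholders):

```shell
# Sketch only - device names are examples, run on empty disks.
# Backing device: the HDD (or an md array of HDDs)
make-bcache -B /dev/sdb
# Cache device: the NVMe SSD
make-bcache -C /dev/nvme0n1
# Attach the cache set to the backing device
# (get the cache set UUID from 'bcache-super-show /dev/nvme0n1')
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
# Optional: writeback mode caches writes too (faster, but a UPS is advisable)
echo writeback > /sys/block/bcache0/bcache/cache_mode
# Then format the resulting bcache device as usual
mkfs.ext4 /dev/bcache0
```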

      Install Debian first and configure bcache. Then install OMV on top.
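      The Debian-first route is roughly the following (outline only - the repository codename, key URL, and post-install command change between OMV releases, so check the official OMV installation docs for the current values):

```shell
# Rough outline, run as root on a fresh Debian install.
# Repo codename ("arrakis" was OMV 4) and commands vary per release -
# verify against the official OMV docs before running.
apt-get update && apt-get install -y gnupg wget
wget -O - https://packages.openmediavault.org/public/archive.key | apt-key add -
echo "deb http://packages.openmediavault.org/public arrakis main" \
    > /etc/apt/sources.list.d/openmediavault.list
apt-get update
apt-get install -y openmediavault
# OMV 4 finished setup with 'omv-initsystem'; newer releases use a
# different command - again, see the docs for your release.
</imports>
```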

      However, this breaks all KISS rules. It could be fun and very performant, but it would also make problems difficult to handle.

      Using NVMe should free up a SATA port and/or allow you more space for drives in the case. But using bigger drives would help even more. Why not use 12 TB or 16 TB drives? Bigger drives mean fewer drives, which means fewer things that can go wrong. Less power. Less noise. Less cooling. You could also save a lot of money by having one very high performance server with a lot of RAM and bcache and so on, while the backup server is just a small SBC with a lot of really big drives. Perhaps you could even have a "cluster" of tiny cheap SBC backup servers, each with a big expensive drive, to distribute the backup traffic. 10GbE from the main server to a switch, 1GbE to each tiny backup server.

      You could also use part of the NVMe drive to cache NFS reads using fscache. It gives a very nice boost to network traffic, but perhaps it is best used on a client PC instead of between servers. I intend to use it to boost NFS reads by an application server with only SSDs from several small SBCs with GbE and big (12TB-16TB) drives. I have used fscache on my laptop. Helped a lot with NFS reads over Wi-Fi. May activate it again.
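      The fscache side is a client-side change, roughly like this (assumes Debian's cachefilesd package; the server and mount paths are placeholders):

```shell
# Client-side sketch - server name and paths are examples.
apt-get install -y cachefilesd
# On Debian, the daemon is disabled until RUN=yes is set in its defaults file
sed -i 's/^#RUN=yes/RUN=yes/' /etc/default/cachefilesd
systemctl start cachefilesd
# Mount NFS with the 'fsc' option so reads get cached on local disk
mount -t nfs -o fsc server:/export /mnt/data
```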
      OMV 4: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4
    • Thanks Adoby,

      You've given me some things to think about/investigate. bcache? fscache? Looks like I'll have to do some investigating. :)

      I'd love to use large 16TB drives, but $$$ is a concern. Besides, I currently have only about 11TB worth of Data/Files, so don't need massive storage for now. I do want 10GbE everywhere on the network, since I'm an impatient SOB who HATES waiting on GbE file transfers :sleeping:

      Not really too concerned with power consumption. Each server will have 8GB RAM, a Pentium G5400, a 10GbE NIC, plus whatever drives.

      Never tried installing OMV on top of Debian. That could prove interesting. Any guides around for that? I've used Linux Mint and Manjaro a bit, but I'm a Debian noob.

      Thanks.
    • Do a price comparison between 3x 6TB IronWolf drives and one 16TB Exos. I'm not buying any hard drives smaller than 16TB anymore.

      10 GbE, bcache and fscache on NVMe SSD should make amazing performance possible. But it might take some effort to get working fully.

      32GB RAM or more should also help with disk caches, especially if you slow down write flushes. But then a UPS might be necessary.
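      Slowing down write flushes is a sysctl tweak on the dirty-page knobs; the values below are examples only (tune for your RAM and risk tolerance - delayed flushes mean more data is lost on power failure, hence the UPS):

```shell
# Example values only - larger ratios keep more dirty data in RAM longer.
sysctl -w vm.dirty_background_ratio=10   # start background writeback at 10% of RAM
sysctl -w vm.dirty_ratio=40              # throttle writers only past 40% of RAM
sysctl -w vm.dirty_expire_centisecs=6000 # dirty data may sit for up to 60s
# Put these in a file under /etc/sysctl.d/ to persist across reboots
```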
