Posts by ThomasH

    Yes, I'm aware. I've never used striped sets (other than as part of W10 parity storage) and I wouldn't use it for storage. Doubt I'd feel any particular need to use it for anything else either; a 1TB PCIe M.2 disk is not that pricey these days and is more than enough speed for whatever I might need in terms of loading games. The only types of RAID I've ever used are RAID1 and RAID5 (or whatever W10 parity storage space claims to be). I don't really see any theoretical reason why 2 disks in RAID0 would have better read performance than the same 2 disks in RAID1 in the same system (write is another matter), but if there is one that I'm unaware of, I'd be happy to hear it. The way I see it, you've got the same controller, the same disks, and the same data. If you can read half the file from each disk in RAID0, why couldn't you do the same in RAID1?
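    The intuition here can be sketched as a toy model. Assuming a hypothetical 64 KiB chunk size, an ideal controller, and no seek costs, both layouts let a large sequential read be dealt out round-robin so each disk only serves half the bytes:

```python
# Toy model: both RAID0 and RAID1 hold the full data across 2 disks
# (RAID0 as stripes, RAID1 as two complete copies). Under the ideal
# assumptions above, a large read can alternate chunks between disks
# in either layout.
CHUNK = 64 * 1024  # hypothetical 64 KiB chunk/stripe size

def bytes_served_per_disk(total_bytes, n_disks=2, chunk=CHUNK):
    """Bytes each disk serves if chunks are dealt out round-robin."""
    served = [0] * n_disks
    offset = 0
    i = 0
    while offset < total_bytes:
        step = min(chunk, total_bytes - offset)
        served[i % n_disks] += step
        offset += step
        i += 1
    return served

# A 1 MiB read over a 2-disk set: each disk serves 512 KiB.
print(bytes_served_per_disk(1024 * 1024))  # [524288, 524288]
```

    In practice the difference comes down to how clever the controller or driver is about scheduling mirror reads, not the on-disk layout itself.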

    I haven't tested it extensively or in any way that'd satisfy the scientific method, if that's what you're asking. Several years ago I used Intel ICH9 or something similar on the motherboard for a mirror, and I believe it did increase read performance. I didn't make a note of it or anything, so I can't swear it did, but that's how I remember it. At work I've got a W10 computer with some kind of mirror on it; I assume it's some newer Intel RAID implementation, since it's a fairly new MB with an Intel CPU, but I don't know. While I haven't tested the individual disks, the read performance was faster than what the specs claimed a single disk could manage. Not twice as fast, but I don't remember exactly; it was half a year ago, and I just tried it for fun. I don't know about writes for either, but I'd imagine they're slower than a single drive, not that it matters much in my case. At home I've got a W10 machine with a parity storage space, and that's complete garbage in terms of performance, at least for writes, which are painfully slow to the point where I may never use that kind of setup again.

    The performance gain would be from the mirror, not the 3rd disk for parity. But if SnapRAID doesn't work against a mirrored pool and requires 3+ disks in a RAID4/5/6-like setup, the entire point is moot. The whole point of using a mirror in the first place was to gain that read performance. I was hoping it could be used to basically check two disks in a mirror for errors and copy the data from the good disk to the bad disk if one is detected, while the mirror otherwise behaves as it normally would. If it can't do that, then I'd have to reevaluate the situation.

    Okay, so theoretically, I could have 2x4TB in a mirror for performance, and then 1x4TB for SnapRAID parity? Would there be any complications from using it on a RAID pool rather than on a single disk, e.g. if one disk in the mirror flips a bit but the other one doesn't?
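    For my own understanding of what that third disk would be doing, here is a minimal sketch of the RAID4/5-style single-parity idea (SnapRAID's first parity level works along these lines, though its real implementation is file-based and more involved): with two data "disks" and one parity "disk", losing any one of the three is recoverable from the other two.

```python
# Minimal single-parity sketch: parity is the XOR of the data blocks,
# so any one lost block can be rebuilt by XOR-ing the survivors.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data1 = b"hello wo"   # block on data disk 1
data2 = b"rld!!!!!"   # block on data disk 2
parity = xor_blocks(data1, data2)  # block on the parity disk

# Simulate losing data1: XOR of the survivors recovers it exactly.
recovered = xor_blocks(parity, data2)
print(recovered == data1)  # True
```

    The bit-flip question is the interesting part: parity alone only tells you the set is inconsistent, which is why SnapRAID also keeps checksums to identify *which* copy is the bad one.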

    It was my understanding that SnapRAID was some variation of RAID4, but I could be mistaken. Would that not require me to get another disk, and how would that work with 2 mirrored disks? I suppose I could skip mirrored disks entirely, but my experience with Windows parity storage spaces (RAID5 equivalent, I believe) is quite awful when it comes to write performance and not great with reading either.

    Alright, I'll probably be looking around for a Microserver then and see if I can get a decent deal on it, else just buy consumer parts. I don't think I can wait for good deals on used parts; I'd like to get this done, and I know I'll never get around to it unless I settle for good enough and just do it. Like I said, a lot of my data will be images, video and music, much of which I can replace if lost, but some of it (mainly home-made pictures and video) can't. Most of it will just be cold storage, only read to view or upload to friends/family occasionally, and maybe deleted when I no longer want/need it. If I get a few errors in the data that I can't detect, so be it, and if my hardware dies or the entire file system gets corrupted, hopefully my external backup will still be alive and well. I may be overthinking this, I just want to do it right from the start, you know?


    Thanks for all the help. If nothing else, at least I feel a lot more comfortable with my options now than when I started.

    Okay, I've been dredging the market for used servers, and I've come to a few conclusions. Finding a used pre-built server seems nigh impossible, which means I have to either buy one new or buy parts and put it together myself. If I'm going to virtualize things and keep my options open, I'll also need an HBA and better hardware to support it. My options are then:

    • Get an expensive, pre-built tower system.
    • Buy used server parts and build my own system.
    • Get a less expensive pre-built system and just run it as a NAS/file server only.
    • Say screw it and go with consumer-grade parts.

    If I go with 1 or 2, I have room for HBAs, get ECC memory, enough juice to run ESXi and virtualize everything I might want, and some customization options (at least in the latter case). The first option is significantly more expensive, however, and while the second is manageable in that regard, it's still quite expensive and adds the worry of finding compatible parts to the pile, not to mention the parts I get may be in any state of repair.
    Option 3 also gives ECC memory, but virtualization is basically out due to limited resources and space for extra cards. If I want more functions, I'll have to buy new hardware for it, but I might be able to work with RPI or similar in that case. Cost varies, but it's generally more in the consumer range of things.
    Option 4 does not give ECC support (probably, maybe, who knows?), but I don't really know if I need it. Sure, it's nice, but I've never lost anything in 20 years of non-ECC RAM storage solutions, and what are the worst-case scenarios anyway? I'm okay with a video frame getting corrupted or a few pixels getting the wrong color in a picture, so unless I've somehow managed to miraculously avoid all the catastrophic doomsday scenarios that the FreeNAS forum tells me are inevitable unless I run ECC RAM, I think it's fine. It has the advantage of being generally cheaper, highly customizable, might even be able to run ESXi if I get an HBA card, and replacement parts are both cheap and common.


    Given the cost and workload involved, I'm leaning towards either 3 or 4 now. The world is my oyster when it comes to the latter, and I can get as much performance and expansion space as I want, so it's really just a question of budget there.


    The former is somewhat trickier, but I've been looking at the HPE Microserver Gen10, which looks somewhat promising. The price is okayish given that it's supposedly pretty good quality stuff, it has 8GB of ECC with the option to expand it, an HBA would not be necessary with no virtualization, dual 1Gb network ports so I can separate internet and local network traffic, and it gives me a PCIe slot for a 10Gb network card if 1Gb on the local network turns out to be too slow for my taste. With that, I get room for 2 storage disks, 1 SSD scratch drive, 1 SSD OS drive, and a spare drive slot should I need it. It also supposedly runs very quietly and at like 50W under full load. The only question I have there is whether the CPU, either the AMD Opteron X3216 or X3418, is sufficient to run a file server that can shuffle data over the network as fast as a pair of mirrored HDDs can manage (and that's not even taking into account the possibility of future SSD upgrades), as well as potentially handle torrent traffic for a 100/100 Mbit connection. I've heard the CPUs are somewhat similar to Intel Atoms in terms of performance, but that doesn't tell me a lot.
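    A back-of-the-envelope check (using assumed, typical figures, not measurements) suggests the NIC is the limiter long before the disks are, which is part of why I'm not too worried about the mirror side of the equation:

```python
# Assumed, typical figures: gigabit Ethernet carries roughly 118 MB/s
# of usable payload after Ethernet/IP/TCP overhead, and a single
# 7200 rpm NAS drive manages around 180 MB/s sequential.
GBE_PAYLOAD_MBPS = 118   # ~1 Gb/s minus protocol overhead (assumed)
HDD_SEQ_MBPS = 180       # assumed sequential rate of one NAS HDD

mirror_read = 2 * HDD_SEQ_MBPS                 # ideal 2-disk mirror read
bottleneck = min(mirror_read, GBE_PAYLOAD_MBPS)
print(bottleneck)  # 118: the 1Gb link caps transfers before the disks do
```

    Whether the Opteron can actually push that 118 MB/s through Samba plus torrent traffic is the open question; the arithmetic only shows the disks won't be the problem.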



    TLDR version: Would a Microserver Gen10 with OMV be a good option for my use-case? Does it have enough juice to do what I want it to, and should I go for the cheaper dual-core or the quad-core CPU? Otherwise, I'm probably back to looking at a custom-built consumer PC.


    If you look through the thread (it's long, so better to use the direct link), flashing a Dell H200 to "IT mode" makes it a JBOD controller. Thereafter, software RAID is possible, including mdadm, ZFS and BTRFS, along with transparent SMART stats pass-through. There was a risk in flashing it, but for the price of the card on an auction site ($25), I couldn't pass it up, and it's working fine.

    I see. I've found a couple of used cards with cables for about $100 with shipping and everything, I might give that a try. I'd rather pay a bit more now than pay for it later.


    ECC is a very good idea for a server. I've seen it correct random hard errors that would otherwise have gone unnoticed. Intel's ECC-capable implementations work with the EDAC utilities, which allow for checks and stats. I'm not so sure about AMD's implementations of ECC, and I've seen at least one proprietary Intel mobo implementation that didn't work with EDAC.

    The issue I'm having with ECC is that I'm pretty much stuck either getting server-grade stuff (Xeon and a Supermicro MB), which will be used, with no warranty and fairly expensive, or going consumer-grade, where people claim that certain AMD CPUs support it unofficially. I've found an AM4 MB (ASRock B450 Pro4) that supposedly supports it as well, but whether it actually does so in practice seems impossible to get a straight answer on. I talked to someone with a similar setup who said that a memory tool reported a data width of 64 and a total width of 128, and reported the memory as multi-bit ECC, but whether that means ECC is actually working or is just what the memory is capable of, I don't know.


    If I do go with server-grade, I'm way out of my depth. I have no idea what Xeon performs like and what I'd need, and it's not terribly easy to look up benchmarks. They also tend to be less concerned about power consumption, heat and noise than consumer-grade. Whatever I get will basically be stuck in my living room, so I'm pretty keen on not getting something that'll try its best to imitate a jet engine.


    While ESXi is flexible, I wouldn't call it simple. I have Proxmox where, since I don't have unlimited resources or a budget, working out the best way to set up the storage back end can be something of a task. (I still haven't settled on a config - I'll work with it more this fall.)
    Any virtualization platform adds complexity. It may be worth it, if you're planning on running more than one server guest and can afford enough storage for OMV and other servers, with room for growth. Along with backup, that much disk space can become "pricey".

    Simple mainly in the sense that it'd let me use any OS if I get the idea to add something I want, and I can back up entire machines. If I don't virtualize, I'd need NAS software that is basically an up-to-date Linux OS. FreeNAS doesn't sound like it'd do that for me, and while OMV on top of Debian sounds better, I'm unsure how Debian-like it actually is and how well it follows developments in the OS and packages. As for virtualization, I have some experience with ESXi, and I know some people who have even more, so hopefully I can figure it out. I don't anticipate using much storage beyond the NAS part with the actual data, of course, so it's really just a question of how many different devices I want to run. VM storage won't be the point of failure for me. Unfortunately, it's not just the digital storage space that is limited for me.

    Getting a new H200 around where I live seems nigh impossible, but I can check the local used market to see if there's one available. Would I then be stuck running hardware RAID on a card that might break and be hard to replace, or can you use it exclusively as a passthrough device in ESXi and do all the RAID in software in the VM? Like I said, I want to get away from reliance on hardware, but if I at least do RAID on the MB controller, I know it's not something exotic that will be hard to replace to recover the data. If I don't have to do the RAID on the card, I could also potentially remove the disks and do a physical installation of the NAS software on another machine to get at the data, which is also acceptable. Alternatively, if the mirrored mode on the H200 leaves the disks readable when you take one and put it in another system, doing the RAID on the card could also be an okay option (theoretically there should be no need for mirrored drives to contain anything beyond the raw data itself, but you never know with these proprietary solutions).

    Yes, but unfortunately, that doesn't really help me decide on what would be the better way to go with everything. From what I can gather of virtualization and RAID, it would require me to get a SATA HBA card and do a PCI passthrough in ESXi, else the VM would not get access to SMART information that it might need for software RAID to work properly (at least ZFS). My options then seem to be to either cough up at least $100 to get a card and hoping that it's compatible, or do hardware RAID (or whatever the motherboard-based variation is called).
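    To make concrete what the SMART concern is about: monitoring and software RAID health checks read raw attributes such as Reallocated_Sector_Ct to spot a failing member, and with RDM or a virtual disk the guest often can't get at that table at all, which is the argument for a passed-through HBA. A sketch of pulling those values out of typical `smartctl -A` table output (the sample data below is made up):

```python
# Parse the attribute table that `smartctl -A` prints for ATA drives.
def parse_smart_attributes(smartctl_output: str) -> dict:
    """Map attribute name -> raw value from `smartctl -A` text output."""
    attrs = {}
    for line in smartctl_output.splitlines():
        parts = line.split()
        # Data rows start with the numeric attribute ID and have 10 fields.
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = parts[9]
    return attrs

sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
194 Temperature_Celsius     0x0022   036   050   000    Old_age   Always       -       36
"""
print(parse_smart_attributes(sample)["Reallocated_Sector_Ct"])  # 0
```

    If smartctl inside the guest can't even produce that table against the virtualized disk, the VM is flying blind on drive health.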


    My current potential setups are as follows:

    • ESXi on an AMD 3200G + ASRock MB with ECC support (supposedly) + 16GB ECC memory, which should give me enough to run any NAS software I want alongside separate VMs for any other functions I might want, regardless of the OS they run on. I've got a 100/100 connection, so that should leave 90% of a gigabit interface for the NAS function on my local network. Internal transfer on the host is supposedly quite fast, so I could run VMs that mount the storage space from the NAS VM as a network device without eating up precious network bandwidth. I might at some point get a 10 Gbit adapter for my computer and server to remove that as a potential bottleneck, but that's a later concern. I can also do full backups of the VMs, should the hardware die on me.
    • As above, but skip ECC and get something like a 200GE CPU for lower power consumption, while hopefully still having enough juice to at least run a NAS and a second VM for simple things like handling torrents. An advantage here is that since I'm getting a new desktop too, I can get memory that is compatible with that one too, and then if I change my mind about ECC I can buy a CPU that supports it, some ECC memory, and then move the non-ECC to my desktop without too much money wasted.
    • Get some sort of cheaper build, possibly one of the above, and run NAS software directly on it with no virtualization, getting a second device like a RPI to handle torrents etc. This seems like the messier option, and it'd eat up more bandwidth on my network, but would give me the option to run ZFS.

    I'm leaning towards #1 for the flexibility and simplicity, but that's where the RAID issue comes in. How am I supposed to handle it? Hardware RAID and let the NAS VM just access a single virtual disk (assuming that even works for ESXi)? Try to RDM the drives into the VM, let the VM do software RAID, and hope that it works? I'd prefer to avoid hassle should a disk fail or the MB die on me or something. Then there's the ECC. Is it worth the headache for possibly getting a setup that supports it (I can't get a straight answer whether it will or not), or should I just go for regular memory? What kind of hardware do I need to run e.g. OMV at all? I'm badly in need of advice on best practices and what would work best (or at all).

    Hey everyone. I'm hoping this is in the right place, I couldn't find a better place to post it. This will probably be a bit of TLDR material, but I want to make my situation as clear as possible.


    I've been thinking about getting "serious" about my data storage, but I can't make up my mind on what would be best. I'm hoping to get some insight and opinions on possible solutions for my case. If in doubt, assume I know nothing about anything, and you're probably fairly close to the truth.


    My current situation is that I've got a desktop computer that I'm about to upgrade. Currently it has 3x1TB disks in a Windows 10 storage space (RAID5), but aside from lacking a backup, I'm starting to realize I could use more space. My idea was to buy a new computer and, while I'm at it, move most of the data storage to an external, always-on server/NAS, with only applications, games and data that is not relevant to anything else stored locally, and with a 3rd device strictly for backup. I'm counting on about 3TB of storage total on the computer (2x512GB SSD + a 2TB HDD that I've already got), though I doubt I'll use all of that in practice.


    For the backup, I'm just planning on buying a cheap NAS enclosure, e.g. a 2-slot Zyxel, and an 8TB disk, which will only be used to write backups to and to retrieve data from if my computer or server for some reason loses its data. I'll do full backups of the computer, so that leaves at least 5TB of free space for the file server backups.


    The file server is where I'm having trouble deciding. My current data consists mostly of images, video and music, both my own and stuff I've bought/downloaded, with some miscellaneous things mixed in. None of it is strictly speaking critical data, but a lot of it would sting quite a bit to lose. I'm planning on getting 2x4TB NAS disks (e.g. Seagate Ironwolf) and mirroring them for some basic protection and read performance, with the option to at some point buy another 8TB disk if I need the space and use it in a mirror with the one I'd use for backups, replacing that one with a bigger one. I want to avoid hardware RAID if possible, since I'd rather not have it all collapse in the event of hardware failure with no option of recovery. In addition, I've already got a 120GB SSD I'm going to use as a system disk, and I'm considering getting a cheap SSD scratch drive for downloads and general workload like unpacking files and whatnot, where performance is desired, a lot of space is not required, and I don't care if data is lost since it's just temporary storage. The latter depends on how I decide to solve everything, though. That should give me a total of 7TB of data to back up, which should fit on an 8TB disk with some wiggle room even if I somehow end up using all of that space.
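    The backup budget above can be sanity-checked with quick arithmetic (all figures are the ones from my plan, with TB used loosely in the drive-marketing sense):

```python
# Quick check that everything fits on the single 8TB backup disk.
computer_tb = 3        # 2x512GB SSD + 2TB HDD, rounded up
server_usable_tb = 4   # 2x4TB mirrored -> 4TB usable
backup_disk_tb = 8

total_to_back_up = computer_tb + server_usable_tb
print(total_to_back_up <= backup_disk_tb)  # True: 7TB fits on 8TB
```

    The wiggle room shrinks if I ever add that second 8TB data disk, which is why the backup target would get upgraded at the same time.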



    That's as far as I'm getting though, and I keep getting stuck trying to decide on software and hardware to use. Obviously OMV is something I'm considering since I'm here, but I have no experience with it or anything else like it, so I have no idea if it suits my needs. It's looking promising, but I want to be sure before I throw money at the problem. What I do know is that I want the following:

    • Storage that acts like a regular hard drive as far as my Windows 10 computer is concerned, i.e. it should in practice work the same way and let me manage files as I normally would with a physical drive installed. A big part of why I'm doing this is to get rid of the local storage on my computer, after all, but I still need easy access to it.
    • A torrent client, as I tend to download using torrents whenever possible and I've already got a sort of ad-hoc infrastructure in place for sharing with friends/family using it. Multiple options for clients is a big bonus so I can pick one I feel comfortable with without worry.

    Those are the critical ones. In addition, it'd be nice to have the following:

    • Storage that can be accessed by OpenELEC or similar running on a RPI (I haven't gotten around to it yet, but I've got a 3B+ that I'm planning on turning into a smart TV).
    • Storage that can be accessed from outside my network, e.g. on my phone to play music or to download a movie at a friend's house. I've got a public IP, so there are no ISP-level NATs to worry about.
    • Plex server.
    • Possibility to run additional things, like a database and webserver for strictly personal use, i.e. no performance really required.
    • Cheap components, low noise and energy consumption (obviously). I'm planning on running it 24/7, so it's a bonus if it's cheap to run and won't become a really expensive radiator during summer.
    • Ease of setup. I'm not a complete idiot, but I'm far from experienced. The less things I have that I can do wrong, the better.

    My original plan involved a Ryzen 3 1200 with ECC memory (a 240GE before that, until I learned it didn't support ECC) and FreeNAS for ZFS, but after reading up on jails and the problems getting them to work properly, as well as the lack of updates, I thought it'd be better to run it virtually on ESXi as a NAS exclusively, and then run additional server(s) for torrents, Plex etc. as VMs that connect to the NAS as well. I can't seem to get a straight answer on whether ESXi + RDM with FreeNAS will work, though, with some saying it's fine and others claiming that my NAS will murder me in my sleep if I even think about trying it. Add to that the complete lack of clear information on whether the ECC memory would even work in ECC mode, and I'm having second thoughts on both counts.


    So I guess my question boils down to: what the heck am I doing? Should I go for good hardware and try to virtualize a NAS and a server separately, and if so, is OMV a good choice for my setup? Would it play well with ESXi in regards to running RAID, or should I just go for hardware RAID and give the NAS the resulting pool to manage? Should I not bother with virtualization at all, if OMV supports everything I need it to as it is? Should I do something completely different, like buy bare-minimum hardware for the NAS and give up on ECC and then get something like an RPI4 to run the torrent/server stuff?


    As you might be able to tell, I've got nothing, so any help or advice at all is much appreciated.