General advice for NAS/file server

  • Hey everyone. I'm hoping this is the right place; I couldn't find a better one to post it in. This will probably be TLDR material, but I want to make my situation as clear as possible.


    I've been thinking about getting "serious" about my data storage, but I can't make up my mind on what would be best. I'm hoping to get some insight and opinions on possible solutions for my case. If in doubt, assume I know nothing about anything, and you're probably fairly close to the truth.


    My current situation is that I've got a desktop computer that I'm about to upgrade. Currently it has 3x1TB disks in a Windows 10 storage space (RAID5), but aside from lacking a backup, I'm starting to realize I could use more space. My idea was to buy a new computer and, while I'm at it, move most of the data storage to an external, always-on server/NAS, with only applications, games and data that isn't relevant to anything else stored locally, and with a third device strictly for backup. I'm counting on about 3TB of storage total on the computer (2x512GB SSD + a 2TB HDD that I've already got), though I doubt I'll use all of that in practice.


    For the backup, I'm just planning on buying a cheap NAS enclosure, e.g. a 2-slot Zyxel, and an 8TB disk, which will only be used to write backups to, and to retrieve data from if my computer or server loses its data for some reason. I'll do full backups of the computer, so that leaves at least 5TB of free space for the file server backups.


    The file server is where I'm having trouble deciding. My current data consists mostly of images, video and music, both my own and stuff I've bought/downloaded, with some miscellaneous things mixed in. None of it is strictly speaking critical data, but a lot of it is stuff that'd sting quite a bit to lose. I'm planning on getting 2x4TB NAS disks (e.g. Seagate IronWolf) and mirroring them for some basic protection and read performance, with the option to buy another 8TB disk at some point if I need the space and use it in a mirror with the one I'd use for backups, replacing that one with a bigger disk. I want to avoid hardware RAID if possible, since I'd rather not have it all collapse in the event of a hardware failure with no option of recovery. In addition, I've already got a 120GB SSD I'm going to use as a system disk, and I'm considering getting a cheap SSD scratch drive for downloads and general workloads like unpacking files and whatnot, where performance is desired, a lot of space is not required, and I don't care if data is lost since it's just temporary storage. The latter depends on how I decide to solve everything though. That should give me a total of 7TB of data to back up, which should fit on an 8TB disk with some wiggle room even if I somehow end up using all of that space.



    That's as far as I'm getting though, and I keep getting stuck trying to decide on software and hardware to use. Obviously OMV is something I'm considering since I'm here, but I have no experience with it or anything else like it, so I have no idea if it suits my needs. It's looking promising, but I want to be sure before I throw money at the problem. What I do know is that I want the following:

    • Storage that acts like a regular harddrive as far as my Windows 10 computer is concerned, i.e. it should in practice work the same way and let me manage files as I normally would with a physical drive installed. A big part of why I'm doing this is to get rid of the local storage from my computer after all, but I still need easy access to it.
    • A torrent client, as I tend to download using torrents whenever possible and I've already got a sort of ad-hoc infrastructure in place for sharing with friends/family using it. Having multiple client options is a big bonus, so I can pick one I feel comfortable with without worry.

    Those are the critical ones. In addition, it'd be nice to have the following:

    • Storage that can be accessed by OpenELEC or similar running on an RPi (I haven't gotten around to it yet, but I've got a 3B+ that I'm planning on turning into a smart TV).
    • Storage that can be accessed from outside my network, e.g. on my phone to play music or to download a movie at a friend's house. I've got a public IP, so there are no ISP-level NATs to worry about.
    • Plex server.
    • Possibility to run additional things, like a database and webserver for strictly personal use, i.e. no performance really required.
    • Cheap components, low noise and energy consumption (obviously). I'm planning on running it 24/7, so it's a bonus if it's cheap to run and won't become a really expensive radiator during summer.
    • Ease of setup. I'm not a complete idiot, but I'm far from experienced. The fewer things I can do wrong, the better.

    My original plan involved a Ryzen 3 1200 with ECC memory (a 240GE before that, until I learned it didn't support ECC) and FreeNAS for ZFS, but after reading up on jails and the problems getting them to work properly, as well as the lack of updates, I thought it'd be better to run it virtually on ESXi as a NAS exclusively, and then run additional server(s) for torrents, Plex etc. as VMs that connect to the NAS as well. I can't seem to get a straight answer on whether ESXi + RDM with FreeNAS will work, though, with some saying it's fine and others claiming that my NAS will murder me in my sleep if I even think about trying it. That, combined with the complete lack of clear information on whether the ECC memory would even work in ECC mode, has me having second thoughts on both counts.


    So I guess my question boils down to: what the heck am I doing? Should I go for good hardware and try to virtualize a NAS and a server separately, and if so, is OMV a good choice for my setup? Would it play well with ESXi with regard to running RAID, or should I just go for hardware RAID and give the NAS the resulting pool to manage? Should I not bother with virtualization at all, if OMV supports everything I need as it is? Should I do something completely different, like buying bare-minimum hardware for the NAS, giving up on ECC, and then getting something like an RPi 4 to run the torrent/server stuff?


    As you might be able to tell, I've got nothing, so any help or advice at all is much appreciated.

  • Yes, but unfortunately, that doesn't really help me decide on the better way to go with everything. From what I can gather about virtualization and RAID, it would require me to get a SATA HBA card and do PCI passthrough in ESXi, or else the VM would not get access to the SMART information it might need for software RAID to work properly (at least with ZFS). My options then seem to be to either cough up at least $100 to get a card and hope that it's compatible, or do hardware RAID (or whatever the motherboard-based variation is called).


    My current potential setups are as follows:

    • ESXi on an AMD Ryzen 3 3200G + ASRock MB with ECC support (supposedly) + 16GB ECC memory, which should give me enough to run any NAS software I want alongside separate VMs for any other functions I might want, regardless of the OS they run on. I've got a 100/100 connection, so that should leave 90% of a gigabit interface for the NAS function on my local network. Internal transfer on the host is supposedly quite fast, so I could run VMs that mount the storage from the NAS VM as a network device without eating up precious network bandwidth. I might at some point get a 10Gbit adapter for my computer and server to remove that as a potential bottleneck, but that's a later concern. I can also do full backups of the VMs, should the hardware die on me.
    • As above, but skip ECC and get something like a 200GE CPU for lower power consumption, while hopefully still having enough juice to at least run a NAS and a second VM for simple things like handling torrents. An advantage here is that since I'm getting a new desktop as well, I can get memory that's compatible with it; if I later change my mind about ECC, I can buy a CPU that supports it plus some ECC memory, and move the non-ECC sticks to my desktop without too much money wasted.
    • Get some sort of cheaper build, possibly one of the above, and run NAS software directly on it with no virtualization, getting a second device like an RPi to handle torrents etc. This seems like the messier option, and it'd eat up more bandwidth on my network, but it would give me the option to run ZFS.

    I'm leaning towards #1 for the flexibility and simplicity, but that's where the RAID issue comes in. How am I supposed to handle it? Hardware RAID and let the NAS VM just access a single virtual disk (assuming that even works in ESXi)? Try to RDM the drives into the VM, let the VM do software RAID, and hope that it works? I'd prefer to avoid hassle should a disk fail or the MB die on me or something. Then there's the ECC. Is it worth the headache of possibly getting a setup that supports it (I can't get a straight answer on whether it will or not), or should I just go for regular memory? What kind of hardware do I need to run e.g. OMV at all? I'm badly in need of advice on best practices and what would work best (or at all).

    • Official Post

    My options then seem to be to either cough up at least $100 to get a card and hoping that it's compatible, or do hardware RAID (or whatever the motherboard-based variation is called).

    If you're willing to do a used card (8 ports) and flash it to IT mode, consider the info in this thread. I got a Dell PERC H200 and two 4-port breakout cables for $40 delivered.
    One of the reasons I bought this card was for transparent SMART passthrough (along with 6Gb/s SATA), replacing an Adaptec that was RAID-only (no transparent SMART passthrough, and only 3Gb/s).
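
    A quick way to confirm the passthrough is working once the flashed card is in (a rough sketch; the device names are placeholders): disks behind an IT-mode HBA show up as plain /dev/sdX devices and smartctl can talk to them directly.

        smartctl -H /dev/sda     # short health verdict for the first disk
        smartctl -a /dev/sdb     # full SMART attribute report for the second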

  • Getting a new H200 around where I live seems nigh impossible, but I can check the local used market to see if there's one available. Would I then be stuck running hardware RAID on a card that might break and be hard to replace, or can you use it purely as a way to do passthrough in ESXi and do all the RAID in software in the VM? Like I said, I want to get away from reliance on hardware, but at least if I do RAID on the MB controller, I know it's not something exotic that will be hard to replace to recover the data. If I don't have to do the RAID on the card, I could also potentially remove the disks and do a physical installation of the NAS software on another machine to get at the data, which is also acceptable. Alternatively, if the mirrored mode on the H200 leaves the disks readable when you take one out and put it in another system, that could also make doing the RAID on the card an okay option (theoretically there should be no need for mirrored drives to contain anything beyond the raw data itself, but you never know with these proprietary solutions).

    • Official Post

    but would I then be stuck with running hardware RAID on a card that might break and be hard to replace,

    If you look through the thread (it is long; this is a better -> direct link), flashing a Dell H200 to "IT mode" makes it a JBOD controller. Thereafter, software RAID is possible, including mdadm, ZFS and BTRFS, along with transparent SMART stats passthrough. There was a risk in flashing it, but for the price of the card on an auction site ($25), I couldn't pass it up, and it's working fine.
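
    As a minimal sketch of what that looks like with mdadm (device names are assumptions; OMV can do the same from its GUI):

        # two JBOD disks behind the IT-mode HBA, mirrored in software
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0                # put a filesystem on the mirror
        cat /proc/mdstat                  # watch the initial resync progress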
    ___________________________________________


    Regarding the rest of your notes on approach:


    ECC is a very good idea for a server. I've seen it correct random hard errors that would otherwise have gone unnoticed. Intel ECC-capable implementations work with the EDAC utilities, which allow for checks and stats. I'm not so sure about AMD's implementations of ECC, and I've seen at least one proprietary Intel mobo implementation that didn't work with EDAC.
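
    If you want to check what EDAC actually sees on a given box (a sketch, assuming the edac-utils package and a kernel EDAC driver for your memory controller):

        ls /sys/devices/system/edac/mc/                  # mc0 etc. appear only if an EDAC driver loaded
        cat /sys/devices/system/edac/mc/mc0/ce_count     # corrected (single-bit) error counter
        edac-util -v                                     # summary per memory controller / DIMM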


    I'm leaning towards #1 for the flexibility and simplicity, but that's where the RAID issue comes in.

    While ESXi is flexible, I wouldn't call it simple. I run Proxmox, and since I don't have unlimited resources or budget, working out the best way to set up the storage back end can be something of a task. (I still haven't settled on a config - I'll work with it more this fall.)
    Any virtualization platform adds complexity. It may be worth it if you're planning on running more than one server guest and can afford enough storage for OMV and other servers, with room for growth. Along with backup, that much disk space can become "pricey".


  • If you look through the thread (it is long; this is a better -> direct link), flashing a Dell H200 to "IT mode" makes it a JBOD controller. Thereafter, software RAID is possible, including mdadm, ZFS and BTRFS, along with transparent SMART stats passthrough. There was a risk in flashing it, but for the price of the card on an auction site ($25), I couldn't pass it up, and it's working fine.

    I see. I've found a couple of used cards with cables for about $100 with shipping and everything, so I might give that a try. I'd rather pay a bit more now than pay for it later.


    ECC is a very good idea for a server. I've seen it correct random hard errors that would otherwise have gone unnoticed. Intel ECC-capable implementations work with the EDAC utilities, which allow for checks and stats. I'm not so sure about AMD's implementations of ECC, and I've seen at least one proprietary Intel mobo implementation that didn't work with EDAC.

    The issue I'm having with ECC is that either I'm pretty much stuck getting server-grade stuff (Xeon and a Supermicro MB), which would be bought used, with no warranty, and still be fairly expensive, or I go consumer-grade, where people claim certain AMD CPUs support it unofficially. I've found an AM4 MB (ASRock B450 Pro4) that supposedly supports it as well, but whether it actually does so in practice seems impossible to get a straight answer on. I talked to someone with a similar setup who said a memory tool reported a data width of 64 and a total width of 128, and reported the memory as multi-bit ECC, but whether that means it's actually working or just what the memory is capable of, I don't know.
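
    For what it's worth, one thing I could try is checking what the firmware itself reports about the DIMMs (a sketch; I'm assuming dmidecode output looks the same on that board):

        dmidecode --type memory | grep -iE 'data width|total width|error correction'
        # ECC DIMMs typically show a total width larger than the data width,
        # plus an "Error Correction Type" other than "None"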


    If I do go with server-grade, I'm way out of my depth. I have no idea what Xeon performs like and what I'd need, and it's not terribly easy to look up benchmarks. They also tend to be less concerned about power consumption, heat and noise than consumer-grade. Whatever I get will basically be stuck in my living room, so I'm pretty keen on not getting something that'll try its best to imitate a jet engine.


    While ESXi is flexible, I wouldn't call it simple. I run Proxmox, and since I don't have unlimited resources or budget, working out the best way to set up the storage back end can be something of a task. (I still haven't settled on a config - I'll work with it more this fall.)
    Any virtualization platform adds complexity. It may be worth it if you're planning on running more than one server guest and can afford enough storage for OMV and other servers, with room for growth. Along with backup, that much disk space can become "pricey".

    Simple mainly in the sense that it'd let me use any OS if I get the idea to add something I want, and that I can back up entire machines. If I don't virtualize, I'd have to have NAS software that is basically an up-to-date Linux OS. FreeNAS doesn't sound like it'd do that for me, and while OMV on top of Debian sounds better, I'm unsure how Debian-like it actually is and how well it follows developments in the OS and packages. As for virtualization, I have some experience with ESXi, and I know some people who have even more, so hopefully I can figure it out. I don't anticipate using much storage beyond the NAS part with the actual data, of course, so it's really just a question of how many different devices I want to run. VM storage won't be the point of failure for me. Unfortunately, it's not just the digital storage space that is limited for me.

    • Official Post

    If you want server grade hardware, don't eliminate the possibility of purpose built servers. There are plenty of options but it may take some patience and scouring around to find good deals.
    __________________________________________________
    As an example, I have a Lenovo ThinkServer TS140. It's an i3 that came with 8GB ECC (I added another 4GB stick for 12GB total.) I boot from a 32GB USB thumb drive, use a 4TB+4TB zmirror for data, and an extra 3TB drive (EXT4) for client backup and utility uses.


    I caught it on sale, on Amazon, for $220 some years ago. The TS140 is a SOHO server (server grade) by design. It's whisper quiet, and I do mean nearly silent. I'd have no problem with that server being in my bedroom, by the night stand. Acting as a file server and running Dockers, it's been fine - more than enough power for my purposes, and its idle power consumption is good. They may still be available as new old stock. Here's a video if you want to look a similar model over.


    If "quiet" is what you're looking for and you want server grade hardware, a new or used SOHO server might fit the bill. In any case, I'd check reviews to make sure the sound level is reasonable.
    ___________________________________________________


    I also have an Intel SC5650HCBRP. (This is the box that I bought the Dell PERC H200 controller for.) I got it for $150 and it's true server-grade hardware, heavy duty, the best of everything. I upgraded the dual Xeons to X5660s for a song (I think it was $32 for two CPUs). I installed 32GB of ECC and loaded up Proxmox. It holds six 3.5" drives and two 2.5" drives easily. For your situation, a commercial server of this type wouldn't work. While it's a tower case, it's beastly big and LOUD. (While I have it in a closet where the noise doesn't matter, I'm still giving thought to replacing the front grill and back case fans with quiet Noctua models.)
    ___________________________________________________


    I know of at least one user on this forum who started with a Supermicro server. He dumped the rack-mount case and the loud PSU and repackaged the mobo in a tower case with a quiet PSU. (While he knew what he was working with, note that some commercial server mobos won't fit a standard case.)
    ___________________________________________________


    There's nothing wrong with consumer or pro-sumer grade equipment either. I have an Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, and a 4TB rsync utility disk as a backup device.


    These are just ideas to kick around.

    If I do go with server-grade, I'm way out of my depth. I have no idea what Xeon performs like and what I'd need, and it's not terribly easy to look up benchmarks.

    CPU benchmarks are easy to look up here -> passmark. They have Xeons in their lists for comparison.

    while OMV on top of Debian sounds better, I'm unsure how Debian-like it actually is and how well it follows developments in the OS and packages.

    OMV is completely Debian: it's an application layer on top of Debian, and when updating, all packages and security updates come from the Debian repos.
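
    In practice that means updates on an OMV box look like they would on any other Debian system (the web UI essentially wraps this), e.g.:

        apt update && apt upgrade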

  • Okay, I've been dredging the market for used servers, and I've come to a few conclusions. Finding a used pre-built server seems nigh impossible, which means I'd have to either buy one new, or buy parts and put a system together myself. If I'm going to virtualize things and keep my options open, I'll also need an HBA and better hardware to support it. My options are then:

    • Get an expensive, pre-built tower system.
    • Buy used server parts and build my own system.
    • Get a less expensive pre-built system and just run it as a NAS/file server only.
    • Say screw it and go with consumer-grade parts.

    If I go with 1 or 2, I have room for HBAs, get ECC memory, enough juice to run ESXi and virtualize everything I might want, and some customization options (at least in the latter case). The first option is significantly more expensive, however, and while the second is manageable in that regard, it's still quite expensive and adds the worry of finding compatible parts to the pile, not to mention that the parts I get may be in any state of repair.
    Option 3 also gives ECC memory, but virtualization is basically out due to limited resources and space for extra cards. If I want more functions, I'll have to buy new hardware for them, but I might be able to work with an RPi or similar in that case. Cost varies, but it's generally more in the consumer range of things.
    Option 4 does not give ECC support (probably, maybe, who knows?), but I don't really know if I need it. Sure, it's nice, but I've never lost anything in 20 years of non-ECC RAM storage solutions, and what are the worst-case scenarios anyway? I'm okay with a video frame getting corrupted or a few pixels getting the wrong color in a picture, so unless I've somehow managed to miraculously avoid all the catastrophic doomsday scenarios that the FreeNAS forum tells me are inevitable unless I run ECC RAM, I think it's fine. It has the advantage of being generally cheaper, highly customizable, might even be able to run ESXi if I get a HBA card, and replacement parts are both cheap and common.


    Given the cost and workload involved, I'm leaning towards either 3 or 4 now. The world is my oyster when it comes to the latter, and I can get as much performance and expansion space as I want, so it's really just a question of budget there.


    The former is somewhat trickier, but I've been looking at the HPE Microserver Gen10, which looks somewhat promising. The price is okayish given that it's supposedly pretty good quality stuff, it has 8GB of ECC with the option to expand it, an HBA would not be necessary with no virtualization, it has dual 1Gb network ports so I can separate internet and local network traffic, and it gives me a PCIe slot for a 10Gb network card if 1Gb on the local network turns out to be too slow for my taste. With that, I get room for 2 storage disks, 1 SSD scratch drive, 1 SSD OS drive, and a spare drive slot should I need it. It also supposedly runs very quietly and at something like 50W under full load. The only question I have there is whether the CPU, either the AMD Opteron X3216 or X3418, is sufficient to run a file server that can shuffle data over the network as fast as a pair of mirrored HDDs can manage (and that's not even taking into account the possibility of future SSD upgrades), as well as potentially handle torrent traffic for a 100/100 Mbit connection. I've heard the CPUs are somewhat similar to Intel Atoms in terms of performance, but that doesn't tell me a lot.



    TLDR version: Would a Microserver Gen10 with OMV be a good option for my use case? Does it have enough juice to do what I want it to, and should I go for the cheaper dual-core or the quad-core CPU? Otherwise, I'm probably back to looking at a custom-built consumer PC.

    • Official Post

    The only question I have there is whether the CPU, either the AMD Opteron X3216 or X3418, is sufficient to run a file server that can shuffle data over the network as fast as a pair of mirrored HDDs can manage (and that's not even taking into account the possibility of future SSD upgrades)

    Generally speaking, the bottleneck would be the 1Gb Ethernet interface. Either processor and your SSDs shouldn't have any trouble saturating a 1Gb interface. I don't have a torrent client on a server, so I can't comment on that.
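
    If you want to sanity-check the link itself, iperf3 between the two boxes is the usual test (a sketch; Windows builds of iperf3 exist as well, and the IP below is a placeholder). A 1Gb link tops out around 110-118 MB/s of real file transfer throughput.

        iperf3 -s                 # on the server
        iperf3 -c 192.168.1.10    # on the desktop, pointing at the server's address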


    I've heard the CPUs are somewhat similar to Intel Atoms in terms of performance, but that doesn't tell me a lot.

    If you look at the passmark page, Atom performance runs from pitiful to very good on the high end. The top-end Atom CPUs perform on par with the AMD Opteron X3418 and my i3. (That's not to say it's practical from a budget standpoint.) While it wouldn't be good for ESXi virtualization, the X3418 won't have any trouble in a traditional file server role. With 4 cores, the X3418 could run a CLI (Web GUI) based virtual client, or a handful of Dockers, with relative ease.
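
    As an illustration of the kind of light Docker workload meant here (image name, ports and paths are just an example, not a recommendation):

        docker run -d --name transmission \
          -p 9091:9091 -p 51413:51413 \
          -v /srv/data/torrents:/downloads \
          linuxserver/transmission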


    I'm okay with a video frame getting corrupted or a few pixels getting the wrong color in a picture, so unless I've somehow managed to miraculously avoid all the catastrophic doomsday scenarios that the FreeNAS forum tells me are inevitable unless I run ECC RAM,

    I wouldn't believe all that either. (It seems all forums have at least one harbinger of data doom, death and destruction.)
    You're right, a couple of corrupt bytes here and there in a video file would likely manifest as an artifact in the viewed stream.


    It depends on what you have and how you choose to protect it. IMO, using a ZFS zmirror for data I care about (some of it goes way back) is 95 percent of the game. If files are not loaded into RAM and manipulated, ECC is not doing much for stored data. And, as noted, the zmirror keeps static storage clean.
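
    For reference, the zmirror setup and the periodic check that keeps it clean amount to something like this (a sketch; the pool name and disk IDs are placeholders):

        zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
        zpool scrub tank      # re-reads everything and repairs from the good copy
        zpool status tank     # shows scrub results and any checksum errors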


    In my case, I spent a good bit of time in researching the options and looking for (tight fisted) bargains. On the other hand, time can also be thought of as money. There's no right or wrong, just choices.


    Many users start with an old PC, throw a few drives in it and configure it up. That's actually a good starting approach. With first-hand experience, and as the platform evolves with add-ons, knowledge is gained about what users might want to do. Thereafter, selecting a replacement platform (or upgrade) that supports what they want to do going forward becomes much more obvious.

  • Alright, I'll probably be looking around for a Microserver then and see if I can get a decent deal on it, else just buy consumer parts. I don't think I can wait for good deals on used parts; I'd like to get this done, and I know I'll never get around to it unless I settle for good enough and just do it. Like I said, a lot of my data will be images, video and music, much of which I can replace if lost, but some of it (mainly home-made pictures and video) can't. Most of it will just be cold storage, only read to view or upload to friends/family occasionally, and maybe deleted when I no longer want/need it. If I get a few errors in the data that I can't detect, so be it, and if my hardware dies or the entire file system gets corrupted, hopefully my external backup will still be alive and well. I may be overthinking this, I just want to do it right from the start, you know?


    Thanks for all the help. If nothing else, at least I feel a lot more comfortable with my options now than when I started.

    • Official Post

    Like I said, a lot of my data will be images, video and music, much of which I can replace if lost, but some of it (mainly home-made pictures and video) can't.

    I don't know what you had in mind for data integrity protection, but have you looked at SNAPRAID? One disk will protect all of your data, allow for recovery of files and folders as of the last SYNC, and will detect and correct bit-rot. It's worth a look, and it's supported in OMV, with a plugin, in the GUI.
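
    For a picture of what that looks like, a minimal /etc/snapraid.conf is roughly this (mount points are hypothetical; the OMV plugin generates the equivalent for you):

        parity  /srv/dev-disk-by-label-parity/snapraid.parity
        content /srv/dev-disk-by-label-data1/snapraid.content
        content /srv/dev-disk-by-label-data2/snapraid.content
        data d1 /srv/dev-disk-by-label-data1
        data d2 /srv/dev-disk-by-label-data2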


    Of course, you still need that backup drive (nothing is a substitute for backup - a full second copy) but it would be nice to know that the data you're backing up is not corrupted.
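
    The backup itself can stay dead simple; something like an rsync push to the backup box (host and paths are placeholders) is plenty for mostly-cold data:

        rsync -aHv --delete /srv/data/ backup-nas:/backups/fileserver/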

  • It was my understanding that SnapRAID was some variation of RAID4, but I could be mistaken. Would that not require me to get another disk, and how would that work with 2 mirrored disks? I suppose I could skip mirrored disks entirely, but my experience with Windows parity storage spaces (RAID5 equivalent, I believe) is quite awful when it comes to write performance and not great with reading either.

  • It was my understanding that SnapRAID was some variation of RAID4, but I could be mistaken.

    You are mistaken. There is no striping involved and the parity creation is not done continuously on the fly, only on demand.


    • Official Post

    SNAPRAID takes the best feature of RAID5 (the ability to recreate a failed disk) and adds a lot of highly desirable features, including protection from bit-rot, file and folder restoration, and others.
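
    Those features map to a handful of commands (a sketch; the OMV plugin can schedule these for you, and the file path is just an example):

        snapraid sync             # update parity after files have changed
        snapraid scrub            # periodically verify data against parity (catches bit-rot)
        snapraid fix -f somefile  # restore a file or folder as of the last sync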

  • Okay, so theoretically, I could have 2x4TB in a mirror for performance, and then 1x4TB for SnapRAID parity? Would there be any complications from using it on a RAID pool rather than on a single disk, e.g. if one disk in the mirror flips a bit but the other one doesn't?

    • Official Post

    Okay, so theoretically, I could have 2x4TB in a mirror for performance,

    NO, there is no performance gain from using RAID here. With SnapRAID, the drives are used individually, so 3x4TB = 2x4TB for data and 1x4TB for parity.

  • The performance gain would be from the mirror, not the 3rd disk for parity. But if SnapRAID doesn't work against a mirrored pool and requires 3+ disks in a RAID4/5/6-like setup, the entire point is moot. The whole point of using a mirror in the first place was to gain that read performance. I was hoping it could be used to basically check two disks in a mirror for errors and copy the data from the good disk to the bad disk if one is detected, while the mirror otherwise behaves as it normally would. If it can't do that, then I'd have to reevaluate the situation.
