Posts by aitrus00

    Well, I got it working finally. I think I was partially correct in my original assumption that there was a screw standoff on the case shorting the board (one didn't line up where it should have). The second thing was what @cadre mentioned about overtightening the CPU cooler. I removed everything, reseated it all, and voila, it posted! :)


    So now I can see all 8 drives in OMV, all formatted. Here are my questions:


    • What file system should each drive be to maximize the setup with UnionFS?
    • Best policy to use in UnionFS for mount options?

    Also, is there anything else I should know for best practice settings for SnapRAID and UnionFS? I want to make sure I get this right from the start :)
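    For reference on the UnionFS question above: a commonly suggested starting point in that era was ext4 on each data drive, pooled with mergerfs. A sketch of what the resulting mount looks like in /etc/fstab is below; the mount points and the ext4 choice are assumptions on my part, and in OMV the union filesystems plugin generates this for you from the GUI, so treat it as illustration only.

    ```
    # /etc/fstab sketch - assumes the six data drives are already formatted
    # ext4 and mounted at /srv/disk1 .. /srv/disk6, and mergerfs is installed.
    # category.create=mfs writes new files to the branch with the most free
    # space; minfreespace stops writes to nearly full drives.
    /srv/disk* /srv/pool fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=100G 0 0
    ```

    The create policy (mfs vs. epmfs etc.) is the main tunable; the mergerfs documentation describes the trade-offs.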

    Yeah, I did; that's how I found that RAM. I really think I missed a screw standoff on the MB somewhere; it feels like it's shorting out because of that. I will report back when I have time to gut it and test each component individually.

    Verify that the RAM you have installed meets all requirements - correct number of sticks, installed in correct slots.

    This is what I have:


    MB: SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3
    CPU: Intel Xeon E5-1650 V3 Haswell-EP 3.5 GHz 6 x 256 KB L2 Cache 15MB L3 Cache LGA 2011-3 140W Server Processor BX80644E51650V3
    RAM: HYNIX HMA41GR7MFR4N-TF Hynix DDR4-2133 8GB1Gx4 ECCREG CL15 Samsung Chip Server Memory (x4)


    I made sure, initially, that all the RAM sticks were loaded in the blue slots as the manual said. Then I removed all but one stick, and it was still doing the same thing, so I am at a loss. I am guessing maybe a screw standoff on the case is shorting out the board somewhere. I won't really know until I gut it, run the motherboard by itself with one stick, and see what happens. Does the configuration above look correct?

    I just put everything together last night and powered the system on this morning. It powers up the fans for about 3 seconds, then shuts down, then does it all again. Any ideas? I tried a different power supply; that wasn't it. I left just one stick of RAM in; that didn't fix it either. Thoughts? I would hate to have to dismantle everything and start over :(

    I did settle on a power supply @jollyrogr thanks for weighing in though :)


    I finally have all the parts in house and am ready to start putting it together. Now I have a question about how to partition these drives. There are 8 drives of 8 TB each; I was initially thinking of keeping 2 drives as parity to be able to withstand 2 drives failing, but I'm not sure if that is overkill. Thoughts? Also, I watched some videos a while back from TechnoDadLife about using UnionFS/SnapRAID, but that was with OMV 4.x. I have seen talk about mergerfs as well; I just want to make sure I have the right focus initially when setting this up. Want to get it right the first time :)


    As always, opinions are appreciated.

    I would go with OMV 5.x.

    Same as OMV 4.x. Just install updates from the Updates tab.

    You are a gentleman and a scholar! Thank you so much! I'm sure I will be back on the forums, but probably with questions related to OMV instead of hardware, can't wait to get this monster up and running, so much possibility! :)

    That would probably be a good choice.

    Awesome, thanks so much for all the advice (across the entire thread). One extra question concerning OMV itself: I will probably be building this system completely within the next month or two. Is OMV 5.x the way to go right now, or is there a reason I should stick with OMV 4.x? I am trying not to have to redo work in the coming months, so I would prefer to be on the latest and greatest if, in your opinion, it is stable enough. And what is the upgrade procedure for OMV 5.x when updates are released? Is it just a few clicks in the GUI? Command-line updates? A full rebuild with a new build version?

    Those are both good power supplies, but they're expensive and I don't think you need power supplies that big. My main server has an E5-2697 v3 (145 W TDP vs. the E5-1650 v3's 140 W TDP), four hard drives, four SSDs, and one NVMe drive, and it never comes close to drawing the PSU's full capacity.

    I appreciate you. The link to the one you used (https://www.amazon.com/gp/prod…_asin_title?ie=UTF8&psc=1) is out of stock, with only used units being offered. I checked Newegg and found it (https://www.newegg.com/evga-su…klink=true#scrollFullInfo), but the reviews there are quite different from the ones on Amazon, and a bit concerning. Can you recommend a different but similar one to what you used that would work? Trying to purchase the rest of the components I need to finally complete this build after so long :)

    Why is the number of rails important? As long as the power supply has 9 SATA power connectors, you should be fine. I had 20 hard drives and 8 SSDs on this power supply - https://www.amazon.com/gp/prod…_asin_title?ie=UTF8&psc=1. And this power supply (single rail) has worked well on two of my systems and would be enough for your system (it would need 3 4-pin Molex to SATA adapters though) - https://www.amazon.com/gp/prod…h_asin_title?ie=UTF8&th=1
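    To put rough numbers on why even a modest PSU covers this build, here's a back-of-envelope worst-case budget. Every figure below is an assumption of mine (generic TDP and spin-up estimates), not a measurement of this hardware:

    ```shell
    # Back-of-envelope PSU budget - all per-component figures are assumptions
    cpu=140                        # E5-1650 v3 rated TDP in watts
    drive_count=9                  # 8 data HDDs + 1 boot SSD
    per_drive=25                   # generous worst-case spin-up draw per drive (W)
    drives=$((drive_count * per_drive))
    board_ram_fans=60              # rough allowance for board, 4x DIMM, fans
    total=$((cpu + drives + board_ram_fans))
    echo "worst case: ${total} W"  # prints "worst case: 425 W"
    ```

    Spin-up is the peak; steady-state draw is far lower, so a quality 550-650 W unit still leaves plenty of headroom.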

    Thanks for the comments as always, ryan. The 2 supplies I found and edited into my post were added after your reply to my initial post. What do you think of the ones I listed? I wouldn't want to go below 650W with the E5-1650 that will be in this build.

    So I am struggling with what power supply to get. I have an SSD and 8 storage drives, so I need a power supply with enough connectors and rails to support all of that. Can anyone recommend a good one? I looked into the ones mentioned here, but I didn't see any with enough connectors for that many drives.


    @jollyrogr, the case I am using for the build is: https://www.amazon.com/gp/prod…_asin_title?ie=UTF8&psc=1


    Edit: I did find these 2 that would support 10x SATA. Thoughts? The Platinum version is $20 more than the Gold; is the Platinum worth the extra money, or is the Gold version enough for my needs?


    https://www.newegg.com/seasoni…-_-Product&quicklink=true
    https://www.newegg.com/seasoni…pd-850w/p/N82E16817151209

    I think I am going to go with this:


    MB: SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3
    CPU: Intel Xeon E5-1650 V3 Haswell-EP 3.5 GHz 6 x 256 KB L2 Cache 15MB L3 Cache LGA 2011-3 140W Server Processor BX80644E51650V3
    RAM: HYNIX HMA41GR7MFR4N-TF Hynix DDR4-2133 8GB1Gx4 ECCREG CL15 Samsung Chip Server Memory (x4)


    Next question I have would be how much wattage should I target for a power supply? Can someone recommend a good one, knowing what I am putting in this monster with HDs and components?

    Thanks for all the comments. I'd like to go the AMD route if possible for the lighter power consumption, and I'm fine with adding a dedicated video card if need be. The issue I have is finding AMD boards that have 10 SATA ports and IPMI. IPMI is a luxury I suppose I can do without, so it's not a big issue if I don't have it (unless those of you who have used IPMI consider it a feature you can't live without now).



    I am happy with my new M5015 (LSI 9260-8i)... I use 7 SATA disks of 2 TB each. Next time I may use an LSI 9266-8i, but this one was cheap and I can't use PCIe 3 anyway. The more important question is which disks are used: 2 or 4 TB WD Red drives are OK, but shingled (SMR) drives like the Seagates are very bad! So it's easy to get fast and reliable 25 TB of storage (8 x 4 TB)... but 50 TB... hmmm... OK, maybe with 2 controllers, 2x PCIe x8 onboard, and a huge chassis with 16 drive bays. But if you plan for that, then a SAS storage array expander and SAS drives may be a better choice, and it makes little sense to do software RAID on 16 disks anyway...


    I am using 8TB WD Red (5400 RPM) drives. I have 8 of them, leaving 2 SATA ports for my boot drive (if I get a board with 10x SATA on it). I don't think I have a full understanding of the recommendations here: whether to go Intel or AMD, whether the configurations above would work for what I am looking to do, or whether I should take a totally different route with MB/CPU/RAM.

    Hello all,


    With AMD making a charge since Threadripper was released, I was wondering if Intel is still the preferred platform for a NAS server build these days. My usage for my NAS is as follows:

    • 4K movie streaming across gigabit LAN and wireless (less so over wireless) via the Orbi mesh Wi-Fi I have in the house
    • Doing home automation with HASS.IO
    • Potentially spinning up a VM (or other Dockers) for web development
    • Backups of personal computers in the house (2)

    Basically, this will be my main media/streaming server. I know AMD has always led the way with more efficient power consumption, so can someone shed some light on whether I should still be looking at Intel, or whether there is a more efficient AMD server-class board that would be better? I need a board with at least 10 SATA ports, and ideally with IPMI for headless usage. Any assistance would be appreciated, thank you!


    2 Configurations I was weighing up (Intel):


    MB: SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3
    CPU: Intel Xeon E5-1650 V3 Haswell-EP 3.5 GHz 6 x 256 KB L2 Cache 15MB L3 Cache LGA 2011-3 140W Server Processor BX80644E51650V3
    RAM: HYNIX HMA41GR7MFR4N-TF Hynix DDR4-2133 8GB1Gx4 ECCREG CL15 Samsung Chip Server Memory (x4)


    MB: GIGABYTE C246-WU4 LGA 1151 (300 Series) Intel C246 SATA 6Gb/s ATX Intel Motherboard
    CPU: Intel Xeon E-2136 Coffee Lake 3.3 GHz LGA 1151 80W BX80684E2136 Server Processor
    RAM: G.SKILL Ripjaws 4 Series 32GB (4 x 8GB) 288-Pin DDR4 SDRAM DDR4 2400 (PC4 19200) Desktop Memory Model F4-2400C15Q-32GRR


    The first configuration is more expensive but I get IPMI with it. Second configuration is cheaper, without IPMI, but the processor uses almost 50% less power. Thoughts? Other suggestions?

    OK, so I was able to get six WD Red 8TB drives at a really cheap daily-deal price on Amazon. My question is this: since I am going to want 2 more of those drives, can I add them to my pool in OMV after the fact, or should I wait until I have those 2 drives before doing my configuration?


    After all this discussion this is what I settled on:


    (8) WD RED 8TB drives
    ASRock C3758D4I-4L Mini ITX Server Motherboard SOC
    Fractal Design Node 804 Black Window Aluminum/Steel MATX Cube Computer Case
    E5-2420v4 CPU
    (4) Samsung DDR4 2133MHzCL15 8GB RegECC 1RX4 (PC4 2133) Internal Memory M393A1G40DB0-CPB
    Seasonic FOCUS Plus 650 Platinum SSR-650PX 650W 80+ Platinum
    SSD SATA 2.5" 120GB Dogfish Internal Solid State Drive
    Noctua NH-U12DX i4, Premium CPU Cooler for Intel Xeon LGA20xx (Brown)


    Am I missing anything? Does all that look good for the build? Appreciate all the feedback from everyone.

    Yep, I would get rid of it since I don't have another board to put it in. It is just the cpu (no heatsink - that is how it came). I use a Noctua NH-U12DX i4 heatsink. PM me if you are interested.

    I have a EVGA Supernova 650 P2 power supply but any good quality, platinum rated power supply should be good.

    Yes.

    I back up about 10TB to a second server and LTO tapes (don't even look at the prices for those, lol). 60TB is a lot, but I wouldn't go without backups forever.

    I won't be storing 60TB up front; currently I have almost 6 TB of data. But I wanted a lot of room to grow, since I don't have any 4K content yet but will in the future, not to mention potentially using this for surveillance video storage as well.

    Doesn't matter, as long as you tell the BIOS to boot from it.

    16GB+ is fine. I don't think you can get an SSD smaller than that anyway.

    You definitely don't need that CPU. It is a 14 core cpu that I got from a friend. I started with an E5-2420v4 which should work well for your needs (which I still have if you are in the US).

    I would get identical drives. The PRO drives would bust your budget and I don't think they are needed. I got them for a project that is now using SSDs.

    For that CPU, I found it new on newegg.com for $420. You mentioned still having that processor; I am in the US, were you wanting to sell it? Also, what power supply would you recommend? And one more question about the parity drives: if I am planning to get 8 total drives, would you recommend the way I was thinking of doing it (SnapRAID/UnionFS with 6 data drives and 2 parity drives) for fault tolerance? I like the idea of withstanding 2 drive failures, as I don't know how feasible it is to back up terabytes of data :)
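    For reference, the 6-data/2-parity layout asked about above maps onto snapraid.conf like this. The mount points are hypothetical, and in OMV the SnapRAID plugin writes this file from the GUI, so this is just to illustrate the shape of a dual-parity setup:

    ```
    # snapraid.conf sketch - 6 data drives, 2 parity drives (paths are examples)
    parity /srv/disk7/snapraid.parity
    2-parity /srv/disk8/snapraid.2-parity

    # keep multiple copies of the content file on different drives
    content /var/snapraid.content
    content /srv/disk1/snapraid.content
    content /srv/disk2/snapraid.content

    data d1 /srv/disk1
    data d2 /srv/disk2
    data d3 /srv/disk3
    data d4 /srv/disk4
    data d5 /srv/disk5
    data d6 /srv/disk6
    ```

    With the `2-parity` line present, SnapRAID can recover from two simultaneous drive failures, matching the fault tolerance discussed in the thread.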

    Is noise level a concern? If the NAS can be loud, you could consider a 2U rackmount server chassis with 12 3.5" hot-swap bays. It is pretty easy to find reliable hardware in those.

    Not sure if noise level is a concern; we're moving into an older 1930s Tudor house, and I'm not sure where I could put a rack, not to mention that running network cabling could be challenging. I actually have an electrician coming out to survey the house and see what is possible. I'm getting mesh Wi-Fi (Orbi) to assist with the 4K streaming around the house regardless of whether I wire anything.



    It is my primary VMware ESXi server (currently 36 VMs) with one Xeon E5-2697 v3 and 128 GB of this RAM in it.

    Yes but I would leave the boot drive connected to the motherboard. The LSI 9211-8i clones are cheap and easy to put in IT mode giving you at least 8 more drives.

    I wouldn't worry about drives tested with the board.

    Thanks so much for all the information. My Synology is starting to get fritzy, so I am going to look at ramping up this project. I will look into the CPU/RAM you suggested for that initial board; I like the extra SATA ports on it so I can expand if I need to. I'm assuming I should put my SSD boot drive on SATA0 and the rest on the following ports? What size SSD is good for OMV and its config? All other data will be on the data drive pool (60TB).


    As for the Xeon E5-2697 v3 CPU, prices range from $1500 to $3000 for the CPU alone, and that might be a budget breaker :) I'm trying to stay at a max of $3000 (including all the hard drives). Since I will be using this mainly as an OMV server for 4K media/music streaming, with maybe another VM for my web design, can you recommend something a bit more scaled down for the CPU based on my needs?


    And for the parity drives, should I just get identical ones (6x for data, 2x for parity), or use something else for the parity drives? I noticed you mentioned the WD Reds in another thread, and that you had some PRO drives as well.


    - Aitrus