Replacement for the HP N54L - Or, My new [mammoth] home server :)



    • Replacement for the HP N54L - Or, My new [mammoth] home server :)

      I have recently built a replacement home server that has superseded my old trusty HP N54L. I would like to say thanks to all who patiently gave input, suggestions and help etc in this thread: Mobo and other HW suggestions wanted for new OMV box with ECC - the finished article would not have been possible without them.

      Project Goals:

      1. A media server that will easily stream 4K to several devices in the house at once
      2. A place for backups from our laptops that will protect the integrity of family photos/videos
      3. Wife Seal of Approval
      4. Other things in the future.

      Wife Seal of Approval?

      We all know that when we want to buy a new toy we have to think about the Wife Approval Factor. Geek girls - you don't have this problem, because when you say to your husband/partner that you want to build a l337 media server, he's probably drooling already and telling his mates how his Mrs is awesome. However, any guy who has set out on a project in the name of making life easier through IT that will benefit the house will have been stung at some point. My wife probably gets fed up with me talking about my latest project ideas, but she's incredibly tolerant, occasionally gives a lot of helpful input and helps with various parts of the process (although I don't think she'll ever forgive me for the Xbox 360 cooling project that was a lot more hassle than it needed to be). She doesn't usually mind when I want to invest in a project but:

      1. It must 'just work' - when the wife wants her music or movies or whatever on her device of choice, it must work. No one likes investing in something that doesn't do what it says.
      2. No upgrading and piddling around with hardware every 6 months - RAM upgrades and scheduled software maintenance are allowed, usually when she is out with the girls

      In fact, my wife has played an active part in this project, helping to choose certain components and assemble the build as well. For more information on the Wife Approval Factor, please see this wiki: en.wikipedia.org/wiki/Wife_acceptance_factor

      Without further delay, here is the finished article:

      [IMG:http://i.imgur.com/AYUeVPK.jpg]

      Please note that the WAF Wiki article references the use of 'appealing colors' to gain WAF - My wife chose this chassis.

      The build:

      CPU: Intel Xeon E5-2695v3 [preowned]
      CPU Cooler: Corsair H105
      Thermal Compound: Arctic MX4
      Mobo: ASRock X99 WS
      GPU: Nvidia GeForce 9500GT
      RAM: 32GB DDR4 ECC (2 x Crucial 16GB DDR4-2133 RDIMM (CT16G4RFD4213))
      PSU: Seasonic Platinum SS-400FL2 400w Modular Fanless
      Chassis: Thermaltake Core V71
      Drives (RAIDZ2): 8 x Seagate ST4000DM000
      OS drive: Kingston SSDnow V300 120GB
      SATA cables: Blue flat for OS SSD, 8 x Black rounded for array drives (generic, no links)

      CPU - Xeon E5-2695v3:

      The main reason for using a Xeon CPU was ECC support coupled with the power to last. The use of the ZFS file system pretty much requires ECC, so my choices were: Avoton, some of the i3s and the Xeons (or AMD, but I don't like them, despite the N54L having an AMD). Since I wanted this system to last a reasonable amount of time, the Xeons ended up being the solid choice. This then gave me the problem of budget. Initially I was considering an E3 series Xeon or a lower end E5, but a friend of mine who sells refurbished components sold me an E5-2695v3 for £350 and a crate of beer. With 14 cores at a base frequency of 2.3GHz (turbo up to 3.3GHz), it's one of the most powerful CPUs this board can take and is unlikely to need changing for the life of the build.

      CPU Cooler: Corsair H105

      When we decided on our motherboard choice (see below) we noticed that, unlike a lot of server boards, it has a square ILM (pitch). This opened up the use of pre-built CPU water coolers, and after looking through a lot of review sites the wife and I decided on the Corsair H105 due to its extreme cooling performance and reasonable price. We were initially concerned about noise, due to one YouTube video showing it to be a little loud, but we don't regret the choice - it's a fantastic unit and very quiet even under load. My only complaint about Corsair was that, until recently, they had no UK RMA address in the event things went wrong. This has now changed.

      Thermal Compound: Arctic MX4

      We opted to remove the stock Corsair paste in favour of MX4, which I totally love. We're huge gamers and our Xboxes and PlayStations have all been re-pasted with MX4 and we've never had any problems with them. MX4 was certainly a good choice for the Xeon too: the highest I've seen it get is 35C on a hot day. Great paste, great price, great performance - what more could you want?

      Motherboard: ASRock X99 WS

      This was the hardest part of the system to settle on and, as those who were involved in the honing process will know, ended up being a small project in itself. Again due to ZFS, we needed to find a board that supported ECC - ECC only works if the motherboard, CPU and RAM all support it. Originally, as mentioned, I was looking at an Avoton all-in-one board because, oddly enough, those little Atoms support ECC. We decided against this as we felt that the CPU would probably need upgrading before the build was retired/reassigned, and being that they are integrated BGA units, the board would need to be replaced as well.

      We initially chose to go with a Supermicro X10SRA-F, which was a reasonable £282. At first things appeared to be good, but the board suffered from some IPMI bugs which Supermicro were unable to fix. When we sent the unit back for RMA we hit another problem: we had already committed to the Corsair H105 and didn't want to change it, which completely limited us to boards with a square pitch. The only other server boards with a square pitch that ticked all the other boxes were the ASUS Z10PA-U8 and Z10PA-U8/10G-2S, and I've had a handful of negative experiences with Asus, so these were not an option. At this point I was concerned because I wasn't sure if it was actually going to be possible to keep the Corsair.

      However, checking through some of ASRock's high-end workstation boards, I noticed that the X99s on their site list support for ECC. From talking with others, it's been noticed that sometimes a manufacturer will say that a board supports ECC memory because it will run with it, yet the ECC feature is never enabled. Being that ECC is notoriously difficult to prove as actually enabled, I wanted to make sure, so I dropped ASRock an email. They confirmed that their X99 boards that list ECC do work with, and enable, ECC providing a Xeon CPU and ECC memory are used. Pleased with this response, we went for the ASRock X99 WS from Amazon for £254.99, which looks stunning covered in blue heat sinks and has an absolutely amazing UEFI. Unfortunately the board doesn't have IPMI or integrated graphics, but the many benefits of this board and the excellent build quality won through. In terms of Debian support, everything was detected and no restricted firmware was needed.
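
      As a quick first look from the Linux side (before the Windows/AIDA64 method further down), dmidecode will at least show what the firmware claims about the memory. A minimal sketch, run as root - bear in mind this only shows what is reported, not proof that corrections are actually happening:

      Source Code

      dmidecode -t memory | grep -i 'error correction'
      # hoping to see: Error Correction Type: Single-bit ECC (or Multi-bit ECC)
      dmidecode -t memory | grep -i 'total width'
      # ECC modules normally report Total Width: 72 bits against a 64-bit data width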

      Update: 13/7/2015: There is now a way to verify if ECC is working but it uses Windows. See below.


    • Replacement for the HP N54L - Or, My new [mammoth] home server :)

      GPU: Nvidia GeForce 9500GT

      Unfortunately, due to the ASRock board being designed for you to pretty much dump 4 high-end GPUs in it, it unsurprisingly doesn't have integrated graphics. This really wasn't a big deal for me though: I dug around in a box of tech from yesteryear and found an Nvidia GeForce 9500GT from an old gaming tower. Despite being the hottest component in the entire build at 38-39C, and delaying the POST ever so slightly, it's serving a purpose. The open source Nouveau driver works OTB in Debian 7 and, with the exception of the lightdm login screen wallpaper occasionally pixelating on first boot only, everything else works fine. I have now uploaded a photo of this here.

      RAM: 32GB DDR4 ECC

      The memory we ended up going for is 2 x Crucial 16GB DDR4-2133 RDIMMs, product code CT16G4RFD4213. Not the fastest memory around, but it's ECC and it was pretty much all I could afford to spend, being that the entire project had gone way over the original estimation. At £145 per stick, it gets the system up and running for the foreseeable future and is more than enough for a 20TB ZFS pool. The ASRock board can take UDIMMs, but I had already purchased the RDIMMs for the Supermicro, which only took RDIMMs and LRDIMMs. This has actually worked out for the best though, because I had trouble finding DDR4 UDIMMs, and to max out the ASRock board with its total of 128GB it would be 1 x 16GB in each slot, so this memory will never get wasted when upgrading - which would have happened if I wanted to max out the Supermicro with LRDIMMs.

      PSU: Seasonic SS-400FL, 400w Platinum Fanless and Modular

      I was very impressed when this arrived. I got it from a company on eBay and it was just under £100. The 460w was only a few quid more, but I didn't want to go any higher because of heat generation. As it happens, the entire system with all the drives in uses only ~89w at idle. The build quality is fantastic and the modular cables came in their own Velcro bags. The PSU was packed in a velour bag and surrounded with thick foam. The PSU itself doesn't appear to produce much heat, and is obviously silent. The only small complaint is that I would have liked an extra SATA power cable with 4 connectors on it; instead there was one with 2 and one with 4. I needed to power 9 drives, so I bought a StarTech 4 x SATA power splitter for the 2-connector cable, so it now has 5.

      Chassis: Thermaltake Core V71

      Beast! Beast! Beast! Whilst this is nothing like the original chassis (Node 804) that I really wanted, I am very pleased that we ended up with this. The problem we encountered is that it's very difficult to find non-rack hardware that will accommodate at least 8 x 3.5" drives and, more importantly, keep them cool. Surprisingly, the Node 804 can do just that, but not as well as the Thermaltake Core V71, with the minor sacrifice of space. Thanks to its heavily perforated panels and 3 x 200mm fans, hard drive temps sit at about 35C even after spending a fair few hours copying data onto the array. It's pretty hot here in the UK right now - about 24C ambient at the moment (yes, that's hot for us). The chassis has removable sides, top and front and 3 dust filters (front, top, bottom); there is a lot of room in this case for routing and hiding the various wires and plenty of grommets. If I had to be critical of this chassis, it's that there is no easy way to pick it up by yourself as there are no real grab points near the top - it's far easier to grab it near the bottom if you have long arms. It measures in at almost 2ft tall/deep and just over 9" wide, so it's not the easiest to move around. The chassis has power and reset buttons on the top front as well as 2 x USB 2, 2 x USB 3, headphone and mic jacks, an HDD indicator LED, an integrated fan speed controller (high/low) and a lights-out button (to kill the blue LEDs in the 3 x 200mm fans). For unboxing pics of this chassis, see here.

      Hard Drives: 8 x Seagate ST4000DM000s for the array, Kingston SSDnow V300 120GB for the OS

      The Kingston is a good performer with ~520MB/sec write performance; strangely, Kingston list this at about 70MB/sec less, but the proof is in the benchmarking. This is probably never going to be maxed out but I thought the Xeon deserved it. Needless to say it boots quick (see stats). I wasn't too worried about mirroring this drive as I can get the OS up and running again quickly in the event of a failure. For the array, I will be using some of the drives that I already had in my HP N54L, the Seagate ST4000DM000, and making the number up to 8 with more of the same. It's not the fastest drive on the planet, but it has a few very good plus points for me:

      1. 5900RPM - low power but decentish performance
      2. I already have 5 of them so I don't need to buy many more (I prefer not to mix models)
      3. Working in a RAIDZ2, the lower spin speeds shouldn't impact performance much and will still be able to saturate 6 Gig LAN connections.
      4. At the time of purchase, I could get them for £85 - making them by far the cheapest 4TB drive in the UK (sadly, by the time I did this write-up they were no longer on sale). I have cheated slightly here though - see below.

      As it happens, these drives run cool and quiet. Even when I'm working next to the chassis, you wouldn't know there are 8 drives spun up in there. It looks like each drive only draws about 5w at idle, which is great. I was able to get these drives cheap by taking a bit of a gamble. I know from experience that the ST4000DM000 is used in quite a few of the Seagate Backup Plus range. Searching around I found these on offer at eBuyer for £85 each. I decided to take a risk and open one up, and it was indeed an ST4000DM000 - see here. This obviously carries the risk that Seagate won't honour a warranty, but I don't believe this to be the case. There were no warranty stickers on the drive enclosure, only the drive. I can't see why Seagate wouldn't honour a warranty on a failing drive if it's removed from the enclosure, but then they may not. At £85 though, minus the £10-15 I'd get for selling the enclosure, it's a very cheap 4TB indeed.
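
      If you do go down the shucking route, it's worth confirming what's actually inside and keeping an eye on the temps. A quick sketch using smartmontools - the device name here is just an example, adjust for your own drives:

      Source Code

      apt-get install smartmontools
      smartctl -i /dev/sdb | grep -E 'Device Model|Serial'   # should report ST4000DM000
      smartctl -A /dev/sdb | grep -i temperature             # SMART temps - mine sit around 35C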

      Cables:

      Not really much to say about these, but I ended up going for generic SATA cables from China on eBay. I wanted braided 4 x cables like the ones that Lian Li make, but you can't get them easily in the UK. Instead I settled for a 45cm flat neon blue cable for the OS SSD, and 45cm black rounded cables for the array. I chose rounded because they would be easier to manipulate.

      The Software:

      OS: Debian 7 (XFCE Desktop)
      OMV: 1.x on top of a Debian installation then upgraded to Stoneburner 2.1 with omv-release-upgrade
      Media Server: Emby
      Filesystem: Btrfs on OS drive, ZFS on RAID (via OMV ZFS Plugin)

      Quick software summary:

      OS: Debian! <3

      OMV: 1.x branch of OMV, but I like to have a full system with a GUI so I chose to install OMV on top of a fresh Debian install. (I have since moved to Stoneburner using the omv-release-upgrade command)
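
      For anyone wanting to do the same (OMV on top of a plain Debian install rather than the ISO), the rough procedure at the time was along these lines. Treat it as a sketch from memory and check the OMV documentation for the exact repository line for your release:

      Source Code

      # add the OMV 1.x (Kralizec) repository to a fresh Debian 7 install
      echo "deb http://packages.openmediavault.org/public kralizec main" > /etc/apt/sources.list.d/openmediavault.list
      apt-get update
      apt-get install openmediavault-keyring postfix
      apt-get update
      apt-get install openmediavault
      omv-initsystem    # initialise OMV, then log in to the web GUI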

      Media Server: I am using Emby, where previously I was using Plex. As great as Plex was, it had a few really annoying metadata issues, and Emby solves all of these and has great features. Emby is a little more resource heavy than Plex because it uses the Mono libraries; this is one of the reasons why I have ended up retiring my HP.

      Filesystems: I am using BTRFS for the OS drive and ZFS for my array. The reason I have chosen ZFS is that I am looking to protect the data stored within it. Unfortunately, BTRFS (an up-and-coming Linux-native FS similar to ZFS) is not stable for RAID5/6 at the moment. ZFS was designed for Unix but has been ported to Linux by the guys at ZoL and it's fully functional. With the help of the OMV ZFS plugin, this is made easy.
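
      For reference, creating the RAIDZ2 pool by hand boils down to something like the sketch below - the OMV ZFS plugin does the equivalent for you from the GUI. The device names are placeholders, so use the /dev/disk/by-id paths of your own drives:

      Source Code

      zpool create -o ashift=12 Tank raidz2 \
        /dev/disk/by-id/ata-ST4000DM000-DRIVE1 /dev/disk/by-id/ata-ST4000DM000-DRIVE2 \
        /dev/disk/by-id/ata-ST4000DM000-DRIVE3 /dev/disk/by-id/ata-ST4000DM000-DRIVE4 \
        /dev/disk/by-id/ata-ST4000DM000-DRIVE5 /dev/disk/by-id/ata-ST4000DM000-DRIVE6 \
        /dev/disk/by-id/ata-ST4000DM000-DRIVE7 /dev/disk/by-id/ata-ST4000DM000-DRIVE8
      zpool status Tank    # should show one raidz2 vdev with all 8 drives ONLINE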


    • Replacement for the HP N54L - Or, My new [mammoth] home server :)

      The Actual Build

      Now for some build pics. Some of these show the original black Supermicro board and blue box, as it took a couple of weeks working with Supermicro before it was RMA'd. The ASRock appears after the first couple - you can't miss it!

      Pile of goodies [IMG:http://forums.openmediavault.org/wcf/images/smilies/smile.png]

      [IMG:http://i.imgur.com/o2MtSVS.jpg]

      SSD, SATA Power Splitter and extra LEDs (LEDs not going in at the moment)

      [IMG:http://i.imgur.com/FBnp279.jpg]

      Empty Chassis:

      [IMG:http://i.imgur.com/M76sGzm.jpg]

      Board and PSU in:

      [IMG:http://i.imgur.com/nkFxChY.jpg]

      CPU in, next to a 'nest of vipers' as @bobafetthotmail would call it [IMG:http://forums.openmediavault.org/wcf/images/smilies/biggrin.png]

      [IMG:http://i.imgur.com/DbPwlVr.jpg]

      Woohoo, new board here:

      [IMG:http://i.imgur.com/L6lZHmG.jpg]

      Corsair H105 on replacement ASRock Mobo:

      [IMG:http://i.imgur.com/EBvUGHL.jpg]

      Nvidia GPU in and room for expansion [IMG:http://forums.openmediavault.org/wcf/images/smilies/thumbsup.png]

      [IMG:http://i.imgur.com/MJWWdin.jpg]

      No longer a 'nest of vipers' and tidy SATA cables...

      [IMG:http://i.imgur.com/BDL2F0S.jpg]

      'But how did you tame that viper nest?' I hear you scream.

      Why, lots of cable ties...

      [IMG:http://i.imgur.com/GXqztk9.jpg]

      Hard Drives...

      [IMG:http://i.imgur.com/vtoSXTn.jpg]

      Now for some pretty lights. [IMG:http://forums.openmediavault.org/wcf/images/smilies/biggrin.png]

      Top panel powered on

      [IMG:http://i.imgur.com/2p4zimc.jpg]

      Haven't taken the protective film off the perspex window in this one..

      [IMG:http://i.imgur.com/vrFpJoC.jpg]

      Corsair block with blue plastic trim..

      [IMG:http://i.imgur.com/HGY9CZl.jpg]

      Front panel

      [IMG:http://i.imgur.com/4i8ad9b.jpg]

      Glowing hard drives.. I'm pretty sure that blue plastic would be UV sensitive [IMG:http://forums.openmediavault.org/wcf/images/smilies/evil.png]

      [IMG:http://i.imgur.com/uTjzmC6.jpg]

      Through the looking glass...

      [IMG:http://i.imgur.com/tZmvQkZ.jpg]

      It's a bit dark in there which is why I originally ordered the extra purple LEDs.. in practice, I probably won't put them in long term... but maybe just for one photo at some point [IMG:http://forums.openmediavault.org/wcf/images/smilies/smile.png]

      Now the moment you've all been waiting for...

      Vital Statistics

      Pricing

      When thinking about pricing, it's important to remember that I owned a couple of the Seagates already (and cheated with the rest!), and I got the Xeon preowned at a severely discounted mates rate. I would not have been able to get the E5-2695v3 brand new. That said, the purpose of this should be to give others on the forum an idea of what it would cost if they wanted to build this rig. I've linked to the cheapest places I can find all of the components with the help of PriceSpy. Unfortunately there are a couple of companies listed on PriceSpy that I wouldn't use so, where appropriate, the price is the cheapest from one of the main players here in the UK. The only things I haven't linked to are the SATA cables and the Nvidia, as they would be eBay items that are likely to expire quickly; instead, I have just put an estimated price. The Seagate drives are linked to the ST4000DM000, not the Backup Plus, because it's unlikely that dismantling enclosures would be everyone's cup of tea [IMG:http://forums.openmediavault.org/wcf/images/smilies/smile.png]

      CPU: Intel Xeon E5-2695v3 - Scan - £1904.41 with delivery (Scan seem to list this as a mid-range CPU?)
      CPU Cooler: Corsair H105 - Amazon - £94.99 delivered
      Thermal Compound: Arctic MX4 - Amazon - £3.99 delivered
      Mobo: ASRock X99 WS - Amazon - £254.99 delivered
      GPU: Nvidia GeForce 9500GT - eBay - £15
      RAM: 32GB DDR4 ECC (2 x Crucial 16GB DDR4-2133 RDIMM (CT16G4RFD4213)) - eBuyer - £250.38 delivered
      PSU: Seasonic Platinum SS-400FL2 400w Modular Fanless - Box - £94.99 delivered
      Case: Thermaltake Core V71 - Amazon - £115.01 with delivery
      Drives (RAIDZ2): 8 x Seagate ST4000DM000 - Amazon - £112.30 each, delivered
      OS drive: Kingston SSDnow V300 120GB - eBuyer - £44.97 with super saver delivery
      SATA cables: Blue flat for OS SSD, 8 x Black rounded for array drives (generic, no links) - eBay - £20

      Total cost at time of this write up: £3682.13

      Power Consumption

      Despite the fact that this system is rocking a high end Xeon and a bunch of drives with huge LED fans, it actually chews a lot less than I thought. I will probably still shut it down when not in use, but I am no longer concerned about huge power bills should I accidentally leave it on 5 minutes longer than it needs to be. Armed with a Watt meter and this site, I worked out that if I were to leave it on 24/7 it would cost me about £8 a month. Being that it's only on for half a day at the most, and only some days in a month, this is probably going to cost about £4 a month to run. Surprisingly, I was working on an old HP Pavilion Slimline tower for someone recently and it chewed almost as much! (albeit an AMD)
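
      For anyone wanting to sanity-check that figure, the back-of-envelope maths is just average watts x hours x tariff. A quick sketch assuming ~124w average draw and a ~9p/kWh tariff (both are my assumptions - plug in your own numbers):

      Source Code

      awk 'BEGIN { printf "~£%.2f per month if left on 24/7\n", 0.124 * 24 * 30 * 0.09 }'
      # 0.124kW x 720 hours = ~89kWh a month; at 9p/kWh that's roughly the £8 estimate above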

      POST: spikes to almost 200w when all the drives fire up, then down to ~135w
      Booting: 125w-135w
      Xfce login screen: 87w
      Sat on Xfce desktop: 89w
      Geekbenching: approx 190w-250w (this was very difficult to keep an eye on as it was all over the place).
      Copying 4TB of data into the pool: 124w (mostly)
      Streaming to Emby (not transcoding): 115w-125w (when I measured this it varied a bit, but I think it was affected by other processes)
      Streaming to Emby (transcoding): 150w

      Now let's have a look at the CPU usage chart in OMV. This was taken when I was copying data into the RAIDZ2 and testing Emby streaming to several devices:

      [IMG:http://i.imgur.com/Img7BBs.png]

      Not even over 40%. This system should last a while [IMG:http://forums.openmediavault.org/wcf/images/smilies/biggrin.png]


    • Replacement for the HP N54L - Or, My new home server

      Verifying if ECC is actually working

      UPDATED 13/7/2015: This method requires Windows to verify ECC

      Thanks to @bobafetthotmail, @Markess, @ryecoaaron and all others who have helped with this.

      Unfortunately, as of this point in time there does not seem to be a reliable and solid way to verify that ECC is working when running Linux [IMG:http://forums.openmediavault.org/wcf/images/smilies/sad.png] However, we can use Windows.

      What you will need:

      1. A Windows install - I installed a trial copy on a spare drive and booted from it, preserving my Linux environment.
      2. The AIDA64 tool from here - the Extreme, Engineer or Business editions will all work.

      Once you have AIDA64 open, go to Motherboard > Chipset > North Bridge

      You should see something like this:

      [IMG:http://i.imgur.com/Jar6R9q.png]

      You can see here that under Error Correction, it lists "Supported, Enabled" for ECC, ChipKill and Scrubbing. If you find that it says "Supported, Disabled" then your setup is "in theory" capable of ECC, but something is stopping it from working. This could be because there is a BIOS kill switch and it may not be changeable. ChipKill and ECC Scrubbing may not show as enabled on all systems, but for most they should. If the ECC section says "Not supported" then you do not have the ability to run ECC.

      If you have used AIDA64 to verify ECC on your system, please consider buying it. The Extreme edition is only $39.95. This appears to be the only piece of software that is capable of getting detailed and accurate info on ECC, and Finalwire deserve the money.

      ZFS Pool


      When you create a ZFS pool, you can set a value called alignment shift, or ashift. This sets how the data is aligned on the drives and can have a great benefit or a crippling effect on not only the performance of the pool, but also the available size. I originally decided to go for ashift=9, which is usually only used for 512-byte sector drives (2TB and below). Being that I have 4TB drives that are 4K, the logical thing to do would be to let ZFS set the ashift value automatically, and this would have been ashift=12. When you create a pool with ZFS or the OMV ZFS plugin, it should, in most cases, detect the correct ashift based on the sector size and you don't need to do anything. However, you can override the default and set it to a value of your choice. It's important to remember that I have a pool of 8 x 4TB drives. The ideal drive counts for RAIDZ2 to get the most out of performance are 4, 6, 10 and 18. This puts me smack bang in the middle of two of the recommended values, so I thought that the alignment would already be out. I went with ashift=9 and checked the write performance compared to ashift=12 on an empty tank. Surprisingly, the results appeared to be identical. I then started copying data into the pool and this is where it all went Pete Tong: write performance went from the high 600s to below 300MB/sec. Even though this is enough to saturate the 2 x on board Gig LANs, I thought it was an unjustifiable dip. So the rule here is: do not mess with the alignment shift [IMG:http://forums.openmediavault.org/wcf/images/smilies/smile.png]
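
      If you want to check what a pool actually ended up with, or what the drives themselves report, the following sketch covers it (pool name as above, the device name is an example):

      Source Code

      zdb -C Tank | grep ashift      # expect ashift: 12 for these 4K drives
      blockdev --getpbsz /dev/sdb    # physical sector size - 4096 on the ST4000DM000
      blockdev --getss /dev/sdb      # logical sector size - 512, as they are 512e drives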

      The Benchmarks


      It's always very difficult to benchmark a system, in my opinion, because there are a ton of tools out there that fit the job and all of them seem to want to score things in entirely different ways. Some tools are not even cross platform and as such it's difficult to get accurate figures. I have opted to use Geekbench, which is cross platform and puts the CPU through its paces. Its scoring system is the same across the board, so it's easy to see how this compares to other systems.

      The following was taken under Windows 8.1 and I don't have a license for Geekbench, so this runs in 32-bit mode. I may be able to update this in the future.

      [IMG:http://i.imgur.com/W53bUKi.png]

      I'm pretty pleased with that considering that nothing is set to performance and it's a 32-bit copy. Windows 8 probably isn't helping but it's a good result.

      Now for the HDDs write performance:

      Array (Ashift=12) Empty:

      Source Code

      /mnt/Tank # time dd if=/dev/zero of=ashift12.bin count=20000 bs=1M conv=fdatasync && sync
      20000+0 records in
      20000+0 records out
      20971520000 bytes (21 GB) copied, 30.6864 s, 683 MB/s
      dd if=/dev/zero of=ashift12.bin count=20000 bs=1M conv=fdatasync  0.02s user 10.87s system 35% cpu 30.689 total


      Array (Ashift=12) with 5TB of data:

      Source Code

      /mnt/Tank # time dd if=/dev/zero of=ashift12.bin count=50000 bs=1M conv=fdatasync && sync
      50000+0 records in
      50000+0 records out
      52428800000 bytes (52 GB) copied, 92.9716 s, 564 MB/s
      dd if=/dev/zero of=ashift12.bin count=50000 bs=1M conv=fdatasync  0.04s user 27.44s system 29% cpu 1:32.97 total


      As you can see, the performance of the pool has degraded slightly, but this is still enough to saturate several Gig LAN connections. The read performance should be better than this.
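
      If you want to put a rough number on reads, you can read the test file back - but remember ZFS caches reads in its own ARC rather than the page cache, so export/import the pool first (or read far more data than you have RAM) to avoid just benchmarking memory. A sketch:

      Source Code

      zpool export Tank && zpool import Tank    # empties the ARC for this pool
      cd /mnt/Tank
      time dd if=ashift12.bin of=/dev/null bs=1M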

      OS SSD

      This is a really interesting one being that Kingston actually list the V300 as ~450MB/sec max....

      Source Code

      time dd if=/dev/zero of=image.bin count=20000 bs=1M conv=fdatasync && sync
      20000+0 records in
      20000+0 records out
      20971520000 bytes (21 GB) copied, 40.4319 s, 519 MB/s
      dd if=/dev/zero of=image.bin count=20000 bs=1M conv=fdatasync  0.01s user 19.43s system 48% cpu 40.434 total


      Can't argue with that though!

      iPerf (LAN testing):

      [will add this soon]

      Boot time:

      POST to GRUB: 32.78 seconds - most of this is taken up by the GPU initialising and the hard drives spinning up.
      GRUB to XFCE login screen: 17.25 seconds - a chunk of this (about 6 seconds) is taken up by the ZFS infrastructure initialising.

      Temps

      Just a quick one on temperatures. I've been incredibly impressed with the V71 chassis's ability to keep the components cool. After about 6 hours of heavy use (I say heavy, but as the OMV graph above shows, load never goes over 40%!) the Corsair H105 and the Core V71 are able to keep the temperatures at levels anyone would be pleased with. In fact, the hottest component in the whole build is the crappy GeForce 9500GT that I grabbed out of a box at the last minute. I'm not going to cover temps under things like Geekbench etc, as they aren't real-world applicable.

      [IMG:http://i58.tinypic.com/16kx5aq.png]
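
      If you want to pull similar readings on your own box, lm-sensors covers the CPU/board and hddtemp covers the drives. A quick sketch (drive letters are examples):

      Source Code

      apt-get install lm-sensors hddtemp
      sensors-detect --auto     # probe for sensor chips once, then:
      sensors                   # CPU package/core and motherboard temps
      hddtemp /dev/sd[a-i]      # one line per drive - mine hover around 35C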

      Noise

      A final word on the noise that this system produces. It's extremely quiet considering what's in it and how cool it manages to keep everything. This is no doubt due to a combination of the MX4 paste, the Corsair, and the fact that the 3 x 200mm fans are kept on the default 'High' speed setting. Personally, I can't really tell the difference between High and Low... so I leave it on High, which I would say is almost silent. There are no obvious noise benefits to setting the fans to low. It's all very well me saying that I think it's quiet, but a lot of the time it's entirely down to personal opinion. However, I would easily say that this could sit next to a main viewing screen and, at about 8ft away, it would not disrupt viewing at all.

      This was measured using my phone and a decibel app, so it might not be spot on. However, I tested how good the app was by holding it at a distance of 1m and talking at a normal level. This measured 51dB which, according to what I have read, is pretty much spot on.

      Ambient noise was measured as: 33-34dB
      From 6ft away: 35dB
      Holding the phone at the side of the chassis as if sitting right next to it: 38dB

      I wanted to include some screen shots of the app here, but the clicking of the buttons to take a screenshot sent the meter up to 61dB [IMG:http://forums.openmediavault.org/wcf/images/smilies/wink.png]

      It would also be worth noting that I have the Corsair and rear chassis fans set to silent mode in the UEFI, and not only does this make them very quiet, but I have also been impressed with the fact that I do not hear the fans increase and yet the temperatures above are maintained. The UEFI says that the fans all sit at around 993-1180RPM.

      Final Words

      Right, well that's pretty much all I can think of right now, so I am going to round it off here. Please let me know what you guys think, and if you have enjoyed reading please click the like button. I am open to suggestions/requests for different benchmarking tools etc - if there is any information that you'd like to see here, please let me know. I will probably make numerous changes and additions to this over the next week or so as I've been writing this whilst down with flu [IMG:http://forums.openmediavault.org/wcf/images/smilies/sick.png] so there are probably a few errors that I will get round to fixing.


    • Stellar...absolutely stellar. Especially the cable ties. Cable ties are often the difference between greatness and absolute rubbish! ;)

      I'm curious if you plan to turn it off completely when not in use or plan to suspend/hibernate? If the latter, please share your approach! I've been afraid to try anything on my box with ZFS other than shutting it down completely when not in use for an extended period.
    • Replacement for the HP N54L - Or, My new home server

      Thanks guys :)

      @Dropkick Murphy I wish I could, but we never dump anything. Always reassigned as something else. :)

      @tekkb My wife cringes sometimes too, I think sometimes she goes along with it to get a break while I'm setting things up ;) She loved Plex on the HP and I promised Emby would be better. She has not been disappointed :)

      @Markess At the moment, we are shutting down completely. I either do that remotely myself from a tablet etc if we aren't in the same room, or the power button is set to halt so my wife can just push the power button if I'm not there/asleep. The server is on a remote standby plug (as are most appliances in our house), so after the lights on the server go out she can hit the button on the remote and kill the board completely. If you are interested in some decent units, these ones from Germany available on Amazon are absolutely excellent because they are hard coded with DIP switches (unlike some of the cheap ones), so if there is a power cut you don't have to go round re-pairing the whole bloody lot (we did that before and it SUCKED). The standby switch may seem excessive to some, but our electricity bill is only £120/quarter.... well, a little more now I guess, but worth it ;)
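
      On the 'power button set to halt' bit: OMV has an option for this in the web GUI if I remember right, but on a plain Debian box you can get the same behaviour with acpid. A minimal sketch of that route, in case it's useful to anyone:

      Source Code

      apt-get install acpid
      echo 'event=button/power'           >  /etc/acpi/events/powerbtn
      echo 'action=/sbin/shutdown -h now' >> /etc/acpi/events/powerbtn
      service acpid restart    # a short press now triggers a clean halt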


    • @ellnic Maybe I missed one of the posts and you already mentioned it, but I noticed from the parts list you needed 3 extra Seagates, while the one picture shows three Seagate Backup Plus. Did you strip the drives from the enclosures? Do you have the same situation there, that the external units cost less than buying just the bare drive by itself?
    • Replacement for the HP N54L - Or, My new home server

      @Markess Yes, I did cheat a bit by doing that, actually. I thought it would be worth the risk because I can't honestly see why Seagate wouldn't honour a warranty on just the drive (but I could be in for a nasty shock ;)). It got mentioned here, but I didn't actually mention it in this thread. I will amend the Build and Pricing sections to reflect that. Sadly those drives have vanished from eBuyer now :( I don't know how many they got in, but I should think they went pretty quickly at that price. Yes, they were a lot cheaper than the bare drive itself, which is ridiculous really. But if you're willing to do what I have, keep your eyes peeled for these external drives on offer - they are nearly always cheaper than the drive itself. Maybe I'll get a drive failure, who knows. But all they have to do is last more than 24 months, and I can't see that not being the case with temperatures of 35C. If I do get a drive failure, then it was obviously going to fail anyway and there is nothing I could have done differently.


    • On my Dell PowerEdge T410 with dual Xeon E5620 and 32 GB of DDR3 Registered ECC ram:

      Source Code

      root@intranet:~# edac-util -s
      edac-util: EDAC drivers are loaded. 2 MCs detected
    • Replacement for the HP N54L - Or, My new home server

      Sweet. Looks like we may have hit the nail on the head.

      I just tried it on a box I know doesn't have ECC, to see what a negative response looks like, and I got "Fatal: Unable to get EDAC data: Unable to find EDAC data in sysfs"

      Your result with 2 MCs would appear to align with the number of CPUs. Or, it could be that your 32GB is in the black and blue banks (if it has them), so spread over 2 controllers. I don't know this for sure - it's a guess. When I was looking in AIDA64 it seemed to separate the 2: one was not shown as active for ECC, yet capable. I can only think this is due to only the blue banks being used.
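
      For anyone else poking at this, the EDAC driver also exposes its counters through sysfs, which gives a bit more detail than edac-util -s. A quick sketch (the paths only appear once the EDAC modules are loaded):

      Source Code

      edac-util -v                                       # verbose per-controller report
      grep . /sys/devices/system/edac/mc/mc*/ce_count    # corrected error totals
      grep . /sys/devices/system/edac/mc/mc*/ue_count    # uncorrected error totals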


      I think you are right about the CPUs - each physical CPU has a memory controller.
    • Replacement for the HP N54L - Or, My new home server

      Ah, that makes sense. I don't suppose you happen to have any non-ECC memory for that box, do you? (If it takes it.) I don't have any for mine. I'd be interested to see if that output changes. If it does, then this is the answer to checking ECC and gone are the days of going through this lot: pugetsystems.com/labs/articles…CC-RAM-Functionality-462/ If it doesn't, then we can't rely on this. Although I am fairly certain this is the answer, having gone through a ton of others, I just want to verify it 100%.
    • I'm pretty sure other memory won't work but I can't test either. It is my production server at work :)

      I think I just invalidated edac-util as an ECC test... The following is the output from the workstation I am typing on now (i7-965 with non-ECC DDR3 RAM):

      Source Code

      # edac-util -s
      edac-util: EDAC drivers are loaded. 1 MC detected