Which hard drive?


    • Which hard drive?

      Hi everyone,
      My 4TB RAID1 is nearly full, and I want to buy two new HDDs for my NAS so that I can build a second RAID1.
      I can choose between these models:

      • Seagate IronWolf Nas ST4000VN008 - 112€
      • Hitachi Deskstar NAS [H3IKNAS40003272SE] - 122€
      • WD Caviar RED WD40EFRX - 125€


      Which one do you suggest? I already have two Caviar Reds in my NAS right now.
      Intel G4400 - Asrock H170M pro4s - 8GB ram - 2x4TB WD RED in RAID1 - 1TB Seagate 7200.12
      OMV 3.0.79 - Kernel 4.9 backport 3 - omvextrasorg 3.4.25
    • Thanks a lot :)
      Still in doubt :( The 4TB of the Red series is the only good one; Seagate was terrible with the older disks, but the new ones above 6TB are great :/
      I'll probably go with the cheaper one :/
    • ness1602 wrote:

      Don't read that, it's not a usual use case.
      What? They are using them for storage, and it shows drive quality and longevity under harsher conditions. That is a great use case to prove whether a drive is good. They properly cool them, so having 40 disks (they use more than that) in one chassis makes no difference. The number of drives Backblaze uses is a much higher sample size, showing much better failure statistics than the relatively few drives one user has had good luck with.

      I have had great luck with over 20 Hitachi drives and about 10 Reds myself but I will still trust Backblaze statistics over my experience.
      omv 4.0.14 arrakis | 64 bit | 4.13 backports kernel | omvextrasorg 4.1.0
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please don't PM for support... Too many PMs!
    • Re,

      ryecoaaron wrote:

      What? They are using them for storage and it shows drive quality and longevity under harsher conditions.
      But read it very carefully: the "worst" drive in the report is a "ST4000DM001" ... which is marketed as a pure desktop consumer product ... yeah, of course it fails more often in Backblaze's environment ... look at the specs of the ST6000DX000 and the ST4000DX000 too ...

      Anyway, you have to take into account the complete statistics: drive days and the overall count of drives.
      No drive is considered completely error-free ... and at least some people claim that there are flaws in Backblaze's methodology ... my recommendation is to use it with caution.

      And please be honest, most of the drives are not really made for Backblaze's environment ...

      Sc0rp
    • Sc0rp wrote:

      But read it very carefully: the "worst" drive in the report is a "ST4000DM001" ... which is marketed as a pure desktop consumer product ... yeah, of course it fails more often in Backblaze's environment ... look at the specs of the ST6000DX000 and the ST4000DX000 too ...
      None of the drives they use in the report are "enterprise" drives. Maybe not all of them are targeted at desktops but they are all consumer products. They use these drives because they are much cheaper and it is easier to just throw more drives into their redundancy setup.

      Sc0rp wrote:

      Anyway, you have to take into account the complete statistics: drive days and the overall count of drives.
      No drive is considered completely error-free ... and at least some people claim that there are flaws in Backblaze's methodology ... my recommendation is to use it with caution.

      And please be honest, most of the drives are not really made for Backblaze's environment ...
      I was a quality manager for 20 years. I always think about complete statistics. I still don't understand your points. My drives run 24/7 and have constant activity since they have VMs running on them. Maybe the IOPS aren't as high in my server. Backblaze is still using the drives in a giant "NAS". As for the environment, I would say a lot of OMV users put their drives in a worse situation: less air flow through the case, higher temps since they are not in a data center, and starting/stopping the drives all the time. I consider these worse than what Backblaze does.

      As for what Backblaze considers a bad drive, well, that standard is a safe one for your data. More OMV users following it would keep their data safer, especially the ones that don't back up their files.

      So, I am being honest and I stand by my recommendation to look at their statistics. I have, and that's probably why I haven't had drive issues...
    • macom wrote:

      every statistic has its limitation

      Yes, and the limitation is whether it's relevant for your situation or not. Say Backblaze has numbers for 100,000 disks and you want to buy just two (your sample sizes now differ by a factor of 50,000). How does a statistic with a 100,000 sample size affect your own installation? In exactly no way, since your two disks know nothing about statistics. If you choose the 'best' model based on someone else analyzing 100,000 samples, these 'best' disks can be DOA, fail within minutes, hours, weeks or months. Or last significantly longer than their statistical/averaged counterparts. You simply don't know, since this other statistic is irrelevant for you with your laughably low sample size.

      If you want to buy, for example, 10,000 disks, a statistic covering a 100,000 sample size gets interesting, since then there is at least a chance that the statistical behaviour in your installation follows the much larger one. Besides that, most statistics made by technical staff are wrong anyway (anyone interested in my opinions on this might follow that).

      If I want to buy disks and care about longevity, I look not at irrelevant statistics someone else made (since I mostly buy no more than a few at a time) but at the warranty instead. The more years the better, and the fewer hassles wrt the refund/return/RMA policy the better.
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.
    • tkaiser wrote:

      In exactly no way since your two disks both know nothing about statistics.


      Well, one could question whether results from the past can be reproduced in the future, but if we assume they can, the 10,000 disks can predict the probability of failure of two disks. Of course I can be unlucky and have both disks fail. In the end, warranty is also based on statistics (including risk management).

      If I flip a coin 10,000 times, the result will be about 50% heads and 50% tails. If I flip it the next time, I know it will be heads with a probability of 50%. This 50% probability is independent of the results I obtained the last 3, 5 or 6 times.
      If I "tune" the coin to get 70% heads and 30% tails over 10,000 flips (a better hard drive than the average), I can predict that the next flip will be heads with a probability of 70%.
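      The coin analogy above can be sketched in a few lines of Python; the 70/30 bias and the sample sizes are just the numbers from the analogy, not real drive data:

```python
import random

def heads_count(p_heads, n, rng):
    """Simulate n flips of a coin that lands heads with probability p_heads."""
    return sum(rng.random() < p_heads for _ in range(n))

rng = random.Random(42)

# Estimate the bias of the "tuned" coin (the better-than-average drive)
# from a large sample, as the post describes.
estimate = heads_count(0.7, 10_000, rng) / 10_000
print(f"estimated P(heads) from 10,000 flips: {estimate:.3f}")  # close to 0.70

# Each future flip is still heads with ~70% probability, regardless of
# the last few outcomes; fresh flips keep landing heads about 70% of the time.
fresh = heads_count(0.7, 100_000, rng) / 100_000
print(f"fraction of heads in fresh flips: {fresh:.3f}")
```

      The large sample pins down the bias, but it never changes the outcome of any single flip, which stays a 70/30 gamble.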
      BananaPi - armbian - OMV4.x | Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - 1x Intenso SSD 120GB - OMV3.x 64bit
    • I've had more than 1,000 disks die, usually single drives, or mdadm RAID1 cases. There isn't a single drive that I would recommend for desktops. For NAS, I've had the fewest HDDs die among the 4TB Seagate and the 8TB Red Pro (also the Seagate/Samsung 2TB).
      So this is my recommendation after a few years in the field. Rye, I didn't mean to attack, just wanted to point out that most HDDs that die at Backblaze :D are the ones that have problems with vibrations. Usually.
    • Of course every HDD is a single case and it can last 5 minutes or 10 years. It's no different from a die where rolling a 1 means you lose.
      The thing is: if you use a d6, a d20 or a d100 you have less chance of hitting a 1. You still can, but the chance is lower.
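      A fair die with f faces rolls the losing 1 with probability 1/f, so a bigger die lowers the risk without removing it; a minimal sketch:

```python
from fractions import Fraction

# Chance of rolling the losing face (a 1) on a fair die with `faces` sides.
for faces in (4, 6, 20, 100):
    p_lose = Fraction(1, faces)
    print(f"d{faces}: P(roll 1) = {p_lose} = {float(p_lose):.1%}")
# The d4 loses 25% of the time, the d100 only 1%: still possible, just rarer.
```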

      thanks to everyone in this thread :D
      Sadly Backblaze didn't use any HDD from my list, and 46 drives for the WD Red is not a very helpful sample... I'll go with the cheaper one and a Hail Mary!
    • ness1602 wrote:

      I've had more than 1,000 disks die, usually single drives, or mdadm RAID1 cases. There isn't a single drive that I would recommend for desktops. For NAS, I've had the fewest HDDs die among the 4TB Seagate and the 8TB Red Pro (also the Seagate/Samsung 2TB).
      So this is my recommendation after a few years in the field. Rye, I didn't mean to attack, just wanted to point out that most HDDs that die at Backblaze :D are the ones that have problems with vibrations. Usually.
      Just curiosity: what's your job? It's not common to handle so many HDDs!

      Still, I would love it if OMV could gather data about HDD lifetimes and compile its own statistics!
    • I'm sorry, but there seems to be a serious lack of understanding of statistics, especially in the manufacturing sector. Please don't talk about these types of statistics unless you have significant quality control experience.

      Backblaze has a huge sample size that very few organizations can reproduce. This is very relevant because it predicts manufacturing failure, which is about the only failure you can worry about. Warranty will have to fix the rest. The statistics they publish are very valid whether you buy one or 1,000 drives. If you don't want to buy one of the drives listed, then the statistics obviously don't help you.

      The chassis the drives sit in means nothing, since all the drive models are in the same type of chassis; the failure rate of one model shouldn't be inflated relative to another. Referring to vibration problems is a quality control issue, since vibration is a hard drive killer.

      I never thought there would be such a disagreement with good published statistics. I will just STFU since my suggestion sucks and everyone has better ideas...
    • ryecoaaron wrote:


      If you don't want to buy one of the drives listed, then the statistics obviously don't help you.
      Not your fault if I can't find them in my country :(
      Still, I think these statistics are very interesting: the 4TB Red is the only one of the series that is reliable, the 3TB is terrible and the 6TB is not much better. That's why I think 46 drives is not a big enough sample size.
      Also, you can see a positive trend with Seagate: the older hard drives were simply garbage, while the new ones (6 and 8TB) are cheap and good, with a big enough sample size.

      That's the reason why I'll go with the Seagate this time, because apparently Seagate has started to make good HDDs again.
    • macom wrote:

      if we assume that this is true, the 10,000 disks can predict the probability of failure of two disks

      That's just weird and impossible. The sample size is too small. This is just a nice example of statistics being totally misinterpreted :)

      You buy two disks. One fails after 1 hour, the other after 20 years. Longevity by this statistic: ~10 years. The statistic you publish about your own disk usage is 'my disks last 10 years on average' (stupid anyway, since... sample size too small). There's ZERO relationship between the stuff Backblaze does and your 2 disks. Zero. Again: still simply zero.

      If you buy 10,000 disks, then Backblaze data mining is for you. If you buy fewer than 100 disks, better use your brain instead. Buy disks that are easy to replace when a failure occurs, take care of what's important wrt data integrity (almost all OMV users seem not to care at all), do backups and check your disks regularly.
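      The sample-size point can be illustrated with a quick simulation; the 2% annualized failure rate below is an arbitrary assumption, not a figure from Backblaze:

```python
import random

def observed_rate(n_drives, afr, rng):
    """Fraction of n_drives that fail within a year, each independently with rate afr."""
    return sum(rng.random() < afr for _ in range(n_drives)) / n_drives

rng = random.Random(1)
afr = 0.02  # assumed annualized failure rate

# With only 2 drives the observed rate can only ever be 0%, 50% or 100%,
# no matter how precisely the "true" 2% is known:
small_fleets = [observed_rate(2, afr, rng) for _ in range(1000)]
print(sorted(set(small_fleets)))  # a subset of [0.0, 0.5, 1.0]

# With 10,000 drives the observed rate clusters tightly around 2%:
big_fleet = observed_rate(10_000, afr, rng)
print(f"large fleet: {big_fleet:.2%}")
```

      A large fleet reproduces the published rate; two drives can only land on one of three wildly spaced outcomes.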
    • tkaiser wrote:

      ryecoaaron wrote:

      I never thought there would be such a disagreement with good published statistics.
      Statistics are interesting once sample size matches. If your sample size is way lower the numbers others collected are simply irrelevant.
      I really don't understand how you can say that.
      Think of the hard drives as dice. You can roll a die without knowing how many faces it has; it could be 4 or 100. If you roll a 1, you lose. If you pick your die randomly, you might get the 4-faced one and roll a 1 with a 25% chance.
      If you choose the 100-faced one instead, you can still roll a 1, but there's only a 1% chance that this will happen.

      Luck is still fundamental, but you can help your luck by avoiding HDDs with a higher failure rate. That is the point of statistics: to help you choose the option with the best chance of success.
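      The same point in numbers: with two drives in a RAID1, a lower per-drive failure rate directly cuts the chance of any failure. The two rates below are made-up examples for a 'worse' and a 'better' model:

```python
def p_any_failure(afr, n=2):
    """P(at least one of n independent drives fails), each with annual failure rate afr."""
    return 1 - (1 - afr) ** n

for afr in (0.10, 0.02):
    print(f"AFR {afr:.0%}: P(at least 1 of 2 fails) = {p_any_failure(afr):.2%}")
# AFR 10% -> 19.00%, AFR 2% -> 3.96%: the better model is roughly 5x less
# likely to cost you a rebuild, but neither chance is zero.
```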


    • Blabla wrote:

      Luck is still fundamental, but you can help your luck by avoiding HDDs with a higher failure rate.

      No. The few disks you want to buy are not affected by any statistical correlations that happen(ed) somewhere else (your disks really don't understand statistics and they don't know how/when they 'should' fail; that's not how things work). Statistics is hard to get right, yes, I agree.
      'OMV problems' with XU4 and Cloudshell 2? Nope, read this first. 'OMV problems' with Cloudshell 1? Nope, just Ohm's law or queue size.