WD Red/Seagate Ironwolf are worth it?

    • WD Red/Seagate Ironwolf are worth it?

      Hi everyone,
      within two weeks I lost one Seagate IronWolf and one WD Red (3 years old). The IronWolf was replaced since it was still under warranty, but the WD Red was not.

      Now I'm looking to buy a new 4TB HDD, and I wonder if it's worth buying a NAS HDD considering that my NAS is usually up only 6-7 hours per day and that I won't use RAID.

      I was thinking about buying a simple Barracuda or WD Blue.
      Intel G4400 - Asrock H170M Pro4S - Syba SI-PEX40064 Marvell 88SE9125 - 8GB ram - Corsair VS350W - 2X6TB Seagate Ironwolf - 4x2TB WD Enterprise
      OMV 4.1.17 - Kernel 4.18 backport 3 - omvextrasorg 4.1.2
    • The Pro series is just too expensive for me. That's why I was in doubt between the normal series (Barracuda, Blue) and the NAS series (IronWolf, Red).
      Intel G4400 - Asrock H170M Pro4S - Syba SI-PEX40064 Marvell 88SE9125 - 8GB ram - Corsair VS350W - 2X6TB Seagate Ironwolf - 4x2TB WD Enterprise
      OMV 4.1.17 - Kernel 4.18 backport 3 - omvextrasorg 4.1.2
    • Only the IronWolf and the Red; all the other HDDs I have are still alive and working, even an old Seagate with broken firmware lol. That's why I was thinking about going with a Barracuda/Blue.
      Intel G4400 - Asrock H170M Pro4S - Syba SI-PEX40064 Marvell 88SE9125 - 8GB ram - Corsair VS350W - 2X6TB Seagate Ironwolf - 4x2TB WD Enterprise
      OMV 4.1.17 - Kernel 4.18 backport 3 - omvextrasorg 4.1.2
    • Blabla wrote:

      Only the IronWolf and the Red; all the other HDDs I have are still alive and working, even an old Seagate with broken firmware lol. That's why I was thinking about going with a Barracuda/Blue
      Historically, I spend only about US$1 per day on storage drives, and while I do shop around for good pricing, I buy the upper-end stuff.

      I am currently maxed out with 8 drives in my OMV box. Four are WD Red NAS drives, all of which have expired warranties; three are HGST Deskstar NAS drives, all of which have some warranty remaining; and one is a new Seagate Exos with the full five-year warranty remaining.
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • I have two 2TB drives. One is the "main" drive, which is the Seagate. It gets pretty regular use. It's 4 years old; I can't remember the model, but it's not a NAS drive (it might well be a Barracuda). The other is a WD Green. It only really gets used when I run rsync to sync my data to it. It's also 4 years old.

      I'm ordering three 4TB drives here in a week or so. One will be my "main" drive, one will be a mirror, and the other will be an external backup...

      Honestly, I'll probably go with the Blues, but I've not made a final decision yet.
      Air Conditioners are a lot like PC's... They work great until you open Windows.

    • tkaiser wrote:

      KM0201 wrote:

      I'll probably go with the Blue's, but I've not made a final decision yet.
      Can you elaborate on your strategy for choosing this specific brand so others can learn from it or at least get the idea?

      Just asking, since my personal strategy for dealing with NAS drives is to avoid those that suffer from the usual 'Load Cycle Count issue' WD is famous for.

      No real strategy, I guess... My NAS isn't on 24/7, and I've actually had good luck with WD in the past (before my 2TB drives, all my drives were 1TB WDs). I've honestly yet to have a WD fail on me (but I have had a couple of Seagates and Hitachis... go figure).
      Air Conditioners are a lot like PC's... They work great until you open Windows.

    • I've used IronWolfs (currently 4TB in OMV) and Reds and Red Pros in a QNAP, and they all work pretty much the same for backup purposes. The Pro drives (from Seagate and WD) are maybe good if you're going to encrypt your RAID, because they give you less read/write penalty.
      So yes, for me NAS drives are worth it, because historically they have performed better in NAS boxes.
    • KM0201 wrote:

      I've honestly yet to have a WD fail on me (but I have had a couple of Seagates and Hitachis... go figure)

      Nope, I won't, for a few simple reasons:
      • ignoring a well-known problem is IMO not really smart (the WD LCC issue, which is not even adjustable on the WD Blue, unlike on the older WD desktop drives).
      • statistics do not work at all when the sample size is (way) too small: 'a couple of' disks is nothing you can rely on; you would need (tens of) thousands of them to base your decision on statistics.
      • if disks are operated in a somewhat harmful environment (vibration) and they are expected to die within a few years, not choosing those with the longest warranty coverage seems weird to me.


      As for statistics: within the last decade 100% of failed drives in productive installations were Seagate. Go figure which brand we buy...
    • tkaiser wrote:

      KM0201 wrote:

      I've honestly yet to have a WD fail on me (but I have had a couple of Seagates and Hitachis... go figure)
      Nope, I won't, for a few simple reasons:
      • ignoring a well-known problem is IMO not really smart (the WD LCC issue, which is not even adjustable on the WD Blue, unlike on the older WD desktop drives).
      • statistics do not work at all when the sample size is (way) too small: 'a couple of' disks is nothing you can rely on; you would need (tens of) thousands of them to base your decision on statistics.
      • if disks are operated in a somewhat harmful environment (vibration) and they are expected to die within a few years, not choosing those with the longest warranty coverage seems weird to me.


      As for statistics: within the last decade 100% of failed drives in productive installations were Seagate. Go figure which brand we buy...
      I see what you're saying, and I'm not completely dead set on the Blues; I may end up with Reds, or I might do like I did last time and go with a mix (I'm probably not ordering drives till next week)... but even one of the links you posted before said the drives were fine in a non-RAID environment (and I don't use RAID).
      Air Conditioners are a lot like PC's... They work great until you open Windows.

    • KM0201 wrote:

      but even one of the links you posted before said the drives were fine in a non-RAID environment
      You'll always find some link on the Internet where someone reports 'product xy is working great. No problems experienced so far'.

      IMO it's better to focus not on anecdotes but on facts. And these are two entirely different issues:
      • the WD LCC issue affects all WD desktop drives that are not used as such under Windows (it affects some of their older 'NAS drives' as well). Ignoring this is IMO really not very smart.
      • with RAID in mind, TLER becomes an additional issue with desktop drives (see the sketch at the end of this post).
      In general it's that easy: all HDDs will die eventually. It's not a question of if but only of when. That's why we do backups, hopefully use a storage topology that both reduces drive stress and allows for data integrity (ZFS or btrfs), and take care to get long warranty coverage. Why should I buy a new drive after 4 years when I can simply get an RMA number and a 'recertified' drive almost for free?
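
      For the TLER point above, here is a minimal sketch of how one could check whether a drive supports SCT Error Recovery Control at all; it assumes smartmontools is installed, and /dev/sda is just a placeholder device:

      #!/usr/bin/env python3
      # Sketch: query SCT Error Recovery Control (TLER) via smartctl.
      # Assumes smartmontools is installed; /dev/sda is a placeholder device.
      import subprocess

      result = subprocess.run(
          ["smartctl", "-l", "scterc", "/dev/sda"],
          capture_output=True, text=True,
      )
      print(result.stdout)
      # NAS drives typically report read/write timeouts such as "70 (7.0 seconds)",
      # while desktop drives tend to report the command as not supported.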
    • ness1602 wrote:

      My experience is the opposite of tkaiser's: we first had most drives dying from Toshiba, then HGST, then WD, and Seagate was the brand that died the least.
      This is not the opposite, since I simply said 'within the last decade 100% of failed drives in productive installations were Seagate'. And this is for the simple reason that we only use Seagates anyway. How could another brand be amongst the failed drives then? :)

      The simple statistical observation '100% of failed drives were Seagate' is both true and, at the same time, contains zero information without context (and this applies to almost all statistical data available around HDD health -- the fact that ordinary human beings like us have a hard time understanding statistics only adds to the mess).

    • tkaiser wrote:

      the WD LCC issue affects all WD desktop drives that are not used as such under Windows (it affects some of their older 'NAS drives' as well). Ignoring this is IMO really not very smart.
      So, I shucked a recently purchased WD Essentials external a month ago, to find a 5400 RPM class Blue inside (these recently replaced the mechanical Greens in the US catalog). The parking timer was set to 300 by default.

      I don't have enough information, of course, to know if it's a change by WD, or if WD isn't consistent with its hardware settings batch by batch and you don't know what you'll get.
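
      As a side note, a minimal sketch for reading that parking (idle3) timer on WD drives; it assumes the idle3-tools package is installed, and /dev/sda is just a placeholder device:

      #!/usr/bin/env python3
      # Sketch: read the WD idle3 (head parking) timer via idle3ctl.
      # Assumes idle3-tools is installed; /dev/sda is a placeholder device.
      import subprocess

      # -g prints the current idle3 timer value; idle3ctl -d would disable parking
      result = subprocess.run(["idle3ctl", "-g", "/dev/sda"],
                              capture_output=True, text=True)
      print(result.stdout.strip())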
      Primary: OMV 4.x, Asrock Industrial IMB-181-L, Pentium G3220T, 16GB, HP 10GbE
      Backup: OMV 4.x, Supermicro X9SIL, Xeon 1220L, 8GB ECC, Mellanox 10GbE
      Learning & Exploring: OMV on Proxmox, Asus M5A97 LE R2.0, Opteron 3320EE, 12GB ECC, Intel 1GbE
    • Markess wrote:

      The parking timer was set to 300 by default
      I would rather look at SMART attribute 193 (Load_Cycle_Count) and the start/stop operations, and compare them with 'power on hours' to calculate the real behavior while the drive is in use.
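
      A minimal sketch of that calculation; it assumes smartmontools >= 7.0 (for the JSON output), and /dev/sda is just a placeholder device:

      #!/usr/bin/env python3
      # Sketch: compare Load_Cycle_Count (SMART attribute 193) against
      # Power_On_Hours to estimate real parking behaviour.
      # Assumes smartmontools >= 7.0; /dev/sda is a placeholder device.
      import json
      import subprocess

      out = subprocess.run(["smartctl", "-A", "--json", "/dev/sda"],
                           capture_output=True, text=True, check=True).stdout
      table = json.loads(out)["ata_smart_attributes"]["table"]
      raw = {attr["name"]: attr["raw"]["value"] for attr in table}

      hours = raw.get("Power_On_Hours", 0)
      cycles = raw.get("Load_Cycle_Count", 0)
      if hours:
          print(f"{cycles} load cycles in {hours} h = {cycles / hours:.1f} per hour")
          # e.g. ~60 cycles per hour would exhaust a 300,000-cycle rating in
          # well under a year of power-on time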

      BTW: a few years ago we took over some installations and back then had to pull an awful lot of WD Greens out of various Drobos. Attribute 193 was way beyond spec (300,000 load cycles according to the drives' datasheets, and we saw between 700,000 and 1.5 million in the field), and when testing with our usual procedure * we realized that many of those drives with high 193 values were affected by a lot slower random IO performance (head movements).

      * Usual procedure to test drives: we automatically set up 10 partitions and run a quick iozone test for sequential and random IO performance on partition 1 (outer tracks), partition 10 (innermost tracks) and partition 5 (in between). Then a single partition is created and the random IO test is repeated (testing more or less only head movements then). Drives older than 3-4 years are removed from productive storage and tested this way (while a resilver with another disk runs on the productive storage). If (random IO) performance has dropped dramatically we usually ask the contractor for a replacement (this has worked many times); otherwise the disk is put into a cold storage array (usually archive storage).
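
      A rough sketch of that idea (not the original script; the mount points and test sizes below are made up):

      #!/usr/bin/env python3
      # Rough sketch of the multi-region test described above, not the original
      # script: run a quick iozone pass on partitions at the outer, middle and
      # innermost tracks. Mount points and test sizes are assumptions.
      import subprocess

      regions = {
          "outer (partition 1)":  "/mnt/p1",
          "middle (partition 5)": "/mnt/p5",
          "inner (partition 10)": "/mnt/p10",
      }

      for label, mountpoint in regions.items():
          print(f"== {label}: {mountpoint} ==")
          # -i 0/1/2: sequential write, sequential read, random read/write;
          # -e/-I: include flush in timing and use direct IO to bypass the cache
          subprocess.run(["iozone", "-e", "-I", "-i", "0", "-i", "1", "-i", "2",
                          "-s", "256M", "-r", "4k",
                          "-f", f"{mountpoint}/iozone.tmp"],
                         check=True)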
    • I had a massive increase of the load cycle count on my Seagate when using hdparm. After switching to hd-idle there was no significant increase any more. At that time I found several posts on the internet mentioning this.
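
      If you want to verify that yourself, here's a minimal sketch that snapshots the counter twice, an hour apart; it assumes smartmontools is installed, and /dev/sda is just a placeholder device:

      #!/usr/bin/env python3
      # Sketch: take two Load_Cycle_Count snapshots one hour apart to see
      # whether a spindown setup (hdparm vs. hd-idle) keeps the counter
      # climbing. Assumes smartmontools; /dev/sda is a placeholder device.
      import re
      import subprocess
      import time

      def load_cycles(dev):
          out = subprocess.run(["smartctl", "-A", dev],
                               capture_output=True, text=True).stdout
          # the raw value is the last column of the Load_Cycle_Count line
          match = re.search(r"Load_Cycle_Count.*\s(\d+)\s*$", out, re.MULTILINE)
          return int(match.group(1)) if match else 0

      before = load_cycles("/dev/sda")
      time.sleep(3600)  # one hour of normal operation
      print(f"load cycles added in one hour: {load_cycles('/dev/sda') - before}")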
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x
      Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      :!: Backup - Solutions to common problems - OMV setup videos - OMV4 Documentation - user guide :!: