WD Red NAS Drives and Spin Down [Q]

  • Thinking about getting a couple of WD Red NAS 4TB drives. One is going to be used on an x86 system which will be up 24/7 (home office/media files).
    The second one will be on an Odroid XU4 as a backup - this drive will be spun down most of the time, as backups will only be done once a day.


    The question is: as the Reds are designed to be 24/7 drives, is it bad to keep spinning them down and up? If so, should I get a desktop (Blue/Green) version instead?


    Comments/Suggestions please :thumbup:


    Leigh

    • Official post

    I stumbled today upon this very interesting post (as usual) by Backblaze about which SMART parameters they keep a special eye on when deciding to replace a hard disk. The most interesting part, imo, is the correlation between failed drives and power cycles. Lots of people here are always concerned about saving power by sending the server to hibernate or sleep; the remedy could be worse than the disease.


    It could be that the same applies to spinning down the disks.



    https://www.backblaze.com/blog…cate-hard-drive-failures/
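
    If you want to keep an eye on those counters on your own drives, smartmontools can show them - a minimal sketch, assuming a Debian-based system (/dev/sda is just an example device):

        # Install smartmontools (Debian/OMV)
        sudo apt-get install smartmontools

        # List the SMART attributes and pick out the cycle counters
        # (Power_Cycle_Count, Start_Stop_Count, Load_Cycle_Count)
        sudo smartctl -A /dev/sda | grep -Ei 'power_cycle|start_stop|load_cycle'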

  • Thanks for the link, subzero79.


    Reading through it, the "experts" are still not sure, but I can understand that spinning down and parking the heads can be stressful for a drive.
    My main concern was that buying a NAS drive designed to run 24/7 and then spinning it down (the system itself will be powered 24/7) would be a bad thing.
    Are Blue/Green drives designed to be spun down "more" than the Red NAS drives?


    I may just get Reds, as they should be more robust, and not spin them down.


    Leigh

  • This tool may be of interest to you:


    http://idle3-tools.sourceforge.net
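
    It's available via Debian apt. A rough sketch of how to use it (assumptions on my side: /dev/sdb is just an example device, and the changed timer only takes effect after the drive has been completely powered off and on again):

        sudo apt-get install idle3-tools

        # Read the current idle3 (head parking) timer of a WD drive
        sudo idle3ctl -g /dev/sdb

        # Either disable the timer completely ...
        sudo idle3ctl -d /dev/sdb

        # ... or set a less aggressive raw value instead (check the
        # project page for how raw values map to seconds)
        sudo idle3ctl -s 138 /dev/sdb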


    It works as expected with WD Reds. But if you use your WD Reds for ZFS, you may encounter the following I/O errors in combination with the mpt3sas driver for LSI SAS controllers:


    https://github.com/zfsonlinux/zfs/issues/4638


    Because of this behavior I disabled standby mode for my 8x 4TB WD Reds in a RAID-Z2.
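
    If you want to do the same from the shell, hdparm can disable the spindown timeout - a minimal sketch (/dev/sdc is only an example device; openmediavault manages the disk power settings itself, so changes made outside the web interface may get overwritten):

        # Disable the standby (spindown) timeout
        sudo hdparm -S 0 /dev/sdc

        # Check the current power state (active/idle vs. standby)
        sudo hdparm -C /dev/sdc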


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • But if you use your WD Reds for ZFS, you may encounter the following I/O errors in combination with the mpt3sas driver for LSI SAS controllers:

    Hi @hoppel118, is this behavior only relevant for LSI SAS controllers, or in general? I use ZFS with 4x 3TB WD Reds on onboard SATA with disk standby enabled and haven't encountered any problems yet.
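
    One way to check whether this combination misbehaves - a sketch, with /dev/sdd only as an example device: hdparm shows whether a disk has really spun down, and the kernel log would contain the resume I/O errors described in the zfsonlinux issue.

        # Is the disk in standby right now?
        sudo hdparm -C /dev/sdd

        # Look for I/O errors logged when the drives wake up
        sudo dmesg | grep -iE 'blk_update_request|i/o error'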

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • @cabrio_leo It seems to be an lsi or mpt3sas problem. But good to know that you don't encounter these problems with your wd reds.

