ZFS array has dropped a drive

  • I've got a six-disk ZFS array which is showing as degraded. When I check the SMART screen, all drives are accounted for, but sde is now a 2TB drive in an LVM volume rather than one of the 8TB drives in the array. All six of the 8TB drives are showing as available and passing SMART.


    Any advice on how to re-add the missing drive?


  • What was /dev/sde may not be /dev/sde now, as disks can be re-lettered, for example between system boots. You should have received some kind of mail notification of a faulted drive, which would simplify identifying the missing 8TB disk.


    Otherwise, you can use the zpool history command to see which drives were used when the pool was created, and hence which one is missing, e.g. zpool history zpool01 | grep create, and combine this with ls -l /dev/disk/by-id/* to identify your disks in the pool.
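
    A minimal sketch of that check, assuming the pool really is named zpool01 (substitute your own pool name):

    zpool history zpool01 | grep create    # device names the pool was built with
    ls -l /dev/disk/by-id/                 # maps those by-id names to the current sdX letters
    zpool status -v zpool01                # the UNAVAIL entry is the member that dropped out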


    I have to say that 6 x 8TB drives in a raidz1 pool is risky. raidz2 is a far better/safer choice for drives of this size.
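
    For illustration only (hypothetical device names, and not something to run against the existing pool, since a raidz1 vdev cannot be converted to raidz2 in place - that needs a rebuild from backup), a six-drive raidz2 pool is created along these lines:

    zpool create zpool01 raidz2 ata-disk1 ata-disk2 ata-disk3 ata-disk4 ata-disk5 ata-disk6   # two drives' worth of parity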

  • I know which disk is missing; what I don't know is how best to re-add it to the pool.


    Is it best just to remove it, format it, and re-add it, or is there a better approach?

  • zpool remove is the wrong command. The zpool status message is telling you to use "zpool replace"! Understand that you proceed at your own risk; I'm not responsible for any data loss or other damage.


    The steps are:


    1. zpool offline the UNAVAIL disk.


    zpool offline zpool01  117xxxxxxxxxxxx  <-- substitute the full device string here


    2. Secure-wipe the dropped disk using the WebUI - be doubly sure you've selected the correct one.
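
    If you'd rather do the wipe from the command line instead of the WebUI, something like the following should clear the old labels (the device name is a placeholder - triple-check it really is the dropped disk before running anything):

    zpool labelclear -f /dev/disk/by-id/ata-WD-XXXXXXXXXXXXXXX   # remove the stale ZFS label, if one is present
    wipefs -a /dev/disk/by-id/ata-WD-XXXXXXXXXXXXXXX             # clear remaining filesystem/partition signatures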


    3. zpool replace the offlined disk with the disk you wiped.


    zpool replace zpool01  117xxxxxxxxxxxx  ata-WD-XXXXXXXXXXXXXXX  <-- substitute the actual values
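
    ZFS will then resilver the data onto the replacement disk. Progress can be watched with zpool status, roughly like this:

    zpool status zpool01   # shows "resilver in progress" with a completion estimate

    The pool stays DEGRADED until the resilver finishes, after which it should report ONLINE again.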
