ZFS super slow resilver

  • So I had a failed disk.
    I offlined it, and then did zpool replace vault 3650711494974687722 /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAHEU2VL


    Now it's resilvering automatically, but man is it slow:
    root@openmediavault:/dev/disk/by-id# zpool status
      pool: vault
     state: DEGRADED
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
      scan: resilver in progress since Mon Jul 29 04:44:45 2019
            3.05G scanned out of 29.9T at 14.3M/s, 610h47m to go
            260M resilvered, 0.01% done
    config:

            NAME                                     STATE     READ WRITE CKSUM
            vault                                    DEGRADED     0     0     0
              raidz2-0                               DEGRADED     0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAJ7EV5L    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAJ7DYBL    ONLINE       0     0     0
                replacing-2                          DEGRADED     0     0     0
                  3650711494974687722                OFFLINE      0     0     0  was /dev/disk/by-id/ata-WDC_WD80EFAX-68KNBN0_VAHASGWL-part1
                  ata-WDC_WD80EFAX-68KNBN0_VAHEU2VL  ONLINE       0     0     0  (resilvering)
                ata-WDC_WD80EFAX-68KNBN0_VAH1ZYPL    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68LHPN0_7SJ75XKW    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAHAMJ5L    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAGTMG3L    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAHZDX7L    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAHZ1NDL    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68LHPN0_7HKUL9LN    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAGZ6M7L    ONLINE       0     0     0
                ata-WDC_WD80EFAX-68KNBN0_VAHAUVSL    ONLINE       0     0     0

    errors: No known data errors


    Is this right or did I mess up somewhere?
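    For what it's worth, the reported ETA is internally consistent with the other numbers in the scan line; a quick sanity check (figures taken from the output above, treating the units as binary):

    ```python
    # Sanity-check the resilver ETA from the zpool status scan line:
    # "3.05G scanned out of 29.9T at 14.3M/s, 610h47m to go"
    GiB = 1024 ** 3
    TiB = 1024 ** 4
    MiB = 1024 ** 2

    scanned = 3.05 * GiB
    total = 29.9 * TiB
    rate = 14.3 * MiB  # bytes per second

    remaining_s = (total - scanned) / rate
    hours = remaining_s / 3600
    print(f"{hours:.0f} hours to go")  # roughly 609 h, matching the ~610h47m shown
    ```

    So the question is really why the scan rate is stuck at ~14 MB/s, not whether the ETA math is wrong.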

  • I have no personal experience with resilvering, only with regular scrubbing. In my experience, scrubbing throughput also slows down under certain conditions, e.g. heavy network traffic, and my impression is that it can stay at a very low level for a while even after the network traffic drops off.


    The time a resilver takes depends on the amount of data in the pool, the RAID scheme, and the number of disks in the pool.
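    To put that dependence on pool size into numbers: even at a healthy sustained rate, a pool this size takes days rather than hours. A rough estimate, assuming an illustrative sustained 100 MiB/s (not a measured figure):

    ```python
    # Rough resilver-time estimate for a ~29.9 TiB pool at an assumed
    # sustained scan rate of 100 MiB/s (illustrative, not measured).
    TiB = 1024 ** 4
    MiB = 1024 ** 2

    pool_data = 29.9 * TiB
    rate = 100 * MiB  # bytes per second

    hours = pool_data / rate / 3600
    print(f"about {hours:.0f} hours ({hours / 24:.1f} days)")  # about 87 hours (3.6 days)
    ```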

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod
