Hi,
I'm using a ZFS pool,
but I saw this HDD state (via a S.M.A.R.T. tool).
It seems two devices are not functional:
REMOVED & UNAVAIL
How can I replace them in the pool?
Thanks
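For reference, the usual way to swap out a dead disk is zpool replace. A minimal sketch, assuming the pool is named Pool1 and using hypothetical device IDs (substitute your own):

# Identify which devices are REMOVED / UNAVAIL
zpool status Pool1

# Replace a failed device with a new disk (hypothetical IDs)
zpool replace Pool1 ata-OLD_DISK_SERIAL /dev/disk/by-id/ata-NEW_DISK_SERIAL

# If the same disk came back healthy, clear the errors instead of replacing
zpool clear Pool1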
I replugged every HDD
and now all are online,
but two devices are now resilvering.
Do I just need to wait?
Thanks
I would wait. What are you using that you can "replug" your drives?
Are the drives in an older server chassis?
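While waiting, the resilver can be monitored; a minimal sketch, assuming the pool name Pool1 from this thread:

# The 'scan:' line shows percent done and an estimated completion time
zpool status Pool1

# Or refresh automatically every 10 seconds
watch -n 10 zpool status Pool1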
About the size I see in the pool:
I replugged the hardware devices
and every disk is OK now,
but I don't see the total size...
Pool status (zpool status):
pool: Pool1
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://zfsonlinux.org/msg/ZFS-8000-9P
scan: resilvered 1.71T in 5h52m with 0 errors on Sat Jan 19 20:36:09 2019
config:
NAME STATE READ WRITE CKSUM
Pool1 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
ata-ST2000VN000-1HJ164_W520JHC9 ONLINE 0 0 3
ata-ST2000VN000-1HJ164_W520JHGD ONLINE 0 0 0
ata-ST2000VN000-1HJ164_W520JGXY ONLINE 0 0 0
ata-WDC_WD20EFRX-68AX9N0_WD-WMC301013769 ONLINE 0 0 0
ata-WDC_WD20EFRX-68AX9N0_WD-WMC301090495 ONLINE 0 0 0
ata-WDC_WD20EFRX-68AX9N0_WD-WMC301111420 ONLINE 0 0 0
errors: No known data errors
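A side note on the 3 CKSUM errors shown above: since the resilver completed with 0 errors, the 'action:' text suggests clearing the counters and keeping an eye on that drive. A minimal sketch:

# Reset the pool's error counters, per the 'action:' advice
zpool clear Pool1

# Optionally re-verify all data afterwards
zpool scrub Pool1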
Pool details (zpool get all):
NAME PROPERTY VALUE SOURCE
Pool1 size 10.9T -
Pool1 capacity 86% -
Pool1 altroot - default
Pool1 health ONLINE -
Pool1 guid 6052818310017031301 -
Pool1 version - default
Pool1 bootfs - default
Pool1 delegation on default
Pool1 autoreplace off default
Pool1 cachefile - default
Pool1 failmode wait default
Pool1 listsnapshots off default
Pool1 autoexpand off default
Pool1 dedupditto 0 default
Pool1 dedupratio 1.00x -
Pool1 free 1.44T -
Pool1 allocated 9.43T -
Pool1 readonly off -
Pool1 ashift 0 default
Pool1 comment - default
Pool1 expandsize - -
Pool1 freeing 0 -
Pool1 fragmentation 33% -
Pool1 leaked 0 -
Pool1 multihost off default
Pool1 feature@async_destroy enabled local
Pool1 feature@empty_bpobj enabled local
Pool1 feature@lz4_compress active local
Pool1 feature@multi_vdev_crash_dump disabled local
Pool1 feature@spacemap_histogram active local
Pool1 feature@enabled_txg active local
Pool1 feature@hole_birth active local
Pool1 feature@extensible_dataset enabled local
Pool1 feature@embedded_data active local
Pool1 feature@bookmarks enabled local
Pool1 feature@filesystem_limits disabled local
Pool1 feature@large_blocks disabled local
Pool1 feature@large_dnode disabled local
Pool1 feature@sha512 disabled local
Pool1 feature@skein disabled local
Pool1 feature@edonr disabled local
Pool1 feature@userobj_accounting disabled local
Pool filesystem details (zfs get all):
NAME PROPERTY VALUE SOURCE
Pool1 type filesystem -
Pool1 creation Wed Jul 1 11:16 2015 -
Pool1 used 6.28T -
Pool1 available 752G -
Pool1 referenced 6.28T -
Pool1 compressratio 1.00x -
Pool1 mounted yes -
Pool1 quota none default
Pool1 reservation none default
Pool1 recordsize 128K default
Pool1 mountpoint /mnt local
Pool1 sharenfs off default
Pool1 checksum on default
Pool1 compression off default
Pool1 atime on default
Pool1 devices on default
Pool1 exec on default
Pool1 setuid on default
Pool1 readonly off default
Pool1 zoned off default
Pool1 snapdir hidden default
Pool1 aclinherit restricted default
Pool1 createtxg 1 -
Pool1 canmount on default
Pool1 xattr on default
Pool1 copies 1 default
Pool1 version 5 -
Pool1 utf8only off -
Pool1 normalization none -
Pool1 casesensitivity sensitive -
Pool1 vscan off default
Pool1 nbmand off default
Pool1 sharesmb off default
Pool1 refquota none default
Pool1 refreservation none default
Pool1 guid 6341434178132475291 -
Pool1 primarycache all default
Pool1 secondarycache all default
Pool1 usedbysnapshots 0B -
Pool1 usedbydataset 6.28T -
Pool1 usedbychildren 49.9M -
Pool1 usedbyrefreservation 0B -
Pool1 logbias latency default
Pool1 dedup off default
Pool1 mlslabel none default
Pool1 sync standard default
Pool1 dnodesize legacy default
Pool1 refcompressratio 1.00x -
Pool1 written 6.28T -
Pool1 logicalused 6.28T -
Pool1 logicalreferenced 6.28T -
Pool1 volmode default default
Pool1 filesystem_limit none default
Pool1 snapshot_limit none default
Pool1 filesystem_count none default
Pool1 snapshot_count none default
Pool1 snapdev hidden default
Pool1 acltype off default
Pool1 context none default
Pool1 fscontext none default
Pool1 defcontext none default
Pool1 rootcontext none default
Pool1 relatime off default
Pool1 redundant_metadata all default
Pool1 overlay off default
Pool1 omvzfsplugin:uuid 2c202288-a1c5-4d23-a42e-deaa04f2b5b2 local
How can I see the total size actually available in my pool?
Thanks
but I don't see the total size...
errors: No known data errors
Pool details (zpool get all):
NAME PROPERTY VALUE SOURCE
Pool1 size 10.9T -
Yep. But look at my pics, we only see 7.01T.
When I'm in the folder, I see that.
I'm running a ZFS mirror, not RAIDZ (so I don't have the parity disk subtraction).
Did you have more space before the disk resilver?
I believe @hoppel118 is running a RAIDZ ZFS array. Maybe he'll look at this.
I had an old version of my pool with about 10T.
I had a problem and reinstalled OMV, then reimported my pool (I lost all my data). I rebuilt it, but only with 7T.
I would like to add every disk volume to this pool.
How can I do that?
Thanks
I believe @hoppel118 is running a RAIDZ,
Me too, I am running a RAIDZ1 array.
The pool size information from "zpool get all" is the size where the parity disk(s) are not subtracted.
You can try zfs list. It outputs the used and available space of the pool, which should match the information from the ZFS plugin of OMV.
You can calculate it here: ZFS / RAIDZ Capacity Calculator
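A minimal sketch of that check; the output values are illustrative, taken from the zfs get output earlier in this thread:

zfs list Pool1

# Example output (illustrative):
# NAME    USED   AVAIL  REFER  MOUNTPOINT
# Pool1  6.28T    752G  6.28T  /mnt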
I don't see the 10T.
I have 6 × 2T.
On RAIDZ2, 7T is too small...
But when I look at the ZFS calculator...
7T is the max size?!
You have 6 drives at roughly 1.82T each. You have to subtract 2 of those drives for parity (RAIDZ2).
That leaves 4 drives × 1.82T = 7.28T in raw storage. A bit more must be subtracted for ZFS overhead, checksums, etc., so 7T is about right.
Pool1 used 6.28T -
Pool1 available 752G
You appear to have 6.28T of data on the array, with 752G remaining
7T (capacity) - 6.28T (data) = 720GB (remaining empty space)
This is very close to what you have.
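As a back-of-the-envelope check of that math (a sketch; 1.82T is roughly the usable TiB of a marketed 2TB drive):

# 6 x 2TB drives; RAIDZ2 reserves 2 drives' worth of space for parity
echo "4 * 1.82" | bc     # 7.28 TiB usable, before ZFS overhead
echo "7.00 - 6.28" | bc  # 0.72 TiB, i.e. about 720G free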
OK.
Thanks for your help in understanding this.
Ok, so it’s solved now. Great, thanks @flmaxey!