Didn't find a way to check.
zpool get all poolname on Linux displays the pool features/properties.
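If you only care about one specific flag, you can also query it directly; for example (the pool name here is just a placeholder):
zpool get feature@lz4_compress poolname
This prints enabled, active or disabled for that feature on the pool.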
Hi
I have a small problem: 2 out of 5 of my brand new WD Red drives pretty much kicked it - SMART errors, they cannot even complete a self-test, etc. So I pulled them out and my raidz2 is still online, as expected.
When I replace the drives with new ones, can I add them to the pool via the GUI or do I need the CLI? And if I need the CLI, how do I do it?
Can I add both at the same time, or is it better to do one after the other?
thx for any help
No GUI for that yet (it is supposed to be implemented). So via the CLI, one disk at a time. Check the Oracle ZFS administration documentation for the disk replacement procedure.
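Roughly, it boils down to one zpool replace per failed disk; a minimal sketch (the device names are placeholders, check zpool status for the real ones):
zpool status poolname                          # identify the failed disk (name or GUID)
zpool replace poolname <old-disk> <new-disk>   # e.g. using /dev/disk/by-id paths
zpool status poolname                          # watch the resilver progress
Wait for the resilver to finish before replacing the next disk.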
zpool get all poolname on Linux displays the pool features/properties.
Cool, I overlooked this one.
So it's confirmed, out of luck for F...nas users:
NAME PROPERTY VALUE SOURCE
INTERNAL size 10.9T -
INTERNAL capacity 85% -
INTERNAL altroot /mnt local
INTERNAL health ONLINE -
INTERNAL guid 5948082030589541869 default
INTERNAL version - default
INTERNAL bootfs - default
INTERNAL delegation on default
INTERNAL autoreplace off default
INTERNAL cachefile /data/zfs/zpool.cache local
INTERNAL failmode continue local
INTERNAL listsnapshots off default
INTERNAL autoexpand on local
INTERNAL dedupditto 0 default
INTERNAL dedupratio 1.00x -
INTERNAL free 1.60T -
INTERNAL allocated 9.28T -
INTERNAL readonly off -
INTERNAL comment - default
INTERNAL expandsize 0 -
INTERNAL freeing 0 default
INTERNAL feature@async_destroy enabled local
INTERNAL feature@empty_bpobj active local
INTERNAL feature@lz4_compress active local
INTERNAL feature@multi_vdev_crash_dump enabled local
INTERNAL feature@spacemap_histogram active local
INTERNAL feature@enabled_txg active local
INTERNAL feature@hole_birth active local
INTERNAL feature@extensible_dataset enabled local
INTERNAL feature@bookmarks enabled local
Hi
I already "resilvered" one of the two drives I had to replace in my raidz2. The first one took 48 h at a speed of 40 MB/s!! Wow (each drive is 5 TB).
Now I have enabled the write-back cache on all disks because I have a UPS attached - now the speed is 370 MB/s (it will finish in less than 6 h!!!) - that's like 9x the original speed!
Is that because resilvering is faster once the first drive has already finished, or is it only due to the enabled write cache!? ...if it is only the cache I am about to bite my a.. !!
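For reference, the drive write cache can usually be queried and toggled from the CLI with hdparm (the device name below is only an example, and as noted this is only sensible with a UPS):
hdparm -W /dev/sdX    # show the current write-cache setting
hdparm -W1 /dev/sdX   # enable the drive's write-back cache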
I've started to use the ZFS plugin on a test NAS that serves as a download server. So far ZFS has been very stable and performance has been great. I did notice strange behavior on the Status -> Disk Usage page: the statistics are not shown for the ZFS pool. See the file I've attached.
Hi
I successfully replaced all my drives one by one. The zpool is online and doesn't show any errors.
However, after the replacement I noticed constant I/O access on my RAID every 5-10 s or so. This is really driving me crazy - the disks are clicking all the time!
In iotop, txg_sync seemed to be the cause. However, nothing and nobody was writing to the disks, so no data was changing!?
So I did the following:
disabled ALL plugins incl. SMB, NFS etc.
unplugged the NIC cables
...still clicking drives and txg_sync in iotop every couple of seconds.
I am now doing a zfs scrub out of desperation; hopefully that solves the issue.
Does anyone have an idea why this problem occurred after exchanging my disks?
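Background on txg_sync: it is the ZFS transaction group sync thread, and on ZFS on Linux it flushes roughly every zfs_txg_timeout seconds (5 by default), which matches the 5-10 s rhythm above. The tunable can be inspected, and as a temporary experiment changed (it resets on reboot), like this:
cat /sys/module/zfs/parameters/zfs_txg_timeout        # default is 5 (seconds)
echo 30 > /sys/module/zfs/parameters/zfs_txg_timeout  # stretch the interval to see if the access pattern follows it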
OK, after scrubbing there is still constant disk access.
Any ideas?
Nope,
dmesg is totally clean!
OK, I exported the pool and the disks are still rattling every couple of seconds. So it is probably not OS-related. Such crap!
Did you read the thread that I posted? There is a possible solution that involves disabling a ZFS component.
Yeah, I did.
I tested it with the posted "echo" command and did not see any difference.
I did some digging into the collection of Disk Usage statistics for a ZFS pool. It turns out that collectd was throwing the following errors:
Dec 28 07:42:00 server collectd[2837]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 28 07:42:00 server collectd[2837]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-reserved.rrd, [1419748920:2893754368.000000], 1) failed with status -1.
I guess this error occurs because the df plugin from collectd does not properly recognize the ZFS filesystem. This can be fixed by changing /etc/collectd/collectd.conf:
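The exact snippet isn't shown above, but as an illustration (these particular option values are an assumption, not necessarily the change that was made), the df plugin section in /etc/collectd/collectd.conf can be told which filesystem types to collect:
<Plugin df>
    # assumption: explicitly select zfs (plus the root filesystem type) instead of the default selection
    FSType "zfs"
    FSType "ext4"
    IgnoreSelected false
</Plugin>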
After this change the syslog no longer threw the errors. However, the information page still did not show the stats for the ZFS pool. It turned out that the script /usr/sbin/omv-mkgraph did not properly generate the graphs for the pool, although the rrd database df-Data was properly created in /var/lib/rrdcached/db/localhost. The culprit is this part of the script, because the loop only covers df-root:
# Plugin: df
TITLE_DF="Disk usage"
COLOR_LINE_DF_FREE="#00cc00" # green
COLOR_LINE_DF_USED="#ff0000" # red
COLOR_AREA_DF_FREE="#b7efb7" # green
COLOR_AREA_DF_USED="#f7b7b7" # red
for dirname in df-root ; do
[ ! -e "${DATA}/${dirname}" ] && continue
rrdtool graph ${IMGDIR}/${dirname}-hour.png --start ${HOURSTART} ${DEFAULTS} --title "${TITLE_DF}${HOURTITLE}" .... (These lines are long so only showing first part)
rrdtool graph ${IMGDIR}/${dirname}-day.png --start ${DAYSTART} ${DEFAULTS} --title "${TITLE_DF}${DAYTITLE}" .... (These lines are long so only showing first part)
rrdtool graph ${IMGDIR}/${dirname}-week.png --start ${WEEKSTART} ${DEFAULTS} --title "${TITLE_DF}${WEEKTITLE}" .... (These lines are long so only showing first part)
rrdtool graph ${IMGDIR}/${dirname}-month.png --start ${MONTHSTART} ${DEFAULTS} --title "${TITLE_DF}${MONTHTITLE}" .... (These lines are long so only showing first part)
rrdtool graph ${IMGDIR}/${dirname}-year.png --start ${YEARSTART} ${DEFAULTS} --title "${TITLE_DF}${YEARTITLE}" .... (These lines are long so only showing first part)
done
I altered this code to:
# Plugin: df
TITLE_DF="Disk usage"
COLOR_LINE_DF_FREE="#00cc00" # green
COLOR_LINE_DF_USED="#ff0000" # red
COLOR_AREA_DF_FREE="#b7efb7" # green
COLOR_AREA_DF_USED="#f7b7b7" # red
for dirname in df-root ; do
[ ! -e "${DATA}/${dirname}" ] && continue
rrdtool graph ${IMGDIR}/${dirname}-hour.png --start ${HOURSTART} ${DEFAULTS} --title "${TITLE_DF}${HOURTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-day.png --start ${DAYSTART} ${DEFAULTS} --title "${TITLE_DF}${DAYTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-week.png --start ${WEEKSTART} ${DEFAULTS} --title "${TITLE_DF}${WEEKTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-month.png --start ${MONTHSTART} ${DEFAULTS} --title "${TITLE_DF}${MONTHTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-year.png --start ${YEARSTART} ${DEFAULTS} --title "${TITLE_DF}${YEARTITLE}"
done
for dirname in df-Data ; do
[ ! -e "${DATA}/${dirname}" ] && continue
rrdtool graph ${IMGDIR}/${dirname}-hour.png --start ${HOURSTART} ${DEFAULTS} --title "${TITLE_DF}${HOURTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-day.png --start ${DAYSTART} ${DEFAULTS} --title "${TITLE_DF}${DAYTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-week.png --start ${WEEKSTART} ${DEFAULTS} --title "${TITLE_DF}${WEEKTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-month.png --start ${MONTHSTART} ${DEFAULTS} --title "${TITLE_DF}${MONTHTITLE}"
rrdtool graph ${IMGDIR}/${dirname}-year.png --start ${YEARSTART} ${DEFAULTS} --title "${TITLE_DF}${YEARTITLE}"
done
Now the statistics work again. However, this fix is not ideal since I manually added the df-Data part.
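A slightly less hard-coded variant (just a sketch, not something the plugin ships) would be to loop over every df-* database that rrdcached has created, instead of listing df-root and df-Data separately:
# Plugin: df (sketch: iterate over all existing df-* databases)
for dirname in $(cd "${DATA}" && ls -d df-* 2>/dev/null) ; do
    [ ! -e "${DATA}/${dirname}" ] && continue
    # ... the same five rrdtool graph calls as in the df-root loop above ...
done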
The statistics are not shown for the ZFS pool. See the file I've attached.
I run ZFS in two VMs; I don't have the problems that you mention.
That is indeed strange. I use it on an HP MicroServer N54L. I did a clean install of OMV 1.7 and installed the ZFS plugin (0.6.3.5). My pool is called Data and is mounted as /Data. Maybe someone else can reproduce the problem?
If you don't mind losing the previous stats, you can always delete and recreate them.
I had to manually do the following:
omv-mkconf collectd
service collectd restart
Then click Refresh on the page. After that, it worked fine. Will have to look into this more. Shouldn't be too hard to fix.
Please delete this post.
I fixed the issue after a cold boot and some missing drives came back online. Now my zpool shows up fine.
Glad to hear you managed to resolve the issue.