So I just set up my first ZFS storage pool: a raidz2 with eight 3TB hard drives. To my surprise, I only have 15TB of storage space available. Is this right? I was under the impression that I would have 18TB of storage space.
[HOWTO] Install ZFS-Plugin & use ZFS on OMV
- OMV 1.0
- raulfg3
-
What is the output of
zpool status
and
zfs list <poolname>? Your usable space can be seen under 'AVAIL'.
I think the unit of the reported value is TiB, not TB. There is also some overhead: https://serverfault.com/questi…-on-4k-sector-disks-going
-
Thank you for your reply!
As per your questions:
Code
zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	data                                          ONLINE       0     0     0
	  raidz2-0                                    ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1XV83SA  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1768209  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N7XR0NYN  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N1XV8XL6  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1761585  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3CP7C0U  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N3CP7ZT8  ONLINE       0     0     0
	    ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N6EZ3VL7  ONLINE       0     0     0

errors: No known data errors
I figured there would be overhead but hoped it would be less than 8TB, but if it is so, then it is so.
-
I figured there would be overhead but hoped it would be less than 8TB
I think the overhead caused by the ZFS filesystem itself is less than 8TB. You lose 6TB for redundancy: 24TB - 6TB = 18TB. Then you have to subtract the allocation-efficiency overhead, which is approx. 92.4% for RAIDZ2 with 8 disks (see the table in that link):
18TB x 92.4% = 16.63TB, which is equal to about 15.13 TiB. I think everything is OK with your pool.
You could also try a pool of two striped RAIDZ2 vdevs with 4 disks each. Maybe then you have less allocation overhead, although you would then lose 4 disks' worth of capacity to parity instead of 2.
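As a quick sanity check, the arithmetic can be reproduced in any shell (the 92.4% efficiency figure comes from the table linked earlier; awk is used here only for the floating-point math):

```shell
# Sketch of the capacity arithmetic: 8 x 3 TB in raidz2.
awk 'BEGIN {
  raw_tb    = 8 * 3                          # 24 TB of raw capacity
  parity_tb = 2 * 3                          # RAIDZ2 sacrifices two disks: 6 TB
  usable_tb  = (raw_tb - parity_tb) * 0.924  # ~92.4% allocation efficiency
  usable_tib = usable_tb * 1e12 / (2 ^ 40)   # vendors count in TB (10^12), ZFS reports TiB (2^40)
  printf "usable: %.2f TB = %.2f TiB\n", usable_tb, usable_tib
}'
# prints: usable: 16.63 TB = 15.13 TiB
```

So the ~15 TiB shown under AVAIL is exactly what this geometry should give.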
-
Thank you for your explanation, it makes sense! So 15 TB is the size of my pool.
-
Hi,
Trying to create a mirror pool with 4 disks in the latest OMV 3 with the ZFS plugin gives the following error:
Error #0:
exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-id | grep 'sda$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175
Stack trace:
#0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)
#1 /usr/share/omvzfs/Utils.php(135): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)
#2 /usr/share/openmediavault/engined/rpc/zfs.inc(136): OMVModuleZFSUtil::getDiskId('/dev/sda')
#3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)
#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
#5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)
#6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)
#7 {main}
Is there a workaround for this?
Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space; I'm using RAID10 now, so I guess a mirrored stripe would be good. Any hints on which commands to run in the CLI, as I can't seem to do it in the plugin web interface?
Thanks
UPDATE:
I did this; is it correct?
root@omv:~# zpool create data mirror /dev/sda /dev/sdb
root@omv:~# zpool status
pool: data
state: ONLINE
scan: none requested
config:
	NAME        STATE     READ WRITE CKSUM
	data        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0

errors: No known data errors

root@omv:~# zpool add data mirror /dev/sdc /dev/sdd
root@omv:~# zpool status
pool: data
state: ONLINE
scan: none requested
config:
	NAME        STATE     READ WRITE CKSUM
	data        ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda     ONLINE       0     0     0
	    sdb     ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors
root@omv:~#
-
That is basically correct and gives you RAID10-like functionality (striped mirrors). You can also do zpool create data mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd to create the whole thing in one fell swoop.
Before you go loading it up with data though, I would recommend you do the following, to avoid issues with drives being re-named on a reboot.
1. zpool export data
2. zpool import -d /dev/disk/by-id data
That will use stable disk IDs instead of sda, sdb, etc. My drives have a tendency to change which letter they're assigned on every reboot, but the ID stays the same.
-
As wolffstarr has said, use the dev ID. For future reference, if you haven't already created the pool:
Get the IDs first:
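A typical way to get that listing (the standard udev by-id path is an assumption here, since the original code box did not survive):

```shell
# Show stable identifiers (ata-..., wwn-...) and which sdX each one points to
ls -l /dev/disk/by-id/
```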
Then choose the identifiers (starting with wwn) for the disks. You can see which disk (e.g. sda) has which identifier; use the full string (if it has a -partX at the end, you've got the partition number too, so truncate at the -).
This way, you will add the disks by ID and it's SO much easier to track down failed disks etc as the sdX identifiers can change.
To make a new RAID10 pool mounted at /mnt/Tank you would use:
Code
zpool create -m /mnt/Tank Tank mirror wwn-0x1xxxxxxxxxxx wwn-0x2xxxxxxxxxxx mirror wwn-0x3xxxxxxxxxxxx wwn-0x4xxxxxxxxxxx
So wwn1 and wwn2 are mirrors, as are wwn3 and wwn4.
You could also use the identifiers starting with ata. Usually the serial number is the last part of the ata string, so I label the drive caddies with this. And when I do a zpool status, it displays these against each drive.
Edit, correction about wwn and ata.
-
zpool create data
If you are using disks with 4K sectors, you should create your pool with the ashift option to get correct sector alignment:
zpool create -o ashift=12 data....
You have to do this when you create your pool. You can't change it later (as far as I know).
-
Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space; I'm using RAID10 now
Yes, that gives good performance and less time for resilvering, because the data is more or less copied from the other disks and does not need to be recalculated out of parity information.
-
This is a well-known error, and I think it happened while you were trying to add disks to the basic mirror. Several people have already reported it in this thread.
E.g. myself: http://forum.openmediavault.or…?postID=150321#post150321
There I wrote: "Conclusion: I would not recommend to use the plugin for expanding a pool."
-
Yes, ashift=12 gives the fastest writes and a little less space. You can also use ashift=9 on 4K drives: a bit less write speed, more space. Benchmark both and decide. ZFS should detect most 4K-sector drives and set the ashift to 12 automatically. If you have created a pool and didn't specify an ashift, you can check it using:
It will also appear in:
But usually only if you've specified an ashift. Whereas zdb shows it regardless.
If you do a:
And get nothing returned, you likely didn't specify it. Do:
For my smaller dedicated emby box this gives:
One for each mirrored pair.
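Assuming a pool named data, the checks described above can be sketched as follows (the exact commands are my assumption, not verbatim from the post):

```shell
# Property view -- often only shows the ashift when it was set explicitly
zpool get ashift data

# zdb reads the pool configuration, so it reports the ashift regardless;
# a striped-mirror pool prints one ashift line per mirrored pair
zdb -C data | grep ashift
```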
-
Tried to install it, but it's stuck on "Building initial module for 4.9.0-0.bpo.3-amd64". I waited an hour, but it just stopped working.
Code
.......
Unpacking libuutil1linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libnvpair1linux.
Preparing to unpack .../libnvpair1linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libnvpair1linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libzpool2linux.
Preparing to unpack .../libzpool2linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libzpool2linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package libzfs2linux.
Preparing to unpack .../libzfs2linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking libzfs2linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package linux-compiler-gcc-4.9-x86.
Preparing to unpack .../linux-compiler-gcc-4.9-x86_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-compiler-gcc-4.9-x86 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-4.9.0-0.bpo.3-common.
Preparing to unpack .../linux-headers-4.9.0-0.bpo.3-common_4.9.30-2+deb9u2~bpo8+1_all.deb ...
Unpacking linux-headers-4.9.0-0.bpo.3-common (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-kbuild-4.9.
Preparing to unpack .../linux-kbuild-4.9_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-kbuild-4.9 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-4.9.0-0.bpo.3-amd64.
Preparing to unpack .../linux-headers-4.9.0-0.bpo.3-amd64_4.9.30-2+deb9u2~bpo8+1_amd64.deb ...
Unpacking linux-headers-4.9.0-0.bpo.3-amd64 (4.9.30-2+deb9u2~bpo8+1) ...
Selecting previously unselected package linux-headers-amd64.
Preparing to unpack .../linux-headers-amd64_4.9+80~bpo8+1_amd64.deb ...
Unpacking linux-headers-amd64 (4.9+80~bpo8+1) ...
Selecting previously unselected package zfsutils-linux.
Preparing to unpack .../zfsutils-linux_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking zfsutils-linux (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package zfs-zed.
Preparing to unpack .../zfs-zed_0.6.5.9-2~bpo8+1_amd64.deb ...
Unpacking zfs-zed (0.6.5.9-2~bpo8+1) ...
Selecting previously unselected package openmediavault-zfs.
Preparing to unpack .../openmediavault-zfs_3.0.18_amd64.deb ...
Unpacking openmediavault-zfs (3.0.18) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for openmediavault (3.0.86) ...
Restarting engine daemon ...
Setting up zfs-dkms (0.6.5.9-2~bpo8+1) ...
Loading new zfs-0.6.5.9 DKMS files...
Building for 4.9.0-0.bpo.3-amd64
Building initial module for 4.9.0-0.bpo.3-amd64
-
@Grinchy Maybe it's an alternative for you to use the Proxmox LTS kernel instead. If so, go to "OMV-Extras - Kernel" and "Install Proxmox kernel" in the OMV web UI.
This works fine for me and some other guys here.
Greetings Hoppel
@hoppel118
Why do you prefer another kernel? What is the advantage compared to the 4.9.0 bpo kernel? What does it have to do with ZFS? I always thought the Proxmox kernel is only necessary if I want to run virtual machines within OMV.
-
-
Why would a ZFS pool be imported with the sda,sdb etc naming convention instead of the dev ID? I'm relocating drives between servers and when I import they come up as sdx. At first I didn't care, then something decided to reshuffle my drives and the pool disappeared. It was easy enough to import, but now the shared folders don't work. Nor will it let me edit. I have to delete them and re-create. Seems bothersome unless they can be imported using their ID name which never changes. Thoughts?
-
@vulcan4d
Please look at this post (no. 687): https://forum.openmediavault.o…?postID=151266#post151266
and here (no. 661): https://forum.openmediavault.o…?postID=149193#post149193
-
Good morning all, sorry for the silly questions, but I'm a beginner.
I've just done a fresh installation of OMV 3, with the ZFS plugin installed with no problem. I would like to ask something about snapshots and upgrades.
From the ZFS tab I see "snapshot", but I don't understand how to use it; I see only filesystem/volume and nothing else.
What is the correct procedure for a snapshot?
If I do a new install or an upgrade, will I lose my whole shared_data pool?
Thank you very much; everybody's advice is appreciated.
-
What is the correct procedure for a snapshot?
Please look here: https://forum.openmediavault.o…?postID=142722#post142722
If I do a new install or an upgrade, will I lose my whole shared_data pool?
Normally not, if you "export" your pool beforehand. In the new installation you then have to "import" the pool. But in any case you should have a working backup before doing any major configuration changes.
The ZFS plugin has only limited functionality. It may be necessary to use the CLI for some actions.
Some good hints: ZFS cheat sheet
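For the CLI route, a minimal sketch of both operations discussed above; the pool name shared_data and dataset name docs are illustrative only:

```shell
# Take a snapshot of a dataset, then list all snapshots
zfs snapshot shared_data/docs@before-upgrade
zfs list -t snapshot

# Roll the dataset back to that snapshot if something goes wrong
zfs rollback shared_data/docs@before-upgrade

# Before a reinstall: export the pool, then import it on the new system
zpool export shared_data
zpool import shared_data
```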