[HOWTO] Install ZFS-Plugin & use ZFS on OMV

  • So I just set up my first ZFS storage pool, a raidz2 with eight 3 TB hard drives. To my surprise I only have 15 TB of storage space available. Is this right? I was under the impression that I would have 18 TB of storage space.

  • What is the output of
    zpool status
    and
    zfs list <poolname>?


    Your usable space can be seen under 'AVAIL'.
    I think the unit of the reported value is TiB, not TB.
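
    If that is the case, you can convert the reported value back to decimal TB quickly on the shell (just a rough sketch using bc; 1 TiB = 2^40 bytes):

    Code
    echo "scale=2; 15.0 * 2^40 / 10^12" | bc    # 15 TiB is roughly 16.49 TB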


    There is also some overhead: https://serverfault.com/questi…-on-4k-sector-disks-going


  • Thank you for your reply!
    As per your questions:

    Code
    zfs list data
    NAME   USED  AVAIL  REFER  MOUNTPOINT
    data   512K  15.0T   205K  /data


    I figured there would be overhead, but I hoped it would be less than 8 TB. But if it is so, then it is so.

  • I figured there would be overhead, but I hoped it would be less than 8 TB

    I think the overhead caused by the ZFS filesystem itself is less than 8 TB. You lose 6 TB for redundancy: 24 TB - 6 TB = 18 TB. Then you have to account for the allocation efficiency, which is approx. 92.4% for RAIDZ2 with 8 disks (see the table in that link):
    18 TB x 92.4% = 16.63 TB, which is roughly 15.1 TiB.
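
    You can re-check that arithmetic on the shell if you like (just a sketch; bc is assumed to be installed, and 92.4% is the efficiency figure from the linked table):

    Code
    echo "scale=2; 18 * 0.924" | bc                  # ~16.63 TB usable (decimal)
    echo "scale=2; 18 * 0.924 * 10^12 / 2^40" | bc   # ~15.1 TiB, the unit 'zfs list' reports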


    I think everything is OK with your pool.


    You could also try a striped pool of two RAIDZ2 vdevs with 4 disks each. Maybe then you have less overhead.
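
    For illustration, such a layout would be created roughly like this (only a sketch; the device names are placeholders, and in practice you would rather use /dev/disk/by-id paths):

    Code
    # two RAIDZ2 vdevs of 4 disks each, striped together into one pool
    zpool create data \
      raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
      raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh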


  • I think the overhead caused by the ZFS filesystem itself is less than 8 TB. You lose 6 TB for redundancy: 24 TB - 6 TB = 18 TB. Then you have to account for the allocation efficiency, which is approx. 92.4% for RAIDZ2 with 8 disks (see the table in that link): 18 TB x 92.4% = 16.63 TB, which is roughly 15.1 TiB.


    I think everything is OK with your pool.


    You could also try a striped pool of two RAIDZ2 vdevs with 4 disks each. Maybe then you have less overhead.

    Thank you for your explanation, it makes sense! So roughly 15 TiB is the usable size of my pool.

  • Hi,


    Trying to create a mirror pool with 4 disks in the latest OMV 3 with the ZFS plugin gives the following error:


    Error #0:exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-id | grep 'sda$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175Stack trace:#0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)#1 /usr/share/omvzfs/Utils.php(135): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)#2 /usr/share/openmediavault/engined/rpc/zfs.inc(136): OMVModuleZFSUtil::getDiskId('/dev/sda')#3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)#5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)#6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)#7 {main}


    Is there a workaround for this?


    Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space (I'm using RAID10 now), so I guess striped mirrors would be good. Any hints on which commands to run on the CLI, as I can't seem to do it in the plugin web interface?


    Thanks


    UPDATE:


    I did this. Is it correct?


    root@omv:~# zpool create data mirror /dev/sda /dev/sdb
    root@omv:~# zpool status
      pool: data
     state: ONLINE
      scan: none requested
    config:

            NAME          STATE     READ WRITE CKSUM
            data          ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                sda       ONLINE       0     0     0
                sdb       ONLINE       0     0     0

    errors: No known data errors


    root@omv:~# zpool add data mirror /dev/sdc /dev/sdd
    root@omv:~# zpool status
      pool: data
     state: ONLINE
      scan: none requested
    config:

            NAME          STATE     READ WRITE CKSUM
            data          ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                sda       ONLINE       0     0     0
                sdb       ONLINE       0     0     0
              mirror-1    ONLINE       0     0     0
                sdc       ONLINE       0     0     0
                sdd       ONLINE       0     0     0

    errors: No known data errors
    root@omv:~#

  • That is basically correct and gives you RAID10-like functionality (striped mirrors). You can also do zpool create data mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd to create the whole thing in one fell swoop.
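
    Spelled out as a command (using the same device names as in your example; by-id names are preferable, as noted below):

    Code
    # create both mirrored pairs in a single command
    zpool create data mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd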


    Before you go loading it up with data though, I would recommend you do the following, to avoid issues with drives being re-named on a reboot.


    1. zpool export data
    2. zpool import -d /dev/disk/by-id data


    That will use the disk IDs instead of sda, sdb, etc. My drives have a tendency to change which letter they're assigned to on every reboot, but the ID stays the same.
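
    To double-check afterwards that the pool really came back with ID-based member names, something like this should do (the prefixes depend on your drives):

    Code
    zpool status data | grep -E 'wwn-|ata-'    # vdev members should now be listed by ID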

  • As wolffstarr has said, use the dev ID. For future reference, if you haven't already created the pool:


    Get the IDs first:


    Code
    ls -l /dev/disk/by-id/ | grep sd


    Then choose the identifiers (starting with wwn) for the disks. You can see which disk (e.g. sda) has which identifier; use the full string (if it has a -partX at the end, you've picked up the partition number too, so truncate at the -).


    This way you will add the disks by ID, and it's SO much easier to track down failed disks etc., as the sdX identifiers can change.
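
    A small sketch for picking out only the whole-disk wwn entries (filtering away the -partX lines mentioned above):

    Code
    ls -l /dev/disk/by-id/ | grep wwn- | grep -v part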


    To make a new RAID10 pool mounted at /mnt/Tank you would use:


    Code
    zpool create -m /mnt/Tank Tank mirror wwn-0x1xxxxxxxxxxx wwn-0x2xxxxxxxxxxx mirror wwn-0x3xxxxxxxxxxxx wwn-0x4xxxxxxxxxxx


    So wwn1 and wwn2 are mirrors, as are wwn3 and wwn4.


    You could also use the identifiers starting with ata. Usually, the serial number is the last part of the ata string, so I label the caddies with this. And when I do a zpool status it displays these against each drive.


    Edit, correction about wwn and ata.




  • zpool create data

    If you are using disks with 4k sectors, you should create your pool with the ashift option to get correct sector alignment:


    zpool create -o ashift=12 data....


    You have to do this when you create your pool. You can't change it later (as far as I know).
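
    For example, combining this with the by-id advice above (only a sketch; the wwn names are placeholders for your actual disks):

    Code
    # force 4K alignment at creation time; ashift applies per vdev and cannot be changed afterwards
    zpool create -o ashift=12 data \
      mirror /dev/disk/by-id/wwn-0x1xxxxxxxxxxx /dev/disk/by-id/wwn-0x2xxxxxxxxxxx \
      mirror /dev/disk/by-id/wwn-0x3xxxxxxxxxxx /dev/disk/by-id/wwn-0x4xxxxxxxxxxx
    # check which ashift each vdev actually ended up with
    zdb -C data | grep ashift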


  • Another question: what would be the best choice for 4 disks while staying safe against 2 disk failures? I don't care about losing space; I'm using RAID10 now.

    Yes, that gives good performance and shorter resilver times, because the data is more or less copied straight from the other half of each mirror and does not need to be recalculated from parity information.
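
    For illustration, replacing a failed member of one of the mirrors (hypothetical device names, just a sketch) triggers exactly that kind of copy-based resilver:

    Code
    # swap the failed disk for a new one; ZFS copies the data from its mirror partner
    zpool replace data /dev/disk/by-id/wwn-0xOLDxxxxxxxxxxx /dev/disk/by-id/wwn-0xNEWxxxxxxxxxxx
    zpool status data    # resilver progress shows up under 'scan:'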


  • Error #0:exception 'OMV\ExecException' with message 'Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C; ls -la /dev/disk/by-id | grep 'sda$'' with exit code '1': ' in /usr/share/php/openmediavault/system/process.inc:175Stack trace:#0 /usr/share/omvzfs/Utils.php(395): OMV\System\Process->execute(Array, 1)#1 /usr/share/omvzfs/Utils.php(135): OMVModuleZFSUtil::exec('ls -la /dev/dis...', Array, 1)#2 /usr/share/openmediavault/engined/rpc/zfs.inc(136): OMVModuleZFSUtil::getDiskId('/dev/sda')#3 [internal function]: OMVRpcServiceZFS->addPool(Array, Array)#4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)#5 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('addPool', Array, Array)#6 /usr/sbin/omv-engined(536): OMV\Rpc\Rpc::call('ZFS', 'addPool', Array, Array, 1)#7 {main}

    This is a well-known error, and I think it happened while you were trying to add disks to the basic mirror. Several people have already reported it in this thread.
    E.g. myself: http://forum.openmediavault.or…?postID=150321#post150321


    There I wrote: Conclusion: I would not recommend using the plugin for expanding a pool.
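
    Expanding on the CLI instead works fine, as shown earlier in this thread; with by-id names it would look something like this (placeholder names, just a sketch):

    Code
    zpool add data mirror /dev/disk/by-id/wwn-0x5xxxxxxxxxxx /dev/disk/by-id/wwn-0x6xxxxxxxxxxx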


  • Yes, ashift 12 gives the fastest writes and a little less space. You can also use ashift 9 on 4K drives: a bit less write speed, more space. Benchmark both and decide. ZFS should detect most 4K-sector drives and set the ashift to 12 automatically. If you have created a pool and didn't specify an ashift, you can check it using:


    Code
    zdb -C


    It will also appear in:


    Code
    zpool get all Tank


    But there it reports 0 unless you've specified an ashift explicitly, whereas zdb shows the value that is actually in use.


    If you do a:


    Code
    zpool get ashift


    and it comes back as 0 (the default), you likely didn't specify it. Do:


    Code
    zdb -C | grep ashift


    For my smaller dedicated emby box this gives:


    Code
    ashift=12
    ashift=12


    One for each mirrored pair.




  • Tried to install it, but it's stuck on "Building initial module for 4.9.0-0.bpo.3-amd64". Waited for an hour, but it just stopped working :(
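
    (A hedged troubleshooting idea, assuming the plugin builds the ZFS modules via DKMS as usual on Debian; the exact log path depends on the ZoL version. Watching the build output shows whether it is still working or has failed:)

    Code
    dkms status                                   # shows whether the spl/zfs modules built or failed
    tail -f /var/lib/dkms/zfs/*/build/make.log    # follow the kernel module build output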

  • @Grinchy Maybe using the Proxmox LTS kernel instead is an alternative for you. If so, go to "OMV-Extras - Kernel" and "Install Proxmox kernel" in the OMV web UI.


    This works fine for me and some other guys here.


    Greetings Hoppel


  • @Grinchy Maybe using the Proxmox LTS kernel instead is an alternative for you. If so, go to "OMV-Extras - Kernel" and "Install Proxmox kernel" in the OMV web UI.


    This works fine for me and some other guys here.


    Greetings Hoppel

    @hoppel118
    Why do you prefer another kernel? What is the advantage compared to the 4.9.0 bpo kernel? And what does it have to do with ZFS? I always thought the Proxmox kernel is only necessary if I want to run virtual machines within OMV.


  • No, you can use the Proxmox kernel without virtualizing. Proxmox is also based on Debian.


    The Proxmox kernel simply works in combination with ZoL, because Proxmox supports ZFS by default: the ZFS modules are already built into that kernel, so there is no separate DKMS build that could fail. That is the only point.


    @Grinchy Take it or leave it! ;)


  • Why would a ZFS pool be imported with the sda, sdb, etc. naming convention instead of the dev ID? I'm relocating drives between servers, and when I import the pool the members come up as sdX. At first I didn't care, then something reshuffled my drives and the pool disappeared. It was easy enough to import again, but now the shared folders don't work, nor will it let me edit them; I have to delete and re-create them. That seems bothersome, unless the pool can be imported using the ID names, which never change. Thoughts?
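
    (The usual remedy from earlier in this thread should apply here as well; a sketch, assuming the pool is called 'data':)

    Code
    zpool export data
    zpool import -d /dev/disk/by-id data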


  • Good morning to all, sorry for the silly questions, but I'm a beginner.
    I've just done a fresh installation of OMV 3, with the ZFS plugin installed without problems, and I would like to ask something about snapshots and upgrades.
    In the ZFS tab I see Snapshot, but I don't understand how to use it; I only see filesystem/volume and nothing else.
    What is the correct procedure for snapshots?
    If I do a new install or an upgrade, do I lose my whole shared_data pool?
    Thank you very much, everybody's advice is appreciated.


  • What is the correct procedure for snapshots?

    Please look here: https://forum.openmediavault.o…?postID=142722#post142722
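
    For a quick idea of what the CLI side looks like (only a sketch; the dataset name below is made up, your layout may differ):

    Code
    # take a snapshot of a dataset, list all snapshots, and roll back if needed
    zfs snapshot shared_data/media@before-upgrade
    zfs list -t snapshot
    zfs rollback shared_data/media@before-upgrade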

    If I do a new install or an upgrade, do I lose my whole shared_data pool?

    Normally not, if you "export" your pool beforehand. In the new installation you then have to "import" the pool again. But in any case you should have a working backup before doing major configuration changes ;)


    The ZFS plugin has only limited functionality. It may be necessary to use the CLI for some actions.


    Some good hints: ZFS cheat sheet

