Problem creating mirrors with the ZFS plugin

  • Hi guys!
    I have tried adding a ZFS pool with a mirror of two drives, then rebooting and trying to add another mirror of two drives to the same pool... I keep getting errors...


    An error about informing the kernel... so I rebooted...


    No pool after reboot?!


    Is anyone else having problems?


    This is the error I got after making sure the drives were wiped and rebooting after creating the pool with two drives... it came up when adding another mirror to the same pool:

    Code
    Error #0:
    exception 'OMV\Rpc\Exception' with message 'Invalid RPC response. Please check the syslog for more information.' in /usr/share/php/openmediavault/rpc/rpc.inc:186
    Stack trace:
    #0 /usr/share/php/openmediavault/rpc/proxy/json.inc(95): OMV\Rpc\Rpc::call('ZFS', 'expandPool', Array, Array, 3)
    #1 /var/www/openmediavault/rpc.php(45): OMV\Rpc\Proxy\Json->handle()
    #2 {main}

    bookie56

  • This is a known bug in the ZFS plugin. Expanding the pool is possible, but you have to do it from the command line. This was already described here in the forum.


    I'll have to look for the thread.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Well, that is strange....
    I checked status after adding the extra mirror of two drives:


    I thought it was strange when I saw I had:




    So things are working even if I was getting strange readouts....





    bookie56

  • Look here


    It should be something like that:
    zpool add yourpool mirror /dev/sdd /dev/sde


    Replace the /dev/* entries with your disk IDs.
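    For example, you can look up the stable by-id names first and then use them in the command above (the disk IDs below are placeholders, not real ones):

```shell
# List the persistent by-id names of your disks (IDs in the commands
# below are just examples; substitute your own)
ls -l /dev/disk/by-id/ | grep -v part

# Attach the new two-disk mirror to the existing pool using those IDs
zpool add yourpool mirror \
    /dev/disk/by-id/ata-EXAMPLE_DISK_A \
    /dev/disk/by-id/ata-EXAMPLE_DISK_B

# Check that the new mirror vdev appears in the pool layout
zpool status yourpool
```

    Using by-id names instead of /dev/sdX avoids problems when the kernel reorders drive letters after a reboot.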



    Edit: You were faster than I :)

    So things are working even if I was getting strange readouts....

    Nevertheless it's a known bug in the plugin.


    Edited 2 times, last by cabrio_leo ()

  • Did you enable the ACL support for ZFS?


    As @flmaxey recommended here (thanks ^^), some ZFS settings should be changed after creating the pool.


    Code
    zfs set aclinherit=passthrough yourpoolname
    zfs set acltype=posixacl yourpoolname
    zfs set xattr=sa yourpoolname
    zfs set compression=lz4 yourpoolname

    There he also explains what the settings mean and why you should set them.
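    If you want to verify that the properties were applied, a quick check like this should work (replace yourpoolname with your pool):

```shell
# Show the four properties set above, including their source
# (local = explicitly set, default = never changed)
zfs get aclinherit,acltype,xattr,compression yourpoolname
```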


  • I can install and create ZFS pools via the terminal, but OMV doesn't see my drives with any file system on them...

    I did create all of my ZFS pools by CLI and all of them were recognized by OMV. Sometimes a reboot was necessary.
    But I am also using the ZFS plugin, e.g. for creating ZFS file systems, modifying attributes, and so on.


    • Official post

    Is this similar to what you're trying to do?



    While you may get an error of no consequence when adding the 2nd and 3rd mirror (I think it's an RPC call), this can be done in the GUI in a few minutes (depending on the size of the drives).

  • Hi guys!
    Well, now things are OK!
    As pointed out, I did a fresh install of OMV on this server (with all drives other than the system drive disconnected) and updated it fully as of today...


    I then, as shown before, added:

    Code
    deb http://ftp.debian.org/debian jessie-backports contrib main


    to my /etc/apt/sources.list and ran an update...


    I then ran the following to install zfs on my system:



    Code
    # apt-get install linux-headers-$(uname -r)
    # apt-get -t jessie-backports install zfs-dkms
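
    Before creating a pool, it may be worth confirming that the DKMS build succeeded and the module loads (a hedged sanity check, not from the original steps; output details vary by version):

```shell
# Load the ZFS kernel module and confirm the kernel picked it up
modprobe zfs
lsmod | grep zfs

# Show basic info about the installed module
modinfo zfs | head -n 5
```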

    I then added my pool:

    Code
    # zpool create -f -o ashift=12 DN_Storage1 mirror /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N1406149 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0852714

    The reason I created the pool with just two of my disks at first was to make sure there were no hiccups when adding to the pool. Here I have added two more drives as a mirror:


    Code
    # zpool add -f -o ashift=12 DN_Storage1 mirror /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WMC4N0905034 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4N4ZT3S78

    Now zpool status gave me this:



    I then created a file system for my Clonezilla files:



    Code
    # zfs create DN_Storage1/Partimag
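
    A quick way to confirm the dataset exists and where it is mounted (just a sanity check, not part of the original steps):

```shell
# List the pool and all datasets below it, with their mountpoints
zfs list -r DN_Storage1
```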

    df -h gave this:



    I applied the ZFS settings that cabrio_leo pointed me to, which @flmaxey had posted:


    Code
    zfs set aclinherit=passthrough DN_Storage1
    zfs set acltype=posixacl DN_Storage1
    zfs set xattr=sa DN_Storage1
    zfs set compression=lz4 DN_Storage1


    I then installed the ZFS plugin as shown here


    I then created a shared folder with the same name as Partimag and applied ACL privileges without a hiccup... ;)



    Done!

  • Little update!
    Only a little fly in the ointment to annoy me...
    I'm backing up files to the new ZFS mirrors from the old server via a live CD, and its built-in graphics have decided to stop working... I can't see any progress on that computer...


    I do know how much there is to back up and can see what is arriving on my new ZFS server... I will have to check a couple of times when it gets to the right size and then see if there is hard drive activity... really good timing!!


    Not been a good weekend!


    bookie56
