[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • Official post

    When I bring up the "Expand pool" dialog, it still shows all drives even if they are already in use. It would be nice if drives that are already part of a zpool or vdev were hidden.

    I noticed that after I uploaded the update. The problem is on the ZFS side: if you add a filesystem to the zpool, it marks the drives as in use. The way the omv-zfs code is written, it doesn't recognize the pool itself as a filesystem, so it doesn't know the drives are in use. I haven't looked yet, but I think this is a big change.
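    In the meantime, a quick way to see from the shell which drives ZFS already claims (plain ZFS/Linux tooling, nothing plugin-specific):

    # List block devices and their filesystem type; pool members show FSTYPE "zfs_member".
    lsblk -o NAME,SIZE,FSTYPE
    # Show the full device paths each pool/vdev is built from.
    zpool status -P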



    The drive list in the "Expand pool" dialog window only shows 3 drives at a time even if I expand the window. It would be really nice to be able to expand this area so I can see more drives at a time.

    I'm sure a height can be added to the field. I agree that more than 3 drives should be shown, but I don't want a ridiculous number that makes the window too big.

    omv 7.0.5-1 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.11 | compose 7.1.3 | k8s 7.1.0-3 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Offizieller Beitrag

    Is there a way to write a log file that shows which ZFS commands the plugin runs and what the responses were?

    Should be easy to add. I will look into it.
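    Until plugin-side logging lands, note that ZFS itself already keeps a per-pool record of the administrative commands it has executed. A minimal example, assuming a pool named yourpool:

    # ZFS logs pool-level commands (create, add, destroy, ...) with timestamps.
    zpool history yourpool
    # Long format additionally shows the user, hostname and zone that ran each command.
    zpool history -l yourpool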


    • Official post

    Any info on fixes for the ZFS plugin?

    Fixes for what? Logging the commands is a feature.


    • Official post

    That was a different person who asked for that feature.

    I know that. I didn't say you asked for that.




    I am talking about the errors I have already reported on page 32.

    Yes, I responded to those comments and said they would take a lot of coding. This is a lot of coding just to prevent someone from picking the wrong drive. If you pay attention to which drives are in the pool, this will cause no issues. I really have no interest in this plugin. I am just trying my best to keep it usable.


  • @milfaddict: Did you make another try with fewer disks per vdev, as I suggested in post #641? I tried to explain why this could help narrow down the cause.


    If I had that many disks I would test it myself. Unfortunately, I do not.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • @milfaddict: Did you make another try with fewer disks per vdev, as I suggested in post #641? I tried to explain why this could help narrow down the cause.


    If I had that many disks I would test it myself. Unfortunately, I do not.


    So I deleted the pool, made a new Z1 pool, and then tried expanding it with another Z1 vdev. I got the same errors as before. So I rebooted, deleted the pool, and tried again; this time I used the first three drives in the list (sdaa, sdab, sdac) and then expanded using the last three drives (sdx, sdy, sdz), just to make sure OMV, NOT ME, was trying to use a drive that was already in use. I still got all the same errors.


    No matter what RAID level or drives I select, I get the same errors.

  • No matter what RAID level or drives I select, I get the same errors.

    Hi @milfaddict, thank you for testing.


    I see @luxflow is currently online. Maybe he can help :)
    @luxflow: Is there a possibility to log what is going on?


    @milfaddict: If you want you can try this:


    zpool create yourpool raidz /dev/sdx /dev/sdy /dev/sdz raidz /dev/sda /dev/sdb /dev/sdc
    zpool status
    zpool add yourpool raidz /dev/sdd /dev/sde /dev/sdf
    zpool status


    Please replace the /dev/sd* string with your devices.


    After that you should see a pool consisting of 3 striped raidz1 vdevs made out of nine disks, provided there weren't any 'zpool create' error messages.


    If you then go back to the ZFS menu in the WebUI, you should also see the pool there.
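    For reference, a healthy pool built this way should look roughly like the following in zpool status (abridged, illustrative output; your device names will differ):

    # zpool status (abridged, illustrative)
      pool: yourpool
     state: ONLINE
    config:
            NAME        STATE     READ WRITE CKSUM
            yourpool    ONLINE       0     0     0
              raidz1-0  ONLINE       0     0     0
                sdx     ONLINE       0     0     0
                sdy     ONLINE       0     0     0
                sdz     ONLINE       0     0     0
              raidz1-1  ONLINE       0     0     0
                ...
              raidz1-2  ONLINE       0     0     0
                ...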


  • So I have been trying other options like FreeNAS, but FreeNAS/BSD does not support my Supermicro AOC-SASLP-MV8 HBAs because there is no driver for Marvell controllers. Could this be the reason I am having problems with ZFS and with getting SMART to work?


    From what I have read, there is a Linux driver for Marvell controllers, and OMV is based on Debian Linux, even though it has its roots in FreeNAS, which is based on FreeBSD, for which there is apparently no Marvell driver.


    If my HBAs are the problem, would replacing them with Supermicro AOC-USAS-L8i cards solve my issues?
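    A quick, generic way to check whether Linux actually bound a driver to the Marvell HBA and whether SMART passes through it (a rough sketch; /dev/sdx is just an example device):

    # Show the HBA and the "Kernel driver in use:" line (mvsas is the usual Marvell SAS driver).
    lspci -k | grep -i -A 3 marvell
    # Query SMART through the HBA for one of the attached disks.
    smartctl -a /dev/sdx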


  • If my HBAs are the problem, would replacing them with Supermicro AOC-USAS-L8i cards solve my issues?

    Yes, if that really is the problem.


    The USAS-L8i is a good HBA card, but it works best if you flash the IT firmware, skip the built-in RAID, and use the card as a plain HBA rather than as a RAID card.
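    To see how Linux currently detects the card before and after flashing, a generic check (illustrative only; the exact flashing tool depends on the LSI chip revision):

    # Identify the controller chip and the kernel driver bound to it.
    lspci -nn -k | grep -i -A 3 lsi
    # Kernel messages from the LSI SAS driver (typically mptsas for this 1068E-based card).
    dmesg | grep -i mptsas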

  • Just a warning... This will make all read-only textareas look darker.


    nano /var/www/openmediavault/css/omv-custom.css

    I am referring to post #530. Please note that with newer versions of OMV (> 3.0.74) the name of the file has changed:
    old: omv-custom.css
    new: theme-custom.css
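    A small sketch to open whichever of the two files your installation uses (it assumes the default web root /var/www/openmediavault and the standard openmediavault package):

    # Open the custom-CSS file that matches the installed OMV version.
    OMV_VER=$(dpkg-query -W -f='${Version}' openmediavault)
    if dpkg --compare-versions "$OMV_VER" gt 3.0.74; then
        nano /var/www/openmediavault/css/theme-custom.css   # OMV > 3.0.74
    else
        nano /var/www/openmediavault/css/omv-custom.css     # older OMV 3.x
    fi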


  • So after all this time, and after spending $600 on new LSI HBAs, I am still getting all the same errors I was getting before. I create my ZFS pool successfully, expand it and get errors, reboot, and see that the second vdev only has 11 drives instead of 12 and the pool is degraded. Have you made any progress toward fixing the horribly broken ZFS plugin yet?


    FreeNAS doesn't make you install a plugin to install another plugin just to use ZFS, just sayin'... maybe OMV core should include ZFS support like you do for Btrfs, again, just sayin'...

  • You do not install a plugin to install a plugin. The OMV-Extras "plugin" is just a repo you add. Wrapping it as a plugin makes it easier to access for people who are not that familiar with using the CLI to add a repo.
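    For illustration, this is roughly all that "adding a repo" amounts to on the CLI (the URL and suite below are placeholders, not the real omv-extras values):

    # Drop an apt source file and refresh the package lists (placeholder URL and suite).
    echo "deb http://example.org/openmediavault-extras mysuite main" \
        > /etc/apt/sources.list.d/omv-extras-example.list
    apt-get update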

    Chaos is found in greatest abundance wherever order is being sought.
    It always defeats order, because it is better organized.
    Terry Pratchett

    • Official post

    Have you made any progress toward fixing the horribly broken ZFS plugin yet?

    Take a look for yourself:


    https://github.com/OpenMediaVa…avault-zfs/commits/master


    The thing is, there are little to no developer resources at the moment.


    I can try to replicate your issue, but I am limited to a VM; few people have a server with that many physical disks just for testing, and even fewer developers do. Why are you still trying, by the way? Shouldn't you have already switched to another platform that works for you?
    Also, have you tried to just create the array on the command line instead of with the plugin?

  • Shouldn't you have already switched to another platform that works for you? Also, have you tried to just create the array on the command line instead of with the plugin?

    I was going to just use FreeNAS, but my old Marvell-based HBAs were preventing me from doing so. I just got new LSI HBAs and flashed them. I thought my old HBAs might have been the cause of my problems with OMV, so I went back to OMV (I have separate drives for OMV and FreeNAS) to see if switching out my HBAs made a difference.


    No, I did not try the command line, because even if it were successful I still would not trust OMV if something is wrong with ZFS on OMV. Until I know ZFS on OMV is rock solid, I can't use it.

  • @milfaddict A bit late, but I might have an idea regarding your issue. I haven't read all the posts, so this could already have been discussed.


    By default, ZFS on Linux (ZoL), the ZFS implementation used on Debian, imports pools by device node (/dev/sdX). That can be bad if your /dev/sdX assignments change on reboot, which I understand can happen on some systems.


    To fix this you should edit the file /etc/default/zfs and set ZPOOL_IMPORT_PATH="/dev/disk/by-id" (this setting is commented out by default).


    Next, create the pool, either via the CLI or via the plugin. Then export the pool, and finally re-import it with: zpool import -d /dev/disk/by-id <pool_name>


    Check that it works via: zpool status


    Try to reboot and check if the pool comes back online as it should.
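    Put together, the whole sequence looks roughly like this (a minimal sketch assuming a pool named yourpool; edit /etc/default/zfs by hand first as described above):

    # 1. Set ZPOOL_IMPORT_PATH="/dev/disk/by-id" in /etc/default/zfs (uncomment the line).
    # 2. Export and re-import the pool using by-id device names.
    zpool export yourpool
    zpool import -d /dev/disk/by-id yourpool
    # 3. The devices should now be listed by their /dev/disk/by-id names.
    zpool status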

  • @nicjo814


    Nice to see you back here. ;)


    @luxflow made this a default. So, this should not be a problem.


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

  • Nice to see you back here.

    It's nice to be back :). I will try to be more active on the forum again...


    Regarding the change by luxflow, do you know which file he modified to correct the issue? I just did a fresh install of the plugin, and /etc/default/zfs had the "wrong" settings as far as I could tell.
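    A quick one-liner to check what a fresh install actually ships (stock file path assumed):

    # A leading '#' in the output means the setting is still commented out.
    grep -n 'ZPOOL_IMPORT_PATH' /etc/default/zfs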
