[HOWTO] Install ZFS-Plugin & use ZFS on OMV

    • Official Post

    I found a few bugs...


    1 - It calls a setSize function that doesn't exist anymore. I don't *think* it is still needed, but it may have to be added back. Need to do some tests.


    2 - When expanding a pool, it shows all the drives even if they are part of the pool. This shouldn't be a problem to fix.


    3 - In a z1-pool, it forces you to expand by at least 3 drives. This also shouldn't be a problem to fix.


    So, when you tried again, the first drive was probably already partially added to the zpool. I will try to fix these things in the next few days.
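
    For anyone who runs into this before a fix lands: a drive that was only partially added can be left with a stale ZFS label on it. A rough way to check and clean that up from the CLI (the device name is only a placeholder, and labelclear wipes the ZFS label, so only run it against a disk you are sure is not part of any active pool):

        zpool status -v                   # confirm which devices the pool really contains
        zpool labelclear -f /dev/sdX      # clear a leftover label from a disk that is NOT in any pool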

  • @ryecoaaron You are fantastic! ;)

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ----------------------------------------------------------------------------------

  • Er, it should force you to expand by 3 drives. You can't create a RAIDZ1 vdev with fewer than 3 drives, and the plugin doesn't have a facility for creating a zpool with vdevs of different types, I don't think. Every RAIDZ vdev should have at least N+2 drives, where N is the RAIDZ level; otherwise it's effectively just a mirror vdev with N redundant copies.

    • Official Post

    Er, it should force you to expand by 3 drives. You can't create a RAIDZ1 vdev with fewer than 3 drives, and the plugin doesn't have a facility for creating a zpool with vdevs of different types, I don't think. Every RAIDZ vdev should have at least N+2 drives, where N is the RAIDZ level; otherwise it's effectively just a mirror vdev with N redundant copies.

    I guess this is just another reason why I shouldn't be working on this plugin. I know the first two points are right, but this lowers my interest in fixing this. I need to talk to a long-time Solaris admin about why zfs is weird this way.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I guess this is just another reason why I shouldn't be working on this plugin. I know the first two points are right, but this lowers my interest in fixing this. I need to talk to a long-time Solaris admin about why zfs is weird this way.

    Well, I hope someone fixes this or I am going to have to say screw it and go try FreeNAS, OpenFiler, ZFSguru, or Open Attic. I had such high hopes for OpenMediaVault, but these issues are making the experience a pain and are lowering my trust in OMV. Why is it that I seem to be running into problems with basic tasks like creating a ZFS pool?


    I need to get this storage server up and running ASAP and can't wait around for someone to fix this.

    • Official Post

    Well, I hope someone fixes this or I am going to have to say screw it and go try FreeNAS, OpenFiler, ZFSguru, or Open Attic. I had such high hopes for OpenMediaVault, but these issues are making the experience a pain and are lowering my trust in OMV. Why is it that I seem to be running into problems with basic tasks like creating a ZFS pool?


    I need to get this storage server up and running ASAP and can't wait around for someone to fix this.

    OMV is good for a lot of people. I still don't understand why you need a 36-drive zfs array, especially considering you say you have no linux experience? And yes, I have had 20+ drives running on OMV without zfs.


    All you have done is complain about the quality of plugins, ask for super-fast support from volunteers, and offer no help while threatening to leave. If you try this on the freenas forum, you will have a very bad experience. Consider it lucky that I even try to maintain this plugin without any knowledge or experience. Give me one reason why I or anyone else should.


    I wish you good luck with the alternatives. I'm sure they will work perfectly for you.

  • OMV is good for a lot of people. I still don't understand why you need a 36-drive zfs array, especially considering you say you have no linux experience? And yes, I have had 20+ drives running on OMV without zfs.

    Because that is how many drives my servers hold, and I don't want to have to add drives later and then have data unevenly spread across the pool. Also, I don't want to go and expand my array later with many TB on it only to have problems like this come up. There are many home users with 24-, 36-, 45-, and 60-drive servers, not to mention enterprise and government entities that have much larger arrays.

    Quote from ryecoaaron

    All you have done is complain about the quality of plugins, ask for super-fast support from volunteers, and offer no help while threatening to leave. If you try this on the freenas forum, you will have a very bad experience. Consider it lucky that I even try to maintain this plugin without any knowledge or experience. Give me one reason why I or anyone else should.

    I have not complained, but I have been frustrated with some of these technical issues, and I am sure that frustration has shown in my writing. I have never meant for any of that to be directed toward anyone. I cannot offer support because I am new to OMV, and I have not threatened to do anything; I merely stated that I need to have this server up and running, and if for whatever reason OMV cannot satisfy my needs, I will obviously need to try a different solution.


    I really do appreciate all the help the community has given me. As for your one reason: if something is broken, shouldn't those in charge of it do everything they can to fix it? I am not the only person who uses this plugin. Anyway... I am sorry if I insulted you. I hope this gets fixed, and I thank you for all the hard work you and the mods/devs put in.

  • There are many home users with 24-, 36-, 45-, and 60-drive servers

    ?(


    Everything you want can be done by CLI. Personally, I would not create such a big server with OMV and a ZFS filesystem without basic knowledge of ZFS commands. Maybe there are other, better-suited solutions available.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod


    • Official Post

    Because that is how many drives my servers hold, and I don't want to have to add drives later and then have data unevenly spread across the pool. Also, I don't want to go and expand my array later with many TB on it only to have problems like this come up. There are many home users with 24-, 36-, 45-, and 60-drive servers, not to mention enterprise and government entities that have much larger arrays.

    I wasn't questioning the number of drives. I was questioning the fact that you think zfs is the only option.



    if something is broken, shouldn't those in charge of it do everything they can to fix it? I am not the only person who uses this plugin.

    I guess you missed the point that there is no one around in charge of "said thing". I am pretty much the only one who is willing to put any time into this plugin right now, and I don't use or know zfs. This bug doesn't seem to be affecting most users, and, like cabrio_leo said, you can do the needed steps from the command line until it is fixed. That said, I will try to fix it, but it will be when I have time.

  • When I imported my pool created in FreeNAS, I had to do it via the CLI to get the by-id mapping and not the path. The fixes in the import settings for zfs didn't seem to work for me.
    But that was very easy to get right from the CLI: zpool import -d /dev/disk/by-id "insert pool name here"
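
    A quick way to confirm the import really picked up the by-id mapping is something along these lines (the pool name is just an example):

        zpool status tank    # devices should now be listed by their by-id names instead of sdX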

  • I guess this is just another reason why I shouldn't be working on this plugin. I know the first two points are right, but this lowers my interest in fixing this. I need to talk to a long-time Solaris admin about why zfs is weird this way.


    Not sure why this is weird, but I can try and explain it. RAIDZ1 vdevs are the rough equivalent of RAID5, so 3x1TB = 2TB usable and 1TB Parity (aka N+1). RAIDZ2 vdevs are the rough equivalent of RAID6 (N+2). RAIDZ3 uses a third disk's worth of space for parity (N+3).


    Think of a zpool with 2 or 3 RAIDZ1 vdevs in it as RAID50, and the same with RAIDZ2 as RAID60. So what milfaddict is doing would be striping across three N+3 arrays.


    ZFS actually will let you create a zpool with different types of vdevs, but the performance is going to be crappy and keeping track of drives can get chaotic; I would definitely NOT recommend allowing different types of vdevs in a single zpool.
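
    To make the RAID60 analogy concrete, a striped pool of two RAIDZ2 vdevs can be created, and later grown by another vdev of the same width, from the CLI roughly like this (the pool name and disk1..disk18 are only placeholders for real /dev/disk/by-id paths):

        zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6 \
                          raidz2 disk7 disk8 disk9 disk10 disk11 disk12
        # later, grow the pool by striping in another raidz2 vdev of the same width
        zpool add tank raidz2 disk13 disk14 disk15 disk16 disk17 disk18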

    • Official Post

    Not sure why this is weird, but I can try and explain it.

    Until I talked to the Solaris admin that I worked with, I didn't fully understand how zfs worked. After talking to him, I still think it is weird but I understand it. The plugin is currently correct in how many drives it forces you to add but I need to make the vdev type read only to prevent the bad performance you speak of.


  • Until I talked to the Solaris admin that I worked with, I didn't fully understand how zfs worked. After talking to him, I still think it is weird but I understand it. The plugin is currently correct in how many drives it forces you to add but I need to make the vdev type read only to prevent the bad performance you speak of.


    Thanks man. I wish I had the time to devote to learning more about PHP, but I'm getting crushed at work and will be for the next 3-6 months. Unless I find another job, I won't be able to contribute in a useful manner for quite some time.

    • Official Post

    Unless I find another job

    PM and we can chat about other potential opportunities (if you are interested) :)


    • Official Post

    Anyone want to try my fixes to expand and to not show used drives? http://omv-extras.org/testing2…ault-zfs_3.0.19_amd64.deb
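
    (If anyone prefers to test it from the command line: assuming the downloaded file is named openmediavault-zfs_3.0.19_amd64.deb, installing the package by hand is usually just:)

        dpkg -i openmediavault-zfs_3.0.19_amd64.deb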


  • Alright


    So when I started OMV, my ZFS pool had disappeared and I had to import it. Not a huge deal, but that does make me nervous.


    Before I installed the updated plugin, I decided to delete the old zpool. When I tried, I got this error. https://i.imgur.com/x0SUaVRr.png


    So I went ahead and installed the updated plugin. Everything went fine. When I went back into ZFS, there was no pool showing. So I tried to import it again and got this error. https://i.imgur.com/mKfpUY5r.png I am guessing the pool actually got deleted when I updated.


    Now I created the pool again with 12 drives. Everything went fine until I tried expanding the pool with 12 more drives. I got this error. https://i.imgur.com/aWFOwM0r.png I then tried again only to get this error. https://i.imgur.com/qV7tmtBr.png So I rebooted, noticed the array grew, tried to expand again, got those same errors, rebooted, and noticed the array had grown again. I am still missing 11TB and the array still says "DEGRADED".
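
    For reference, the per-vdev state and capacity of a degraded pool should be visible from the CLI with something like the following (the pool name is a placeholder), which would show where the missing 11TB and the DEGRADED status come from:

        zpool status -v mypool    # lists every vdev and flags faulted or missing members
        zpool list -v mypool      # per-vdev capacity, useful for tracking down missing space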



    Two suggestions.


    1. As you have already mentioned: when I bring up the "Expand pool" dialog, it still shows all drives even if they are already in use. It would be nice if drives that are already part of a zpool or vdev were hidden.


    2. The drive list in the "Expand pool" dialog window only shows 3 drives at a time even if I expand the window. It would be really nice to be able to expand this area so I can see more drives at a time. https://i.imgur.com/WByxxi2r.png



    Here are the details of my pool if it helps.


  • Pool details part 2.


  • I have experienced that sometimes the error messages of the ZFS plugin are misleading. I remember doing a zfs task in the past with the plugin and getting an rpc error with an unspecified message. Then I did the same job by CLI. From the CLI I got a clear error message, and indeed I had tried to do something which was not possible.


    @milfaddict: Are you willing to do the same job again with fewer disks? I would propose creating a raidz1 with 3 disks or a mirror with 2 disks, then trying to expand by another raidz or mirror, and then a third one. Just to clarify whether it is a problem of the plugin process itself or of something else (maybe one of your disks?). I would also propose that you (quick) wipe the disks before trying to create the pool. Then delete the pool (zpool destroy <name of pool>) and do a second try with more disks. This seems to be boring, but my experience is that sometimes it is necessary to go step by step to isolate the source of error.
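
    The CLI equivalents of that step-by-step test would look roughly like this; the device names and the pool name are only placeholders, and the wipe is destructive, so double-check you have the right disks:

        wipefs -a /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # quick-wipe old signatures
        zpool create testpool raidz1 /dev/sdb /dev/sdc /dev/sdd           # one raidz1 vdev of 3 disks
        zpool add testpool raidz1 /dev/sde /dev/sdf /dev/sdg              # expand by a second raidz1 vdev
        zpool status testpool                                             # check the resulting layout
        zpool destroy testpool                                            # throw the test pool away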


    BTW: Here is a quick overview of some ZFS commands: Sun ZFS cheat sheet :D
    Greetings


    @ryecoaaron: Is there a possibility to write a log file to see which ZFS commands are used by the plugin and what the response was? I am not sure whether this shows up in the syslog.
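
    (Not a plugin log, but possibly a partial workaround: ZFS itself records the pool-level commands that have been run against a pool, and they can be read back with something like the following; the pool name is a placeholder.)

        zpool history tank    # lists the zpool/zfs commands that have been executed on the pool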

