ZFS device(s) not listed in devices dropdown

  • To the plugin dev team... ;)



I figured that was causing the problem. Thank you for confirming it. @subzero79 and I are trying to come up with a good solution.

Any good solution in sight? No pressure ;)

    Lian Li PC-V354 (with Be Quiet! Silent Wings 3 fans)
    ASRock Rack x470D4U | AMD Ryzen 5 3600 | Crucial 16GB DDR4 2666MHz ECC | Intel x550T2 10Gb NIC

1 x ADATA 8200 Pro 256GB NVMe for System/Caches/Logs/Downloads
5 x Western Digital 10TB HDD in RAID 6 for Data
1 x Western Digital 2TB HDD for Backups

    Powered by OMV v5.6.26 & Linux kernel 5.10.x

  • Hello guys,
    I have similar issues as I did some updates on my NAS.


    This is what I'm currently running.


    Code
root@NAS:/# uname -a
Linux NAS 4.16.0-0.bpo.1-amd64 #1 SMP Debian 4.16.5-1~bpo9+1 (2018-05-06) x86_64 GNU/Linux



My ZFS devices also disappeared and the whole plugin no longer worked. I tried to remove it and reinstall. When I try, I get this (partly German output):




    Any idea?


    Cheers

I set the kernel in OMV-Extras to 4.15 and then to 4.14. On both versions I get this while installing ZFS:


    Building for 4.14.0-0.bpo.3-amd64 4.16.0-0.bpo.1-amd64
    Module build for kernel 4.14.0-0.bpo.3-amd64 was skipped since the
    kernel headers for this kernel does not seem to be installed.


    root@NAS:~# uname -r
    4.14.0-0.bpo.3-amd64


    Any hint?


    ---


After re-installing the headers for 4.15, I got ZFS back online.

  • This is a bit off-topic for this thread, but yeah you need to make sure that you have the headers.


    In my experience, people asking for help getting ZFS [back] up and running usually have one of 3 problems:


    1. Contrib in sources - plugin/install fails > Add contrib to sources
    2. Headers - Module build fails > install headers
    3. Wrong kernel - Upgraded kernel and module fails = pools that were working disappear > move to previous kernel
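The three checks above can be sketched as a small diagnostic script. This is my own rough sketch for a Debian-based OMV box, not part of the plugin; paths and messages are assumptions, adjust as needed:

```shell
# Rough diagnostic for the three common ZFS-plugin failure modes.
kernel="$(uname -r)"

# 1. Is "contrib" enabled in the APT sources?
if grep -rqsE '^\s*deb .*\bcontrib\b' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null; then
    contrib_ok=yes
else
    contrib_ok=no    # add "contrib" to your APT sources
fi

# 2. Are the headers for the *running* kernel installed?
if [ -d "/usr/src/linux-headers-${kernel}" ]; then
    headers_ok=yes
else
    headers_ok=no    # apt-get install linux-headers-"${kernel}"
fi

# 3. Is a zfs module available for this kernel?
if modinfo zfs >/dev/null 2>&1; then
    module_ok=yes
else
    module_ok=no     # DKMS build failed, or you booted the wrong kernel
fi

printf 'contrib: %s\nheaders: %s\nzfs module: %s\n' \
    "$contrib_ok" "$headers_ok" "$module_ok"
```

If any line says "no", fix that item in the order listed before blaming the plugin.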


    Glad you got it working :) Enjoy

Hi, I had the same problem with the GUI and config.xml. I saw the same error when trying to change shared folders, and ACL was greyed out as well. I copied the filesystem part of my ZFS mount from an old config.xml and everything works fine. I'm trying to keep my fingers off the ZFS plugin from now on.
Seems the plugin is creating the problem. Ah, and I forgot to say: my kernel is 4.9.

  • Solution (?) / Workaround
Since the OMV installer refused to install to my USB stick, I followed the tip to use the Debian installer and install OMV 4.1.3 afterwards.
Because I could not get ZFS installed (kernel 4.16.something), I installed 4.9.x. Now ZFS works, with the above-mentioned problems.


To solve the disappearing problem of my imported pool, I followed the tip from https://superuser.com/question…-after-reboot-on-debian-8 :
    Add a new file in /etc/modprobe.d/ with the content:

    Code: /etc/modprobe.d/zfs.conf
    options zfs zfs_autoimport_disable=0
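As a quick sanity check (my own addition, not part of the original tip): while the zfs module is loaded, it exposes its parameters under /sys/module/zfs/parameters/, so after a reboot you can confirm the option actually took effect:

```shell
# Check whether zfs_autoimport_disable is set as intended.
# The parameters directory only exists while the zfs module is loaded.
param=/sys/module/zfs/parameters/zfs_autoimport_disable
if [ -r "$param" ]; then
    printf 'zfs_autoimport_disable=%s\n' "$(cat "$param")"   # 0 = auto-import enabled
else
    echo "zfs module not loaded - no parameters exposed"
fi
```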

Now I am trying to follow this to add my missing pool to the devices in my /etc/openmediavault/config.xml
: Finding the correct mntent UUID for a filesystem not in config.xml


    Not sure if this will work, but I'll let you know.


    Perhaps that helps someone.


Update: Aargh. I can't start the GUI anymore after editing config.xml. I tried to insert the mntent part manually: Uncaught OMV\Config\DatabaseException: Fatal error 5: Extra content at the end of the document (line=4, column=3). Of course I hadn't made a backup BEFORE. Even after removing the stuff, it no longer works.
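For anyone following along: "Extra content at the end of the document" simply means the file is no longer well-formed XML. Before hand-editing config.xml, make a backup and check well-formedness after every edit. A small helper of my own (a sketch, not an OMV tool; xmllint comes from the libxml2-utils package):

```shell
# Back up an XML file, then report whether it is still well-formed.
# Usage: backup_and_check /etc/openmediavault/config.xml
backup_and_check() {
    cfg="$1"
    cp -a "$cfg" "${cfg}.bak"            # always keep a backup before editing
    if xmllint --noout "$cfg" 2>/dev/null; then
        echo "well-formed"
    else
        echo "broken - restore ${cfg}.bak"
    fi
}
```

On a real box you would run it against /etc/openmediavault/config.xml before and after every manual edit, and restore the .bak copy the moment it reports "broken".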

OK. My story: My old OMV3 got broken, so I tried to install OMV4. That did not work (broken installer). Then I installed NAS4Free. After I got that working I created a ZFS pool (1 HDD), and some hours later I finally had everything installed, only to find out that I cannot use the VirtualBox extension pack! Damn!


So I came back to OMV4, finally got it working, imported the ZFS pool, and now I am here because of the ZFS problem (see post above).
After I screwed up my config.xml, and since nothing important was installed so far, I reinstalled everything from scratch.



Now my pool exists after reboot, but unfortunately my ZFS pool still does not appear in the devices list. I also cannot remove it because it is not in the mntent section.


    Nevertheless this might be helpful?


Here are a few notes on how I did this:


PLEASE DO NOT DO A PLAIN COPY & PASTE!!! Please use it only as a rough orientation.



I was really curious about ZFS and wanted to give it a try, but for now I will go back to a simple ext4 drive to get OMV working.


If the OMV ZFS plugin ever works, please let me know. ;)

Guys, is there some workaround for this problem? :( Right now I can't share my mirror :(

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

Why are they even treating the whole shared-folder thing the way they are? When I use the Docker UI plugin I can browse to any area of my box's filesystems. The issue lies with the way Shared Folders parses the filesystems. My issues started with removing (aka zfs destroy <FILESYSTEM_I_DIDNT_WANT_ANYMORE>) an unwanted FS via the CLI. In the FreeBSD world that type of ZFS change is normal. OMV as a whole is a fantastic piece of software despite its issues and idiosyncrasies. The Docker UI to me is really the hidden gem of this software; the ability to load up an automated HTPC within Docker in less time than it took to write this post is amazing. *Sits back and watches some TV*

• Official post

My issues started with removing (aka zfs destroy <FILESYSTEM_I_DIDNT_WANT_ANYMORE>) an unwanted FS via the CLI. In the FreeBSD world that type of ZFS change is normal.

Shared folders in OMV use ZFS filesystems the same way they use any other filesystem type. If a shared folder is attached to a ZFS filesystem named FILESYSTEM_I_DIDNT_WANT_ANYMORE, the shared folder must be deleted before deleting that filesystem. This is logical: otherwise a lot of high-level configuration work could be destroyed by a low-level mistake, such as accidentally deleting a filesystem. In essence, configuration items should be cleanly reversed.


The GUI requires this to maintain the internal processes that operate in the background and to accurately report the state of the system. If you bypass processes that should be done in the GUI by using the CLI, OMV's internal database gets out of sync with the actual state of the system. And that's when problems crop up.


Bottom line: it's best to use the GUI when and where possible. For standard configuration changes, the CLI should be reserved as a last resort.


    I'd agree with you on the Docker Plugin. That's a stellar piece of code. And while it does have some issues, I think the ZFS plugin is adequate for most users. (Let's set aside the ZFS package issues with the 4.16 kernel, the headers, etc.)

  • Guys so there is some workaround to this problem? :( Right now I can't share my mirror :(

Bump. Is there some way to resolve this?


• Official post

Bump. Is there some way to resolve this?

What are you looking for? A resolution (may take a while) or a workaround (potentially available, if you're willing to rebuild)?
Note: if you clone, or at least back up, your current boot drive, you'll be able to back out of the following.
    ______________________________________________________


    Look at this post. -> OMV4 with kernel 4.9.0 and ZFS 0.7.9-2


    Notes:
    - Unless you have cutting edge hardware, kernel 4.9.0 will be fine.
    (In your case, you may have been running kernel 4.9.0 on your server, with OMV3.)
    - Start with a clean build
    - I created a new ZFS pool (So I don't know what the effects of importing an existing pool would be.)
    ______________________________________________________


    The referenced build was on actual hardware (not a VM), and since kernel 4.9.0 is the default kernel for Debian (9) Stretch, package upgrades should be fine.

So I got the same problem: no accessible pool in the drop-down menu. I reinstalled everything with the newest OMV4 and kernel 4.9.0-6. The plugin seems to work and I can mount the pool and see it and the folders on the command line. What do I have to do to access my data again? I understand what the problem is, but I don't know how to fix it. Could somebody please help me and tell me what to do...

I can't delete my data right now :/ Is this the only workaround? :( If I boot with a kernel other than 4.16, can I resolve this?


Also, after reading the post about BTRFS and OMV5, it's probably safer to go with BTRFS in case I format my HDDs.

