Do the ZFS and LUKS plugins for OMV use disk IDs?

    • Do the ZFS and LUKS plugins for OMV use disk IDs?

      I was wondering whether these two plugins were made to use /dev/disk/by-id
      instead of the /dev/sda shenanigans that people tend to use by default.

      The reason I ask is that I'm concerned that if I use your plugins and the sda/sdb/sdc labels later change because disks are plugged in or removed during other tasks...
      my data will become lost/corrupted... or who knows what.

      Generally, it is not wise to use sda/sdb for anything... disks should instead be referenced by ID, because that does not change during system admin tasks...
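      For what it's worth, the stable names in question are just symlinks that udev maintains under /dev/disk/by-id, each pointing at whatever kernel node the disk currently has. A minimal simulation of that indirection (the directory and device names below are made up; on a real system the links live in /dev/disk/by-id and udev manages them):

```shell
# Simulate the /dev/disk/by-id indirection in a temp dir (names are examples).
byid=$(mktemp -d)
touch "$byid/sda"                                   # stands in for /dev/sda
ln -s "$byid/sda" "$byid/ata-EXAMPLE_MODEL_SERIAL"  # stands in for a by-id link

# After a reboot the kernel may hand the same disk a new name; udev then
# re-points the stable link, so anything referencing the by-id name still
# resolves to the right disk:
mv "$byid/sda" "$byid/sdb"
ln -sfn "$byid/sdb" "$byid/ata-EXAMPLE_MODEL_SERIAL"
readlink -f "$byid/ata-EXAMPLE_MODEL_SERIAL"        # resolves to the new node
```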

    • Why don't you set up an OMV VM and try the plugins for yourself, so you can get an idea of how both of them work?

      You have been here since 2013; you should know by now that the whole block device backend in OMV has always been based on disk/by-id.
      New wiki
      chat support at #openmediavault@freenode IRC | Spanish & English | GMT+10
      telegram.me/openmediavault broadcast channel
      openmediavault discord server
    • Thanks, I guess I should take that as confirmation then...

      Look, while I joined the forum 3 years ago, I don't see how you can conclude that I should know how you've implemented internal components. The answer is not obvious from the user's point of view just from using the product... and in fact the GUI suggests otherwise, since it does not show device models or IDs... quite unfriendly when selecting disks... it only references devices via the Unix device name (sda, sdb, etc.). This is "OK"... as long as internally that is not used at all... by any plug-in.

      I'd have to investigate and set up complex tests to deduce the answer... I thought it would be more efficient to just ask.
      I have searched for the answer, you know...

      Anyway, I hope you're right about the plug-ins.
    • Look, the plugin's display will show the available free devices by path (sdb, sdc). As you know, once a disk is formatted it is mounted by UUID, not sdX.
      ZFS offers by-path and by-id if I recall correctly, but OMV will never offer a block device to be formatted if it is registered or in use.
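      For reference, a UUID-keyed mount is just the usual fstab form; the UUID and mount point below are placeholders, not values from a real system:

```
# /etc/fstab entry keyed on the filesystem UUID, immune to sdX renumbering
UUID=01234567-89ab-cdef-0123-456789abcdef  /srv/data  ext4  defaults  0  2
```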
    • OK, but you know ZFS is more than a filesystem, so formatting isn't the only concern. It is also the redundancy supplier... a.k.a. RAID... so my concern was that since the core team did not make the ZFS plugin, its author may have implemented pool creation via the Unix device names (sda, sdb, sdc, etc.)... which would work fine and dandy, because ZFS does support this... but then one day... years down the road... people start swapping drives and disaster strikes, because sdb becomes sdc or whatever when some USB drive is plugged in...

      So while I know the ZFS pool creation command supports by-id... few guides show that... few devs know this... and I had no idea which command he used. I did see some of the dev posts and saw that it was a struggle to create that plugin... he had little time and it took a while to get it to function at all. By the way, I don't mean this as a swipe at the dev... hats off to him... this is the most valuable OMV plugin along with LUKS... a huge value... greatly appreciated, to say the least.

      I was just wondering...

    • k567890 wrote:

      I did see some of the dev posts and saw that it was a struggle to create that plugin... he had little time and took a while to get it to function at all


      The plugin was created by two developers: @miras did the backend (the zfs commands, I imagine, and other stuff) and @nicjo814 did the frontend, with some core modifications done by the core developer to improve the fs backend and integrate it properly. The plugin is based on ZoL; there is nothing special we do here.

      k567890 wrote:

      few devs know this


      I assume @miras, with the background he posted back in the day, already knew this. Probably @nicjo814 can answer better, since I haven't seen @miras in a long time.
      The plugin gives three options: by-id, by-path, and probably a third one, but I can't recall now.
    • k567890 wrote:

      OK, but you know zfs is more than a filesystem.

      So while I know ZFS pool creation command supports by_id.... few guides show that....few devs know this... I had no idea what command he used. I did see some of the dev posts and saw that it was a struggle to create that plugin... he had little time and took a while to get it to function at all. By the way... I don't mean this as a swipe at the dev... hats off to him... this is the most valuable OMV plugin along with LUKS... a huge value... so greatly appreciated to say the least.

      I was just wondering...


      you can choose by-id when creating a pool in the webGUI: [HOWTO] Install ZFS-Plugin & use ZFS on OMV

      forums.openmediavault.org/inde…hment/2166-scrub-1-2-jpg/

      and you can change it later if you want


      And the MOST important: ZFS uses its own metadata to know which disks are used in pools, so the sdX order really does not matter... the ZFS pool will always work.


      Please do some tests and interchange the HD SATA order to see that it's true.
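      The "change it later" trick mentioned above can be sketched as a small shell function. This is only an illustration under assumptions: "tank" is a placeholder pool name, it requires root on a box with ZoL installed, and the pool must be quiesced before export:

```shell
# Re-import an existing pool so its vdevs are labelled by-id instead of sdX.
# Root required; "tank" is a placeholder pool name. Data is untouched: ZFS
# finds pool members via its own on-disk metadata, whatever path they sit at.
relabel_pool_by_id() {
    zpool export tank                       # unmount and release the pool
    zpool import -d /dev/disk/by-id tank    # scan the by-id links for members
    zpool status tank                       # vdevs now listed by stable name
}
```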
      OMV 4.1.11 x64 on a HP T510, 16GB CF as Boot Disk & 32GB SSD 2,5" disk for Data, 4 GB RAM, CPU VIA EDEN X2 U4200 is x64 at 1GHz

      Post: HPT510 SlimNAS ; HOWTO Install Pi-Hole ; HOWTO install MLDonkey ; HOWTO Install ZFS-Plugin ; OMV_OldGUI ; ShellinaBOX ;
      Dockers: MLDonkey ; PiHole ; weTTY
      Videos: @TechnoDadLife
    • The plugin uses /dev/sdX, /dev/disk/by-path, or /dev/disk/by-id, whichever the user specifies at pool creation. However, in ZoL you need to specify this again in the file /etc/default/zfs for ZoL to use the proper labels when rebooting. This is copied from my file:

      Source Code

      # Specify specific path(s) to look for device nodes and/or links for the
      # pool import(s). See zpool(8) for more information about this variable.
      # It supersedes the old USE_DISK_BY_ID which indicated that it would only
      # try '/dev/disk/by-id'.
      # The old variable will still work in the code, but is deprecated.
      #ZPOOL_IMPORT_PATH="/dev/disk/by-vdev:/dev/disk/by-id"
      ZPOOL_IMPORT_PATH="/dev/disk/by-id"


      I'm afraid this file is not touched by the plugin, since I was not aware of its use when I worked on it.
    • The LUKS plugin doesn't (currently) store any configuration or state data pertaining to encrypted devices - they are enumerated live when using the GUI, hence I can't see any opportunity for the plugin to operate on the wrong device in this way.
      Filesystems on top of encrypted devices are handled in the same way as other devices in OMV, that is, using UUIDs.
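      One way to see those layers, and the UUIDs being keyed on, is util-linux's lsblk (a sketch; the exact output depends on your system):

```shell
# List the block device stack: LUKS mappings appear with TYPE "crypt",
# and each filesystem row carries the UUID it can be mounted by.
inspect_stack() {
    lsblk -o NAME,TYPE,FSTYPE,UUID
}
```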

      I suggest you try it out in a VM, and if you can reproduce a specific bug, let me know and I will have a look.
    • Thanks guys, you're awesome!!

      Thank you so much for creating these two plugins!

      Is adding a rotating auto-snapshot feature to the plugin in the realm of possibility?
      That, along with scheduled scrubs, would make it feature complete.

      Here is a Linux project of interest.
      https://github.com/zfsonlinux/zfs-auto-snapshot

      Here are some tips about getting automatic versioning in Windows (because SMB has this built in), so SMB + ZFS allows for direct GUI version control in Windows.
      [GUIDE] Windows Previous Versions and Samba (Btrfs - Atomic COW - Volume Shadow Copy)

    • Ohhh, just one more concern.

      The LUKS and ZFS plug-ins were made by separate devs.

      I was wondering how you managed a proper shutdown procedure when OMV uses both LUKS and ZFS.

      Are you doing something to make sure that the ZFS pool is unmounted before the LUKS container is closed?

      A proper OMV shutdown should be something like:
      stop all sharing services,
      flush all pending writes,
      unmount the ZFS pool,
      close the LUKS containers,
      etc...
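      In other words, something like the following sketch, done manually. The service, pool, and mapping names are examples, it needs root, and on a real box the init system's ordering is what actually performs the equivalent steps:

```shell
# Manual teardown in dependency order (placeholders: smbd/nfs shares, pool
# "tank", LUKS mapping "tank-crypt"). Root required; normally the init
# system does the equivalent of this at shutdown.
teardown() {
    systemctl stop smbd nfs-server   # 1. stop services holding files open
    sync                             # 2. flush pending writes to disk
    zpool export tank                # 3. unmount and export the ZFS pool
    cryptsetup close tank-crypt      # 4. close the now-idle LUKS container
}
```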

      How did you coordinate this?

    • Right... I know... but still, if they are used together in a topology... there is a dependency between these plugins...

      I realize you are using JavaScript and then probably PHP to issue system commands...
      The question is which commands you issue, and in what order. Of course, I'm limiting the scope to just how you handled the shutdown sequence...

      Does the LUKS plug-in simply ignore that the system is shutting down and never issue close commands? Or does it just issue close commands for open (mapped) LUKS-encrypted devices, oblivious to the fact that a ZFS pool hasn't been unmounted yet?

      If the LUKS container were closed while a ZFS pool is still trying to write, you would get corruption (the RAID and checksums won't help).
      For the LUKS plugin (and the same probably goes for ZFS), there is nothing special done by the plugin at shutdown - as mentioned, the plugins (and OMV as a whole, really) are merely interfaces to the underlying Linux system and commands. To find out more about unmounting filesystems and encrypted devices at shutdown, you should consult the manuals for Debian/SysVinit/systemd, ZFS, and cryptsetup/dm-crypt - I think you will find it is all taken care of appropriately by the system.

      In specific regard to closing a LUKS container whilst it is in use by a (ZFS) filesystem: the Linux kernel will not allow you to close the device-mapper device whilst it is in use.
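      You can see that protection concretely: closing a mapping that still backs an imported pool simply fails rather than corrupting anything. A sketch, with a placeholder mapping name and root assumed:

```shell
# Attempting to close a LUKS mapping that is still in use is refused by
# device-mapper: cryptsetup reports the device as busy and exits non-zero,
# leaving the data untouched. "tank-crypt" is a placeholder mapping name.
close_busy_mapping() {
    if ! cryptsetup close tank-crypt; then
        echo "refused: mapping still in use, pool data is safe"
    fi
}
```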

      If you have a specific case you are worried about hitting, I encourage you to set up a VM and try to break it.