Shared folder devices going "n/a" corrupt USB Backup

  • In this post I described a situation where the device entries in Shared Folders go to "n/a".



    It seems that this leads to a critical situation with backup jobs configured by the USB Backup plugin. From one day to the next, the wrong content was backed up: no longer the configured shared folder, but the root folder of the system drive with all its subfolders. Because I had set the "delete" option in the USB Backup job definition, my existing backup was corrupted.


    In the USB Backup email I saw:
    Please wait, syncing '//' to '/srv/f144d955d8d7fd1a199ea0dc188c38c9/Video' ...
    sending incremental file list


    I checked the settings in the USB Backup job definition several times, but the share was configured correctly.


    The next email message was created some weeks ago, when USB Backup still executed correctly:
    Please wait, syncing '/naspool/FilmeFS//' to '/srv/95330157ea8b6b143b13cf8cd279a999/Video' ...
    sending incremental file list
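
    Comparing the two messages, my assumption is that the rsync source is assembled as "<mount point>/<relative path>/". A minimal sketch of how a share with an "n/a" device would then collapse to "//":

      MNT="/naspool/FilmeFS"; REL=""           # healthy share
      echo "syncing '${MNT}/${REL}/'"          # -> syncing '/naspool/FilmeFS//'
      MNT=""                                   # device "n/a" -> empty mount point
      echo "syncing '${MNT}/${REL}/'"          # -> syncing '//' (the system root!)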



    So far I have not been able to fix the "n/a" error in Shared Folders, so I cannot run USB Backup anymore.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • @votdev Thank you for your reply. I am quite sure that the root cause is an export of my ZFS pool. All of these shared folders are on the same ZFS pool. Yesterday I needed several hours to repair my OMV installation (recreating shared folders with "n/a" devices, fixing misbehaving USB Backup jobs). It seems that all these problems are knock-on faults of the ZFS pool export.
    So I would be very interested in seeing a remedy implemented.


    After fixing my OMV installation I did a short test to confirm the root cause. I have no experience with VMs, so I reactivated an old SATA drive for this test. I did the following (prerequisite: ZFS plugin installed):

    • Connect the disk to a SATA port (it is not possible to do the test with a USB device).
    • Quick wipe the disk
    • ZFS plugin: Add pool
      -> Name: Testpool
      -> Pool type: Basic
      -> Device: this device
    • ZFS Plugin: Highlight pool "Testpool" -> Add Object:
      -> Type: Filesystem
      -> Name: FS
    • Create Shared Folder:
      -> Name: zfstest
      -> Device: Testpool/FS
      -> Path: zfstest/


      Preparation finished. (A rough CLI equivalent of these steps is sketched below.)
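
      For reference, this is roughly what the plugin steps above do on the command line (the device name /dev/sdX is just an example; the plugin may set additional properties I am not aware of):

        zpool create Testpool /dev/sdX       # "Basic" pool on the test disk
        zfs create Testpool/FS               # the "FS" filesystem object
        mkdir -p /Testpool/FS/zfstest        # directory behind the shared folder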

    • ZFS plugin: Button "Export pool"
      => The pool disappears from the ZFS plugin page.
      => The device entry in "Shared folders" goes to "n/a".
    • ZFS plugin: Button "Import pool" -> Enter pool name "Testpool"
      => The pool reappears in the WebUI.
      (Hint: If you have used a USB disk, the pool cannot be imported!)
      => The device entry in "Shared folders" remains "n/a". (The same cycle on the CLI is sketched below.)
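
      The same cycle on the CLI, which is how I verified that the behaviour is not specific to the WebUI buttons:

        zpool export Testpool                # pool and its mounts disappear
        zpool import Testpool                # pool and data come back intact
        zfs list -r Testpool                 # datasets are back, yet the device
                                             # in "Shared folders" stays "n/a"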


      That is the problem: OMV has no way to recognise the re-imported ZFS pool, so everything related to the shared folders no longer works. USB Backup saves the root directory instead of the shared folder, and so on.

      On a production OMV system using ZFS this seems to be a severe problem. I would be very happy if a developer would take the time to implement a solution.

    BTW, a side issue: in File Systems -> Create, all disks that are currently used by a ZFS pool are shown as available and can be formatted. Quite dangerous. blkid reports "zfs_member" for such disks, so it should be possible to recognise that these disks are already in use.
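
    For illustration, this is roughly what blkid prints for such a disk here (device name, label and UUID are examples):

      blkid /dev/sdX1
      # /dev/sdX1: LABEL="Testpool" UUID="1234567890123456789" TYPE="zfs_member"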



  • The problem is not OMV, it is the plugin. It seems the plugin is not implemented correctly, because the backend framework is able to filter devices that should not be displayed as candidates in the create dialog.

    I agree concerning the create dialog. But the first point, regarding the "n/a" devices in "Shared Folders", is not the plugin: the same behaviour can also be seen if the ZFS pool export/import is done via the command line. I have only a handful of shared folders, and it still cost me hours to repair the installation, especially to get USB Backup running again and to recreate the backup itself. Sorry, but I don't like this kind of unpleasant surprise while using OMV, although it is a powerful NAS solution otherwise.


    • Official post

    I don't know the ZFS plugin because it is an external plugin and not maintained by the OMV project. To me it seems that the pool export breaks something, but as I said, I don't know this plugin. Shared folders do not store their absolute paths; instead, the following is stored in the DB:


    shared folder => Mount point => device file


    If there are changes in this chain then all services relying on shared folders will fail.
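
    A quick way to inspect this chain on disk (the element names follow the OMV 3.x config.xml layout; treat the exact structure as an example):

      grep -A4 '<sharedfolder>' /etc/openmediavault/config.xml  # contains <mntentref>
      grep -A4 '<mntent>' /etc/openmediavault/config.xml        # contains <fsname> and <dir>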

  • As I tried to convey, I don't think this is a problem of the ZFS plugin exclusively, because all ZFS actions can also be done via the CLI.

    shared folder => Mount point => device file

    If the mount point "disappears" for any reason, does OMV automatically recognise it if the mount point appears again later? I saw a lot of UUIDs in config.xml regarding the ZFS pool and its subordinate file systems. Maybe the pool export causes a change of these UUIDs. But the UUID shown by blkid for the ZFS members is the same after a pool export/import.
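
    This is how I checked the blkid UUID (pool and device names as in my test above):

      blkid -o value -s UUID /dev/sdX1     # note the UUID of the ZFS member
      zpool export Testpool
      zpool import Testpool
      blkid -o value -s UUID /dev/sdX1     # prints the same UUID as before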


    • Official post

    If the mount point "disappears" for any reason, does OMV automatically recognise it if the mount point appears again later?

    This should be no problem. But keep in mind that ANY service (FTP, SMB, rsync, ...) is dumb and will write/read to the configured path, regardless of whether the path points to a mounted filesystem or not. This is why the root filesystem gets flooded when a mount point disappears. But this is not an OMV problem; it ALWAYS happens, even if you set up your system manually from scratch.
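
    A simple way to see this for yourself (the path is just an example):

      mkdir -p /srv/dev-disk-by-label-data       # the configured mount point
      # nothing is mounted there, so this write lands on the root filesystem:
      echo test > /srv/dev-disk-by-label-data/file.txt
      findmnt /srv/dev-disk-by-label-data || echo "not a mounted filesystem"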

  • Please open a bug report on the omv-extras git repository.

    I'm also having problems with ZFS pool exports. Did anyone file a bug report? I would like to do so, but I would also like to avoid filing a duplicate if someone has already submitted one.


    Thanks.

    • Official post

    This is a plugin issue. To avoid these problems, do not use the export button in the WebUI for the moment; use the command line instead. I'll explain what's going on:


    When a pool or dataset is created or imported, the plugin inserts entries into the fstab section of the OMV internal database (not /etc/fstab): one entry per pool and per dataset. Those entries (objects) have several properties, as explained in the wiki; one of them is the internal ID of the object, which is randomly generated when you press the import button, create a dataset, or create a pool. When a shared folder is created, it also refers to its parent volume, and this relationship is based on that ID. If you press the export button, the plugin flushes all related entries of that ZFS pool; a re-import regenerates those entries with new internal IDs, and there is the problem.


    To work around this for now: if you are going to export/import the pool on a live system, do it on the CLI. Or use omv-confdbadm to back up the ZFS entries, so that when a situation like this happens you can delete the DB entry of the just re-imported pool and insert the old ZFS entries again.
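
    A rough sketch of that omv-confdbadm workaround (the datamodel id conf.system.filesystem.mountpoint and the exact flags are from memory; check omv-confdbadm --help before relying on this):

      # 1) while the pool is imported, back up the mount point objects
      omv-confdbadm read conf.system.filesystem.mountpoint > /root/zfs-mntent.json
      # 2) after a re-import, delete the freshly generated entry ...
      omv-confdbadm delete --uuid <new-uuid> conf.system.filesystem.mountpoint
      # 3) ... and restore the saved object so the shared folder references match again
      omv-confdbadm update conf.system.filesystem.mountpoint "$(cat /root/zfs-mntent.json)"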


    The proper fix is to follow the OMV guidelines and disable the export button if shared folders exist that relate to the pool (which is exactly what happens with normal Linux filesystems in OMV). ATM there may be no immediate fix for this.
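
    As a sketch of that check (the xmlstarlet paths follow the OMV 3.x config.xml layout as far as I remember it):

      # uuid of the mntent object that belongs to the pool/dataset
      MNTREF=$(xmlstarlet sel -t -v \
        "//fstab/mntent[fsname='Testpool/FS']/uuid" /etc/openmediavault/config.xml)
      # any shared folder referencing it? -> then the export button must stay disabled
      xmlstarlet sel -t -v \
        "//shares/sharedfolder[mntentref='$MNTREF']/name" /etc/openmediavault/config.xml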


    @votdev, is it possible for the unmount button to perform a specific command depending on the selected backend?
