Posts by konfuziuskus


    does anyone know: with the command

    zfs send pool/fs@snap | gzip > backupfile.gz

    used for an offline backup to save multiple stages of a pool, are the files inside this backup file protected by ZFS checksums? Can I later, on receiving the stream, see whether files are corrupted?
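    What I have in mind, with a checksum of the archive added as an extra safeguard (pool/fs@snap and the file names are placeholders for my setup):

```shell
# As far as I know, the checksums inside the send stream are only
# verified when the stream is actually received, so I also store a
# checksum of the compressed file for an early offline check:
zfs send pool/fs@snap | gzip > backupfile.gz
sha256sum backupfile.gz > backupfile.gz.sha256

# Later, before restoring from the USB disk:
sha256sum -c backupfile.gz.sha256   # verify the archive file itself
gunzip -t backupfile.gz             # verify the gzip container
gunzip -c backupfile.gz | zfs receive pool/restoretest   # full check on receive
```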

    It would be nice if the ZFS plugin had an option to send to, and later receive (import a pool) from, backup files, e.g. on a USB disk.

    Thank you for your help.


    results from omv-firstaid:

    Checking all RRD files. Please wait ...
    All RRD database files are valid.

    My time settings:

    root@omv4:~# timedatectl
    Local time: Mi 2019-12-18 18:03:08 CET
    Universal time: Mi 2019-12-18 17:03:08 UTC
    RTC time: Mi 2019-12-18 17:03:07
    Time zone: Europe/Berlin (CET, +0100)
    Network time on: yes
    NTP synchronized: yes
    RTC in local TZ: no

    Is there anything wrong?


    does anyone know the cause of this rrdcached failure message:

    Dec 12 17:40:48 omv4 collectd[3075]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-root/df_complex-free.rrd, [1576168848:2709159936.000000], 1) failed: rrdcached: illegal attempt to update using time 1576168848.000000 when last update time is 1576168848.000000 (minimum one second step) (status=-1)

    I can't find any information on this, because the two times in the message are identical...


    My server is usually down when the rsnapshot cron jobs have to start:


    5 * * * * root /var/lib/openmediavault/cron.d/rsnapshot hourly
    30 3 * * * root /var/lib/openmediavault/cron.d/rsnapshot daily
    0 3 * * 1 root /var/lib/openmediavault/cron.d/rsnapshot weekly
    30 2 1 * * root /var/lib/openmediavault/cron.d/rsnapshot monthly
    00 2 1 1 * root /var/lib/openmediavault/cron.d/rsnapshot yearly

    Overriding these entries with other times is useless, because they will automatically be changed back to the default times.
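    Something like an extra, unmanaged cron file is what I would try (the file name is my own invention; only the rsnapshot paths are taken from the system):

```
# /etc/cron.d/rsnapshot-custom (hypothetical): OMV only rewrites the cron
# file it manages itself, so a separate file should keep these times
0 0 * * *   root  /var/lib/openmediavault/cron.d/rsnapshot daily
30 0 * * 1  root  /var/lib/openmediavault/cron.d/rsnapshot weekly
0 1 1 * *   root  /var/lib/openmediavault/cron.d/rsnapshot monthly
```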

    Hi Aaron and friends,

    is there currently a practicable and reliable solution on OMV4 for rsnapshot in conjunction with autoshutdown?
    The workaround of editing the openmediavault-rsnapshot cron file is no longer applicable, because it is always overwritten with the standard time values.
    I don't want to leave the system powered on for hours at night (02:00 to about 03:30) only to back up a few files within minutes.
    Because there are other daily cron jobs on the system, I would like to run everything, including rsync, starting at 00:00. Is this possible?

    Other solutions:

    Scheduled jobs of the anacron type have the disadvantage that all jobs would start at the same time after the system wakes up. Because of
    rsnapshot's lock file, only one backup type is possible at a time (of daily, weekly, monthly, yearly).

    Anacron can apply a delay before running its jobs, but anacron itself is only started at a specific time (07:30?), when the system may not be powered on.

    Can someone explain the content of the file /etc/cron.d/anacron?
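    This is what it contains here on Debian (it may differ slightly per release); apparently cron starts anacron once a day at 07:30, which only works if the machine is up at that time:

```
# /etc/cron.d/anacron: crontab entries for the anacron package

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

30 7 * * *   root  test -x /etc/init.d/anacron && /usr/sbin/invoke-rc.d anacron start >/dev/null
```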

    Am I misunderstanding something?

    Thank you for your answers!

    Hey there,

    i tried to delete a ZFS object (a sub-filesystem), but it is not possible.


    "Before deleting this filesystem, You must delete shares referencing this filesystem."

    "OMVModuleZFSException: Before deleting this filesystem, You must delete shares referencing this filesystem. in /usr/share/openmediavault/engined/rpc/"

    All references to this filesystem in shared folders and in SMB, where it was previously used, are already deleted.

    Only under "ACM - User - Shared folder privileges" are there still entries for this filesystem. Everywhere else there is nothing.
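    To hunt for leftover references myself, I would grep the OMV config database ("pool/subfs" is a placeholder for the real dataset name):

```shell
# Look for any remaining references to the filesystem in OMV's
# configuration; shared folders and mount entries live in this file
grep -n "pool/subfs" /etc/openmediavault/config.xml
```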


    I can find the filesystem entries within:

    Thank you for your help

    My way to safely change the system and swap partitions on OMV:

    On the newly freed space I can additionally save system images generated with Clonezilla.

    thanks a lot :)

    But it is unclear to me how fix -m differs from a plain fix.
    The SnapRAID manual gives no information about a plain fix (without options).

    What do you think about:

    1) mergerfs.balance from a pool subfolder, to prevent the SnapRAID content files from being moved
    2) snapraid dup (to see whether SnapRAID detects the moved files)
    3) snapraid sync (sync the new files to protect the current status)
    4) if n-way parity exists: remove one parity disk from the configuration (or do it via the web GUI)
    5) format this hard disk for large files (mkfs.ext4 -m 0 -T largefile4 DEVICE, to improve inode space)
    6) add this empty disk again as the n-th parity disk
    7) snapraid sync
    8) do the same with the other n-parity disks

    In this case, file protection should remain intact during the new parity calculation,
    while the parity disks are reallocated.


    can someone tell me the difference between the SnapRAID OMV GUI commands:

    snapraid fix
    snapraid undelete

    Are these commands the same as these on the shell:

    snapraid fix
    snapraid fix -m

    What are the technical differences or limitations between them?

    Thanks a lot

    But only if the client user's UID/GID and the server user's UID/GID are the same.
    This is my problem: I can't set the server user's GID permanently to
    GID=1000. If I update my user in OMV, in order to also update the Samba daemon settings,
    the GID changes back to GID=100, and then the UNIX extensions are not enabled on the mount on the
    client side.


    I would like to activate the CIFS UNIX extensions on the client mount.
    The CIFS UNIX extensions are active in smb.conf
    (unix extensions = yes).

    On the mount, the CIFS UNIX extensions are not visible as active.

    As I understand it:

    User on client and server must have the same UID and GID.

    Client user: UID=1000 and GID=1000.
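    To read off the numeric IDs, I run the same command on both machines and compare:

```shell
# Prints the current user's numeric UID and GID, e.g. "uid=1000 gid=1000"
echo "uid=$(id -u) gid=$(id -g)"
```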

    Permanently changing the user's GID to 1000 on OMV is not possible.

    Users created via the OMV web interface usually get GID=100.
    Users created on the command line start with GID=1000.

    I can add a new user with useradd; then my user has GID=1000.

    To synchronize the user with config.xml and the Samba daemon, the new
    user needs to be updated via the OMV web interface.

    After this, the new user has GID=100 again and
    the CIFS UNIX extensions are not enabled.

    What is the right way to enable the CIFS UNIX extensions reliably?

    Thank you very much

    Now, after another large data transfer over OMV2:nfs -> Desktop -> OMV4:cifs, errors were found with Quickhash GUI.
    My new solution:
    1) Copy all disks from OMV2, connected locally to OMV4, with rsync via the OMV web interface.
    2) Do another rsync with the -c (checksum) option to verify the previously transferred source against the target data. Result: data match.
    3) Just for fun: check this source and target data with md5deep (a hint from Adoby) on the CLI. Result: data match ;)
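    If md5deep is not available, the same cross-check works with plain coreutils (/srv/src and /srv/dst are placeholders for the real mount points):

```shell
# Hash every file in the source, then verify the list against the target
(cd /srv/src && find . -type f -exec md5sum {} + | sort) > /tmp/src.md5
(cd /srv/dst && md5sum -c --quiet /tmp/src.md5)   # prints only mismatches
```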