Posts by Aiakos

Hi, since I don't have direct access to that machine it took me a while, but here is the output:

    root@NAS-Eltern:~# ls /var/lib/openmediavault/rsnapshot.d/
    root@NAS-Eltern:~# grep snapshot_root /var/lib/openmediavault/rsnapshot.d/*
    snapshot_root   /srv/dev-disk-by-label-Backups/BackupSnapshotsEltern-Share//Backups/BackupEltern-Share//

//Edit: I also use the LUKS plugin for the backup HDD and unlock it after a restart by entering the passphrase in the OMV UI. Can this have any influence on the path?

    This is the content of the config file:

Today it happened again - it switched back to the "Backups" directory shown in picture 2. This time no updates were performed, and I also did not change anything else on the system. I will disable rsnapshot for now... :/

    Is there anything I can do that might give you a hint on what's going on?

I need to bring this topic up again since the problem has happened again. As you can see in the pictures, I have had rsnapshot running for a while. After I installed some updates on 01.09., rsnapshot changed the directory to the one shown in picture 2. After a recent upgrade (yesterday) it now seems to continue with the original directory (picture 1). I don't remember whether the updates I ran contained anything related to the rsnapshot plugin, but there is clearly a glitch here.

Could you have a look into this, please? Using rsnapshot doesn't make much sense if it keeps switching its directories and thereby breaking the "timeline"... :(

    Hi Chone,

    in the meantime - without having changed anything - I get this new error:

    Tue Jun 12 19:44:58 2018 VERIFY ERROR: depth=0, error=unsupported certificate purpose: CN=***
    Tue Jun 12 19:44:58 2018 OpenSSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed
    Tue Jun 12 19:44:58 2018 TLS_ERROR: BIO read tls_read_plaintext error
    Tue Jun 12 19:44:58 2018 TLS Error: TLS object -> incoming plaintext read error
    Tue Jun 12 19:44:58 2018 TLS Error: TLS handshake failed

    What does this mean?
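If it helps with debugging: as far as I understand, "unsupported certificate purpose" suggests the server's certificate is not marked for server use (missing the serverAuth extended key usage). The allowed purposes can be inspected with openssl; the throwaway self-signed cert below is generated only so the snippet is self-contained - on the NAS one would point `-in` at the actual OpenVPN server certificate:

```shell
# Generate a throwaway self-signed cert purely as a stand-in; on the
# NAS, point -in at the real OpenVPN server certificate instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# -purpose lists what the cert may be used for; "SSL server : Yes"
# is what a working OpenVPN server certificate should show.
openssl x509 -in /tmp/demo.crt -noout -purpose
```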

    Hi everybody,

    I had OpenVPN working perfectly under OMV3 for quite a long time. After the upgrade to OMV4, I reinstalled the plugin and created a new certificate for my client using the GUI. If I now try to connect the client, I get the error mentioned above:

    What can I do?


    Hi everybody,

    two weeks ago I updated from OMV 3 to 4 without any problems. Since then, rsnapshot uses a different directory in my backup share and thus builds up new snapshots from scratch. I now have one folder with the old snapshots:

    root@NAS:/sharedfolders# ls BackupSnapshots-Share/HT750/Backup-Share/
    daily.0   daily.11  daily.2  daily.5  daily.8    monthly.1  monthly.4  weekly.1   weekly.2  weekly.5  weekly.8
    daily.1   daily.12  daily.3  daily.6  daily.9    monthly.2  monthly.5  weekly.10  weekly.3  weekly.6  weekly.9
    daily.10  daily.13  daily.4  daily.7  monthly.0  monthly.3  weekly.0   weekly.11  weekly.4  weekly.7

    And a new one where the snapshots now get created:

    root@NAS:/sharedfolders# ls BackupSnapshots-Share/media/c486d52a-****-****-****-***********/Backup-Share/
    daily.0  daily.1  daily.10  daily.11  daily.2  daily.3  daily.4  daily.5  daily.6  daily.7  daily.8  daily.9

    Since I know that rsnapshot uses hardlinks to create incremental snapshots, I am afraid of breaking something if I just copy the folders from the old location to the new one. What is the recommended way to proceed in this case? Or is it even possible to have rsnapshot use the old directory again?
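For reference, my understanding (an assumption on my part, not an official recommendation) is that a hardlink-preserving copy - GNU `cp -a`, or `rsync -aH` - keeps the link structure and therefore the space savings intact. A sketch on throwaway directories, not on the real share:

```shell
# Sketch on throwaway directories (not the real share): check that a
# hardlink-preserving copy keeps rsnapshot's hardlink structure.
OLD=$(mktemp -d)   # stands in for the old snapshot location
NEW=$(mktemp -d)   # stands in for the new snapshot location

# Simulate two rotated snapshots sharing one unchanged file via a
# hardlink, the way rsnapshot's rotation does.
mkdir -p "$OLD/daily.0" "$OLD/daily.1"
echo data > "$OLD/daily.1/file"
ln "$OLD/daily.1/file" "$OLD/daily.0/file"

# GNU cp -a (= -dR --preserve=all) preserves hard links between the
# copied files; rsync -aH would do the same.
cp -a "$OLD/." "$NEW/"

# Same inode number in both snapshots => the hardlink survived.
stat -c %i "$NEW/daily.0/file" "$NEW/daily.1/file"
```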


    Hi everybody,

    I recently upgraded two NAS systems from OMV 3 to 4 using omv-release-upgrade. Both upgrades went just fine, and I am very happy - thanks for the hard work!! However, on both systems there is a new directory /sharedfolders, which is empty. I guess it is supposed to give direct access to the shared folders, right? What do I need to do in order to make this work?
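From what I can tell, /sharedfolders is meant to be populated with bind mounts of the individual shared folders, roughly like the fstab entry below. This is purely illustrative - the device label and share name are made up, and OMV normally manages such entries itself:

```
# Illustrative /etc/fstab bind-mount entry (label and share name are
# placeholders; OMV normally generates these entries itself):
/srv/dev-disk-by-label-Data/MyShare  /sharedfolders/MyShare  none  bind  0 0
```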


    So far I have saved 4 syslogs of a normal shutdown / reboot and 1 syslog of a shutdown where the system got stuck.



    The only real difference between the two logs is the 5 lines at the end of the log where the system got stuck:

    systemd[1]: Stopping Avahi mDNS/DNS-SD Stack...
    systemd[1]: Stopping D-Bus System Message Bus...
    systemd[1]: Stopping ACPI event daemon...
    systemd[1]: Stopping System Logging Service...
    systemd[1]: Starting folder2ram systemd service...

    I was not able to find these lines in any of the logs where the system shut down normally. Does this mean anything?

    Sorry, maybe I did not explain it very well. What I meant was that the disk gets auto-mounted after I unlock it, but the system waits for the timeout on boot. I can avoid this timeout by adding "noauto" to fstab, but then the disk no longer gets mounted automatically after I unlock it. My question was how I can have both auto-mounting and no timeout on boot. ;-)

    Your suggestion to reduce the timeout to 2 seconds seems to work. Thus, I consider the problem solved. Thank you!
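For anyone finding this later: the knob in question is systemd's per-device timeout, set per mount in fstab. An illustrative entry (device and mount point are placeholders for my LUKS-backed backup disk):

```
# Illustrative /etc/fstab line: x-systemd.device-timeout caps how long
# boot waits for the (still locked) device, and nofail lets boot
# continue if it is not there yet.
/dev/mapper/backup-crypt  /srv/backup  ext4  defaults,nofail,x-systemd.device-timeout=2s  0  2
```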