Posts by CarlB

    During final tests of the previous script some bugs came up. The bugs were fixed and some minor updates to the rotation engine were made. The updated code is attached below.


    olduser :


    Quote

    [...] you might want to put all variable assignments (especially the ones that touch disks) in their own file which will be the main file people read. For instance, I don't want to have to scroll 200 lines to see where the logs are going. Same for the actual Rsync command as well and how it's called.


    Thanks for the advice. I was thinking about spreading the code across multiple files, using functions to separate tasks, and I will certainly do that in the future.

    Quote

    [...] what if I want to use the script arbitrarily in a one-off fashion? If I did, that would kind of suggest to detach the cron logic


    I haven't considered detaching the cron logic, because the script was written for this purpose. If I need to sync folders in a one-off manner I simply run an rsync job from the command line.


    Quote

    Being that I haven't used "hdparm", I have an honest question: How do you know this succeeds? [...] Again though, I've never used it so I don't know if any switch can detect status, so maybe -y always works.

    I'm not completely sure that hdparm -y always works, but I made some tests and (if the disk supports the power management functionality, i.e. hdparm -C returns a valid answer) the command works regardless of the mount status of the drive. That being said, you're right: because there is no 100% guarantee that umount and hdparm -y work, I have inserted conditional tests for them as well.
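
    A minimal sketch of what such conditional tests could look like (hypothetical device, mountpoint and log paths, not the exact code from the attached script):

    Code
    DEV=/dev/disk/by-label/BACKUP          # hypothetical backup drive
    MNT=/srv/dev-disk-by-label-BACKUP      # hypothetical mountpoint
    LOG=/var/log/my-backup.log             # hypothetical log file

    # Abort if the unmount fails instead of assuming it worked.
    if ! umount "$MNT"; then
        echo "ERROR: unmount of $MNT failed" >> "$LOG"
        exit 1
    fi

    # hdparm -C reports the drive power state, so it can confirm -y took effect.
    if hdparm -y "$DEV" > /dev/null && hdparm -C "$DEV" | grep -q standby; then
        echo "Drive spun down successfully" >> "$LOG"
    else
        echo "WARNING: could not confirm drive standby state" >> "$LOG"
    fi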


    About the "testing" matter I'm completely agree with Adoby . I don't reccomend to anyone to use my-script without analyzing & understand it and certainly I don't mean that one cannot use the ready-to-use solutions present.


    I want the backups to be done following exactly my needs (custom logs, naming conventions...) and I think the best way of doing that is via a custom script.


    About the reliability of this solution, I agree that the strategy should be based on something well tested, but once the custom rotation engine is validated I don't have any doubts about the reliability of the script, because it simply runs rsync (whose reliability is beyond discussion).


    my-backup.sh.txt

    Hi everyone!


    I have installed rsnapshot and taken a quick look at how it works and how it has to be configured, and yep, it's certainly the easiest and fastest way to get incremental snapshots. But the idea of writing my own script fascinated me and I have decided to stay on this path, also because I agree with Adoby's observations about the much higher level of backup-strategy customization reachable with a personal rsync script (automated pre/post-backup steps, naming conventions, logfiles, mail notifications with a personalized style...).


    So, starting from the previous script and inspired by Adoby's solution, I have completely rewritten the script in a snapshot-oriented manner that fits my needs. It isn't completely finished (I still have to implement the pre/post-backup security checks discussed earlier...) and I have to do some final tests, but it works and it's pretty much done. (I know it may be bloated or not well written due to my limited knowledge of bash, but it works and does what I want, so that's OK for the moment...)


    The script basically does what we've discussed so far... It implements a snapshot-based versioned-backup strategy for my omv-data drive to an external USB3 drive, following the naming convention that I want. The drive is automatically mounted/unmounted during script execution, and logfiles are generated and sent as attachments via mail using mutt, because mail doesn't allow me to attach files (the personalized script log is linked in the mail body and the detailed rsync log is attached).
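
    For reference, a minimal sketch of the mutt call (hypothetical paths, subject and address; mutt's -a option attaches files, which plain mail lacks):

    Code
    # Hypothetical: link the script log in the body, attach the rsync log.
    echo "Backup done. Full script log: /var/log/my-backup/script.log" | \
        mutt -s "Backup report $(date +%Y-%m-%d)" \
             -a /var/log/my-backup/rsync.log -- admin@example.com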


    The core job is done using rsync with the --link-dest option, and the cleanup/rotation of the previous snapshots is done by parsing the directory list with a couple of for loops based on the snapshot number that is part of the snapshot name.
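
    A minimal sketch of this approach, assuming hypothetical paths and a "daily.N" naming convention (not the exact rotation engine of the attached script):

    Code
    SRC=/srv/dev-disk-by-label-DATA/       # hypothetical source
    DST=/srv/dev-disk-by-label-BACKUP      # hypothetical destination
    RETAIN=7

    # Rotation: drop the oldest snapshot, shift the others up by one.
    rm -rf "$DST/daily.$((RETAIN - 1))"
    for ((i = RETAIN - 2; i >= 0; i--)); do
        [ -d "$DST/daily.$i" ] && mv "$DST/daily.$i" "$DST/daily.$((i + 1))"
    done

    # New snapshot: unchanged files become hardlinks into the previous one.
    if [ -d "$DST/daily.1" ]; then
        rsync -a --delete --link-dest="$DST/daily.1" "$SRC" "$DST/daily.0"
    else
        rsync -a --delete "$SRC" "$DST/daily.0"
    fi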


    As I said before, the security checks on sentinel files are only sketched out and will be implemented in the next updates.


    The script takes as input the desired backup frequency (e.g. daily), like rsnapshot, so the same script can be used for the various frequencies.


    Source and destination, as well as the retention policy, can be set in the variable declaration section.
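
    For example, a hypothetical sketch of such a frequency argument and variable section (names and retention values are illustrative, not the ones in the attached script):

    Code
    FREQ=${1:?usage: my-backup.sh <daily|weekly|monthly>}
    SRC=/srv/dev-disk-by-label-DATA/       # source
    DST=/srv/dev-disk-by-label-BACKUP      # destination

    # Retention policy per frequency
    case "$FREQ" in
        daily)   RETAIN=7 ;;
        weekly)  RETAIN=4 ;;
        monthly) RETAIN=6 ;;
        *) echo "unknown frequency: $FREQ" >&2; exit 1 ;;
    esac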


    So... Here's the code ---> (see below). Any advice is welcome!

    Very interesting advice. Thank you!


    I'll study these solutions and try to improve the script as soon as possible, but between work and my newbie bash skills I can't guarantee that this will happen quickly. Anyway, when things are ready I'll post everything here to continue the discussion. See you soon!

    henfri, Morlan, thanks for the advice. I've checked the plugins you mention and yes, they are great and easy to use, but it seems they both work with shared folders only, so I would have to set up a separate job for each shared folder (I don't have a "master" shared folder that contains the others, my fault). Moreover, I need to back up the entire DATA disk, which contains all the shared folders but also the nextcloud data folder, which is not a shared folder. For these reasons I prefer to keep going with the personal rsync script approach, as Adoby does.


    I've downloaded your script, Adoby, and I'll take a look and study it in the next few days in order to modify my script and get hardlinked snapshots too, reaching the goal of the first question.


    On the ransomware topic, you are a lucky guy! In the sense that I'm also careful about what code is executed and where, but I use Windows for work (no alternatives, because there is no equivalent software for Linux) and the omv-server is also used by "non-careful users" (and they use Windows too), so I'm concerned about these kinds of disasters. So, in my position, what would you do? Are the ideas previously explained good (or at least not unreasonably risky)?


    In case the DATA disk gets encrypted, the last snapshot remains clean, right? If I understand correctly, in this case the cron job would create a new, corrupted snapshot (an independent copy full of encrypted files), but the last clean snapshot is kept intact because no hardlinks into it are created (every file has been modified/encrypted), right?

    Hi everyone!


    I opened this thread because I'm a "newbie" at backup strategies and this kind of stuff. Moreover, I'm not a system admin or a computer engineer (I do a totally different job), so I don't have any particular knowledge of Linux server administration tasks, but I know from experience (I have lost data in the past) the importance of having data safely stored and backed up. For this reason, about a year ago I started to look for information about NAS servers, in particular about openmediavault, and (with the valuable help of the forum) I set up my omv-server.


    OMV is awesome, and with the help of the nextcloud and syncthing dockers the server fits all my needs perfectly, but like many newbies I fell victim to misconceptions about RAID, which led me to become interested in real backups and backup strategies only recently.


    That being said, last month I bought an additional hard drive and a USB3 enclosure and started manually backing up my data daily with rsync (as illustrated in the omv getting-started guide).


    Doing the backup manually became tedious, so today I started to think about automating the job, and I've done the following:


    1) Edited /etc/fstab, adding the backup disk entry (I copied the line generated by the WebUI when I manually mount the backup disk).


    2) Created the related mountpoint: /srv/dev-disk-by-label-BACKUP


    3) Wrote a bash script that does what I've done manually up to now. I'm totally new to bash (this is my first script, so be kind, I did my best...).


    The script executes a daily incremental backup to the backup drive. After checking that the drive is plugged in and that the mountpoint exists, the backup drive is mounted, the rsync job is executed and the backup drive is unmounted. The previous backup is retained (daily copy + the previous day's one). To make the script work, the drive is manually powered on/off at backup time.


    The script is the following:
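
    A minimal sketch of the logic just described (hypothetical device and mountpoint paths; not necessarily identical to the attached script):

    Code
    #!/bin/bash
    DEV=/dev/disk/by-label/BACKUP          # hypothetical backup drive
    MNT=/srv/dev-disk-by-label-BACKUP
    SRC=/srv/dev-disk-by-label-DATA/       # hypothetical data source

    # Check that the drive is plugged in and the mountpoint exists.
    [ -b "$DEV" ] || { echo "Backup drive not plugged in"; exit 1; }
    [ -d "$MNT" ] || { echo "Mountpoint missing"; exit 1; }

    mount "$MNT" || exit 1

    # Keep an independent copy of the previous backup, then sync today's.
    rm -rf "$MNT/backup.yesterday"
    [ -d "$MNT/backup.today" ] && cp -a "$MNT/backup.today" "$MNT/backup.yesterday"
    rsync -a --delete "$SRC" "$MNT/backup.today"

    umount "$MNT"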



    4) Created a cron job in the WebUI that executes the script daily and sends me the output by e-mail.


    I've tested the script and the cron job and everything works, but I'm not fully convinced by and satisfied with my backup setup, and I'd like the forum's advice on the following:


    Firstly, I have a question on the backup strategy itself. The data source is a 1TB RAID1 array; the backup destination is an external 2TB hard drive. The backup drive stores two fully independent copies of the data: the daily copy and the previous day's one. Is it possible to increase the number of copies stored (going back, for example, one week or possibly more) in order to make the most of the backup drive's capacity and be able to reach, if needed, old versions of the files too? Theoretically, the idea is to edit the script to keep only the modified/deleted files in the previous days' folders. Is this doable with rsync? Do you have other ideas to reach the same goal?
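
    (For what it's worth, rsync does support something along these lines via its --backup/--backup-dir options, which move changed or deleted files into a separate folder instead of discarding them. A hypothetical sketch, with illustrative paths:)

    Code
    # Hypothetical: keep only modified/deleted files in a per-day folder.
    rsync -a --delete \
        --backup --backup-dir="/srv/dev-disk-by-label-BACKUP/changed-$(date +%Y-%m-%d)" \
        /srv/dev-disk-by-label-DATA/ /srv/dev-disk-by-label-BACKUP/current/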


    Secondly, I have a question about security. I've read that if one of the clients gets infected by ransomware, the malware can also reach the NAS server and encrypt the shares.

    Is it capable of encrypting the backup disk too? To minimize risks, the backup disk is not shared and is manually powered on/off and automatically mounted/unmounted by the script only for the backup window.


    Is it possible to automate this action too? The idea is to leave the drive on and plugged in 24h, but to activate/deactivate the USB3 PCI card from the backup script only for the backup window, or maybe to use hdparm. Any ideas?


    But even if it were possible, I'm not sure about the safety of this operation... if I could do it, a ransomware could do it too, right?


    The second idea is to buy a programmable light switch (timer), physically connect the backup drive enclosure to it, program the timer to stay on for only about 1-2 hours at night (the daily backup takes only a few minutes in most cases) and execute the cron job in that time window. Doing that, I'd expose the backup disk to risk for longer, but at night there is also a much lower probability that a client gets infected (everyone is asleep). What do you think about it? Is it a good idea?

    Thank you both for the answers. Indeed, the problem was that my UPS was set by default not to switch itself off after a power outage. I solved the problem by connecting the UPS to my PC and changing this option via the software provided by the manufacturer. Then I set the power restore policy in the BIOS to "Always On" and everything works as I want.


    Furthermore, after doing some tests and inspecting the parameters visible in the OMV WebUI, I found that the behaviour of the system is regulated by 3 parameters (see attachments below) and goes like this:


    1) A power outage occurs --> the UPS safely shuts down the server after "ShutdownTimer" seconds (in my case 30) and switches itself off "ups.delay.shutdown" seconds (in my case 30) after the shutdown of the server.


    2) After power is restored, the UPS comes back on (only if a minimum of "ups.delay.start" seconds (in my case 180) have passed) and the omv-server restarts.


    The only editable parameter seems to be ShutdownTimer. The other two (visible via the Services tab of the OMV WebUI) seem not to be editable.
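
    (A hypothetical note: if the UPS uses NUT's usbhid-ups driver, those two values usually map to the offdelay and ondelay driver options, which can be set in /etc/nut/ups.conf. A sketch, using the values from above and an illustrative UPS name:)

    Code
    [myups]
        driver = usbhid-ups
        port = auto
        offdelay = 30     # maps to ups.delay.shutdown
        ondelay = 180     # maps to ups.delay.start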


    So, problem solved. Thanks again.

    Hi!


    I have a UPS connected via USB to my omv-server. I have installed the NUT plugin and it's great! I have configured it so that if the UPS goes into battery mode the NAS is safely shut down after 15 seconds.


    I would also like the omv-server to restart automatically when (and only when) power is restored (the UPS returns to normal mode). How can I do that?


    Browsing the internet, I've read that I should set the power restore option in the BIOS to "Always On". Is that the correct way? I suspect that if I set "Always On", in case of a power failure the omv-server is safely shut down after 15 seconds, but if the power outage persists, with this option the server is restarted while the UPS is still in battery mode, and I don't want that.


    Thanks

    Problem solved. I'll leave a summary of the procedure I followed, which may be useful for other novice users like me. Any comments, suggestions or improvements to the following configuration are welcome.


    1) From the OMV WebUI, create a user called 'docker' and give it r/w permissions on each shared folder to be synchronized.


    2) Take the UID and GID of the docker user from the CLI by executing: id docker
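
    For example (hypothetical output; these are the values used in the command in step 3):

    Code
    # id docker
    uid=1005(docker) gid=100(users) groups=100(users)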


    3) Create and Start the Container by executing the following:


    Pull the image ------> docker pull syncthing/syncthing:latest


    Code
    # PUID and PGID from step 2); /srv/dev-disk-by-label-DATA/ is the
    # mountpoint of the shared folders.
    docker create --name=syncthing --network=host \
        -e PUID=1005 -e PGID=100 -e TZ=Europe/Rome \
        -v /srv/dev-disk-by-label-DATA/:/DATA \
        --restart unless-stopped syncthing/syncthing:latest
    docker start syncthing


    4) Browse to the Syncthing WebUI at 192.168.1.XXX:8384 and follow the wizard to set a username and password.


    5) Connect the Remote Devices and Configure the Shares


    6) To avoid permission problems, go to 'Actions' -> 'Advanced' and check the 'Ignore Permissions' box for every shared folder (see the attachments).

    (If 'Ignore Permissions' is left unchecked, the syncthing service creates files with r/w permission for the docker user but read-only permission for other users.)

    Hi everybody. I have installed the syncthing/syncthing container on my OMV5 machine using the following procedure:


    Code
    docker pull syncthing/syncthing:latest
    docker create --name=syncthing --network=host \
        -e PUID=1005 -e PGID=100 -e TZ=Europe/Rome \
        -v /srv/dev-disk-by-label-DATA/:/DATA \
        --restart unless-stopped syncthing/syncthing:latest
    docker start syncthing

    I use the UID and GID of the 'docker' account (an account created specifically to run the syncthing service, which has r/w/x permissions on all the shared folders to be synchronized, so that the same service/container can sync the shares of the different users, who each have the same permissions on their respective folders).


    The permissions are like this: the docker user has rwx on Folder1 and Folder2; user1 has rwx on Folder1 and no access to Folder2; user2 has no access to Folder1 and rwx on Folder2.


    The problem is that every file created by the syncthing service in a specific user's shared folder (e.g. 'file' in Folder1) is owned by the docker account and is no longer editable by user1 (unless I go into the OMV web GUI every time and restore the permissions on every subfolder of Folder1 again).


    To work around this issue I edited the PUID variable, setting it to 0 (the root account), and everything works fine, but the syncthing service warns me that the container shouldn't be run as root.


    So what should I do? Do I have to run a separate syncthing container for every user to avoid permission problems, or can I leave the container running as root without issues?


    I hope I have explained the problem correctly.

    Do you mean this thread? OpenVPN-Renew CRL


    If I understand correctly, the plugin stops working when the CRL (certificate revocation list) expires, and it has to be renewed manually.


    I executed openssl crl -in /etc/openvpn/pki/crl.pem -text to check my expiration date and I get Next Update: Jul 20 14:54:17 2020 GMT.


    So before Jul 20 I have to renew the CRL by executing /opt/EasyRSA-3.0.6/easyrsa gen-crl, and the plugin will keep working. Is that correct?


    Finally, just out of curiosity, did you modify the server.conf file too to get the plugin to work?

    Hi, to make the OpenVPN plugin work on OMV 5 you have to follow these steps (they work for me):



    0) On your router, forward UDP port 1194 to your server's IP (e.g. 192.168.1.xxx)


    1) On the control panel in the web GUI, set the following:


    --------------------------------------------


    GENERAL


    Enable -> ON
    Port-> 1194
    Protocol -> UDP
    Use Compression -> ON
    PAM Authentication -> ON
    Extra Options -> NONE
    Logging level -> Normal Usage output


    VPN NETWORK


    Address -> 10.8.0.0
    Mask -> 255.255.255.0
    Gateway -> Select Your Connection (e.g. eth0)
    Default gateway -> ON
    Client to client -> OFF


    DHCP options


    DNS Server(s) -> NONE
    DNS search domains -> NONE
    WINS Server -> NONE


    PUBLIC


    Public Address -> yourdomain.duckdns.org (or another DDNS, or your static IP if you have one)


    --------------------------------------------
    Then Save and Apply


    2) Generate the certificates for the users in the WebGUI (you may also do that later)


    3) Then SSH into your server, cd /etc/openvpn/ and nano server.conf


    4) Remove the following line from the server.conf file --> ;push "route 192.168.1.0 255.255.255.0" then Ctrl+X -> Y -> Enter to apply the changes


    (You can also delete -> ;push " route client-to-client" and other commented settings to make the file cleaner, but it is not necessary to get the plugin working. I don't know why deleting the line indicated in step 4 is required to get the plugin working, given that it's a commented setting too.)


    5) cd ~ and service openvpn restart



    After this change to the server.conf file the VPN works. The server.conf stays valid (even after rebooting the machine) until you change something in the plugin's web GUI control panel; then the wrong line appears again and you have to fix server.conf again by repeating steps 3) 4) 5) to make the plugin work.
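
    Since the line reappears after every change in the web GUI, steps 3) 4) 5) could hypothetically be scripted in one go (a sketch, not part of the original procedure):

    Code
    # Drop the offending line and restart the service.
    sed -i '/;push "route 192.168.1.0 255.255.255.0"/d' /etc/openvpn/server.conf
    service openvpn restart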