Disaster Recovery Plan Help

  • Hi,


    I am planning out disaster recovery for my home lab. I have two OMV installs on separate hardware, all in-house (no offsite considerations needed).

    OMV1 is my main server; it has shares and about 7 containers using the Compose plugin. I have everything else working for backups; I just need a restore plan. I checked the docs and couldn't see anything.


    So I have set up a backup under

    Services | Compose | Schedule

    I manually ran it and it works fine (according to the logs); it has created backups locally on OMV1.


    How do I do a restore on my OMV2 so that I will have all my data as a mock up of a disaster recovery?


    I know I can have the 'compose' files on OMV2, which means I can have all the containers running. But how do I get the data from OMV1 to OMV2 in a way that lets me start up, for example, Nextcloud AIO on OMV2 and be able to log in and see all my data?


    My end goal: OMV1 is in my basement, and if that floods, I can still get to all my stuff on OMV2, which is my backup server.


    Both are running the same version of OMV 7.


    My Compose setup on OMV1 and OMV2 is exactly as the documentation specifies.

    omv7:docker_in_omv [omv-extras.org]

  • chente (Official Post)

    The main purpose of the plugin's backup utility is to provide the user with a consistent copy of persistent container data. With that goal in mind, the utility stops the containers, backs up their persistent data, and then starts the containers again. All of this is done automatically, as you already know. What the compose plugin actually performs is a synchronization with rsync.
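    In shell terms, the sequence is roughly this (a conceptual sketch only, not the plugin's actual code; the stack name and paths are invented for the example):

    ```bash
    # Conceptual sketch of the plugin's backup sequence for one stack.
    # Stack name and paths are examples, not the plugin's real ones.
    cd /appdata/paperless                 # folder holding the stack's yml/env
    docker compose stop                   # stop so data files are not changing
    rsync -a --delete ./ /backup/compose/paperless/   # consistent rsync copy
    docker compose start                  # bring the stack back up
    ```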

    The key is the phrase "stop the containers."


    If you were to back up the containers with any specialized backup application without stopping them first, the resulting backup would likely be inconsistent.


    So what you should do next is use any specialized backup application to make a compressed, versioned backup of the folder that the plugin generated. That same application is the one that should help you restore the backed-up data.


    I personally do that with Duplicati, but you can use whatever app you want, or the openmediavault-borgbackup plugin.
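    With borg, for example, the idea would be something like this (a minimal sketch; the repository path and source folder are assumptions):

    ```bash
    # Minimal borg sketch: compressed, versioned archives of the folder
    # the compose plugin writes its backups to. Paths are assumptions.
    borg init --encryption=repokey /backup/borg-repo      # one-time repo setup
    borg create --compression zstd --stats \
        /backup/borg-repo::compose-{now} /backup/compose
    borg prune --keep-daily 7 --keep-weekly 4 /backup/borg-repo
    ```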


    Regarding Nextcloud AIO, keep in mind that it is a somewhat special container. It is actually a container that manages other containers, similar to Portainer (on a much smaller scale). Nextcloud AIO creates volumes in the docker folder that are not backed up by the compose plugin. But Nextcloud AIO has an easily configurable automatic backup utility in its GUI (it uses borgbackup internally) that does something similar to the compose plugin: it stops the Nextcloud AIO containers, backs them up, and restarts them.


    I also use Nextcloud AIO; what I do is include the backup that Nextcloud AIO generates in my regular Duplicati backups, along with the persistent data that the plugin's backup utility provides.

    • Official Post

    In addition to everything I just told you, the plugin has a Restore tab that is not yet explained in the wiki but allows you to restore from the backup generated by the plugin. If you have questions about how that tab works, ask them here.

    • Official Post

    And with respect to your particular case, to keep the two servers synchronized, I think you need something more than all this. Everything I have mentioned is backups, and I think what you need is synchronization.


    A reasonable way to do this might be to stop the docker service, sync all docker data between the two servers with rsync, and restart the docker service. That way the two servers would always have the same container data. But I know of nothing that does this automatically; I think you would need to write a script.
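    Something along these lines, for example (a rough sketch only; the hostname and paths are assumptions, and you would want to test it carefully before trusting it):

    ```bash
    #!/bin/bash
    # Rough sketch of a docker-level sync between two servers.
    # Hostname and paths are assumptions; test before relying on it.
    set -e                                  # abort if any step fails

    DEST="root@omv2"                        # backup server, reachable via SSH keys
    DOCKER_ROOT="/var/lib/docker"           # default docker data root (volumes live here)

    systemctl stop docker.socket docker.service          # quiesce docker locally
    ssh "$DEST" systemctl stop docker.socket docker.service   # and on the backup server
    rsync -aAX --delete "$DOCKER_ROOT/" "$DEST:$DOCKER_ROOT/"
    ssh "$DEST" systemctl start docker.service
    systemctl start docker.service
    ```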

    • Official Post

    To sync data shares between your servers and to create a backup server, take a look at the remote mount doc. At the end of the doc, it will refer you to another (OMV5) document that discusses the ins and outs of creating a fully functional backup server that will be ready to go at a moment's notice.

    • Official Post

    notnormalnerd

    Note that the procedure outlined by crashtest will work for shared data folders, but it will not solve the synchronization of persistent container data and the docker volumes created by Nextcloud AIO in the docker folder. To synchronize that type of data you must stop docker, as I already said, and then start it again. That's why you would need a script that does the job automatically. Or do it manually.

  • notnormalnerd

    Note that the procedure outlined by crashtest will work for shared data folders, but it will not solve the synchronization of persistent container data and the docker volumes created by Nextcloud AIO in the docker folder. To synchronize that type of data you must stop docker, as I already said, and then start it again. That's why you would need a script that does the job automatically. Or do it manually.

    Hi chente


    I really appreciate the response. I did read it over and was trying to see what my next move would be.


    Would your backup routine help you recover from a hardware failure?


    The scenario I am planning for: let's say I have OMV-A, which is my main, everyday-use server, and OMV-B, which is the 'backup'. If there were a complete disaster on OMV-A and I had to get a new server and new hard drives, could I build that OMV-C, copy the Compose data from OMV-B onto OMV-C, spin up the containers, and have all my data?

    Ideally I would also have the ability to spin up the containers on OMV-B while OMV-C is being built.


    Maybe you have a point; maybe a sync is what I need.


    In all honesty I rarely use Nextcloud, so if it died I wouldn't be too bothered. The one I would really worry about is paperless-ng, as it holds so many important docs.

    • Official Post

    It all depends on the downtime you can afford and the need for redundancy in the data you have.

    In my case I only need redundancy for the documents, since my work documents are there; a ZFS RAID guarantees the integrity of that data and that I will not lose a day's work before the next backup is made.

    As for the rest, if the server fails I can accept some downtime. The worst scenarios would be a theft or fire at my house, with the main and backup servers disappearing at the same time, or a malware infection that leaves my data encrypted. In those cases I would have to recover data from a remote server where I also make backups. From the remote backup server I could redo everything as it was, accepting the time it takes to redo some configurations and recover the data from the backup.


    If you can't afford even five minutes of downtime, you need much more than this; we are beyond what is usually meant by a home server. Syncing all data does not protect you from malware, for example. For that you need versioned backups, and you would need to allow for restore time.

  • OK, thanks for your help, but I am confused as the plugin and docs seem to contradict each other.


    Here is a screenshot of the plugin's Settings. For the Shared Folder for Compose Files, the doc says to use appdata (the location of the compose files), and to use the shared folder data for Data (the location of persistent container data).

    But in the docs it says something different about the Data folder,

    and of the App Data folder it says THAT is the place for persistent data.


    Do you know, if I have a volume that is storing persistent data, whether it goes to Data or AppData?

    • Official Post

    I am confused as the plugin and docs seem to contradict each other.

    Well, there are differences of, I don't know what to call it, interpretation? between the one who programs the plugin, ryecoaaron, and the one who writes that document, me :)

    ryecoaaron wants that data folder to be used for persistent data, which is why the GUI labels it that way. The approach is a persistent-data folder independent of the compose folder, in which only the yml and env files reside.

    To make the plugin easier to understand, in the documents I use that folder to specify a path to a user data folder, and I make the persistent data end up in the compose (appdata) folder next to the yml and env files. I decided to explain it that way because it seemed simpler.

    It doesn't really matter. You can specify whatever path you want in that folder and use it for whatever type of data you want, or not use it at all and instead use absolute paths in the compose files, symlinks, or whatever you prefer.

    We could talk for hours about this because there are millions of possible configurations. If you have read the docs and understood how the plugin works (that was the goal), you should be able to use the plugin's tools to create your own arrangement of folders, paths, hard drives, etc., and use the CHANGE_TO_COMPOSE_DATA_PATH variable however you prefer.

    • Official Post

    The CHANGE_TO_COMPOSE_DATA_PATH variable was added to the plugin because the plugin's backup and restore didn't support environment variables in volume paths. Now that they do, it has less value beyond making the example compose files simpler. But as chente said, use what works for you. If it works and backs up, that is all that matters.
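    For illustration only, this is the kind of compose file the variable enables (a hypothetical stack; the service and mount path are invented, not taken from the docs):

    ```bash
    # Hypothetical illustration: a minimal stack whose persistent data
    # path is driven by CHANGE_TO_COMPOSE_DATA_PATH. Service and paths
    # are invented. The quoted heredoc keeps ${...} literal so compose,
    # not the shell, expands the variable.
    cat <<'EOF' > example.yml
    services:
      whoami:
        image: traefik/whoami
        volumes:
          - ${CHANGE_TO_COMPOSE_DATA_PATH}/whoami:/data
    EOF
    ```

    Moving the persistent data to another disk then only means changing the variable, not editing every volume line.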


  • OK, so early indications are that I have a simple backup plan; so far I have Gitea running from a backup. I will do more testing at the weekend, and if it all works I'll do a video to maybe help with an example.

    • Official Post

    the plugin and docs seem to contradict each other.

    I have modified the text in this part of the wiki document. I think that will avoid confusion.

  • So...my plan to test was

    Use the Compose Backup to create a backup set of files/folders on a share.

    Use Rsync to send that over to the backup share location on another server (see the sketch after this list).

    Use the Restore function on the destination server to restore from the folder, and then start up the container.
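    For the Rsync step, something as simple as this would do (the paths and hostname are examples, not my actual setup):

    ```bash
    # Sketch of the rsync step: mirror the plugin's backup folder to the
    # backup server. Source path and hostname are examples.
    rsync -a --delete /backup/compose/ root@omv-b:/backup/compose/
    ```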


    I fixed the UUID issue by using sed so that the vol.list files on the destination point to /srv/<UUID of dest> instead of the source server's path.
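    The sed fix-up looks something like this (the UUIDs are placeholders, and I'm assuming the vol.list files sit one per stack under the backup folder):

    ```bash
    # Rewrite the source disk's UUID to the destination disk's UUID in
    # every vol.list the backup produced. UUIDs are placeholders.
    SRC=0000aaaa-1111-2222-3333-444455556666
    DST=7777bbbb-8888-9999-aaaa-bbbbccccdddd
    sed -i "s|/srv/dev-disk-by-uuid-$SRC|/srv/dev-disk-by-uuid-$DST|g" \
        /backup/compose/*/vol.list
    ```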


    I am testing some recovered containers today, and I have a doc on the process; if it all works I am happy to send a PDF or markdown.


    Q: There is an option to Clear Cache on my Prod server in the Settings for Compose. What exactly is that doing?

    • Official Post

    There is an option to Clear Cache on my Prod server in the Settings for Compose. What exactly is that doing?

    With the 7.2 release of the Compose plugin, each tab is cached to make them faster. The Clear Cache button clears the cache for all tabs so that they reload fresh info.

