Migrating data from server to server

  • Hello,


    I would like to transfer my data from my old OMV4 server to a new OMV4 server.


    1) What is the best way to confirm that the copy process completed without failures?


    2) Is it meaningful to use rsync with the checksum option on a second rsync run to compare the data on the
    old and new server for validation/corruption detection?


    3) Which is a good way to handle this: the OMV CLI or GUI, or a Linux desktop with e.g. grsync?


    4) What do I have to consider for a smooth migration?


    Thank you very much for your answers!

  • Since you seem to care about data integrity: in case you're already using ZFS or btrfs and don't plan to change your choice, I would create a snapshot at the source and then simply do a zfs|btrfs send/receive to transfer the snapshot to your new server (there are tools that help with this).
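
    A minimal sketch of that snapshot route for ZFS, assuming a dataset named tank/data and a destination host newserver (both placeholders; btrfs has the analogous btrfs send | btrfs receive):

        # create a read-only snapshot of the source dataset
        zfs snapshot tank/data@migration

        # stream it over SSH and receive it on the new server
        # (the target dataset must not already exist there)
        zfs send tank/data@migration | ssh root@newserver zfs receive tank/data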


    If you're using one of the old/anachronistic filesystems, I would use rsync. Since your sync will most probably be bottlenecked by the CPU (single-threaded transfers and rather strong ciphers used by default), I would recommend setting up rsyncd: https://www.jveweb.net/en/arch…ng-rsync-as-a-daemon.html
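
    A minimal sketch of such an rsyncd setup on the destination (module name and path are made up for illustration; the linked article covers the details):

        # /etc/rsyncd.conf on the new server: one module exporting the data share
        [data]
            path = /srv/data
            read only = no
            uid = root
            gid = root

        # start the daemon on the destination, then push from the source;
        # note the double colon: no SSH, hence no encryption overhead
        rsync --daemon
        rsync -av --progress /srv/data/ newserver::data/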


    When not using rsyncd I would try to use the weakest cipher possible (less CPU utilization and, if the CPU is the bottleneck, higher transfer speeds); usually that's arcfour. And as a last step I would stop all daemons on both source and destination and re-run the rsync, just to ensure you end up with consistent data on both servers.
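
    For the SSH route that could look like this (a sketch; paths and hostname are placeholders, and arcfour is only available in older OpenSSH builds):

        # bulk copy with a cheap cipher to keep the CPU out of the way
        rsync -av --progress -e "ssh -c arcfour" /srv/data/ root@newserver:/srv/data/

        # final pass after stopping all daemons on both sides:
        # -c re-checksums every file, --delete removes stray files on the target
        rsync -avc --delete -e "ssh -c arcfour" /srv/data/ root@newserver:/srv/data/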

    • Official Post

    I assume that you back up your data.


    Then the most efficient way to copy the data might be to restore a recent backup of your data to the new server. It is a great way to verify that your backup system is good. If you use removable backup media, for instance over USB3, then you will also get fast local file transfers. It is a good idea to checksum the data. One simple way to verify that two folders have the same content is to calculate checksums. You can install md5deep, which can calculate checksums for all files and subfolders in one run.
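
    A sketch of how md5deep could be used for that comparison (paths are placeholders; -l records relative paths so the lists are comparable across machines):

        # on the source: recursively hash every file under the data folder
        cd /srv/data && md5deep -r -l . > /tmp/source.md5

        # on the copy: print only files whose hash is NOT in the source list
        cd /srv/data && md5deep -r -l -x /tmp/source.md5 .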


    When copying a lot of files, rsync is great. If the transfer is not completed, you can resume without having to transfer all the files from the beginning. It can also checksum each file transfer directly, and it works fine over the network.
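
    A typical invocation for such a resumable bulk copy might look like this (a sketch, with placeholder paths):

        # -a preserves permissions/ownership/times; --partial keeps
        # half-transferred files so an interrupted run can resume
        rsync -av --partial --progress /srv/data/ root@newserver:/srv/data/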


    I use a GUI to consume files: reading, watching, listening. But I prefer to use neither a GUI nor a CLI to handle backups or other routine tasks. Instead I try to script and automate.

  • Thanks for your answers!


    My concern: transfer (a one-time relocation, not a backup) of about 1 TB of small files from my old OMV3 (Pentium G840, 4 GB RAM, ext4) to my new OMV4 (Pentium G4600, 2 cores + HT, 16 GB RAM, ZFS).


    Another idea: transfer from the old to the new OMV server with the help of my workstation (Cinnamon 19.1, Xeon E3-2140, 4 cores + HT, 16 GB RAM) using rsync with the checksum option (all 1 Gb/s NICs). The hardware bottleneck on my OMV machines should then not be a problem, because the checksum calculations are done on my workstation? Is this right? Or is single-threading also a problem there?


    How does the rsync checksum option work? Is a checksum needed for the initial copy to the empty filesystem? Or
    is checksumming only done after the copy process has finished, with a new rsync run comparing both locations?


    A transfer with the help of USB3 and md5deep or similar could be an easy method; I will think about this too :)

  • The hardware bottleneck on my OMV machines should then not be a problem, because the checksum calculations are done on my workstation? Is this right?

    Nope. Checksumming is fast, and here the bottleneck usually is local storage (which is hopefully faster than Gigabit Ethernet, so in the end the network should be the bottleneck). What bottlenecks rsync in 'normal' mode (using SSH between two hosts) is encryption/decryption: a single-threaded task, with a strong cipher negotiated by default. That's why I talked about setting up rsyncd, or at least switching to the weakest cipher possible.


    Just give it a try: start a sync and look at the MB/s you're achieving. With such sync jobs (sometimes with an Apple machine in between, if it's about transferring Mac file shares between incompatible server implementations) we have never managed to saturate Gigabit Ethernet so far. Usually I fire up 2 or 3 partial rsyncs in parallel, since the involved servers typically have a couple of otherwise unutilized CPU cores and rather fast storage...
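
    One simple way to run such partial syncs in parallel is to split the job per top-level folder (a sketch; paths, host and the parallelism of 3 are assumptions):

        # one rsync per top-level directory, at most three running at once
        find /srv/data -mindepth 1 -maxdepth 1 -type d -print0 \
            | xargs -0 -P3 -I{} rsync -a {} root@newserver:/srv/data/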

    • Official Post

    I'd use rsync on local media if possible, over USB3 or plugged into SATA. If that is not possible, rsync over a fast network connection.


    Then I'd just install and use md5deep to verify that the checksums for the original folder tree are the same as for the copy. You run md5deep locally, and that is as fast as the HDD read speed allows, typically much faster than the copy speed, especially if the copy is over a network connection.


    If you don't get the same checksums, you can test and verify subfolders and copy over the parts that don't match. I suspect that more than 999 times out of 1000 the full copy will be error-free the first time. Digital file transfers are usually very good, unless they are done using faulty or marginal equipment. Or, if there are some bad files, they were bad in the originals from the start. So you might not even bother to checksum at all. Just check the number of files and the total size. I suspect that is what most people do, unless it is extra special data that is irreplaceable.


    It is possible to run md5sum (or similar checksum utilities) from a script, generate a list of checksums for individual files and compare that with the original. But it is much faster and easier to work with whole subfolders using md5deep. There are plenty of checksum methods other than md5, and for some purposes they might be better. But for this purpose md5 should be good enough and pretty fast.
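
    The scripted md5sum variant could look like this (a sketch; it assumes identical relative paths on both sides):

        # generate a sorted per-file checksum list on each machine...
        cd /srv/data && find . -type f -print0 | sort -z | xargs -0 md5sum > /tmp/checksums.txt

        # ...then compare the two lists; no output means the trees match
        diff old-checksums.txt new-checksums.txt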

  • I solved the migration from my old OMV2 to my new OMV4 with the following solution:


    1) On the workstation: copy the data from OMV2 to OMV4.
    2) On OMV4: run a SnapRAID sync on this data.
    3) On the workstation with the QuickHash GUI: compare the two folders (folder 1: OMV2, folder 2: OMV4). Result: folders match!


    Now there should be no errors introduced by the migration; the data passed the process and is safe.

  • Now, after another large data transfer over OMV2:nfs → desktop → OMV4:cifs, errors were found with the QuickHash GUI.
    My new solution:
    1) Copy all disks from the OMV2, locally connected to the OMV4, with rsync via the OMV web interface.
    2) Do another rsync run with the -c (checksum) option to verify the previously transferred source against the target data (see the sketch below). Result: data match.
    3) Just for fun: check the source and target data with the md5deep hint from Adoby on the CLI. Result: data match ;)
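
    A sketch of what that verification run in step 2 can look like with both disks mounted locally (mount points are placeholders):

        # -c compares full checksums instead of size/mtime;
        # -n (dry run) only reports differences without copying anything
        rsync -avcn /srv/old-disk/ /srv/new-disk/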

  • Hi all

    Getting an error when using rsync to push from OMV 5.4.7-1 to a QNAP NAS.

    In OMV, under Rsync, Job, Extra options, I have: -e "ssh -T -c arcfour -o Compression=no -x" --exclude-from='/root/rsync-excludes'

    I'm not sure if I have the syntax right, but the error is about the SSH cipher. The error reads as follows:

    Unknown cipher type 'arcfour'

    Is the arcfour cipher not in OMV?
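
    OpenSSH removed arcfour in release 7.6, so on a current system the SSH client simply no longer knows that cipher. One way to check which ciphers are still available, plus a hypothetical replacement line with a cheap AES cipher (assuming nothing about your exact setup):

        # list the ciphers this OpenSSH client still supports
        ssh -Q cipher

        # hypothetical extra options with arcfour swapped for aes128-ctr
        -e "ssh -T -c aes128-ctr -o Compression=no -x" --exclude-from='/root/rsync-excludes'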
