Rsync frustration of doing the simplest of things

  • I need help, I just want to break it in half... I've been trying every possible combination of rsync for 4 hours and have already read the rsync parameters, which caused way more chaos instead, especially the part saying


    Code
    rsync -rtv source_folder/ destination_folder/
    
    
    In the source_folder, notice that I added a slash at the end. Doing this prevents a new folder from being created; if we don't add the slash, a new folder named after the source folder will be created in the destination folder. So, if you want to copy the contents of a folder called Pictures into an existing folder which is also called Pictures but in a different location, you need to add the trailing slash; otherwise, a folder called Pictures is created inside the Pictures folder that we specify as the destination.


    The process I am trying to accomplish is below:


    I have created in OMV a shared folder called PCdata. Inside, it's empty. I connected the USB stick, mounted it, and created a shared folder with it, because I want to transfer the damned folder called Hamilton. Afterwards I went to the Rsync command, chose hamilt [on New Volume, Hamilton/] as source and PCdata [on /dev/sdc1, PCdata/] as destination, and started the procedure. It copied all the Hamilton folder's files to the destination, but without creating the Hamilton folder itself. What extra option or command is missing to first create the folder and then the files inside it?

    • Official Post

    Just test different variants.


    Assume this starting state every time:


    After this command:
    rsync -a ~/Skrivbord/rsync_test/source ~/Skrivbord/rsync_test/destination
    ... you get:


    After this command:
    rsync -a ~/Skrivbord/rsync_test/source/ ~/Skrivbord/rsync_test/destination
    ... you get:



    After this command:
    rsync -a ~/Skrivbord/rsync_test/source/ ~/Skrivbord/rsync_test/destination/
    ... you get:



    After this command:
    rsync -a ~/Skrivbord/rsync_test/source ~/Skrivbord/rsync_test/destination/
    ... you get:

  • Just test different variants.

    First of all, thank you for still trying to help. As you'll notice in a moment from the screenshot below, at least in the GUI you don't have an option to exclude that / at the end of the source path, because it is created automatically during shared-folder creation. Also, it's a drop-down list and you just choose what you have shared.



    The only hope is that extra command field, but then again, how do I force it to remove the slash at the end of the source path? (I think it's mostly there for entering parameters and not the rsync command itself.)




    If I try through the CLI, how am I supposed to address the path of the external USB stick and the internal HDD containing the shared folder? (sdc1 is for the destination folder, logically /dev/sdc1, and sdd1 is for the external USB drive, which is the source.)


    I tried these two lines from the CLI, but both failed with a "no such directory" error:


    rsync -a ~/hamilt/Hamilton ~/PCdata/PCdata


    rsync -a ~/dev/sdd1/Hamilton ~/dev/sdc1/PCdata

  • @Adoby Well, I figured it out, but I was laughing at the length of the line and the number of times I have to execute it (236 times, once per music genre) in order to copy it folder by folder and check the outcome each time.


    What helped me find out how to address the HDDs (internal + external) was the command lsblk, which showed me the mount points of the drives (I was trying to point at them directly with /dev/sdd1 and /dev/sdc1, and it doesn't work that way; I can't say why, but it doesn't), and I finally came up with this small line:


    Code
    rsync -a /srv/dev-disk-by-label-New*/Hamilton /srv/dev-disk-by-id-ata-WDC_WD40PURZ-85TTDY0_WD-WCC7K1EF7H4A-part1/PCdata

    By pressing Enter, magic happened. Hamilton was at last created at the root of the destination shared folder, with the files inside it, and not just plain files inside the root shared folder.
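As an aside on the lsblk step above: lsblk can print the mount points directly, and findmnt maps between device nodes and mount paths (shown here on the root filesystem, since the device names differ per machine):

```shell
# List block devices with their labels and mount points
lsblk -o NAME,LABEL,MOUNTPOINT

# Resolve a mounted path back to the device behind it, e.g. the root fs
findmnt -n -o SOURCE,TARGET /
```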



    PS: I tried to paste the above command line into that extra options GUI field, and the result was to have both the Hamilton folder with the files inside it and the files outside it, with multiple errors.


    It would be nice at least not to have to type dev-disk-by-id-ata-WDC_WD40PURZ-85TTDY0_WD-WCC7K1EF7H4A-part1 each time, but I don't know if that's possible. (I don't know, @Adoby, if that ~ symbol would make a difference if placed at the right spot.)


    Short story: YOU CAN'T DO THIS SIMPLE thing from the GUI - NICE!!!!

    • Official Post

    I don't use the GUI. It seems that the GUI is designed to work with shared folders. I typically work with files and folders inside a share. Smaller parts of a shared folder.


    The GUI is great for doing things that the GUI is designed to do. But you must know what it is designed to do.


    I SSH in to an OMV server and run an rsync command from the command line. Or copy the files over using Midnight Commander.



    Or I write a script with a prepared rsync command and add it to crontab, from the command line in an SSH console window, so that the server runs the script repeatedly, every day. Or however often I like.


    The command line is great for doing things that YOU want to do. But you must know what you want to do and how it is done. Or know how to look it up or figure it out. I usually prefer the command line. I only use the GUI to handle stuff that I know it is designed to do and that I also want to have done. But only if it is easier than doing it from the command line. Or if it means that I don't have to look up how it is done. Again.


    The first few times you do something, it is always much more difficult from the command line. But after that it is usually easier. Especially if you write it down in a script; then you just have to remember that there is a script. I have many scripts for weird and wonderful jobs. Too many scripts. I swim in scripts. I sometimes write the same script again because I don't remember that I wrote it before, perhaps a year ago. I figure it out when the best name for the script is already taken...


    I sometimes write a program to do the job. Perhaps even with a GUI. But more often as a command line command. Typically it is a bash script that I want to run much faster or slightly differently from how I can do it on the command line. Or it may even be something that is impossible to do from the command line.

  • It's nice to have an extra way of doing things, but it's 2019 and OMV has many freeware competitors who offer this simple thing: being able to choose the parent directory instead of just syncing between folders. I think it's a mandatory need, and I was able to do it with Unraid. I didn't give Syncthing a try though. Maybe it can include the folder itself as well as the data inside it, from the GUI.


    Since you are into the CLI: why did I find the HDDs (internal and external) under /srv and not /dev? Why did the rsync command fail with the /dev path but work with /srv? As far as I know, the purpose of /srv is data for services provided by this system;
    /srv contains site-specific data which is served by this system.
    Also, is there a way not to address my external HDD as dev-disk-by-label-New* and use something like sdd1, which is also recognised by the OS? I have to make the CLI line more elegant, because it seems I'll be using it many, many times.

    • Official Post

    Under /srv you find stuff that OMV has mounted: local and remote volumes. An external drive is the same as a local volume; OMV has no way of knowing whether a drive is inside or outside the case, or even whether there is a case. /srv can be seen as the base for stuff that is mounted automatically, /mnt for stuff that is mounted by you from the command line, and /media for stuff that is mounted dynamically when it is plugged in, for instance over USB. But these are just conventions. You have to look.


    Under /dev you find devices that Linux has detected. These are raw block devices, not mounted filesystems, which is why a path like /dev/sdd1/Hamilton doesn't work: you have to go through the mount point instead.


    Under /exports you find filesystems that are shared by some service.


    Under /sharedfolders you find shares mounted that OMV use.


    If possible, I think it would be best if you access local files and folders on the OMV server through the /sharedfolders path. That is what I do, and it works fine. And that way I don't have to mess with strange names and UUIDs to write paths.


    For instance:


    /sharedfolders/nas0/sharedmedia/movies


    That is where I store movies on my OMV server nas0.


    Remote volumes are very different. I don't like how it works in OMV. There is a plugin for remote shares. So instead I have installed and configured autofs in combination with nfs. It makes the remote mounts pop up automagically when they are needed. I have them pop up under /srv/nfs. Like rabbits in a hat. You could use SMB or SSH to access remote shares instead. But I prefer nfs and autofs. I started using that long before I started using OMV. The way I do it is a bit old-school. There are supposedly better alternatives to autofs available now. But I haven't seen a good tutorial on it yet. And I am comfortable with autofs, so...


    So any of my OMV servers can freely access any share on any other OMV server. Great for automated backups, using rsync snapshots, between the OMV servers.


    For instance:


    /srv/nfs/nas0/sharedmedia/movies


    That is where I store movies on my OMV server nas0. But accessed over the network from another OMV server.


    So if I want to make a remote copy of something on one OMV server to another I can do this:


    cp -a /srv/nfs/nas0/sharedmedia/movies /sharedfolders/nas3/backups/nas0/sharedmedia/movies/20190411T115900/


    As you can see this is run on nas3. And it creates a timestamped backup of movies from nas0 on nas3.


    I typically use rsync and a different syntax to have rsync hardlink unchanged files from an earlier snapshot.

  • Well, since I haven't yet started the transfer of files, I finally came up with the line below, @Adoby


    Code
    rsync -rtvu --progress --stats /srv/dev-disk-by-label-New*/test /srv/dev-disk-by-id-ata-WDC_WD40PURZ-85TTDY0_WD-WCC7K1EF7H4A-part1/PCdata

    What I noticed though is that if I run the command in Wetty, it locks the destination files, since they are copied as root (I can't change user with Wetty; even if I su -u jim it drops me to a plain $ prompt which doesn't recognise any command). With PuTTY, since I have to choose the user to log in as, the copy ends up with both read and modify access to the files afterwards.


    I then came up with the command



    Code
    rsync -avuc --progress --stats /srv/dev-disk-by-label-New*/test /srv/dev-disk-by-id-ata-WDC_WD40PURZ-85TTDY0_WD-WCC7K1EF7H4A-part1/PCdata

    which leads to a successful copy, again with the extra benefit of being able to change my files even though the command runs from a root account.


    I think the -a parameter contains both (r)ecursive and (t)imestamps, with the addition of symlinks (I don't even know what those are).
    I also added -c (checksum) for better verification, if any...


    What do you think? Is the above command OK for the transfer I need? (A 1-to-1 copy, with all properties and tags of the source music inherited, but without permissions; I don't need them, since the main folder where they can be accessed requires credentials to enter.)

  • I wouldn't use wildcards (*). But it might be OK?

    I used it because the label is NEW VOLUME and I don't know how to address the space between the words. I've been doing the same on Windows for years, instead of typing Documents and Settings, for instance. If you know a better way, please feel free to share.



    I would most likely just use Midnight Commander in an SSH screen session. And perhaps calculate and compare checksums for source and destination afterwards.

    I use CloudCMD; I think it's the same thing. If by compare checksums you mean right-clicking the properties of the main folder, that would differ, since the source is in NTFS format and the destination in ext4. Or is there a way to produce that file somehow inside Midnight Commander?

  • First of all, avoid spaces in file, volume and folder names. A space is a separation character for arguments. Use my_name or myName instead, to keep it readable.
    However, you can escape it by writing my\ name.

  • Checksums are independent of the file system, as long as you compare them at this layer, not on the physical drive. You can use md5sum for comparison. There are a lot of scripts on the internet for recursively checking all the contents of folders.
    I never used Midnight Commander, so I can't help you there.
    If you use rsync, you can start the exact same copy again with the --checksum flag, which will compare the checksums of all files (normally it just looks at timestamp and size).

  • First of all, avoid spaces in file, volume and folder names. A space is a separation character for arguments. Use my_name or myName instead, to keep it readable.
    However, you can escape it by writing my\ name.

    Thanks for dropping by... I didn't use it by choice; it's just how I found the HDD to be named by lsblk. You probably mean that afterwards, if I'm going to mount this HDD somewhere, I shouldn't use spaces (of course I wouldn't), but since I don't quite understand how it works, I referenced it the way I found it. "my\ name" on its own isn't quite informative. For my example below, how should I reference it?
    /srv/dev-disk-by-label-New Volume
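Both spellings work for that path: quote the whole thing, as "/srv/dev-disk-by-label-New Volume", or backslash-escape just the space, as /srv/dev-disk-by-label-New\ Volume. A throwaway demonstration:

```shell
# A directory whose name contains a space, like the "New Volume" label
base=$(mktemp -d)
mkdir "$base/New Volume"
echo hi > "$base/New Volume/file.txt"

# Either quote the whole path...
ls "$base/New Volume"

# ...or escape only the space with a backslash
# (mktemp paths contain no spaces, so $base itself needs no quoting here)
ls $base/New\ Volume
```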

    • Official Post

    Maybe Syncthing or BTSync would be more to your liking? Both have Docker containers available. I tested Syncthing and it's pretty easy to set up. I was gonna use it in place of rsync, but ultimately decided it was just as simple for me to use rsync.


    Personally, I love rsync and have had no problem with it, other than years ago when I was figuring out scheduling jobs... Once I got that awkward process down it was easy peasy, but I don't use any triggers. Everything in the source folder gets sync'd to the destination folder.

  • The upside of escaping is that auto-completion still works, while with quoted strings it does not always. In other situations, I use quoted strings too.



    Is there a how-to for the above procedure?

    Very first google result: https://www.unixtutorial.org/u…aring-directories-in-unix

  • Maybe Syncthing or BTSync would be more to your liking?

    I used it with Unraid, but there's no point for me, since I need the parent folder to be copied as well. Imagine having to create folders for each music genre and sub-genre I have created in my database over the years.

    Very first google result: unixtutorial.org/using-md5deep…aring-directories-in-unix

    Thanks... it's not that I can't be bothered to search, but sometimes you need the right way of phrasing what you're looking for in order to get the correct results... I just needed verification that I'm reading the right stuff.

  • Very first google result: unixtutorial.org/using-md5deep…aring-directories-in-unix

    At the point where I have to type md5deep -r -s /dir1 > dir1sums in order for the file to be created, I don't see any file being created inside the directory.


    My line was:


    Code
    md5deep -r -s "/srv/dev-disk-by-label-Music Data Base/Celtic_Folk" dir1sums

    Obviously, even though it runs without errors, I am addressing something wrong in the command, but what? Is there a file created after this command, or does it stay in RAM?
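The missing piece in the line above is the > redirection: written as md5deep -r -s "/dir" dir1sums, the word dir1sums is just a second thing to scan, not an output file. And even with the >, the file lands in whatever directory you ran the command from, not inside the scanned folder, which is why nothing appears there. The corrected shape would be md5deep -r "/srv/dev-disk-by-label-Music Data Base/Celtic_Folk" > dir1sums (-s only silences error messages). The same kind of checksum list can also be produced with plain coreutils, sketched here with invented throwaway files:

```shell
# Build a tiny stand-in for the music folder
base=$(mktemp -d)
mkdir -p "$base/Celtic_Folk"
echo jig > "$base/Celtic_Folk/jig.mp3"

# Recursive checksum list; the > writes dir1sums into the *current* directory
cd "$base"
find "Celtic_Folk" -type f -exec md5sum {} + > dir1sums

cat dir1sums   # one "hash  path" line per file
```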
