Backing up USB Installation

    • OMV 4.x
    • New

      shockwave wrote:

      What is the easiest way to remove the files that were copied to that drive?
      Popping in an OS backup?

      Other than that, probably WinSCP if you have it installed.
      ______________________________________________________________________

      Other than the folders ZFS1 and Dockerparms (highlighted, they're part of my setup on this particular server), the remaining top level directories in an OMV4 boot drive are standard. Look for what doesn't belong.
      **But you might want to back up your boot drive, in case you make a mistake when deleting files.**
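
      If it helps, a quick way to compare against a stock layout (a minimal sketch; run it on the server) is simply to list the root of the boot drive:

      Source Code

      ls -l /
      # A stock OMV4/Debian install has entries like bin, boot, dev, etc, home, lib,
      # media, mnt, opt, proc, root, run, sbin, sharedfolders, srv, sys, tmp, usr, var.
      # Anything else at / (for example a folder named after one of your data shares)
      # is a candidate for removal.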


    • New

      Thank you for the awesome instructions. I was able to figure it out and run the scheduled job, but I don't think it has finished successfully yet. I get a long error list each time. I am running it again at the moment.

      I decided to make this backup before I did the OS backup, so unfortunately I don't have a backup of the OS yet. If I am unable to get everything working, I may chalk it up to gaining experience and reinstall. At the moment, I only have Nextcloud and Jellyfin installed. They should be easier to install the second time around.

      It ended with an error again. I've pasted the output below minus the document names. My main RAID drive is only 376 GB and the drive I am trying to back up to is 2 TB (Seagate), but it only shows 74 MB on the Seagate drive. Meanwhile, the OS drive has now grown to 16.16 GB. I attached a screenshot. I am using the following command: rsync -av /srv/dev-disk-by-label-Raid1/ /srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/

      Source Code

      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export SHELL=/bin/sh; sudo --shell --non-interactive --user=root -- rsync -av /srv/dev-disk-by-label-Raid1/ /srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/ 2>&1' with exit code '11': sending incremental file list AppData/Nextcloud/log/nginx/access.log
      write failed on "/srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/Downloads/takeout-20190910T154349Z-001.zip": No space left on device (28) rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.2]
      #0 /usr/share/php/openmediavault/rpc/serviceabstract.inc(565): OMVRpcServiceCron->{closure}('/tmp/bgstatusSE...', '/tmp/bgoutputTk...')
      #1 /usr/share/openmediavault/engined/rpc/cron.inc(179): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
      #2 [internal function]: OMVRpcServiceCron->execute(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('execute', Array, Array)
      #5 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Cron', 'execute', Array, Array, 1)
      #6 {main}
      Images: openmediavault control panel (openmediavault.local.png, 25.42 kB, 1,367×233)
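
      For reference, a quick way to see which filesystem a destination path is actually sitting on (a sketch, using the same destination as the job above) is:

      Source Code

      df -h /srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/
      # If this reports the root filesystem rather than the 2 TB Seagate, the USB
      # drive isn't mounted there and rsync has been filling the OS drive instead.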


    • New

      shockwave wrote:

      /srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/
      This is wrong; the destination should use the same by-label pattern as the source. The drive hasn't been set up correctly for OMV: unmount it, go back to Disks and wipe it using a quick wipe, then back to File Systems, create, select the drive, give it a label, and format.
      I've just tested this with an external USB drive, giving it the label USB, and the mount is /srv/dev-disk-by-label-USB
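
      With the drive redone that way (a sketch, assuming the label USB as in the test above and keeping the same source as before), the job command becomes:

      Source Code

      rsync -av /srv/dev-disk-by-label-Raid1/ /srv/dev-disk-by-label-USB/
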
      Raid is not a backup! Would you go skydiving without a parachute?
    • New

      As @geaves has noted, you can avoid issues by doing a quick wipe, giving the drive a label, formatting it, and then altering the command line accordingly.

      Unfortunately, there's no way to watch the progress of the Rsync Job other than the scrolling file list. If you leave that screen or close the web page, the copy continues in the background, but you won't be able to pull the progress window up again. If you really want to confirm progress, or whether it's done, the easiest way is to reboot and run the job again. (With the switches applied, a reboot is safe.)

      Once the first bulk copy is done, follow-on jobs are fast.
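
      If you just want a rough idea of what's left (a sketch; use the same paths as in your own job), a dry run lists what rsync would still copy without transferring anything:

      Source Code

      rsync -avn /srv/dev-disk-by-label-Raid1/ /srv/dev-disk-by-label-USB/
      # -n (--dry-run) only reports the files that would be transferred;
      # an empty file list means the bulk copy has finished.
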
      ________________________________________________

      shockwave wrote:

      I may chalk it up to gaining experience and reinstall. At the moment, I only have Nextcloud and Jellyfin installed. They should be easier to install the second time around.
      This is a good idea. Any long-time user has rebuilt more than once. :) It's a learning process where, over time, users "arrive" at a good, clean configuration that works for what they want to do. At that point, OS backup becomes more important due to the increasing complexity of the setup. Working builds can become an "evolution". A complicated build can take several hours to recreate, and part of that is recalling/recreating how it was done in the first place. (Not to mention that an OS backup can bail out a user if they configure something that is *BAD*. :) )

      Give the backup sections a good read through. (Both OS and Data.) There may be a few bits of info in them you can use.
    • New

      geaves wrote:

      shockwave wrote:

      /srv/dev-disk-by-id-usb-Seagate_Ultra_Slim_MT_NA95j42B-0-0part1/
      This is wrong; the destination should use the same by-label pattern as the source. The drive hasn't been set up correctly for OMV: unmount it, go back to Disks and wipe it using a quick wipe, then back to File Systems, create, select the drive, give it a label, and format. I've just tested this with an external USB drive, giving it the label USB, and the mount is /srv/dev-disk-by-label-USB
      After doing this, I was able to do the backup. Is there a way to view/delete the extra files that ended up on my OS drive while in OMV, or do I need to shut down and stick the drive in my computer? My main OS is Manjaro.




      crashtest wrote:

      This is a good idea. Any long-time user has rebuilt more than once.

      It's a learning process where, over time, users "arrive" at a good, clean configuration that works for what they want to do. At that point, OS backup becomes more important due to the increasing complexity of the setup. Working builds can become an "evolution". A complicated build can take several hours to recreate, and part of that is recalling/recreating how it was done in the first place. (Not to mention that an OS backup can bail out a user if they configure something that is *BAD*.)
      Give the backup sections a good read through. (Both OS and Data.) There may be a few bits of info in them you can use.

      Thank you. It is certainly a learning process. Luckily, you tend to learn a lot when things go wrong. I think installation will go much faster when doing it a second time.

    • New

      I installed FileZilla on my computer (laptop) and tried to connect. So far it isn't working. Is there anything special I need to change? I enabled FTP access and left it on port 21. I get the following error:

      Source Code

      Connection timed out after 20 seconds of inactivity
      Error: Could not connect to server
      Status: Waiting to retry...
      Status: Connecting to 192.168.1.8:24...
      Response: fzSftp started, protocol_version=8
      I am running a VPN on this computer, but that hasn't caused an issue with anything else I have done.
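
      For what it's worth, a quick check of which ports the server is actually listening on (a sketch; run on the OMV box itself) looks like this:

      Source Code

      ss -tln
      # Look for the SSH port (22 by default, or whatever it was changed to)
      # and the FTP port (21) in the listening list.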
    • New

      Thank you. I got it to work. I realized that I was putting in the wrong port number: the one for FTP instead of SSH. Once I changed that, it worked.
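
      (If you ever want to double-check outside FileZilla, a command-line test like the sketch below works too; the username is a placeholder and the port should be whatever SSH is set to in OMV.)

      Source Code

      sftp -P 22 youruser@192.168.1.8
      # youruser is a placeholder; a successful login confirms the SFTP port.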

      Looking at the folders that were listed earlier, the sharedfolders directory seems to be there. I also have "dev-disk-by-label-Raid1", which contains all those same folders again that are in sharedfolders. Considering I keep all the shared folders on Raid1, which of those folders shouldn't be there?


    • New

      shockwave wrote:

      So as far as I can tell, it looks like my shared folders have been copied to my OS drive. They are supposed to be on the Raid1 drive, so they shouldn't show up in /, right?
      No, you should see them under /srv/dev-disk-by-label-Raid1 and under /sharedfolders, the sharedfolders directory being the 'bind'. If there is anything else under /, then it shouldn't be there, but any such folder under / will have the same name as the share you created.
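
      If you want to see those binds (a sketch; run on the server), something like this lists them:

      Source Code

      findmnt | grep sharedfolders
      # Each share should appear here as a mount under /sharedfolders (the bind);
      # a plain folder copied onto the OS drive would not show up in this list.
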
      Raid is not a backup! Would you go skydiving without a parachute?
    • New

      Sorry, I think my wording may have been kind of confusing. On the OS drive, I have a folder titled "dev-disk-by-label-Raid1". That is the one I can delete, correct? It is about 12 GB or so (probably about the size of what was accidentally copied there). All the same folders also show up in the sharedfolders directory, which is the same 366 GB that is on that drive.
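
      (A size check like the sketch below, run on the server, would confirm that before deleting anything; both paths appear in the listing further down.)

      Source Code

      du -sh /dev-disk-by-label-Raid1 /sharedfolders
      # The first path is the stray copy at the root of the OS drive;
      # the second is the real data reached through the shared-folder binds.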
    • New

      shockwave wrote:

      Sorry, I think my wording may have been kind of confusing. On the OS drive, I have a folder titled "dev-disk-by-label-Raid1". That is the one I can delete, correct?
      I hope not; the last thing I want to do is say yes and be wrong. Post the output of ls -l / (post it using </> on the menu bar, it makes it easier to read).
      Raid is not a backup! Would you go skydiving without a parachute?
    • New

      This is the list:

      Source Code

      total 92
      drwxr-xr-x 2 root root 4096 Sep 4 18:08 bin
      drwxr-xr-x 3 root root 4096 Sep 6 15:27 boot
      drwxr-xr-x 18 root root 3300 Sep 13 07:19 dev
      drwxr-xr-x 10 root root 4096 Sep 6 13:00 dev-disk-by-label-Raid1
      drwxrwxr-x 101 root root 4096 Sep 13 11:13 etc
      drwxr-xr-x 2 root root 4096 Apr 8 00:25 export
      drwxr-xr-x 3 root root 4096 Sep 4 18:15 home
      lrwxrwxrwx 1 root root 36 Sep 4 21:05 initrd.img -> boot/initrd.img-4.19.0-0.bpo.5-amd64
      lrwxrwxrwx 1 root root 36 Sep 4 18:02 initrd.img.old -> boot/initrd.img-4.19.0-0.bpo.4-amd64
      drwxr-xr-x 15 root root 4096 Sep 4 18:08 lib
      drwxr-xr-x 2 root root 4096 Sep 4 18:02 lib64
      drwx------ 2 root root 16384 Sep 4 18:02 lost+found
      drwxr-xr-x 3 root root 4096 Sep 4 18:03 media
      drwxr-xr-x 2 root root 4096 May 17 10:22 mnt
      drwxr-xr-x 3 root root 4096 Sep 4 21:41 opt
      dr-xr-xr-x 192 root root 0 Sep 4 20:56 proc
      drwx------ 3 root root 4096 Sep 4 22:23 root
      drwxr-xr-x 28 root root 1300 Sep 14 07:11 run
      drwxr-xr-x 2 root root 12288 Sep 6 23:04 sbin
      drwxr-xr-x 9 root root 4096 Sep 6 13:00 sharedfolders
      drwxr-xr-x 6 root root 4096 Sep 13 07:26 srv
      dr-xr-xr-x 13 root root 0 Sep 4 20:56 sys
      drwxrwxrwt 7 root root 340 Sep 14 07:09 tmp
      drwxr-xr-x 11 root root 4096 Sep 4 21:40 usr
      drwxr-xr-x 13 root root 4096 Sep 6 23:04 var
      lrwxrwxrwx 1 root root 33 Sep 4 21:05 vmlinuz -> boot/vmlinuz-4.19.0-0.bpo.5-amd64
      lrwxrwxrwx 1 root root 33 Sep 4 18:02 vmlinuz.old -> boot/vmlinuz-4.19.0-0.bpo.4-amd64
    • New

      shockwave wrote:

      This may be because I don't have a root account. Is there a way to run sudo within FileZilla?
      You'd be logging in as root through FileZilla. I use WinSCP on W10, which is similar to FileZilla, and that allows me to delete. You could try rm -r dev-disk-by-label-Raid1 from the command line; I wonder if it's the name that's the problem. Another option I use is Cloud Commander in Docker, which never fails.
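
      Since this install has no root login, the command would need sudo, and it's safest with the full path spelled out (a sketch; double-check the path before running it):

      Source Code

      sudo rm -r /dev-disk-by-label-Raid1
      # Note the leading slash: this removes only the stray copy at the root of the
      # OS drive, not /srv/dev-disk-by-label-Raid1 where the real data lives.
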
      Raid is not a backup! Would you go skydiving without a parachute?
    • New

      geaves wrote:

      you could try rm -r dev-disk-by-label-Raid1 from the command line
      This is what I eventually did.

      When I installed OMV, I decided to avoid the root account, so logging in as root wasn't an option for me. Now that the files are deleted, I will install Clonezilla and try to clone the USB drive that holds the OS.
    • New

      No go on cloning the USB drive holding the OS. Apparently the two new 32 GB drives I purchased are slightly smaller than the current 32 GB drive.

      Is there a way to make this work, or should I just scrap the current USB drive and install on one of the new ones? That would allow me to have two backups (the one I am currently using, and one of the ones I just purchased).
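
      If it helps, the exact drive sizes can be compared with something like the sketch below (device names will differ on your system):

      Source Code

      lsblk -b -o NAME,SIZE,MODEL
      # Clonezilla's disk-to-disk clone needs the destination to be at least as large
      # as the source, so even a small difference between "32 GB" sticks is enough to stop it.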