Hey, I need to perform a backup restore because my sd card died.
I tried to follow the restore steps, but I don't have some of the files listed there.
Here are all the files that I have:
Is it possible to perform the restore with these?
Thanks.
Unfortunately you did not choose the dd-full option.
Best way forward is to do a fresh install (without configuration) and then write the dd.gz file over the root partition.
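A rough sketch of that flow, with placeholder device names and paths (check lsblk carefully before writing anything; a wrong of= target will destroy the wrong disk):
# 1. Flash a stock Raspbian Lite image to the new card and boot it once.
# 2. Attach the card to another machine and identify its partitions:
lsblk
# 3. Stream the compressed backup onto the root partition (/dev/sdX2 is a placeholder):
gunzip -c /path/to/backup.dd.gz | sudo dd of=/dev/sdX2 bs=4M status=progress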
I think I somewhat tried that but it didn't work, maybe I didn't do it properly.
Does fresh install = just clean raspbian or clean omv installation?
I have installed a clean Raspbian Lite system, then put the SD card in my other RPi (through USB), and tried to run this:
Quote
sudo dd if=/mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.dd.gz of=/dev/sdd2 bs=512
dd: error writing '/dev/sdd2': No space left on device
1+0 records in
0+0 records out
0 bytes copied, 0.00530771 s, 0.0 kB/s
sdd 8:48 1 59.7G 0 disk
├─sdd1 8:49 1 512M 0 part
└─sdd2 8:50 1 2.1G 0 part
/dev/sdd1 should be boot partition
/dev/sdd2 should be root partition
(I guess...)
I tried to increase the partition size of sdd2 to 100% of the remaining storage, but still the same thing.
Any ideas?
Thanks.
Was the partition that was backed up only 2.1G? That is too small in my opinion for OMV, but the partition has to be the same size as the backup.
You have to extract the .gz file and can't write directly to the system. You also can't write it to a live system.
gunzip -c /mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.dd.gz | sudo dd of=/dev/sdd2 bs=512
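If the write goes through, it is worth checking the filesystem before trying to boot; a quick read-only check, assuming the same device names as above:
sudo partprobe /dev/sdd      # re-read the partition table
sudo fsck.ext4 -n /dev/sdd2  # check the root filesystem without changing anything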
(I guess...)
Have a look at the .blkid file
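For context, the .blkid file records the UUIDs, labels, and filesystem types of the original partitions, so comparing it with the new card shows what the restored fstab will expect. A sketch using the paths from this thread:
# what the original system had
cat /mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.blkid
# what the new card actually has
sudo blkid /dev/sdd1 /dev/sdd2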
Quote
sudo dd if=/mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.dd.gz of=/dev/sdd2 bs=512
You have to uncompress the file first and write it using dd.
That was a clean Raspbian Lite installation; I think the original system was a 64GB card with a 32GB root partition.
I tried to increase the root partition size to ~64GB but still had the same issue; does the size need to be exact?
I did extract the .gz file, and I didn't write it directly, maybe I didn't explain it properly.
I installed on an sd card a clean raspbian lite OS(64gb card), then put that sd card with a usb adapter on another rpi4 system and mounted the backup folder too, so the sd card I intended to write into wasn't live.
Quote
ls -la
total 39339978
drwxrwxrwx 1 root root 4096 Jan 4 12:43 .
drwxrwxrwx 1 root root 8192 Jan 3 22:07 ..
-rwxrwxrwx 2 root root 773 Feb 24 2024 backup-omv-2024-02-24_05-46-17.blkid
-rwxrwxrwx 2 root root 268435456 Feb 24 2024 backup-omv-2024-02-24_05-46-17_boot.dd
-rwxrwxrwx 2 root root 21686928 Feb 24 2024 backup-omv-2024-02-24_05-46-17_boot.dd.gz
-rwxrwxrwx 2 root root 31642877952 Feb 24 2024 backup-omv-2024-02-24_05-46-17.dd
-rwxrwxrwx 2 root root 8351110451 Feb 24 2024 backup-omv-2024-02-24_05-46-17.dd.gz
-rwxrwxrwx 2 root root 447 Feb 24 2024 backup-omv-2024-02-24_05-46-17.fdisk
-rwxrwxrwx 2 root root 446 Feb 24 2024 backup-omv-2024-02-24_05-46-17.grub
-rwxrwxrwx 2 root root 512 Feb 24 2024 backup-omv-2024-02-24_05-46-17.grubparts
-rwxrwxrwx 2 root root 918 Feb 24 2024 backup-omv-2024-02-24_05-46-17.packages
-rwxrwxrwx 2 root root 212 Feb 24 2024 backup-omv-2024-02-24_05-46-17.sfdisk
Quote
sdd 8:48 1 59.7G 0 disk
├─sdd1 8:49 1 512M 0 part
└─sdd2 8:50 1 59.2G 0 part
/dev/sdd is not mounted.
Have a look at the .blkid file
You have to uncompress the file first and write it using dd.
Yeah I tried with the file uncompressed too, but still the same error.
Quote
/dev/mmcblk0p1: LABEL_FATBOOT="bootfs" LABEL="bootfs" UUID="37CA-39EC" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="a34a2be1-01"
/dev/mmcblk0p2: LABEL="rootfs" UUID="a4af13c6-d165-4cbd-a9f6-c961fef8255d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="a34a2be1-02"
This is the content of the .blkid file.
Thank you both for trying to help
I tried to increase the root partition size to ~64GB but still had the same issue; does the size need to be exact?
I did extract the .gz file, and I didn't write it directly, maybe I didn't explain it properly.
It is best to make it the exact same size. You don't need to extract the .gz file; the command I gave you will do that. I don't understand why it isn't working, but if you have the .gz file extracted, you could mount it and rsync the files over the fresh install. Then the partition size won't matter.
Quote
Disk /dev/mmcblk0: 59.69 GiB, 64088965120 bytes, 125173760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa34a2be1
Device Boot Start End Sectors Size Id Type
/dev/mmcblk0p1 8192 532479 524288 256M c W95 FAT32 (LBA)
/dev/mmcblk0p2 532480 62333951 61801472 29.5G 83 Linux
This was in the .fdisk file; I tried to match the partition sizes:
Quote
sdd 8:48 1 59.7G 0 disk
├─sdd1 8:49 1 252M 0 part
└─sdd2 8:50 1 29.5G 0 part
But that still didn't work, same error
Quote
gunzip -c /mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.dd.gz | sudo dd of=/dev/sdd2 bs=512
dd: error writing '/dev/sdd2': No space left on device
1+0 records in
0+0 records out
0 bytes copied, 0.000506236 s, 0.0 kB/s
Would you mind giving some instructions on how to mount the dd file and what rsync command to use?
Just trying to make sure I use the right arguments for each command as things don't work out for me anyhow, better eliminate possible mistakes.
Thanks.
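Side note: instead of matching partition sizes by hand, the .sfdisk file saved alongside the backup should recreate the original table exactly, assuming it is a standard sfdisk dump (its small size suggests it is). This rewrites the card's partition table:
sudo sfdisk /dev/sdd < /mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.sfdisk
sudo partprobe /dev/sdd  # re-read the new table
lsblk /dev/sdd           # confirm the sizes match the .fdisk listing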
Something weird is going on if it can't write a single byte without telling you that there is no space left.
To mount the extracted file:
sudo mkdir /mnt/backup
sudo mount -o loop /mnt/sdc3/omvbackup/backup-omv-2024-02-24_05-46-17.dd /mnt/backup
then rsync -avr --progress /mnt/backup /path/to/mounted/sdd2
Just trying to make sure I use the right arguments for each command as things don't work out for me anyhow, better eliminate possible mistakes.
This is why dd-full is so much easier, but I'm not sure that would work either with the weird errors you are getting. It would be better to write the dd-full compressed image with usbimager.
Thanks! rsync seems to be running now.
Sadly, I never expected the system to die instantly like that without any symptoms, so my backups were never properly tested, and weren't really set up right in the first place.
Not sure how old my current 'backup' is, but we learn from mistakes, so I'll surely know how to set up and test my backups better next time.
Kinda ironic that my main setup is a NAS for 3-2-1 backups, but I didn't properly back up the system running it.
Oh well
Do I also need to rsync boot.dd to the boot partition?
Edit - rsync finished, got this output
sent 17,926,736,804 bytes received 3,556,784 bytes 16,733,825.09 bytes/sec
total size is 17,909,399,466 speedup is 1.00
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1338) [sender=3.2.7]
Looks like many things weren't moved properly, not sure why, any way to access some logs?
Ok so I managed to get 'some' progress, the rsync command should be like this:
rsync -avr --progress /mnt/backup/ /path/to/mounted/sdd2
with the '/' at the end; otherwise it would just copy the folder itself and not the content inside the folder.
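That matches rsync's documented behavior: a trailing slash on the source means "copy the contents", independently of recursion. A tiny illustration with hypothetical paths:
rsync -a /mnt/backup  /target   # creates /target/backup/... (copies the folder itself)
rsync -a /mnt/backup/ /target   # copies the folder's contents directly into /target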
I did manage to copy everything to the root partition, but it seems like that screwed things up and didn't work.
I then tried to rsync _boot.dd but that didn't help either.
I guess technically by having a backup of both root and boot I should be able to restore it, but just not sure how to do it properly, maybe some things are different and needs to be adjusted manually?
Thanks.
Do I also need to rsync boot.dd to the boot partition?
No, that won't work. If you reinstalled the OS and then rsync'd the backup over it, you shouldn't need to do anything with boot.
Looks like many things weren't moved properly, not sure why, any way to access some logs?
The reason is in the output. I couldn't tell you why or what wasn't moved. Most of the time it is a special file like the ones in /dev.
There is no log for rsync unless you send the output to a log. I have my ssh session set to have unlimited scrollback (you can do that in putty too), and then you would scroll up to look. If you don't have that, run rsync again. That is the great thing about rsync - it will only try the files that don't exist on the destination.
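For next time, rsync can also write its own log so the errors survive the session; a sketch with hypothetical paths:
sudo rsync -av --log-file=/root/restore.log /mnt/backup/ /path/to/mounted/sdd2/
# or keep the console output and a copy on disk:
sudo rsync -av /mnt/backup/ /path/to/mounted/sdd2/ 2>&1 | tee /root/restore.log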
with the '/' at the end; otherwise it would just copy the folder itself and not the content inside the folder.
That is incorrect. The r in -avr means recursive. I have used it in hundreds of scripts and my own personal backups for a couple of decades. That said, I wasn't trying to give you the exact command, just an example.
I then tried to rsync _boot.dd but that didn't help either.
I guess technically by having a backup of both root and boot I should be able to restore it, but just not sure how to do it properly, maybe some things are different and needs to be adjusted manually?
It is a pain in the ass. That is why it is so hard to document. I really don't like even trying to walk someone through it. That is why we tell people to reinstall and then just sync the OS dd.
By 'sync the os dd', do you mean the rsync part? Or some other sync through omv webui itself? Or just manually copying files from the mounted dd?
Also, not sure if different Raspbian versions could make a difference with my backup, which is a few months old.
Tbh I think I'm getting somewhat close; most of the issues were nuances or partitions that I needed to play with.
I always started with a clean Raspbian Lite; should I try to start with a clean OMV setup? The backup is of OMV6, and the latest would be OMV7 it seems, so not entirely sure about that as well.
Thanks.
The backup is of OMV6, and the latest would be OMV7 it seems, so not entirely sure about that as well.
If your backup is from OMV6, you should use Raspbian Lite Bullseye for the initial installation, especially when using the rsync path.
Thanks, I'll try that
I did manage to get some progress and actually got it to work, but I'm getting stuff like this:
apt
apt: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /lib/aarch64-linux-gnu/libstdc++.so.6)
Likely due to the system itself or something like that, so I'll give this a go.
By 'sync the os dd', do you mean the rsync part? Or some other sync through omv webui itself? Or just manually copying files from the mounted dd?
I meant restoring the dd backup whether that is by using dd or mounting/rsync.
I did manage to get some progress and actually got it to work, but I'm getting stuff like this:
The rsync command should've had the --delete flag.
I meant restoring the dd backup whether that is by using dd or mounting/rsync.
The rsync command should've had the --delete flag.
I tried to understand what the --delete flag does, but not sure what would be the actual difference in this case, would you mind explaining?
Also, I managed to get the system to work almost completely by using the latest Bullseye image as macom suggested; then I just had to modify fstab to match the updated partitions, which worked and booted properly.
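For anyone retracing that step, the idea is to point the restored system at the new card's identifiers; a sketch with hypothetical mount points:
sudo blkid /dev/sdd1 /dev/sdd2       # note the new PARTUUIDs/UUIDs
sudo nano /mnt/newroot/etc/fstab     # make the boot and root entries match
sudo nano /mnt/newboot/cmdline.txt   # on a Pi, root=PARTUUID=... lives here too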
OMV now runs fine, as well as all my shares and most of my services, but my main issue is with docker, which isn't running, and reinstalling doesn't seem to help.
I wonder if my backup was somehow partially corrupted. For example, one of my containers had this for the config.v2.json:
Others seemed ok besides this single file.
systemctl status docker -l
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─waitAllMounts.conf
Active: failed (Result: exit-code) since Sun 2025-01-05 12:17:34 IST; 28s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Process: 23131 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=2)
Main PID: 23131 (code=exited, status=2)
CPU: 1.188s
Jan 05 12:17:34 pi systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jan 05 12:17:34 pi systemd[1]: Stopped Docker Application Container Engine.
Jan 05 12:17:34 pi systemd[1]: docker.service: Consumed 1.188s CPU time.
Jan 05 12:17:34 pi systemd[1]: docker.service: Start request repeated too quickly.
Jan 05 12:17:34 pi systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 05 12:17:34 pi systemd[1]: Failed to start Docker Application Container Engine.
Jan 05 12:17:45 pi systemd[1]: docker.service: Start request repeated too quickly.
Jan 05 12:17:45 pi systemd[1]: docker.service: Failed with result 'exit-code'.
Jan 05 12:17:45 pi systemd[1]: Failed to start Docker Application Container Engine.
journalctl -xe
░░
░░ A start job for unit crowdsec-firewall-bouncer.service has finished successfully.
░░
░░ The job identifier is 94554.
Jan 05 16:44:12 pi sshd[73333]: Connection closed by authenticating user root 117.71.57.114 port 53132 [preauth]
Jan 05 16:44:13 pi crowdsec-firewall-bouncer[73349]: time="2025-01-05T16:44:13+02:00" level=fatal msg="process terminated >
Jan 05 16:44:13 pi systemd[1]: crowdsec-firewall-bouncer.service: Main process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit crowdsec-firewall-bouncer.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Jan 05 16:44:13 pi systemd[1]: crowdsec-firewall-bouncer.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit crowdsec-firewall-bouncer.service has entered the 'failed' state with result 'exit-code'.
Jan 05 16:44:13 pi systemd[1]: crowdsec-firewall-bouncer.service: Scheduled restart job, restart counter is at 29.
░░ Subject: Automatic restarting of a unit has been scheduled
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ Automatic restarting of the unit crowdsec-firewall-bouncer.service has been scheduled, as the result for
░░ the configured Restart= setting for the unit.
Jan 05 16:44:13 pi systemd[1]: Stopped The firewall bouncer for CrowdSec.
░░ Subject: A stop job for unit crowdsec-firewall-bouncer.service has finished
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A stop job for unit crowdsec-firewall-bouncer.service has finished.
░░
░░ The job identifier is 94737 and the job result is done.
Jan 05 16:44:13 pi systemd[1]: Starting The firewall bouncer for CrowdSec...
░░ Subject: A start job for unit crowdsec-firewall-bouncer.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit crowdsec-firewall-bouncer.service has begun execution.
░░
░░ The job identifier is 94737.
Jan 05 16:44:13 pi sshd[73399]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=11>
Jan 05 16:44:16 pi sshd[73399]: Failed password for root from 117.71.57.114 port 56514 ssh2
Jan 05 16:44:17 pi sshd[73399]: Connection closed by authenticating user root 117.71.57.114 port 56514 [preauth]
Jan 05 16:44:19 pi sshd[73465]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=11
It seems like it could somehow be related to crowdsec-firewall-bouncer.service, though I'm not really sure why it would be.
Do you think trying again with the --delete flag should change or improve anything?
Thanks.
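One way to get past systemd's restart loop and see why dockerd exits with status=2 is to run the daemon once in the foreground; a debugging sketch (the containers path is Docker's default layout, and the container id is a placeholder):
sudo dockerd --debug   # prints the actual startup error to the console
# if a corrupted config.v2.json is the cause, moving that container's
# directory aside often lets the daemon start again:
sudo mv /var/lib/docker/containers/<container-id> /root/broken-container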
I tried to understand what the --delete flag does, but not sure what would be the actual difference in this case, would you mind explaining?
Many, many files could potentially be left over from the standard install that might cause configuration issues. --delete will delete anything on the destination that doesn't exist in the backup.
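Put together, a rerun with --delete might look like this (paths as used earlier in the thread; -H, -A, and -X preserve hard links, ACLs, and extended attributes, which matter for an OS copy; a sketch, not a tested command):
sudo rsync -aHAX --delete --info=progress2 /mnt/backup/ /path/to/mounted/sdd2/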
Do you think trying again with the --delete flag should change or improve anything?
Hard to say. Try it. Since you have a backup, you can always try it again.
Many, many files could potentially be left over from the standard install that might cause configuration issues. --delete will delete anything on the destination that doesn't exist in the backup.
Hard to say. Try it. Since you have a backup, you can always try it again.
Alright, will try it now and see.
Any idea about the potentially corrupted file?
This is the only such file I've seen, but no idea if there are more, though most things do work, so I guess the majority is fine.
Not sure if there is a way to test how many of these files there are.