How to restore OMV 4.X from backup-plugin to system SSD

• The non-ddfull options do not back up a large enough chunk for UEFI or the RPi

Is there any way to back up the missing chunks for UEFI manually? I'm using an x86 motherboard, not an RPi.


I assume you are using a large OS disk and putting data on it, and that is why you don't want to use ddfull?

Yes, my SSD is 240GB, so a ddfull image occupies around 190GB and takes a lot of time to finish the job. Since there is no possibility of incremental backup in this case, it will require a lot of storage to keep these images. I don't know if I really need daily backups, but even weekly backups will exhaust my free space very quickly.


Images created by Clonezilla or fsarchiver are relatively small (~4GB) since they contain only the actual data. The disadvantage of Clonezilla is obvious, since it allows only manual backups with the server rebooted into the Clonezilla kernel. As I said earlier, I managed to restore an fsarchiver image only onto a freshly installed OMV. Please note that all tests were done in a VM on my working machine; I didn't want to interrupt my main server. Of course it started with a lot of errors, but I could still see all my Docker containers, with some of them running correctly (the ones which don't require other disks).

• Official Post

Is there any way to back up the missing chunks for UEFI manually? I'm using an x86 motherboard, not an RPi.

    Sure, just dd a larger portion of the beginning of the disk. I think it is normally 2MB but I need to check.
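
A minimal sketch of that manual approach, assuming the OS disk is /dev/sda and that 2MB covers the boot area (both are assumptions; verify against your own layout):

dd if=/dev/sda of=/path/to/disk-head.img bs=1M count=2   # save the first 2MB (protective MBR, GPT, boot code)
dd if=/path/to/disk-head.img of=/dev/sda bs=1M           # write it back during a restore (destructive, double-check the target)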


I don't know if I really need daily backups, but even weekly backups will exhaust my free space very quickly.

    That is why the plugin allows you to delete old backups. Set the keep backups to 1 day and you won't fill your drive.


  • Sure, just dd a larger portion of the beginning of the disk. I think it is normally 2MB but I need to check.

I found an even more elegant solution using the sgdisk tool. It easily backs up and restores GUID Partition Tables, and the backup contains only 18 KB of data. It allowed me to restore the partition table, but subsequently applying the fsarchiver restore did not bring the OS back; each boot still ends in the EFI shell, even after reinstalling GRUB manually.
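
For reference, the sgdisk round trip is just two commands (disk name and backup path here are assumptions):

sgdisk --backup=/path/to/gpt-backup.bin /dev/sda        # save the GPT to a small file
sgdisk --load-backup=/path/to/gpt-backup.bin /dev/sda   # write it back during a restore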


    Set the keep backups to 1 day and you won't fill your drive.

There are three problems I can see here:

1. In my opinion, keeping only one day of backups is not enough for safety reasons.

2. It takes a lot of time to copy the entire disk, especially of a running OS. I think it took around 5 hours and put a lot of stress on my weak processor. I'm not worried about the CPU itself, but about other tasks that might be delayed during the high load.

3. I'm backing up the main drive to the same NAS (a different disk), which is obviously a bad idea unless I also back this data up to other storage afterwards. The problem here is that my upload link is limited to 5 Mbit/s (bits, not bytes), and uploading an image of this size (190GB) would occupy all my bandwidth for a looong time. Keep in mind I have other disks being backed up (incrementally) on a daily basis, so I need a more elegant solution which occupies significantly less space.
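
(For scale: 190 GB is roughly 1,520,000 Mbit, and 1,520,000 Mbit / 5 Mbit/s ≈ 304,000 seconds, i.e. about three and a half days of fully saturated uplink per image.)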

• Official Post

I found an even more elegant solution using the sgdisk tool. It easily backs up and restores GUID Partition Tables, and the backup contains only 18 KB of data. It allowed me to restore the partition table, but subsequently applying the fsarchiver restore did not bring the OS back; each boot still ends in the EFI shell, even after reinstalling GRUB manually.

I was looking at sgdisk as well. I may add that to all backups because it doesn't hurt either way.


There are three problems I can see here:

1. In my opinion, keeping only one day of backups is not enough for safety reasons.

2. It takes a lot of time to copy the entire disk, especially of a running OS. I think it took around 5 hours and put a lot of stress on my weak processor. I'm not worried about the CPU itself, but about other tasks that might be delayed during the high load.

3. I'm backing up the main drive to the same NAS (a different disk), which is obviously a bad idea unless I also back this data up to other storage afterwards. The problem here is that my upload link is limited to 5 Mbit/s (bits, not bytes), and uploading an image of this size (190GB) would occupy all my bandwidth for a looong time. Keep in mind I have other disks being backed up (incrementally) on a daily basis, so I need a more elegant solution which occupies significantly less space.

I think borgbackup is the best option. It compresses and dedupes, and it is incremental after the first backup. I run it on ARM boards and the CPU load isn't bad.
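
A minimal borg sketch of that workflow (the repository path, excludes, and retention below are placeholder assumptions, not settings from the plugin):

borg init --encryption=repokey /srv/backup/borg-repo                          # one-time repository setup
borg create --stats --compression lz4 /srv/backup/borg-repo::'omv-{now}' / --exclude /proc --exclude /sys --exclude /dev --exclude /run --exclude /srv/backup   # deduplicated archive; only changes are stored after the first run
borg prune --keep-daily=7 --keep-weekly=4 /srv/backup/borg-repo               # thin out old archives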


• Official Post

Do you think it will solve my issue with the GRUB boot loader?

No, because borg only backs up files; the boot loader situation stays the same. I just haven't had any time to do any more work on this. Other than time, is there any reason restoring over a fresh install is not good enough?


• Finally succeeded in restoring the system, with a fresh install avoided!


1) First of all, one has to restore the GPT using, for example, the sgdisk tool. Then fsarchiver has to be applied to both the boot and main partitions, according to the archive structure (see the consolidated sketch below).

2) After that I tried to apply the boot loader restore process described on the Ubuntu help page.

• I mounted the main system at /mnt; the boot partition was then mounted at /mnt/boot/efi.
• Next I ran the following command, which bind-mounts the critical virtual filesystems (according to the help page): for i in /dev /dev/pts /proc /sys /run; do sudo mount -B $i /mnt$i; done
• Then I chrooted into the OMV system and ran grub-install. I actually had to specify the full path, /usr/sbin/grub-install, otherwise the command was not found; probably some problem with the PATH variable.
• After /usr/sbin/update-grub finished successfully, I exited the chroot and rebooted. Unfortunately, the system rebooted into the VirtualBox EFI shell.

Nevertheless, I tried to start GRUB manually by entering the corresponding partition (EFI/debian) and locating the grubx64.efi file. Running this file manually, I managed to enter the boot loader and the system was able to start. In the terminal I then repeated the last two short commands (without specifying the full path), which resulted in a restored GRUB loader. It seems that everything described in step 2 is not necessary at all and can be skipped.
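
Putting the whole procedure together, a minimal sketch (device names and archive IDs are assumptions; check yours with fsarchiver archinfo first):

sgdisk --load-backup=/path/to/gpt-backup.bin /dev/sda              # restore the partition table
fsarchiver restfs /path/to/backup.fsa id=0,dest=/dev/sda1 id=1,dest=/dev/sda2   # boot + root partitions
mount /dev/sda2 /mnt                                               # root filesystem
mount /dev/sda1 /mnt/boot/efi                                      # EFI partition
for i in /dev /dev/pts /proc /sys /run; do mount -B $i /mnt$i; done
chroot /mnt /bin/bash
/usr/sbin/grub-install
/usr/sbin/update-grub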


At the moment it can be considered a win, I assume!

  • Just wanted to say thanks for the guide.


Had some SMART errors on my system SSD, and before replacing it I made an fsarchiver backup of the system SSD with the backup plugin. After replacing the SSD, I tried restoring without installing OMV on the new SSD first, i.e. onto a blank unformatted disk. But that didn't work for me; I couldn't get it to boot.


So I installed OMV 5 fresh from USB onto the SSD, then booted SystemRescue and did the fsarchiver restore onto the ext4 partition. Everything worked from there on and all settings / Docker containers were restored. I only had to edit /etc/fstab, because the old UUID of the swap partition was still in there from the restore, I guess. But no big issues.
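
A sketch of that sequence from SystemRescue, for anyone following along (the archive id and device names are assumptions; check with fsarchiver archinfo):

fsarchiver restfs /path/to/backup.fsa id=0,dest=/dev/sda1   # restore over the fresh install's root partition
mount /dev/sda1 /mnt                                        # then fix the stale swap UUID in fstab
nano /mnt/etc/fstab                                         # replace the old swap UUID with the one blkid reports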


    Thanks for the plugin and the short guide!

• Trying to move from running OMV 5.5.23-1 (Usul) on a USB thumb drive back to a hard drive, with no success.

I'm currently running OMV 5 on a USB thumb drive. I've tried many ways to back up / restore to a SAS hard drive in the server. Every method I try results in the failure described below when trying to boot from the SAS drive after the restore.

**Note: fsarchiver works if restoring to another USB thumb drive, but not to a hard disk.**

I've tried backing up / restoring using fsarchiver.

I've tried installing fresh OMV 5 on the SAS drive, then restoring just the primary partition using fsarchiver.

I've tried using dd to create a direct clone from the USB to the SAS drive.

I've tried a direct clone with Clonezilla to the SAS drive.

I've tried an rsync backup, then restoring over the top of a fresh OMV 5 install on the SAS drive.

All of these methods give the same result.

The GRUB boot loader starts, then when it starts to boot, I get dropped to an (initramfs) prompt.

I've tried running fsck /dev/sdc1, but it wants confirmation to rewrite a countless number of blocks; I kept hitting y but never reached the end, so I gave up on that.

I can post all the exact commands I'm running, but I'm pretty sure they're all run correctly. My guess is that it has something to do with the two different media types. I'm hoping someone knows what to do.

Note the change between /dev/sdc1 and /dev/sdb1; these screenshots were taken between different retries, but it's the same drive.


Actually, here are some of the commands I've run:

    fsarchiver restore:

    fsarchiver restfs /mnt/6TBbackup/OMVBackup/omvbackup/backup-omv-21-Jan-2021_10-59-06.fsa id=0,dest=/dev/sda1


    clone with DD:

    dd if=/dev/sdn of=/dev/sda bs=16M status=progress


    Restore of Rsync backup taken with the backup plugin:

I ran this command while running the live OMV on the USB, restoring over the fresh install of OMV 5 on the SAS drive.

I first formatted the SAS drive partition that was created during the OMV 5 install:

    mkfs.ext4 /dev/sda1

    then ran

    rsync -aAXv --delete --exclude="lost+found" /srv/dev-disk-by-label-6TBBackup/OMVBackup/RsyncBackup/omvbackup/ /srv/dev-disk-by-uuid-d77fff66-c483-43d5-b0a5-cceef188de57/

• Actually, here are some of the commands I've run:

    fsarchiver restore:

    fsarchiver restfs /mnt/6TBbackup/OMVBackup/omvbackup/backup-omv-21-Jan-2021_10-59-06.fsa id=0,dest=/dev/sda1

I think you should first check the structure of your backup file with the fsarchiver archinfo /path/to/backup command. In my case, id=0 refers to the boot partition.
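
For example (the id below is illustrative; read the real ones from the archinfo output):

fsarchiver archinfo /path/to/backup.fsa                     # lists every filesystem in the archive with its id
fsarchiver restfs /path/to/backup.fsa id=1,dest=/dev/sda2   # then restore the right id to the right partition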

I think you should first check the structure of your backup file with the fsarchiver archinfo /path/to/backup command. In my case, id=0 refers to the boot partition.


    Were you able to boot after that as expected?

Yes, I've checked with fsarchiver archinfo; the corresponding partition is being restored to the correct partition on the SAS drive.

    Yes, the SAS drive boots fine as expected after a fresh OMV install.

    As I mentioned, the restore methods work fine if going to another USB stick.

Even rsyncing the files over to the SAS drive does not work.

I think this is really a general Debian question of moving Debian from USB to hard disk. There must be some simple tweaks that need to be made post-restore that I don't know about.

  • When you restored from fsarchiver you also restored /etc/fstab.

    So check if you need to correct /etc/fstab

This reply will be in two posts due to the character limit. I've included a video link of the failed boot process at the end of the second post.

    I've had a chance to work on this a little more.

I created a Linux VM in VMware Workstation to test with. I can successfully restore the fsarchiver backup of the USB stick to the virtual hard disk and boot without issues.


    I then proceeded to test on the physical server again.

    1. fsarchiver restores the UUID of the original primary boot partition from the USB stick, so it still matches the one in fstab.
    2. The swap partition UUID does not match, because the fsarchiver backup does not include the swap partition, but correcting the UUID for the swap in fstab does not make a difference.

    I again tried to restore the rsync backup to the hard disk.

1. The UUIDs of both the boot and swap partitions do not match, as expected.
2. Fixing the UUIDs for both the boot and swap partitions in fstab still gives the same failed boot result.

The following output is what exists after the rsync restore over the top of a fresh OMV 5 install.


Here is the output of lsblk (sdr is the USB stick OMV is running on, sda is the SAS drive I'm trying to move to):


    Here's the output of blkid (sdr=USB stick, sda=SAS drive)

  • When you restored from fsarchiver you also restored /etc/fstab.

    So check if you need to correct /etc/fstab

    Here's the remaining output from my last post

Here is fstab from the SAS drive after the rsync restore, with UUIDs fixed to match the SAS drive partitions:

    Output of fdisk -l

Here is a Google Drive link to the video of the failed boot process:

    https://drive.google.com/file/…FwgkkZV2/view?usp=sharing

• Curious, exactly how did you edit the fstab? Like, after accessing SystemRescue, do you log back in to OMV in the CLI and use nano to edit fstab? Do you simply delete the swap UUID? If you need to put in the old one, where do you get the old UUID from?

• It is better practice not to delete things from fstab; just comment them out instead. You can also leave comments as notes in there to explain to yourself later why you did things. Maybe after an extended period of time, when you know for sure you won't need something, you can delete it then.
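
For instance, a corrected swap line might end up looking like this, with the old entry kept as a note (both UUIDs here are placeholders):

# UUID=1b2c3d4e-aaaa-bbbb-cccc-000000000001 none swap sw 0 0   <- old swap from the restored image
UUID=9f8e7d6c-dddd-eeee-ffff-000000000002 none swap sw 0 0     # new swap UUID, taken from blkid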


• Curious, exactly how did you edit the fstab? Like, after accessing SystemRescue, do you log back in to OMV in the CLI and use nano to edit fstab? Do you simply delete the swap UUID? If you need to put in the old one, where do you get the old UUID from?


It is better practice not to delete things from fstab; just comment them out instead. You can also leave comments as notes in there to explain to yourself later why you did things. Maybe after an extended period of time, when you know for sure you won't need something, you can delete it then.

gderf has good advice regarding just commenting out original entries in fstab by placing a # at the beginning of the line.

You can edit fstab by booting into the SystemRescueCD, or from an SSH session into OMV.

If booting into the SystemRescueCD, run the following to allow SSH access. By default the firewall is enabled in SystemRescueCD, so you can't SSH into it without disabling it first.

    iptables -I INPUT -j ACCEPT (this allows all inbound connections)

passwd root (this will let you set the root password to whatever you want; necessary for SSH access)

ifconfig (so you can see the IP address assigned to the interface, so you can SSH to it)


In order to edit fstab, you first need to mount the partition that contains the /etc/fstab you want to edit.


1. lsblk (this will show your drive sizes and the partitions on each drive; you should be able to identify the correct drive / partition with this)
2. Let's say the correct partition happens to be /dev/sda1.
3. You'll need to create a directory to use as a mount point for /dev/sda1.
4. mkdir /mnt/sda1 (this creates the directory named "sda1" in /mnt, although you could name it whatever you want)
5. mount /dev/sda1 /mnt/sda1 (this mounts the partition on the /mnt/sda1 directory)
6. blkid (this lists all current partition UUIDs; you'll need to copy / paste into fstab if fstab doesn't match)
7. nano /mnt/sda1/etc/fstab (this opens fstab in nano from the partition you just mounted)
8. If the UUIDs do not match for the boot and swap partitions, comment out the original lines in fstab, copy / paste the originals to create new lines, and edit the UUIDs to match. Ctrl+O writes changes in nano, Ctrl+X exits.

Basically, what you're accomplishing by doing a fresh install of OMV and then overwriting it with the backup is getting the partitions created by the OMV installer and getting GRUB installed. GRUB is likely what is broken and preventing boot after your fsarchiver restore. You can fix this manually from the SystemRescueCD by doing the following. The partition in question needs to be mounted before running the next commands, and you should have already restored the OMV backup. The /mnt/sda1/boot directory must exist for the grub-install below to work.


    mount -o bind /proc /mnt/sda1/proc

    mount -o bind /dev /mnt/sda1/dev

    mount -o bind /sys /mnt/sda1/sys

    chroot /mnt/sda1 /bin/bash (these commands basically tell the system to treat /mnt/sda1 as the root directory, so that the following command will work correctly)

grub-install /dev/sda (be sure to run this on the whole drive, /dev/sda, NOT the partition /dev/sda1)

Now reboot and test.

• I just tried to join the Debian forum to post my restore / boot issue there, but as a human I literally cannot solve their captcha! I've tried everything I can think of to resolve this boot issue. I bet someone more knowledgeable could point out a simple fix.

  • When you restored from fsarchiver you also restored /etc/fstab.

    So check if you need to correct /etc/fstab


I think you should first check the structure of your backup file with the fsarchiver archinfo /path/to/backup command. In my case, id=0 refers to the boot partition.

I just replaced all UUID references to the old USB partition in /boot/grub/grub.cfg with the UUID of the partition on the SAS drive. Now I get a kernel panic. Does anyone know the answer to this?
