Restoring backup - to which device?

  • I need to restore my OMV system partition from a backup, so I've booted an Ubuntu live system and mounted the USB stick containing the .zst file


    I've already started zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress


    However, is this the right device to restore the backup to?


    The .sfdisk file contains the following information:


    label: gpt

    label-id: 0518E42F-9245-4061-A822-F294B764FC98

    device: /dev/nvme0n1

    unit: sectors

    first-lba: 34

    last-lba: 976773134

    sector-size: 512


    /dev/nvme0n1p1 : start= 2048, size= 1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=CE484F9A-DBF8-408E-BE17-BDC8849DF966

    /dev/nvme0n1p2 : start= 1050624, size= 973721600, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=D492A8D9-8309-4412-91C7-B4BF6A1CF6EC

    /dev/nvme0n1p3 : start= 974772224, size= 1998848, type=0657FD6D-A4AB-43C4-84E5-0933C84B4F4F, uuid=F5AB79E3-BDC9-4824-BFE3-2DBAAFB7C7B7


    So should I restore to /dev/nvme0n1 instead, as the file probably contains all 3 partitions (boot, main, swap)? Or is the dd file simply a straight sector-by-sector dump of the SSD that I can just write back to the original device?
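
    One way I could probably check what the image actually contains without restoring it (assuming it's a plain dd image compressed with zstd, same file name as above):

    # Peek at the first megabyte of the decompressed image without extracting it all
    zstdcat backupfile.dd.zst | head -c 1M > /tmp/probe.bin
    file /tmp/probe.bin
    # "DOS/MBR boot sector ..." (plus GPT info) -> whole-disk dump, restore to /dev/nvme0n1
    # "Linux ... ext4 filesystem data"          -> single-partition dump, restore to /dev/nvme0n1p2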


    Also, I guess there's no way to speed things up, as the partition is very large (512 GB)? I will certainly resize it later, but I assume there's no way to skip the empty space? At this rate it will take over 5 hours.
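
    If it does have to go through dd, I understand a larger block size usually speeds things up considerably; a sketch with an assumed 16M buffer (the actual gain depends on the USB stick and the SSD):

    # dd defaults to 512-byte blocks; bigger blocks cut the per-block overhead.
    # iflag=fullblock matters when reading from a pipe so blocks aren't short-read.
    zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 bs=16M iflag=fullblock conv=fsync status=progress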


    EDIT: I've changed the restore to target /dev/nvme0n1 and I've added conv=sparse to try to skip the empty parts


    EDIT2: the command completed successfully but the system doesn't boot


    EDIT3: trying to follow the restore instructions in [How-To] Restore OMV system backup made with the openmediavault-backup plugin


    I was able to restore the partitions, although GParted gives a warning that the backup GPT partition table is corrupt
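
    If only the backup copy of the GPT is damaged, one way would be to replay the saved sfdisk dump, which rebuilds both the primary and backup tables (file name assumed to match the backup set; sfdisk rewrites only the partition table, not the partition contents):

    sudo sfdisk /dev/nvme0n1 < backupfile.sfdisk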


    However, I think that the zstd -d command will try to extract the full 512 GB to the 32 GB USB stick, which obviously won't work. I'll probably try to add a USB hard disk later
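
    Extracting to a file shouldn't actually be necessary, though: zstd can decompress straight into dd, so nothing has to land on the 32 GB stick. Something like this (same file and target names as above):

    # Decompress on the fly; only the target device needs the space.
    zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 bs=16M iflag=fullblock status=progress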


    Thanks!


  • The dd backup is just the OS partition; ddfull would be the full disk. You can't restore a dd image to a smaller drive.
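
    A quick way to tell which kind of image you have is to compare the decompressed size against the partition and disk sizes (the decompressed size is only reported if it was recorded in the zstd frame header; images created by piping dd into zstd may not carry it):

    # Decompressed size of the backup image, if recorded:
    zstd -l backupfile.dd.zst
    # Partition and whole-disk sizes in bytes, for comparison:
    lsblk -b -o NAME,SIZE /dev/nvme0n1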

    omv 7.7.19-2 sandworm | 64 bit | 6.14 proxmox kernel

    plugins :: omvextrasorg 7.0.3 | kvm 7.2.0 | compose 7.6.13 | cterm 7.8.7 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.3.2 | writecache 7.0.0-12



  • Thanks for your reply.


    So the first command I tried was actually the correct one:


    zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress


    as /dev/nvme0n1p2 is the OS partition. Sadly, this resulted in an unbootable system. Could this be caused by the conv=sparse parameter?
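
    From what I understand, conv=sparse makes dd seek over all-zero blocks instead of writing them, so on a target that still held old data the skipped regions would keep that stale data and the result wouldn't match the backup. For a retry I'd either drop conv=sparse or discard the target first so the skipped blocks read back as zeroes (assuming the drive reports zeroes after discard, which not every drive guarantees):

    # Discard the whole target partition so skipped blocks read back as zeroes...
    sudo blkdiscard /dev/nvme0n1p2
    # ...then conv=sparse should be safe to use:
    zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 bs=16M iflag=fullblock conv=sparse status=progress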



    Because restoring without sparse would take 5 hours, I've started to rebuild my NAS, but now I've run into a new problem. My setup uses two hard disks, encrypted with LUKS, in a RAID 1 configuration. After installing OMV to my SSD I successfully installed LUKS and decrypted the hard disks (and set up automatic decryption at boot as well).


    So I have /dev/sda, unlocked, as /dev/mapper/sda-crypt and /dev/sdb, unlocked, as /dev/mapper/sdb-crypt
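
    To double check what those mappings really are, something like this should work (names as above):

    # Shows the underlying device, cipher and size behind each dm-crypt mapping.
    sudo cryptsetup status sda-crypt
    sudo cryptsetup status sdb-crypt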


    However, I can't create a RAID 1 array under Multiple Device, as the two devices don't show up there and there aren't any other options available.


    I'm not sure which mdadm command I should use now as I don't want to restore all the data.


    I've tried the following command:


    mdadm --assemble /dev/md0 /dev/mapper/sda-crypt /dev/mapper/sdb-crypt


    mdadm: no recogniseable superblock on /dev/mapper/sda-crypt

    mdadm: /dev/mapper/sda-crypt has no superblock - assembly aborted


    The output is surprising to me, as /dev/mapper/sda-crypt is shown as unlocked under Encryption
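
    The unlocked state only says the LUKS layer is open, not what sits on top of it. Something like this should show what's actually inside the mappings (an mdadm member would report linux_raid_member, a filesystem reports its own type):

    # Content type of every block device in the tree:
    lsblk -f
    # Or query a single mapping directly:
    sudo blkid /dev/mapper/sda-crypt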


    And, if I remember correctly, during the original install I first encrypted both hard disks with LUKS and created the array afterwards on the two decrypted devices.



    EDIT: OK, I was able to just mount the existing filesystem under /dev/dm-0! So I guess that I didn't use MD to configure the array? It's confusing that I can't see the RAID 1 array there, and under Encryption only /dev/mapper/data-crypt1 is shown as referenced, so it's probably not RAID 1.


    EDIT2: according to Filesystems, RAID 1 is active:


    Label: none uuid: 82b09deb-bd8f-4ce1-91af-42cfd5824c14

    Total devices 2 FS bytes used 8.97TiB

    devid 1 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt1

    devid 2 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt2


    Data, RAID1: total=9.30TiB, used=8.96TiB

    System, RAID1: total=8.00MiB, used=1.66MiB

    Metadata, RAID1: total=12.00GiB, used=9.55GiB

    GlobalReserve, single: total=512.00MiB, used=0.00B


    # I/O error statistics

    [/dev/mapper/data-crypt1].write_io_errs 0

    [/dev/mapper/data-crypt1].read_io_errs 0

    [/dev/mapper/data-crypt1].flush_io_errs 0

    [/dev/mapper/data-crypt1].corruption_errs 0

    [/dev/mapper/data-crypt1].generation_errs 0

    [/dev/mapper/data-crypt2].write_io_errs 0

    [/dev/mapper/data-crypt2].read_io_errs 0

    [/dev/mapper/data-crypt2].flush_io_errs 0

    [/dev/mapper/data-crypt2].corruption_errs 0

    [/dev/mapper/data-crypt2].generation_errs 0


    # Scrub status

    UUID: 82b09deb-bd8f-4ce1-91af-42cfd5824c14


    Scrub device /dev/dm-0 (id 1)

    no stats available


    Scrub device /dev/dm-1 (id 2)

    no stats available


    mdadm --detail /dev/md-0 doesn't work, so mdadm is probably not the right tool
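
    That would match btrfs RAID1: the mirroring is done by the filesystem itself, so there is no md device for mdadm to manage and either member can be mounted directly once both devices have been scanned. Roughly (mount point assumed):

    # Scan for btrfs devices and show which ones belong to the same filesystem:
    sudo btrfs device scan
    sudo btrfs filesystem show
    # Mounting any one member mounts the whole RAID1 filesystem:
    sudo mount /dev/mapper/data-crypt1 /mnt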

  • I've decided to do a rebuild. It's still not clear to me how the RAID configuration works, as I expected it to be part of the (wiped) SSD, but I'm happy that it works.


    I've switched to the fsarchiver format for backups, and I will consider other backup options for the system SSD, like Clonezilla.

    Odroid H4+ , 32GB ECC, 512GB 980 SSD, HGST 18TB x 2 in RAID 1, 2.5 Gbit network

  • I use fsarchiver myself, although from a custom script I run. The only thing to be aware of with fsarchiver is that, because it is a file-level backup rather than a volume image, you need to create the partitions first if you want to restore to a blank drive. The easiest way is to install a minimal Debian server on the new device and then restore the fsarchiver archive over it.
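
    For reference, a restore onto a freshly partitioned drive would look roughly like this (archive name and target partitions assumed; the minimal Debian install route above achieves the same result by letting the installer create the partitions):

    # Recreate the partition table from the saved sfdisk dump, then restore
    # the filesystem archive into the target partition:
    sudo sfdisk /dev/nvme0n1 < backup.sfdisk
    sudo fsarchiver restfs backup.fsa id=0,dest=/dev/nvme0n1p2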

    Asrock B450M, AMD 5600G, 64GB RAM, 6 x 4TB RAID 5 array, 2 x 10TB RAID 1 array, 100GB SSD for OS, 1TB SSD for docker and VMs, 1TB external SSD for fsarchiver OS and docker data daily backups

  • Thanks for your reply, BernH. That's a huge difference compared to dd, and I've already configured the backup plugin to use this format.


    I hope it will be a long time before I need to use it.


    The RAID 1 issue (not showing in MD after recovery) is still unclear to me. I'll do some further research and make a new post if I can't figure it out.

    Odroid H4+ , 32GB ECC, 512GB 980 SSD, HGST 18TB x 2 in RAID 1, 2.5 Gbit network
