Posts by Spoor12B

    A couple of weeks ago, I successfully upgraded my OMV7 installation to OMV8. However, I later read that the zfs-dkms package should have been removed before doing so. Now that I'm running OMV8 I'm trying to remove it by running apt purge zfs-dkms, but then I get the error message:


    'zfs-dkms' is not installed, so not removed.


    I'm not sure how to proceed from here and I would like to prevent problems in the future. So what do I have to do now to make sure everything keeps running fine?


    Please note that I previously ran BTRFS and the standard kernels; I guess that the zfs-dkms package was installed because I added the ZFS module while still on a non-Proxmox kernel. Since migrating to ZFS, only the latest Proxmox kernel is installed.
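    To see what dpkg actually still knows about the package, the leftover state can be inspected first. A small sketch (assuming a Debian-based OMV install; the pkg_state helper is my own, not an OMV tool):

```shell
# pkg_state: reads `dpkg -l`-style lines on stdin and prints "state package"
# pairs. "ii" = installed, "rc" = removed but config files remain.
pkg_state() {
  awk '/^(ii|rc)/ { print $1, $2 }'
}

# Typical use on the real system:
#   dpkg -l 'zfs*' | pkg_state
# If zfs-dkms shows up as "rc", `sudo dpkg --purge zfs-dkms` clears the rest;
# if it doesn't show up at all, there is nothing left to remove.
```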


    Here's some info from my system which I hope helps diagnose its current state.


    thanks!

    Thanks for your reply raulfg3.


    I know that I can see it in the GUI as well as via SSH with zpool status poolname. However, I'd like to know how I can configure the autoshutdown plugin to detect this so the system won't be shut down while a scrub is active. Thanks!

    I've configured autoshutdown to check for several IP addresses, uploads and HDD I/O over the (default) 401 KB/s threshold.



    However, during the scrubbing of my ZFS pool the I/O rates are way above the 401 KB/s value, but both hard disks are skipped:





    Is there a way to have autoshutdown detect scrubbing? I couldn't find a matching process with iotop, so I guess a script is needed?
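    A scrub can be detected from zpool's own status text; a sketch for a custom check (the autoshutdown hook wiring is an assumption on my part, only the matched text is from zpool's real output):

```shell
# scrub_active: reads `zpool status` output on stdin and succeeds (exit 0)
# while a scrub is running; zpool prints "scrub in progress" in that case.
scrub_active() {
  grep -q 'scrub in progress'
}

# Hypothetical use in an autoshutdown custom script (non-zero exit = stay up):
#   if zpool status poolname | scrub_active; then exit 1; fi
```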

    I was trying to get RC6 powersaving working again but the current kernel (Debian GNU/Linux, with Linux 6.12.43+deb12-amd64) didn't show RC6 at all in Powertop.


    Can this be related to this error message which I see at bootup (and also during further operation)?



    RC6 used to work on a more recent kernel (Proxmox) but it also doesn't work anymore with the current version.


    DRM on a NAS isn't very useful but these messages weren't shown at bootup when RC6 was still working.
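    Besides Powertop, RC6 can also be checked via sysfs (assuming an Intel iGPU with the i915 driver; the exact path may differ per kernel): if the residency counter keeps growing, the GPU is entering RC6. The rc6_delta helper is just a hypothetical convenience:

```shell
# rc6_delta: prints the difference between two residency samples (in ms);
# a positive delta means RC6 was entered between the two reads.
rc6_delta() {
  echo $(( $2 - $1 ))
}

# Typical use on the real system (sysfs path is an assumption, check your card):
#   a=$(cat /sys/class/drm/card0/power/rc6_residency_ms); sleep 5
#   b=$(cat /sys/class/drm/card0/power/rc6_residency_ms)
#   rc6_delta "$a" "$b"
```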

    Thanks for your reply BernH, that's certainly helpful.


    I'm certainly interested in a simpler setup; however, I based my approach on this guide.


    I've tried your approach but the Encryption interface only allows for devices to be added:



    So I guess that I'll still do LUKS encryption at the disk level, but at least I can use BTRFS RAID 1 which, as you already wrote, seems to be a better choice for my setup.


    Final edit: BTRFS doesn't currently offer support for volume encryption, so I'll use ZFS with encryption for a future re-installation.

    I'm a bit reluctant to try LUKS encryption from the command line on the volume, as it seems that most people apply it at the disk level (so the volume-level approach is probably not the best one).


    EDIT:


    I've decided to go with my previous setup, and I'll spin up a VM to experiment with different configurations. ZFS with native encryption is also a possibility.

    OK, it seems that I don't have any other choice, so I've decided to wipe /dev/sda & /dev/sdb and restore everything.


    I'm still curious how this could be resolved if it should ever happen again.


    Maybe my original approach in the setup is wrong?


    encrypt both /dev/sda & /dev/sdb with luks


    create a mirror under Multiple Device: /dev/md0 (RAID1)


    create a BTRFS filesystem on /dev/md0 (single, no RAID)
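    The three steps above, sketched as commands (device names are the ones from this thread; this destroys data, so it is only defined here as a function, not run):

```shell
# setup_encrypted_mirror: LUKS on both disks, an mdadm RAID1 on the mappings,
# then a single-profile BTRFS on the md device -- the layout described above.
# DESTRUCTIVE: defined only, never called here.
setup_encrypted_mirror() {
  cryptsetup luksFormat /dev/sda                 # encrypt disk 1
  cryptsetup luksFormat /dev/sdb                 # encrypt disk 2
  cryptsetup open /dev/sda sda-crypt
  cryptsetup open /dev/sdb sdb-crypt
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/sda-crypt /dev/mapper/sdb-crypt
  mkfs.btrfs /dev/md0                            # single, no BTRFS RAID
}
```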


    Result:


    Label: none uuid: 7c450192-5648-4826-bc41-4ef49d4301c7

    Total devices 1 FS bytes used 851.98GiB

    devid 1 size 16.37TiB used 863.02GiB path /dev/md0


    Data, single: total=861.01GiB, used=851.12GiB

    System, DUP: total=8.00MiB, used=112.00KiB

    Metadata, DUP: total=1.00GiB, used=881.56MiB

    GlobalReserve, single: total=512.00MiB, used=32.00KiB


    # I/O error statistics

    [/dev/md0].write_io_errs 0

    [/dev/md0].read_io_errs 0

    [/dev/md0].flush_io_errs 0

    [/dev/md0].corruption_errs 0

    [/dev/md0].generation_errs 0


    # Scrub status

    UUID: 7c450192-5648-4826-bc41-4ef49d4301c7


    Scrub device /dev/md0 (id 1)

    no stats available


    /etc/crypttab


    # <target name> <source device> <key file> <options>

    data-crypt1 UUID=454ff01a-68b0-4638-97cb-811a4a1ae085 /etc/luks-keys/wdkey luks

    data-crypt2 UUID=d156444a-a71a-4d60-8d07-da9afb5a4fdd /etc/luks-keys/wdkey luks


    blkid


    /dev/sda: UUID="454ff01a-68b0-4638-97cb-811a4a1ae085" LABEL="DISK1" TYPE="crypto_LUKS"

    /dev/sdb: UUID="d156444a-a71a-4d60-8d07-da9afb5a4fdd" LABEL="DISK2" TYPE="crypto_LUKS"

    /dev/mapper/sdb-crypt: UUID="3c4d34eb-2f77-59f8-ed34-0f2d34f889c0" UUID_SUB="a8f6e832-282e-7bee-1c50-dacfe4bee54a" LABEL="omv:0" TYPE="linux_raid_member"

    /dev/mapper/sda-crypt: UUID="3c4d34eb-2f77-59f8-ed34-0f2d34f889c0" UUID_SUB="f58d59c3-0dab-78fa-c4c6-7dd00f397dce" LABEL="omv:0" TYPE="linux_raid_member"


    It's also not yet clear to me how I can remove the old BTRFS filesystem (see screenshot). It must be the Referenced property, but I don't know how to clear it. All my SMB shares are already pointing to the new one.
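    If the old filesystem really is no longer referenced by anything, wiping its on-disk signature should make it disappear from detection; a destructive sketch (the device argument is a placeholder), defined only, not run:

```shell
# wipe_old_btrfs: removes filesystem signatures from the given device so the
# stale BTRFS entry is no longer detected. Only safe once nothing uses it.
# DESTRUCTIVE: defined only, never called here.
wipe_old_btrfs() {
  wipefs --all "$1"
}
```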

    After having some problems with my NAS I decided to restore a backup, but in the end I was not able to restore it successfully. So I wiped the system SSD and installed OMV from scratch.


    I was able to unlock both my hard disks, which were part of a RAID 1 array. However, I couldn't create an array in Multiple Devices. It turned out that I could mount the file system on /dev/dm-0, and this seems to work perfectly. However, how can I replace a hard disk and otherwise manage the array without it being shown in Multiple Devices?


    What if I need to replace a failed disk?


    I tried to get some further info using mdadm, but no md device for the mounted /dev/dm-0 is visible in /dev. Even worse, mdadm.conf doesn't contain any info. So now I'm not sure how to proceed from here. Any help would be much appreciated.


    EDIT: although the file system details (see screenshot) clearly show that the system is running RAID 1, I saw that data-crypt1 is mapped to dm-0. So I'm fairly sure that the system isn't using an mdadm RAID 1 array. So how can I recreate it now that Multiple Devices isn't giving me any options? When /dev/dm-0 wasn't mounted as a file system I also couldn't create an array.
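    For what it's worth, if the mirror turns out to be BTRFS RAID 1 rather than mdadm, a failed disk would be handled by btrfs itself; a sketch with hypothetical device and mount point names, defined but not run:

```shell
# replace_failed_disk: mount the surviving member degraded, then let BTRFS
# rebuild devid 1 onto the new (already LUKS-unlocked) disk.
# DESTRUCTIVE / hardware-specific: defined only, never called here.
replace_failed_disk() {
  mount -o degraded /dev/mapper/data-crypt2 /srv/pool
  btrfs replace start 1 /dev/mapper/new-crypt /srv/pool
  btrfs replace status /srv/pool   # shows rebuild progress
}
```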

    Thanks for your reply BernH. That's a huge difference compared to dd, and I've already configured the backup to use this format.


    I hope that it will take a long time before I'll need to use it.


    The RAID 1 issue (the array not showing in Multiple Devices after recovery) is still unclear to me. I'll do some further research and make a new post if I can't figure it out.

    I've decided to do a rebuild; it's still not clear how the RAID configuration works, as I expected it to be part of the (wiped) SSD. But I'm happy that it works.


    Switched to the fsarchiver format for backups, and I'll consider other backup options for the system SSD like Clonezilla.

    Thanks for your reply.


    So the first thing which I tried was the correct command:


    zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress


    as /dev/nvme0n1p2 is the OS partition. Sadly, this resulted in an unbootable system. Could this be caused by the conv=sparse parameter?
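    Quite possibly, and this is my own reading rather than something confirmed here: conv=sparse makes dd skip writing all-zero blocks, which is only safe when the target already reads back as zeros; on a reused SSD the skipped regions keep stale data. Discarding the device first makes a sparse restore safe. A destructive sketch (device and file names from this thread), defined only, not run:

```shell
# safe_sparse_restore: blkdiscard makes the whole SSD read back as zeros,
# after which conv=sparse can legitimately skip the image's zero blocks.
# DESTRUCTIVE: defined only, never called here.
safe_sparse_restore() {
  blkdiscard /dev/nvme0n1
  zstdcat backupfile.dd.zst \
    | dd of=/dev/nvme0n1 bs=1M conv=sparse status=progress
}
```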



    Because restoring without sparse would take 5 hours, I've started to rebuild my NAS, but now I've run into a new problem. My setup uses two hard disks, encrypted with LUKS, in a RAID 1 configuration. After installing OMV to my SSD I successfully set up LUKS and decrypted the hard disks (and set up automatic decryption at boot as well).


    So I have /dev/sda, unlocked as /dev/mapper/sda-crypt, and /dev/sdb, unlocked as /dev/mapper/sdb-crypt.


    However, I can't create a RAID 1 array using Multiple Devices, as both devices won't show up and there aren't any other options available.


    I'm not sure which mdadm command I should use now as I don't want to restore all the data.


    I've tried the following command:


    mdadm --assemble /dev/md0 /dev/mapper/sda-crypt /dev/mapper/sdb-crypt


    mdadm: no recogniseable superblock on /dev/mapper/sda-crypt

    mdadm: /dev/mapper/sda-crypt has no superblock - assembly aborted


    The output is surprising to me, as /dev/mapper/sda-crypt is shown as unlocked under Encryption.


    And, if I remember correctly, during the original install I first encrypted both hard disks with LUKS and created the array afterwards on both decrypted devices.



    EDIT: OK, I was able to just mount the existing filesystem under /dev/dm-0! So I guess that I didn't use MD to configure the array? It's confusing to me that I can't see the RAID 1 array there, and under Encryption only /dev/mapper/data-crypt1 is shown as referenced, so it's probably not RAID 1.


    EDIT2: according to Filesystems, RAID 1 is active:


    Label: none uuid: 82b09deb-bd8f-4ce1-91af-42cfd5824c14

    Total devices 2 FS bytes used 8.97TiB

    devid 1 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt1

    devid 2 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt2


    Data, RAID1: total=9.30TiB, used=8.96TiB

    System, RAID1: total=8.00MiB, used=1.66MiB

    Metadata, RAID1: total=12.00GiB, used=9.55GiB

    GlobalReserve, single: total=512.00MiB, used=0.00B


    # I/O error statistics

    [/dev/mapper/data-crypt1].write_io_errs 0

    [/dev/mapper/data-crypt1].read_io_errs 0

    [/dev/mapper/data-crypt1].flush_io_errs 0

    [/dev/mapper/data-crypt1].corruption_errs 0

    [/dev/mapper/data-crypt1].generation_errs 0

    [/dev/mapper/data-crypt2].write_io_errs 0

    [/dev/mapper/data-crypt2].read_io_errs 0

    [/dev/mapper/data-crypt2].flush_io_errs 0

    [/dev/mapper/data-crypt2].corruption_errs 0

    [/dev/mapper/data-crypt2].generation_errs 0


    # Scrub status

    UUID: 82b09deb-bd8f-4ce1-91af-42cfd5824c14


    Scrub device /dev/dm-0 (id 1)

    no stats available


    Scrub device /dev/dm-1 (id 2)

    no stats available


    mdadm --detail /dev/md-0 doesn't work, so mdadm is probably not the right tool.
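    Since the redundancy lives in BTRFS rather than in md, the btrfs tool is what reports it. A small helper (my own sketch, not an OMV tool) that pulls the data profile out of `btrfs filesystem df` output like the one quoted above:

```shell
# btrfs_raid_profile: reads `btrfs filesystem df` lines on stdin and prints
# the profile of the Data block group (e.g. RAID1 or single).
btrfs_raid_profile() {
  awk '/^Data/ { sub(/:.*/, "", $2); print $2 }'
}

# Typical use: btrfs filesystem df /srv/pool | btrfs_raid_profile
```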

    I need to restore my OMV system partition from a backup, so I've already booted an Ubuntu live system and mounted the USB stick containing the zst file.


    I've already started zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress


    However, is this the right device to restore the backup to?


    The .sfdisk file contains the following information:


    label: gpt

    label-id: 0518E42F-9245-4061-A822-F294B764FC98

    device: /dev/nvme0n1

    unit: sectors

    first-lba: 34

    last-lba: 976773134

    sector-size: 512


    /dev/nvme0n1p1 : start= 2048, size= 1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=CE484F9A-DBF8-408E-BE17-BDC8849DF966

    /dev/nvme0n1p2 : start= 1050624, size= 973721600, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=D492A8D9-8309-4412-91C7-B4BF6A1CF6EC

    /dev/nvme0n1p3 : start= 974772224, size= 1998848, type=0657FD6D-A4AB-43C4-84E5-0933C84B4F4F, uuid=F5AB79E3-BDC9-4824-BFE3-2DBAAFB7C7B7


    So should I restore to /dev/nvme0n1 instead, as the file probably contains all 3 partitions (boot, main, swap)? Or does the dd file not contain a straight sector-by-sector dump of the SSD which I can just send back to the original device?


    Also, I guess that there's no way to speed things up, as the partition is very large (512 GB)? I will certainly resize it, but I guess there's no way to skip the empty space? At this rate it will take over 5 hours.


    EDIT: I've changed the restore to target /dev/nvme0n1 and added conv=sparse to try to skip the empty parts.


    EDIT2: the command completed successfully, but the system doesn't boot.


    EDIT3: trying to follow the restore instructions [How-To] Restore OMV system backup made with openmediavault-backup plugin


    I was able to restore the partitions, although GParted gives a warning that the backup GPT partition table is corrupt.


    However, I think that the zstd -d command will try to extract the full 512 GB to the 32 GB USB stick, which obviously won't work. I'll probably try to add a USB hard disk later.


    Thanks!

    According to the documentation of the Docker image, its GUI can be accessed via a web browser:


    Quote

    The graphical user interface (GUI) of the application can be accessed through a modern web browser, requiring no installation or configuration on the client side, or via any VNC client.

    I don't have any experience with CrashPlan. There are installation instructions at: https://support.crashplan.com/…Install-the-CrashPlan-app and they have a free trial available.


    However, I looked up the supported platforms and Debian isn't listed, so that doesn't bode well: https://support.crashplan.com/…pported-operating-systems


    For myself, I copy all data to my PC and use the Backblaze personal backup client, which works fine. I've also considered running CrashPlan directly on my OMV NAS but went for this solution instead.