Thanks, I'll check this later today; I'm counting on it not being installed, just as apt remove (and apt purge) reported.
Posts by Spoor12B
-
A couple of weeks ago, I successfully upgraded my OMV7 installation to 8. However, I later read that the package zfs-dkms should have been removed before doing so. Now that I'm running OMV8 I'm trying to remove it by running apt purge zfs-dkms, but then I get the error message:
'zfs-dkms' is not installed, so not removed.
I'm not sure how to proceed from here and I would like to prevent problems in the future. So what do I have to do now to make sure everything keeps running fine?
Please note that I previously ran BTRFS and the standard kernels; I guess the zfs-dkms package got installed because I added the zfs module while still on a non-Proxmox kernel. Because I migrated to ZFS, only the latest Proxmox kernel is installed.
Here's some info from my system which I hope helps diagnose the current state.
thanks!
Code
dpkg -l | grep openme
ii openmediavault 8.0.4-1 all openmediavault - The open network attached storage solution
ii openmediavault-anacron 8.0 all anacron plugin for OpenMediaVault.
ii openmediavault-autoshutdown 8.0 all OpenMediaVault AutoShutdown Plugin
ii openmediavault-backup 8.0.1 all backup plugin for OpenMediaVault.
ii openmediavault-compose 8.1.1 all OpenMediaVault compose plugin
ii openmediavault-cterm 8.0 all openmediavault container exec terminal plugin
ii openmediavault-diskstats 8.0-4 all openmediavault disk monitoring plugin
ii openmediavault-kernel 8.0.4 all kernel package
ii openmediavault-keyring 1.0.2-2 all GnuPG archive keys of the openmediavault archive
ii openmediavault-omvextrasorg 8.0.2 all OMV-Extras.org Package Repositories for OpenMediaVault
ii openmediavault-salt 8.0 amd64 Extra Python packages required by Salt on openmediavault
ii openmediavault-sharerootfs 8.0-1 all openmediavault share root filesystem plugin
ii openmediavault-wetty 8.0-6 all openmediavault WeTTY (Web + TTY) plugin
ii openmediavault-writecache 8.0.9 all openmediavault plugin to reduce OS writes using tmpfs+overlayfs
ii openmediavault-zfs 8.0.1 amd64 OpenMediaVault plugin for ZFS

apt list zfs*
zfs-auto-snapshot/stable 1.2.4-2 all
zfs-dkms/stable 2.3.2-2 all
zfs-dracut/stable 2.3.2-2 all
zfs-fuse/stable 0.7.0-30+b1 amd64
zfs-initramfs/stable 2.3.4-pve1 all
zfs-test-dbgsym/stable 2.3.4-pve1 amd64
zfs-test/stable 2.3.4-pve1 amd64
zfs-zed-dbgsym/stable 2.3.4-pve1 amd64
zfs-zed/stable,now 2.3.4-pve1 amd64 [installed,automatic]
zfsnap/stable 1.11.1-8.2 all
zfsutils-linux-dbgsym/stable 2.3.4-pve1 amd64
zfsutils-linux/stable,now 2.3.4-pve1 amd64 [installed,automatic]

apt list | grep dkms
acpi-call-dkms/stable 1.2.2-2.1 all
apfs-dkms/stable 0.3.13-1 all
bbswitch-dkms/stable 0.8-17 amd64
broadcom-sta-dkms/stable 6.30.223.271-26 amd64
dahdi-dkms/stable 1:3.1.0+git20230717~dfsg-10.1 all
ddcci-dkms/stable 0.4.5-1 all
dh-dkms/stable 3.2.2-1~deb13u1 all
digimend-dkms/stable 13-4 amd64
dkms-noautoinstall-test-dkms/stable 3.2.2-1~deb13u1 all
dkms-replace-test-dkms/stable 3.2.2-1~deb13u1 all
dkms-test-dkms/stable 3.2.2-1~deb13u1 all
dkms/stable 3.2.2-1~deb13u1 all
dm-writeboost-dkms/stable 2.2.18-0.1 all
dpdk-kmods-dkms/stable 0~20230205+git-2 amd64
evdi-dkms/stable 1.14.8+dfsg-1 all
ezurio-qcacld-2.0-dkms/stable 0.0~git20240408.aa96a9f+dfsg-5 all
falcosecurity-scap-dkms/stable 0.20.0-3 all
gost-crypto-dkms/stable 0.3.5-1.1 all
iptables-netflow-dkms/stable 2.6-7.1 amd64
jool-dkms/stable 4.1.13-1.1 all
langford-dkms/stable 0.0.20130228-7 all
leds-alix-dkms/stable 0.0.1-5 all
lenovolegionlinux-dkms/stable 0.0.20+ds-1 amd64
librem-ec-acpi-dkms/stable 0.9.2-3 all
lime-forensics-dkms/stable 1.9.1-8 all
lttng-modules-dkms/stable 2.13.18-1+deb13u1 all
mimic-dkms/stable 0.7.0+ds-2 amd64
mstflint-dkms/stable 4.31.0+1-4 all
nvidia-fs-dkms/stable 2.19.7~12.4.1-2 amd64
nvidia-kernel-dkms/stable 550.163.01-2 amd64
nvidia-open-kernel-dkms/stable 550.163.01-2 amd64
nvidia-tesla-535-kernel-dkms/stable 535.274.02-1~deb13u1 amd64
openafs-modules-dkms/stable 1.8.13.2-1 all
openrazer-driver-dkms/stable 3.10.2+dfsg-1 all
openvpn-dco-dkms/stable 0.0+git20241121-1 all
osmocom-dahdi-dkms/stable 0.0~git20241003.b2ea348-4 all
r8125-dkms/stable 9.015.00-1 all
r8168-dkms/stable 8.055.00-1 all
rapiddisk-dkms/stable 9.2.0-1 all
reform2-lpc-dkms/stable 1.71-2 all
rtpengine-kernel-dkms/stable 12.5.1.31-1 all
sl-modem-dkms/stable 2.9.11~20110321-20 amd64
smifb2-dkms/stable 2.4.1-1 all
tp-smapi-dkms/stable 0.44-1.2 amd64
v4l2loopback-dkms/stable 0.15.0-2 all
vhba-dkms/stable 20250329-1 all
vpoll-dkms/stable 0.1.1-1 all
west-chamber-dkms/stable 20100405+svn20111107.r124-14.3 all
xtables-addons-dkms/stable 3.27-4 all
xtrx-dkms/stable 0.0.1+git20190320.5ae3a3e-3.7 all
zfs-dkms/stable 2.3.2-2 all

uname -a
Linux omv 6.14.11-5-pve #2 SMP PREEMPT_DYNAMIC PMX 6.14.11-5 (2025-12-15T08:44Z) x86_64 GNU/Linux
-
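For later reference, the state of zfs-dkms can be double-checked with a few read-only commands. This is only a sketch: each command falls back to an echo so it is safe to run anywhere, and the availability of dkms and modinfo depends on what is installed.

```shell
# Confirm zfs-dkms is not installed and that the zfs module ships with
# the (Proxmox) kernel rather than being built by DKMS.
dpkg-query -W -f='${Status}\n' zfs-dkms 2>/dev/null || echo "zfs-dkms not installed"
dkms status 2>/dev/null || echo "dkms not present"
# For a pve kernel the reported module path should live under /lib/modules/<version>-pve/
modinfo -F filename zfs 2>/dev/null || echo "no zfs module found for this kernel"
```

If dpkg-query reports nothing and dkms lists no zfs module, the apt purge message was simply telling the truth: the package was available in the repository but never installed.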
My setup also once suffered from slow write speeds (network throughput measured with iperf). I finally discovered that this could be solved by power-cycling my Vimin 2.5Gb switch.
-
I would only enable the disk write cache if you're sure that the filesystem can handle it or if you have a UPS. Although I don't have a UPS, I've enabled it for my ZFS pool because it should be safe to do so: ZFS - enable or disable disk cache
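For what it's worth, the per-drive write cache can be inspected from the shell with hdparm. A sketch, with /dev/sda as a placeholder device; the enable step is left commented out because it is only sensible with a UPS or a flush-aware filesystem such as ZFS:

```shell
# Query the current write-cache setting (read-only; falls back to an
# echo if hdparm or the device is unavailable).
hdparm -W /dev/sda 2>/dev/null || echo "hdparm unavailable or no such device"
# To enable the cache (uncomment only if you accept the power-loss risk):
# hdparm -W1 /dev/sda
```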
-
Thanks for your reply raulfg3.
I know that I can see it in the GUI as well as over SSH with zpool status poolname. However, I'd like to know how I can configure the autoshutdown plugin to detect this, so the system won't be shut down while a scrub is active. Thanks!
-
I've configured autoshutdown to check for several IP addresses, uploads, and HDD I/O above the (default) 401 kB/s threshold.
However, during a scrub of my ZFS pool the I/O rates are well above 401 kB/s, yet both hard disks are skipped:
Code
Nov 04 23:01:56 omv autoshutdown[27361]: root: INFO: '_check_ul_dl_rate(): Network interface: br-b3bd14d9f56b (last 72s) DL: 0.0 kB/s, UL: 0.1 kB/s under 50 kB/s -> next chec>
Nov 04 23:01:56 omv autoshutdown[27361]: root: DEBUG: '__run_check(): Calling: _check_hddio()'
Nov 04 23:01:56 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): HDDIO_RATE: 401 kB/s'
Nov 04 23:01:56 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): ========== Device: nvme0n1 =========='
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): /dev/nvme0n1p3: UUID="44bff031-969b-4b5f-8ab4-b45f675cf809"'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): /dev/nvme0n1p1: UUID="A579-7907"'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): /dev/nvme0n1p2: UUID="9b128b69-19c0-4927-b3af-3ca10f4d6ef0"'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Actual: Read kB: 1925424, Write kB: 421553'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Previous: Read kB: 1925424, Write kB: 421505'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): last_checked_sec: 72'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): hddio_increase: 28872'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): t_hddio_read: 1954296'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): diff_hddio_read: 0'
Nov 04 23:01:58 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): t_hddio_write: 450377'
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): diff_hddio_write: 48'
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Check: hdd_in: 1925424 <= t_hddio_read: 1954296'
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Check: hdd_out: 421553 <= t_hddio_write: 450377'
Nov 04 23:01:59 omv autoshutdown[27361]: root: INFO: '_check_hddio(): Device: nvme0n1 (last 72s) Read: 0.0 kB/s, Write: 0.7 kB/s under: 401 kB/s -> next check'
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): ========== Device: sda =========='
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Skipping as no mount point'
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): ========== Device: sdb =========='
Nov 04 23:01:59 omv autoshutdown[27361]: root: DEBUG: '_check_hddio(): Skipping as no mount point'
Nov 04 23:01:59 omv autoshutdown[27361]: root: INFO: '_check_hddio(): All checks complete'
Nov 04 23:01:59 omv autoshutdown[27361]: root: INFO: 'main(): All active checks passed, Shutting down system ...'
Nov 04 23:01:59 omv autoshutdown[27361]: root: INFO: '_shutdown(): Shutdown issued: shutdown -h now'
Nov 04 23:02:05 omv systemd-logind[1022]: The system will power off now!

Is there a way to have autoshutdown detect scrubbing? I couldn't find a process with iotop, so I guess a script is needed?
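One possible direction for such a script, sketched below: gate the shutdown decision on whether zpool reports a running scrub (zpool status prints "scrub in progress" while one is active). The pool name tank is a placeholder, and how autoshutdown would invoke the check is an assumption; it could for instance be called from a cron job that cancels a pending shutdown.

```shell
# Return success (exit 0) while a scrub is running on the given pool.
scrub_active() {
    zpool status "$1" 2>/dev/null | grep -q "scrub in progress"
}

# Example gate: only allow a shutdown when no scrub is active.
if scrub_active tank; then            # "tank" is a placeholder pool name
    echo "scrub running - postponing shutdown"
else
    echo "no scrub - shutdown may proceed"
fi
```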
-
I was trying to get RC6 power saving working again, but the current kernel (Debian GNU/Linux, with Linux 6.12.43+deb12-amd64) doesn't show RC6 at all in PowerTOP.
Can this be related to this error message which I see at bootup (and also during further operation)?
Quote
kernel: i915 0000:00:02.0: [drm] *ERROR* GT0: Failed to initialize GPU, declaring it wedged!
kernel: i915 0000:00:02.0: [drm] *ERROR* GT0: Enabling uc failed (-5)
kernel: i915 0000:00:02.0: [drm] *ERROR* GT0: GuC initialization failed -ENOENT
kernel: i915 0000:00:02.0: [drm] *ERROR* GT0: GuC firmware i915/tgl_guc_70.bin: fetch failed -ENOENT
kernel: i915 0000:00:02.0: Direct firmware load for i915/adlp_dmc_ver2_16.bin failed with error -2
kernel: i915 0000:00:02.0: Direct firmware load for i915/adlp_dmc.bin failed with error -2
RC6 used to work on a more recent (Proxmox) kernel, but it no longer works with the current version either.
DRM isn't very useful on a NAS, but these messages didn't appear at boot while RC6 was still working.
-
Thanks for your reply BernH, that's certainly helpful.
I'm certainly interested in using a simpler setup; however, I based my approach on this guide.
I've tried your approach but the Encryption interface only allows for devices to be added:
So I guess that I'll still do LUKS encryption at the disk level, but at least I can use BTRFS RAID 1 which, as you already wrote, seems to be a better choice for my setup.
Final edit: BTRFS doesn't currently offer volume-level encryption, so I'll use ZFS with native encryption for a future re-installation.
I'm a bit reluctant to try LUKS encryption on the volume from the command line, as most people seem to apply it at the disk level (which doesn't necessarily make it the best approach, though).
EDIT:
I've decided to go with my previous setup, and I'll spin up a VM to experiment with different configurations. ZFS with native encryption is also a possibility.
-
OK, it seems that I don't have any other choice, so I've decided to wipe /dev/sda and /dev/sdb and restore everything.
I'm still curious how this could be resolved if it should ever happen again.
Maybe my original setup approach was wrong?
encrypt both /dev/sda & /dev/sdb with luks
create a mirror under Multiple Device: /dev/md0 (RAID1)
create a BTRFS filesystem on /dev/md0 (single, no RAID)
Result:
Label: none uuid: 7c450192-5648-4826-bc41-4ef49d4301c7
Total devices 1 FS bytes used 851.98GiB
devid 1 size 16.37TiB used 863.02GiB path /dev/md0
Data, single: total=861.01GiB, used=851.12GiB
System, DUP: total=8.00MiB, used=112.00KiB
Metadata, DUP: total=1.00GiB, used=881.56MiB
GlobalReserve, single: total=512.00MiB, used=32.00KiB
# I/O error statistics
[/dev/md0].write_io_errs 0
[/dev/md0].read_io_errs 0
[/dev/md0].flush_io_errs 0
[/dev/md0].corruption_errs 0
[/dev/md0].generation_errs 0
# Scrub status
UUID: 7c450192-5648-4826-bc41-4ef49d4301c7
Scrub device /dev/md0 (id 1)
no stats available
/etc/crypttab
# <target name> <source device> <key file> <options>
data-crypt1 UUID=454ff01a-68b0-4638-97cb-811a4a1ae085 /etc/luks-keys/wdkey luks
data-crypt2 UUID=d156444a-a71a-4d60-8d07-da9afb5a4fdd /etc/luks-keys/wdkey luks
blkid
/dev/sda: UUID="454ff01a-68b0-4638-97cb-811a4a1ae085" LABEL="DISK1" TYPE="crypto_LUKS"
/dev/sdb: UUID="d156444a-a71a-4d60-8d07-da9afb5a4fdd" LABEL="DISK2" TYPE="crypto_LUKS"
/dev/mapper/sdb-crypt: UUID="3c4d34eb-2f77-59f8-ed34-0f2d34f889c0" UUID_SUB="a8f6e832-282e-7bee-1c50-dacfe4bee54a" LABEL="omv:0" TYPE="linux_raid_member"
/dev/mapper/sda-crypt: UUID="3c4d34eb-2f77-59f8-ed34-0f2d34f889c0" UUID_SUB="f58d59c3-0dab-78fa-c4c6-7dd00f397dce" LABEL="omv:0" TYPE="linux_raid_member"
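The three setup steps above could be reproduced from the shell roughly as follows. This is a destructive sketch wrapped in a function so nothing runs by accident; the device names and key file match the crypttab shown here, but verify everything before running it on real disks.

```shell
# DANGER: luksFormat wipes the disks. Call setup_encrypted_mirror
# manually, and only after double-checking every device name.
setup_encrypted_mirror() {
    key=/etc/luks-keys/wdkey
    for dev in sda sdb; do
        cryptsetup luksFormat "/dev/$dev" "$key"                 # encrypt each disk
        cryptsetup open "/dev/$dev" "$dev-crypt" --key-file "$key"
    done
    # Mirror the two decrypted mappings with md...
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        /dev/mapper/sda-crypt /dev/mapper/sdb-crypt
    # ...and put a single-profile btrfs on top (md provides the redundancy).
    mkfs.btrfs /dev/md0
}
```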
It's also not yet clear to me how I can remove the old BTRFS filesystem (see screenshot). It must be the Referenced property, but I don't know what to change. All my SMB shares already point to the new one.
-
After having some problems with my NAS I decided to restore a backup, but in the end I was not able to restore it successfully. So I wiped the system SSD and installed OMV from scratch.
I was able to unlock both my hard disks, which were part of a RAID 1 array. However, I couldn't create an array under Multiple Devices. It turned out that I could mount the file system on /dev/dm-0, and this seems to work perfectly. But how can I replace a hard disk and otherwise manage the array when it isn't shown under Multiple Devices?
What if I need to replace a failed disk?
I tried to get some further info using mdadm, but no md device corresponding to /dev/dm-0, on which the file system is mounted, is visible in /dev. Even worse, mdadm.conf doesn't contain any info. So now I'm not sure how to proceed from here. Any help would be much appreciated.
EDIT: although the file system details (see screenshot) clearly show that the system is running RAID-1 I saw that data-crypt1 is mapped to dm-0. So I'm fairly sure that the system isn't using a RAID 1 array. So how can I recreate it now that the Multiple Device isn't giving me any possibilities? When /dev/dm-0 wasn't mounted as a file system I also couldn't create an array.
-
Thanks for your reply BernH. That's a huge difference compared to dd, and I've already configured the backup plugin to use this format.
I hope that it will take a long time before I'll need to use it.
The RAID 1 issue (not showing in MD after recovery) is still unclear to me. I'll do some further research and make a new post if I can't figure it out.
-
I've decided to do a rebuild; it's still not clear to me how the RAID configuration works, as I expected it to be part of the (wiped) SSD. But I'm happy that it works.
Switched to the fsarchiver format for backups, and I'll consider other backup options for the system SSD, like Clonezilla.
-
Thanks for your reply.
So the first thing I tried was the correct command:
zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress
as /dev/nvme0n1p2 is the OS partition. Sadly, this resulted in an unbootable system. Could this be caused by the conv=sparse parameter?
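Regarding conv=sparse: dd then seeks over all-zero input blocks instead of writing them, so on a partition that still held old data those regions keep their previous contents, which could well explain an unbootable result. A safe file-based demonstration of the effect (all file names here are made up):

```shell
# Build a mostly-zero source image with a little real data at the front.
dd if=/dev/zero of=src.img bs=1M count=4 status=none
printf 'bootloader-bytes' | dd of=src.img conv=notrunc status=none
# Simulate a reused partition: a target already full of old (random) data.
dd if=/dev/urandom of=dst.img bs=1M count=4 status=none
# "Restore" with conv=sparse: zero blocks are skipped, so the old bytes survive.
dd if=src.img of=dst.img conv=sparse,notrunc status=none
cmp -s src.img dst.img || echo "images differ: sparse restore left stale data behind"
```

So a restore to the raw partition should either drop conv=sparse or zero/discard the target first.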
Because restoring without sparse would take 5 hours, I've started to rebuild my NAS, but now I've run into a new problem. My setup uses two hard disks, encrypted with LUKS, in a RAID 1 configuration. After installing OMV on my SSD, I successfully set up LUKS and decrypted the hard disks (and set up automatic decryption at boot as well).
So I have /dev/sda, unlocked as /dev/mapper/sda-crypt, and /dev/sdb, unlocked as /dev/mapper/sdb-crypt.
However, I can't create a RAID 1 array under Multiple Devices, as neither device shows up and there aren't any other options available.
I'm not sure which mdadm command I should use now as I don't want to restore all the data.
I've tried the following command:
mdadm --assemble /dev/md0 /dev/mapper/sda-crypt /dev/mapper/sdb-crypt
mdadm: no recogniseable superblock on /dev/mapper/sda-crypt
mdadm: /dev/mapper/sda-crypt has no superblock - assembly aborted
The output surprises me, as /dev/mapper/sda-crypt is shown as unlocked under Encryption.
And, if I remember correctly, during the original install I first encrypted both hard disks with LUKS and created the array afterwards on the two decrypted devices.
EDIT: OK, I was able to just mount the existing filesystem under /dev/dm-0! So I guess I didn't use MD to configure the array? It's confusing that I can't see the RAID 1 array there, and under Encryption only /dev/mapper/data-crypt1 is shown as referenced, so it's probably not RAID 1.
EDIT2: according to Filesystems, RAID 1 is active:
Label: none uuid: 82b09deb-bd8f-4ce1-91af-42cfd5824c14
Total devices 2 FS bytes used 8.97TiB
devid 1 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt1
devid 2 size 16.37TiB used 9.31TiB path /dev/mapper/data-crypt2
Data, RAID1: total=9.30TiB, used=8.96TiB
System, RAID1: total=8.00MiB, used=1.66MiB
Metadata, RAID1: total=12.00GiB, used=9.55GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
# I/O error statistics
[/dev/mapper/data-crypt1].write_io_errs 0
[/dev/mapper/data-crypt1].read_io_errs 0
[/dev/mapper/data-crypt1].flush_io_errs 0
[/dev/mapper/data-crypt1].corruption_errs 0
[/dev/mapper/data-crypt1].generation_errs 0
[/dev/mapper/data-crypt2].write_io_errs 0
[/dev/mapper/data-crypt2].read_io_errs 0
[/dev/mapper/data-crypt2].flush_io_errs 0
[/dev/mapper/data-crypt2].corruption_errs 0
[/dev/mapper/data-crypt2].generation_errs 0
# Scrub status
UUID: 82b09deb-bd8f-4ce1-91af-42cfd5824c14
Scrub device /dev/dm-0 (id 1)
no stats available
Scrub device /dev/dm-1 (id 2)
no stats available
mdadm --detail /dev/md-0 doesn't work, so mdadm is probably not the right tool.
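A few read-only commands can settle whether md is involved at all (a sketch; each command falls back to an echo so it is safe to run anywhere). A btrfs-native RAID1 shows two devids under one filesystem UUID, as above, while an mdadm array would appear in /proc/mdstat:

```shell
# Any assembled md arrays are listed here; empty or absent means no mdadm RAID.
cat /proc/mdstat 2>/dev/null || echo "no md status available"
# The block-device stack: LUKS mappings carrying btrfs directly, with no
# linux_raid_member in between, means btrfs itself does the mirroring.
lsblk -f 2>/dev/null || echo "lsblk unavailable"
btrfs filesystem show 2>/dev/null || echo "btrfs tools unavailable"
```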
-
I need to restore my OMV system partition from a backup, so I've already booted an Ubuntu live system and mounted the USB stick containing the .zst file.
I've already started zstdcat backupfile.dd.zst | sudo dd of=/dev/nvme0n1p2 status=progress
However, is this the right device to restore the backup to?
The .sfdisk file contains the following information:
label: gpt
label-id: 0518E42F-9245-4061-A822-F294B764FC98
device: /dev/nvme0n1
unit: sectors
first-lba: 34
last-lba: 976773134
sector-size: 512
/dev/nvme0n1p1 : start= 2048, size= 1048576, type=C12A7328-F81F-11D2-BA4B-00A0C93EC93B, uuid=CE484F9A-DBF8-408E-BE17-BDC8849DF966
/dev/nvme0n1p2 : start= 1050624, size= 973721600, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, uuid=D492A8D9-8309-4412-91C7-B4BF6A1CF6EC
/dev/nvme0n1p3 : start= 974772224, size= 1998848, type=0657FD6D-A4AB-43C4-84E5-0933C84B4F4F, uuid=F5AB79E3-BDC9-4824-BFE3-2DBAAFB7C7B7
So should I restore to /dev/nvme0n1 instead, as the file probably contains all 3 partitions (boot, main, swap)? Or is the dd file not a straight sector-by-sector dump of the SSD which I can just send back to the original device?
Also, I guess there's no way to speed things up, as the partition is very large (512 GB)? I will certainly resize it, but I guess there's no way to skip the empty space? At this rate it will take over 5 hours.
EDIT: I've changed the restore to target /dev/nvme0n1 and I've added conv=sparse to try to skip the empty parts
EDIT2: the command completed successfully but the system doesn't boot
EDIT3: trying to follow the restore instructions [How-To] Restore OMV system backup made with openmediavault-backup plugin
I was able to restore the partitions, although GParted warns that the backup GPT partition table is corrupt. However, I think the zstd -d command will try to extract the full 512 GB to the 32 GB USB stick, which obviously won't work. I'll probably try to add a USB hard disk later.
Thanks!
-
Are you using the flash plugin? Otherwise the SSD might wear out quickly.
-
According to the documentation, the GUI of the docker image can be accessed via a web browser:
Quote
The graphical user interface (GUI) of the application can be accessed through a modern web browser, requiring no installation or configuration on the client side, or via any VNC client.
-
I think it would be best to use a docker container for this: https://hub.docker.com/r/jlesage/crashplan-pro/dockerfile
More info can be found in this reddit thread.
-
I don't have any experience in using crashplan. There are installation instructions at: https://support.crashplan.com/…Install-the-CrashPlan-app and they have a free trial available.
However, I looked up the supported platforms and Debian isn't listed, so that doesn't bode well: https://support.crashplan.com/…pported-operating-systems
For my own backups I copy all data to my PC and use the Backblaze personal backup client, which works fine. I've also considered running CrashPlan directly on my OMV NAS, but went for this solution instead.
-
CrashPlan Pro offers unlimited storage for $88 per year and, contrary to Backblaze, supports Linux on its non-enterprise plan.
-
If you're looking for an alternative to Google drive you could consider Backblaze. It supports rclone according to: https://www.backblaze.com/docs…-rclone-with-backblaze-b2
I don't know if you can mount it but it should be fine for archiving purposes.
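A minimal sketch of how that could look, wrapped in functions so nothing runs by accident; the remote name b2, the bucket my-archive, the source path, and the mount point are all placeholders you'd set up first with rclone config:

```shell
# One-way archive copy of a local folder to a Backblaze B2 bucket.
archive_to_b2() {
    rclone sync /srv/data b2:my-archive --progress
}

# Mounting is also possible through rclone's FUSE support.
mount_b2() {
    rclone mount b2:my-archive /mnt/b2 --read-only
}
```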