Linear RAID won't work since update 7.7.0-1 (Sandworm)

  • Today I updated my OMV to version 7.7.0-1 (Sandworm), and now my linear RAID won't work. I have tried a lot of commands and nothing works. Does anyone know what to do?


    root@SR388:~# mdadm --create --assume-clean --level=linear --raid-devices=2 /dev/md0 /dev/sdb /dev/sdc

    mdadm: /dev/sdb appears to be part of a raid array:

    level=linear devices=2 ctime=Fri Feb 21 18:58:20 2025

    mdadm: /dev/sdc appears to be part of a raid array:

    level=linear devices=2 ctime=Fri Feb 21 18:58:20 2025

    Continue creating array? y

    mdadm: Defaulting to version 1.2 metadata

    mdadm: RUN_ARRAY failed: Invalid argument


    I have tried everything, and when I examine my disks this is what I get:


    root@SR388:~# mdadm --examine /dev/sdb

    /dev/sdb:

    Magic : a92b4efc

    Version : 1.2

    Feature Map : 0x0

    Array UUID : ea38d636:f6e49b64:a4f90d44:3c669e61

    Name : SR388:0 (local to host SR388)

    Creation Time : Fri Feb 21 19:03:10 2025

    Raid Level : linear

    Raid Devices : 2


    Avail Dev Size : 3906764976 sectors (1862.89 GiB 2000.26 GB)

    Used Dev Size : 0 sectors

    Data Offset : 264192 sectors

    Super Offset : 8 sectors

    Unused Space : before=264112 sectors, after=0 sectors

    State : clean

    Device UUID : e779d369:1d3b9baf:f10133f8:5318106d


    Update Time : Fri Feb 21 19:03:10 2025

    Bad Block Log : 512 entries available at offset 8 sectors

    Checksum : 59bb2eb4 - correct

    Events : 0


    Rounding : 0K


    Device Role : Active device 0

    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

    root@SR388:~# mdadm --examine /dev/sdc

    /dev/sdc:

    Magic : a92b4efc

    Version : 1.2

    Feature Map : 0x0

    Array UUID : ea38d636:f6e49b64:a4f90d44:3c669e61

    Name : SR388:0 (local to host SR388)

    Creation Time : Fri Feb 21 19:03:10 2025

    Raid Level : linear

    Raid Devices : 2


    Avail Dev Size : 3906764976 sectors (1862.89 GiB 2000.26 GB)

    Used Dev Size : 0 sectors

    Data Offset : 264192 sectors

    Super Offset : 8 sectors

    Unused Space : before=264112 sectors, after=0 sectors

    State : clean

    Device UUID : 2a8a922a:8531109c:147bca0d:eb8d3fff


    Update Time : Fri Feb 21 19:03:10 2025

    Bad Block Log : 512 entries available at offset 8 sectors

    Checksum : aeb6241a - correct

    Events : 0


    Rounding : 0K


    Device Role : Active device 1

    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)


    Any help, please??

    I cannot lose all my data.



    root@SR388:~# cat /proc/mdstat

    Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    unused devices: <none>

  • crashtest

    Approved the thread.
  • A linear array has no redundancy, so I hope you have backups.


    Running "mdadm --create" on an existing array should always be a last resort; it can easily cause data loss.


    Luckily, even though you ignored the "Continue creating array?" warning, the command failed.
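    For reference, the non-destructive first step in this situation is to try assembling the array from the existing superblocks rather than re-creating it. A sketch, using the device names from this thread (run as root); on the broken kernel it will still fail with the same personality error, but it does not touch the metadata:

    ```shell
    # Non-destructive: read the existing v1.2 superblocks on the members
    # and assemble the array they describe (nothing is rewritten).
    mdadm --assemble /dev/md0 /dev/sdb /dev/sdc

    # Or let mdadm scan all devices for matching Array UUIDs:
    mdadm --assemble --scan --verbose

    # Verify the result:
    cat /proc/mdstat
    ```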


    The problem is the backports kernel Linux 6.12.9+bpo-amd64 that was part of the update. If you check your logs you should see an error message like this:


    kernel: md: personality for level -1 is not loaded!
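    A quick way to search for that message (a sketch; the first command assumes a systemd journal):

    ```shell
    # Search kernel messages for the missing md personality error
    journalctl -k | grep -i "personality for level"

    # Or, on systems without a persistent journal:
    dmesg | grep -i "personality for level"
    ```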


    The workaround is to revert to the previous stable kernel, 6.1.0-31-amd64, which should still be on your system. If your system has a screen and keyboard attached, you can simply boot the previous kernel from the GRUB menu.
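    From the command line, a one-time boot of the older kernel might look roughly like this (a sketch: the menu entry string varies per system, so copy the exact name GRUB prints rather than the example shown here):

    ```shell
    # List the boot entries GRUB knows about:
    grep -E "menuentry '" /boot/grub/grub.cfg | cut -d"'" -f2

    # Queue the older kernel for the next boot only. The entry name below
    # is an example -- use the exact string printed above.
    # (grub-reboot requires GRUB_DEFAULT=saved in /etc/default/grub.)
    grub-reboot "Advanced options for Debian GNU/Linux>Debian GNU/Linux, with Linux 6.1.0-31-amd64"
    reboot
    ```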


    But it is more convenient to first install the kernel plugin (openmediavault-kernel 7.1.4), set the previous kernel as the default, and reboot.


    Your RAID array should then be recognised again.

  • Thank you so much, you are my savior!!


    I just installed the kernel plugin (openmediavault-kernel 7.1.4), chose "kernel 6.1.0-31-amd64" directly in GRUB, and my disks work perfectly.


    Thank you so much

  • I've updated my OMV installation as well. I noticed a KVM package being held back, so I enabled backports in OMV-extras, as per the instructions here, to be able to finish the updates. Unfortunately, that caused my zfs file system to vanish. The server boots fine from its separate SSD, but everything on the zfs pool on the two data-storage SSDs is inaccessible. The drives themselves are fine according to SMART.


    I fear making a wrong step and breaking something permanently, if that hasn't already happened. Is my problem related to the above, or shall I open a new thread?


    Attempting to access the zfs pool yields:

    I'd be very grateful for assistance!

  • I am confused about the latest kernel. I thought the pve version was the correct one; at least that's the one which was previously used in combination with Proxmox.

  • That did the trick! Thank you very much. Even though I have backups, I was mightily scared!

  • You could argue that once you switch from Debian to pve kernels you should delete all Debian kernels from OMV. The last OMV update included Debian kernel 6.12.9 from backports, which became the default boot kernel. Unfortunately, you did not notice that the associated zfs modules did not build against this kernel, which gave the appearance of losing your zfs pool. Of course, the whole point of using the pve kernel is to avoid the Debian DKMS system, which is only going to work if the zfs-dkms package (and its dependencies) on your system is also from Debian backports.
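    Whether the zfs module actually built for the running kernel is easy to check (a sketch; run as root):

    ```shell
    # Which kernel is currently running?
    uname -r

    # Did DKMS build zfs against it? Look for a line like
    # "zfs/2.x.y, <kernel>, x86_64: installed" matching the kernel above.
    dkms status

    # If the module is present, the pool should be visible again:
    modprobe zfs
    zpool import          # with no arguments, lists importable pools
    ```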


    Personally, I advise OMV ZFS users to install the kernel plugin before the zfs plugin: install a pve kernel, reboot, delete any Debian kernel, and then install the zfs plugin.
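    Under that recommended order, the cleanup might look roughly like this (a sketch: the package names are examples, so check what dpkg actually reports before purging anything, and only do so after successfully booting a pve kernel):

    ```shell
    # See which kernel packages are installed. Debian kernels are named
    # linux-image-*, Proxmox ones proxmox-kernel-* / pve-kernel-*.
    dpkg -l | grep -E 'linux-image|pve-kernel|proxmox-kernel'

    # After booting the pve kernel, remove the Debian kernels, e.g.:
    apt purge 'linux-image-6.1.*' 'linux-image-6.12.*'
    update-grub
    ```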

  • ...


    Personally, I advise OMV ZFS users to install the kernel plugin before the zfs plugin: install a pve kernel, reboot, delete any Debian kernel, and then install the zfs plugin.

    I beg your pardon for answering months later! Thanks for your explanation concerning the pve kernel and zfs! Since I obviously followed a different order during installation, I wonder whether I can still "heal" this. Would it suffice to just uninstall all non-pve kernels?
