Hi there, happy new year and thanks for OMV8 and the great support!
I've had the same issue; adding the symlink part to the udev rules file and a reboot solved the problem on my system.
MD Email every morning
-
- OMV 8.x
- resolved
- Upgrade 7.x -> 8.x
- CAH1982
-
-
I'm sorry, I read more than one of these shell instructions and also Volker's post (who should be votdev here?).
So, as an inexperienced Linux user, while I can read the solution provided in the blog post, I don't know which steps I should take to analyze my current status, to correctly edit the configuration files (or filesystem), and then resolve this issue.
Code
root@myhost:~# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5

Please tell me what I should post on the forum.
-
Hello,
not to revive this already resolved thread unnecessarily, but since none of the information provided here resolved the issue for me and StreetBall still has some questions, I figured I'd post my solution and maybe help somebody along the way.
I created a VirtualBox VM and freshly installed openmediavault on it so I wouldn't mess up my actual NAS in case something went wrong. Even on a freshly installed openmediavault the issue was present (this might be of interest to votdev).
Code
root@vnas:~# mdadm --monitor --scan --oneshot
mdadm: DeviceDisappeared event detected on md device /dev/md/md0
mdadm: NewArray event detected on md device /dev/md0

And let me just clarify what I mean by "issue": the required symlinks in /dev/md are not being created. The fact that they're missing is just a consequence, a symptom. The routine to create them is invoked but decides not to create them.
With that said, there's no need for more udev rules to create symlinks. Debian ships everything that's required. See
Code: /usr/lib/udev/rules.d/63-md-raid-arrays.rules
IMPORT{program}="/usr/sbin/mdadm --detail --no-devices --export $devnode"
#...
#...
ENV{DEVTYPE}=="disk", ENV{MD_DEVNAME}=="?*", SYMLINK+="md/$env{MD_DEVNAME}"

This means that, if everything is set up correctly, we should see symlinks in /dev/md which point to block devices in /dev. Mind the MD_DEVNAME property, which should come from the IMPORT instruction.
Running the IMPORT instruction manually results in
Code
root@vnas:~# mdadm --detail --no-devices --export /dev/md0
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=1.2
MD_UUID=f0a98703:e4367147:0cf6f3f1:7766647d
MD_NAME=vnas:0

Mind the absence of the MD_DEVNAME property. To me this made clear that it's an mdadm issue, not an openmediavault issue: the new mdadm version in Debian 13 does not print the required property. I tried to figure out why and read the mdadm code, but I'm not a C developer by trade, so I gave up and took a different approach. How is mdadm configured?
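To make the failure visible without root or a real array, here's a minimal sketch that checks a captured sample of the --export output (the sample values are copied from above) for the property the udev rule needs:

```shell
# Hypothetical sample: the output of
#   mdadm --detail --no-devices --export /dev/md0
# captured into a variable so the check runs without root or a real array.
export_out='MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=1.2
MD_UUID=f0a98703:e4367147:0cf6f3f1:7766647d
MD_NAME=vnas:0'

# The udev rule only creates /dev/md/<name> when MD_DEVNAME is present,
# so grep for it and report what udev would do.
if printf '%s\n' "$export_out" | grep -q '^MD_DEVNAME='; then
  echo "MD_DEVNAME present: udev will create the /dev/md symlink"
else
  echo "MD_DEVNAME missing: udev will not create the /dev/md symlink"
fi
```

On a healthy system the real command's output would include an MD_DEVNAME=... line and the first branch would fire.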
Code: /etc/mdadm/mdadm.conf
# <omitted for brevity>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=f0a98703:e4367147:0cf6f3f1:7766647d

A quick search in the mdadm.conf manpage revealed that the /dev/md0 part of the ARRAY definition is considered the name. Well, that sort of seems redundant, no? This effectively names the device after its block device path. That raised a question: what would mdadm do if I replaced the name with the path of the symlink (as suggested in the manpage)?
Code
root@vnas:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 0 Jan  9 17:43 /dev/md0
root@vnas:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed Jan  7 19:35:11 2026
# <omitted for brevity>
           Name : vnas:0  (local to host vnas)
# <omitted for brevity>
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
root@vnas:~# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root@vnas:~# mdadm --assemble /dev/md/0 /dev/sd[bc]
mdadm: /dev/md/0 has been started with 2 drives.
root@vnas:~# ls -l /dev/md*
brw-rw---- 1 root disk 9, 0 Jan  9 17:46 /dev/md0

/dev/md:
total 0
lrwxrwxrwx 1 root root 6 Jan  9 17:46 0 -> ../md0
root@vnas:~# mdadm --detail --no-devices --export /dev/md0
MD_LEVEL=raid1
MD_DEVICES=2
MD_METADATA=1.2
MD_UUID=f0a98703:e4367147:0cf6f3f1:7766647d
MD_DEVNAME=0
MD_NAME=vnas:0

Here I reassembled the array with its symlink path instead of its block device path and, holy cow, this guess resolved the issue. Well, not exactly: if you reboot, the symlink will be gone again. We can now put Volker's info from his blog post to good use and regenerate the file /etc/mdadm/mdadm.conf.
Code
root@vnas:~# omv-salt deploy run mdadm
# <omitted for brevity>
root@vnas:~# cat /etc/mdadm/mdadm.conf
# <omitted for brevity>
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=f0a98703:e4367147:0cf6f3f1:7766647d

Notice that the array name has been updated. However, if you reboot, the symlink will still be gone again. The final touch is to update the initramfs, which uses the information from mdadm.conf.
Code
root@vnas:~# update-initramfs -u
update-initramfs: Generating /boot/initrd.img-6.17.13+deb13-amd64

Finally, if you reboot, you should be good.
Kind regards
-
-
Well, dunno's hint helped.
I still had an issue during reboot (the array disappeared, but that "happened before" and might be a hardware issue due to the mainboard or cables). After the fix,
this is the output of mdadm --monitor --scan --oneshot
I also edited /etc/udev/rules.d/99-openmediavault-md-raid.rules
which now looks like this:

Code
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192", SYMLINK+="md/md0"
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  IMPORT{program}="import_env /etc/default/openmediavault", \
  ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}", SYMLINK+="md/md0"

But adding SYMLINK+="md/md0", and the "workaround" of adding the directory+symlink in /dev/, was not surviving a reboot.
-
StreetBall, my post may have been a bit unclear here and there. After reassembling the arrays with their symlink paths instead of their block device paths, do not reboot. I.e., run
Bash
# Reassemble your arrays with the correct paths/names. Do not reboot. Then run
omv-salt deploy run mdadm
# Make sure that the ARRAY definitions in your mdadm.conf are correct now.
cat /etc/mdadm/mdadm.conf
# If the ARRAY definitions are correct, continue with
update-initramfs -u
reboot

Under the hood the command omv-salt deploy run mdadm invokes mdadm --detail --scan >> /etc/mdadm/mdadm.conf, which regenerates the ARRAY definitions, but it only picks up the correct paths/names if the currently running arrays have been assembled with them.
You can find the correct names by invoking mdadm --detail --scan -vv. The names come in the form of <homehost>:<name>. You just need the <name> part. Let's say you have one array and it's named myhost:mydata or myhost:0, then you reassemble your array with /dev/md/mydata or /dev/md/0 (technically you can use whatever name you like, it doesn't have to match the array metadata, but that's not clean and comprehensible). Also, if your system decides to use the block device name /dev/md127 instead of /dev/md0, that's nothing to worry about. Everything will work just fine.
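As a tiny illustration (the array name here is a made-up example), the <name> part can be split off in shell with a parameter expansion:

```shell
# Hypothetical array name in the <homehost>:<name> form reported by
# `mdadm --detail --scan -vv`.
array_name='myhost:mydata'

# Strip everything up to the last ':' to get the <name> part, then build
# the symlink path to use with `mdadm --assemble`.
name="${array_name##*:}"
link="/dev/md/${name}"
echo "$link"   # -> /dev/md/mydata
```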
And don't edit /etc/udev/rules.d/99-openmediavault-md-raid.rules. There's really no need for it. No new rules and no modification of existing rules are required.
Fingers crossed and kind regards
-
Thanks for the details.
Before running omv-salt deploy run mdadm, I made sure that the RAID definitions on my system were correct and consistent.
This is the output of mdadm --detail --scan -vv
Code
/dev/md/wdred:
           Version : 1.2
     Creation Time : Thu Jun 10 12:56:47 2021
        Raid Level : raid1
        Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Jan 11 07:39:22 2026
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : myhostname:wdred  (local to host myhostname)
              UUID : 67117265:66ed20fd:913b6886:90dfdfc5

and this is the current output of mdadm --monitor --scan --oneshot
No unwanted messages in the morning about array detection; the system is working "as before" the update to OMV 8. Sometimes it's still not fine: if I reboot the computer the RAID disappears, but that's probably just the time the mainboard waits to populate the SATA channels (my current guess; after powering off and on, the RAID reappears working and untouched, not needing an fsck, rebuild or anything like that).
Your post helped me get rid of the tedious message about an array that wasn't actually missing.
-
-
Hi everybody,
thank you for your in-depth input on how to get rid of the message.
I'm pretty new to Linux and just started with OMV 8. I also set up a RAID in my OMV and am now receiving these warnings.
My question is: will these warnings prevent the RAID from functioning as intended in case of an error like a failing disk?
I won't start trying to fix this if there's any chance that the problem will be fixed by a future update, or if the RAID will work as intended anyway.

Best regards,
gto
-
-
votdev Are there any plans to fix this issue?
Sure, but what is the correct fix? This udev rule looks like a workaround, not a fix for the original problem.
-
-
Hi @ all,
thanks for the recipe to fix this problem. Unfortunately when I run "
"
I get the error: "mdadm: Cannot get exclusive access to /dev/md0: Perhaps a running process, mounted filesystem or active volume group?". Can someone give me a hint? I tried to kill the corresponding processes, but that didn't help.
Best regards!
Frederik
-
Hello, Frederik,
as the message "Perhaps a running process, mounted filesystem or active volume group?" says, you probably just need to unmount all filesystems that are on your array and also stop all (if any) volume groups on it as well. Here's an example from my setup:
With lsblk (list block devices) you can see a hierarchy from the physical disks (sda and sdb) through the array md0 and the volume groups vg0-share and vg0-timemachine down to the mount points of the filesystems (/export/share and whatnot).
Code
root@nas:~# lsblk
sda                    8:0    0   1.8T  0 disk
└─md0                  9:0    0   1.8T  0 raid1
  ├─vg0-share        253:0    0   1.3T  0 lvm   /export/share
  │                                             /srv/dev-disk-by-uuid-0a26880d-83b3-4696-bf23-e4cbd2cb08ed
  └─vg0-timemachine  253:1    0 558.9G  0 lvm   /export/timemachine
                                                /srv/dev-disk-by-uuid-06036ad8-5698-43a7-96d4-516bb9d4a1c9
sdb                    8:16   0   1.8T  0 disk
└─md0                  9:0    0   1.8T  0 raid1
  ├─vg0-share        253:0    0   1.3T  0 lvm   /export/share
  │                                             /srv/dev-disk-by-uuid-0a26880d-83b3-4696-bf23-e4cbd2cb08ed
  └─vg0-timemachine  253:1    0 558.9G  0 lvm   /export/timemachine
                                                /srv/dev-disk-by-uuid-06036ad8-5698-43a7-96d4-516bb9d4a1c9

So, if I wanted to stop my array md0, I'd simply have to run
Code
root@nas:~# umount /export/share
root@nas:~# umount /export/timemachine
root@nas:~# umount /srv/dev-disk-by-uuid-0a26880d-83b3-4696-bf23-e4cbd2cb08ed
root@nas:~# umount /srv/dev-disk-by-uuid-06036ad8-5698-43a7-96d4-516bb9d4a1c9
root@nas:~# vgchange -an vg0
  0 logical volume(s) in volume group "vg0" now active
root@nas:~# mdadm --stop md0
mdadm: stopped md0

And now you can see that my disks aren't used by anything anymore:
Code
root@nas:~# lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda    8:0    0  1.8T  0 disk
sdb    8:16   0  1.8T  0 disk

Hope this helps.
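The teardown order matters: mounts that sit on top of others must be unmounted first. A minimal sketch of that ordering, with the mountpoints hard-coded from the example above and the umount calls only echoed (a dry run, not the actual procedure):

```shell
# Hypothetical list of mountpoints on top of the array, as lsblk would
# report them; hard-coded so the sketch runs without a real array.
mounts='/export/share
/srv/dev-disk-by-uuid-0a26880d-83b3-4696-bf23-e4cbd2cb08ed
/export/timemachine'

# Sort by path length, longest first: a simple heuristic that unmounts
# nested mountpoints before their parents.
ordered=$(printf '%s\n' "$mounts" | awk '{ print length, $0 }' | sort -rn | cut -d' ' -f2-)

printf '%s\n' "$ordered" | while read -r m; do
  echo "umount $m"   # dry run; a real teardown would call: umount "$m"
done
```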
-
Hi all,
Just to say that the correct (or at least the most elegant...) way to resolve this issue is to use dunno's solution above.
It just works. The result is clean and you don't need to mess around with udev rules.
Code
root@labbackups:~# ls /dev/md*
/dev/md0

/dev/md:
0
root@labbackups:~# mdadm --monitor --scan --oneshot
root@labbackups:~#

Thank you dunno
-
-
there's a workaround for this.
Didn't work.
-
Hi, but isn't that only a temporary solution, which will not survive the reboot?
-
Hi Dunno,
thanks a lot for your help! Unfortunately I only get this output from lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 5,5T 0 disk
└─md0 9:0 0 5,5T 0 raid1 /srv/dev-disk-by-uuid-6ca5189d-cb82-4a7e-bd8c-cfe09e026c36
sdb 8:16 0 5,5T 0 disk
└─md0 9:0 0 5,5T 0 raid1 /srv/dev-disk-by-uuid-6ca5189d-cb82-4a7e-bd8c-cfe09e026c36
nvme0n1 259:0 0 931,5G 0 disk
├─nvme0n1p1
│ 259:1 0 512M 0 part /boot/efi
├─nvme0n1p2
│ 259:2 0 930,1G 0 part /var/lib/containers/storage/overlay
│ /
└─nvme0n1p3
259:3 0 977M 0 part [SWAP]
When I unmount /srv/dev-disk-by-uuid-6ca5189d-cb82-4a7e-bd8c-cfe09e026c36 I still get the same error message afterwards.
Best regards!
Frederik
-
-
Didn't work.
That's because your disk array doesn't have "stripe_cache_size", so the rule doesn't apply. Use this instead. Works fine.
Just found out that after a reboot the settings are lost.
Adding the below seems to fix it.
So the full file looks like this:

Code
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", \
  SYMLINK+="md/%k"
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  IMPORT{program}="import_env /etc/default/openmediavault", \
  ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"
-
Hello, Frederik,
are you certain that the unmount succeeded? The unmount only works if the filesystems are not busy, even if you're root.
You could also try to boot into recovery mode and launch a shell. No filesystem (other than root as read-only) should be mounted then.
And if that doesn't work, you could boot your machine with Debian or Ubuntu or whatever from a USB drive and chroot into your actual system. That way you could operate on your actual system without booting it.
There are plenty of good tutorials out there for doing both, which is why I'm not going into details here.
Fingers crossed.