RAID5 Array Built in OMV 7.4.17 on a Beelink ME Mini Corrupted Very Quickly After I Built It

  • The RAID5 array that I built in OMV v7.4.17 on my Beelink ME Mini was my last attempt to get this setup working.


    Here are further details about my hardware and software setup, what I observed, and the many ways I tried to get this setup working:


    I purchased a Beelink ME Mini on 11 July. It arrived on 26 July 2025.


    I also purchased five Crucial P310 4TB PCIe Gen4 2280 NVMe M.2 SSDs and installed them in the ME Mini. The ME Mini that I purchased came bundled with one Crucial P3 Plus 2TB NVMe M.2 2280 SSD.



    I installed the TrueNAS SCALE 25.04 OS on that 2TB SSD and booted from it.



    Here are the trouble symptoms that I observed:



    1. No matter how many times I tried, I could never get 3, 4, or 5 of those brand-new Crucial 4TB NVMe SSDs to live in a RAIDz1 array for very long without the array being reported as corrupt by the TrueNAS OS. It would report over 100 errors on 2-3 of the 4TB Crucial NVMe SSDs. I completely wiped and re-installed the TrueNAS SCALE 25.04 OS and rebuilt the RAIDz1 array numerous times, and observed the same rapid deterioration of the 5-disk RAIDz1 arrays each time. Sometimes the corruption would appear while the array was first being constructed; other times it wouldn't start until I had copied 300+ GB of my files onto it and then started to play my media or open a file. But the array corruption happened every time.



    2. Further, every time, one to three of the five SSDs would get marked as failed, and marked as offline or unreachable, by the TrueNAS OS. I invariably found that, at that point, I could no longer get data back with smartctl or sensors from any SSD that TrueNAS had marked as failed.



    3. Every time, I was able to take the Crucial 4TB NVMe SSDs that the TrueNAS OS had marked as failed (and that had become unreachable via smartctl) out of the ME Mini, put them in a USB-C 3.x external NVMe drive enclosure on another computer, and partition and lay a filesystem on the reportedly failed SSD. I found that I could write the same 300+ GB of files onto it, and read and play files from that drive, for over 24 hours before I stopped testing, with no problems at all.


    4. I found that different SSDs, in different ME Mini drive bays, were getting marked as failed as I re-tested building new RAIDz1 arrays. But every 3-, 4-, or 5-drive 4TB Crucial NVMe SSD RAIDz1 array went corrupt, and TrueNAS would mark, usually, at least two SSDs as failed.


    5. So I purchased five Kingston SKC3000D/4096G M.2 2280 SSDs and tried them in the ME Mini. I observed the same results as reported above with the Crucial 4TB SSDs, but this time with the Kingston 4TB NVMe SSDs.


    6. Even though it was not what I needed, and just for test purposes, I built first one and then two 2-SSD SKC3000D RAID1 arrays. Then I tested writing and reading data to and from both arrays, also over a Samba share. I observed no trouble with either 2-SSD RAID1 array in the ME Mini over more than 24 hours of testing.


    7. Then I tried replacing the TrueNAS OS on the ME Mini, first with Fedora Server 42 and then with OpenMediaVault 7.x; both were the latest stable downloads available when I downloaded them. I tried building a RAID5 array on the ME Mini with 5 of the Kingston 4TB SSDs and observed corruption of the array before, or very shortly after, I ran the command to build that five-SSD RAID5 array. The same pattern of reported array corruption, tons of drive errors, and 1-3 Kingston 4TB SSDs being marked as failed happened again very quickly with both of those additional OSes on the ME Mini.
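    For anyone seeing the same symptoms, it can help to capture the drives' own error counters before the OS marks them as failed. A minimal sketch, assuming smartmontools and nvme-cli are installed and the commands run as root (device names will vary):

```shell
# Dump SMART health and the NVMe controller error log for each NVMe drive.
found=0
for dev in /dev/nvme[0-9]n1; do
  [ -e "$dev" ] || continue
  found=$((found + 1))
  echo "=== $dev ==="
  smartctl -H -A "$dev"        # overall health plus SMART attributes
  nvme error-log "${dev%n1}"   # controller-side error-log entries
done
echo "checked $found NVMe device(s)"
```

    If a drive that TrueNAS flagged still answers these commands in an external enclosure but not in the ME Mini, that points at the host/PCIe side rather than the drive itself.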


    I really like the ME Mini form factor and design, with the metal frame and thermal tape between it and all six internal NVMe drives. That said, I purchased the ME Mini to use as a NAS with the drive-failure resiliency of at least RAID5 or RAIDz1. That is why, so far, this first-time ME Mini ownership experience is not working out for me at all.


    Is there some firmware patch or BIOS setting that I missed to make this setup, as reported above, work?


    Have you ever heard of such a set of symptoms?


    Is there some OMV setting that I should, or could, have tried?


    TIA for your ideas. :S

  • crashtest

    • Official Post

    The only thing I can think of is trying a more recent Proxmox kernel. If it's a driver issue, a more recent kernel might solve it. To install a Proxmox kernel, use the openmediavault-kernel plugin. Install omv-extras first.

  • I also ordered a Beelink ME Mini; it is scheduled to arrive in about 1.5 weeks. I intend to run RAID5 on it with OMV and 4x 2TB Lexar NM790 drives. Did you get in touch with Beelink support?

    It seems to be a bug in the TrueNAS kernel affecting the ASM2824 controller.


    Good to know I might run into problems; once mine has arrived I will definitely test it before filling it with actual data.

    Beelink ME mini OMV 8, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x

    OMV 8 in Hyper-V for testing, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x


  • This is worrying, as I have just purchased and received my ME Mini to replace my CM3588 NAS that was running OMV 7 with a Nextcloud server. That setup killed my 4x WD SN770 2TB NVMe drives, with all 4 devices being put into read-only mode (configured for RAID5). I got them replaced via the SanDisk warranty, then looked for a different device and settled on the ME Mini. I hope this is not a recurring theme with NVMe NAS systems?

  • Mine will arrive tomorrow. I intend to run a newer kernel on it than the original Debian one. I have tried this in my Hyper-V test OMV and that seemed to work OK.

    If that works fine with the ME and my RAID set stays OK, I will make a new topic on what I did and how.


    • Official Post

    I don't have one, but a 45W power supply does not seem big enough to me for an N150 (6-10W), two 2.5G network adapters (1-2W each), six NVMe sticks (10W each), and a fan (5W?), since those components could pull almost 80W. Putting the six NVMe sticks in RAID5 would make it even worse, since all six sticks will be accessed on every read or write.
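    As a rough tally of that worst case (using the wattage estimates above, which are guesses rather than measured numbers):

```shell
# Worst-case power budget for the ME Mini, per the estimates quoted above.
cpu_w=10             # N150 package, upper estimate
nic_w=$((2 * 2))     # two 2.5G adapters at ~2 W each
nvme_w=$((6 * 10))   # six NVMe sticks at ~10 W peak each
fan_w=5
total_w=$((cpu_w + nic_w + nvme_w + fan_w))
psu_w=45
echo "worst case: ${total_w} W vs ${psu_w} W PSU"
```

    Peak draw like this is transient, but sustained writes across all six sticks in RAID5 could plausibly brown-out the drives, which would look very much like random drive failures.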

    omv 8.0.10-2 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.5 | compose 8.1.3 | cterm 8.0 | borgbackup 8.1.2 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I am going to try with 5 NVMe drives in it; it should arrive tomorrow, and hopefully this weekend there's time to tinker with it. The NM790 NVMe drives use way less power:

    Lexar NM790 2 TB Review (www.techpowerup.com): "The Lexar NM790 2 TB offers fantastic performance at outstanding pricing. The 2 TB model sells for just $110, which makes it the most affordable high-end SSD…"


    That's mainly why I chose them. But you are right, the power unit feels a bit on the light side. I also intend to remove the wireless board to save some power, as I won't use that anyway.


    • Official Post

    The Lexar sticks are DRAM-less. This means their performance will be even worse in RAID5, since they have to use system RAM for cache. It will be interesting to see your power consumption and performance numbers.


  • The Lexar sticks are DRAM-less. This means their performance will be even worse in RAID5, since they have to use system RAM for cache. It will be interesting to see your power consumption and performance numbers.

    I'll see what happens when it is here. Mind you, all the NVMe slots have 1 PCIe lane except slot 6, which has 2, so the NVMe drives will never run at max performance anyway; but this setup will hopefully use less power and have more performance than my old Atom-based QNAP.
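    The negotiated link for each stick can be confirmed from sysfs (a sketch assuming the standard Linux NVMe/PCI sysfs layout; run on the ME Mini itself):

```shell
# Print the negotiated PCIe link speed and width for every NVMe controller.
checked=0
for ctrl in /sys/class/nvme/nvme[0-9]; do
  [ -e "$ctrl" ] || continue
  checked=$((checked + 1))
  speed=$(cat "$ctrl/device/current_link_speed")
  width=$(cat "$ctrl/device/current_link_width")
  echo "$(basename "$ctrl"): $speed, x$width"
done
echo "inspected $checked controller(s)"
```

    On the ME Mini you would expect x1 on most slots and x2 on slot 6; a link that flaps between speeds would also show up here.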


    • Official Post

    but this setup will hopefully use less power and have more performance than my old Atom-based QNAP.

    It should, but RAID will always use more power than individual filesystems. I would pool them with mergerfs myself (and I do).


  • Thanks for the suggestion, I'll have a look at mergerfs.


  • OK, it seems it is as I expected: there's a bug in the default kernel that ships with Debian (6.12.38).


    See this: https://lkml.iu.edu/hypermail/linux/kernel/2201.0/00333.html



    What I did is the following:


    sudo curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc


    vi /etc/apt/sources.list.d/zabbly-kernel-stable.sources


    Put this in the zabbly-kernel-stable.sources file:

    Enabled: yes
    Types: deb
    URIs: https://pkgs.zabbly.com/kernel/stable
    Suites: bookworm
    Components: main
    Architectures: amd64
    Signed-By: /etc/apt/keyrings/zabbly.asc



    apt update


    apt install linux-zabbly


    Inspiration came from this website: https://fostips.com/install-li…15-in-debian-12-bookworm/


    Be sure to have the kernel plugin installed so you can check that the Zabbly kernel gets selected as the bootable kernel. My Beelink ME Mini has been rock solid for more than 48 hours, and I have written 2.5 TB to my RAID5 set. (In the end I opted for RAID5 as I am familiar with md RAID.)
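    A quick way to confirm which kernel actually booted after the reboot (the release string of the Zabbly builds contains "zabbly"):

```shell
# Report whether the currently booted kernel is a Zabbly build.
running=$(uname -r)
case "$running" in
  *zabbly*) echo "running a Zabbly kernel: $running" ;;
  *)        echo "not a Zabbly kernel: $running" ;;
esac
```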


    My Beelink ME Mini runs kernel 6.15.10-zabbly (I started off with 6.15.9 and just updated to 6.15.10), and updates come as regular updates.


    Temps of the NVMe drives stayed below 56 °C during reads and writes, even though it was 31 °C in the room. Overall I am happy with the little device, and it is so much more responsive compared to my QNAP.

    Beelink ME mini OMV 8, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x

    OMV 8 in Hyper-V for testing, OMV-Backup, OMV-Writecache, OMV-Kernel, OVM-LVM2, OMV-MD, OMV-Nut, OMV-Tftp, OMV-TGT, Zabbly kernel 6.17.x


  • 1. No matter how many times I tried, I could never get 3, 4, or 5 of those brand-new Crucial 4TB NVMe SSDs to live in a RAIDz1 array

    This is a very common problem. I don't have a real solution, but I would point out:


    1. The vast majority of complaints I see are about failures with Crucial NVMe drives.


    2. The voltage reported for the 3.3V rail in the BIOS is excessively high, like 4.2V.


    3. People claim that while they see the problem with TrueNAS SCALE, they don't with TrueNAS CORE. The latter uses FreeBSD, not Linux.


    4. The Beelink ME Mini only has 12 GB of RAM, which is insufficient to run ZFS, and that includes RAIDz1. Btrfs, which is easier on the RAM, might be a better choice.

    • Official Post

    The Beelink ME Mini only has 12 GB of RAM which is insufficient to run ZFS

    I have no idea about the rest, but this statement is incorrect. ZFS can run on 2GB RAM systems without problems, just like the other file systems.

  • I have no idea about the rest, but this statement is incorrect. ZFS can run on 2GB RAM systems without problems, just like the other file systems.

    It will run on 2GB, but not well. A rule of thumb is 1GB of RAM for every 1TB of disk. So if you don't have that much NVMe storage, then it would indeed be possible to use ZFS on the ME Mini. There are also some ZFS tweaks to make it use less RAM, but those are above my pay grade.
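    For reference, the most commonly cited tweak of that kind on OpenZFS for Linux is capping the ARC via a module parameter; a sketch, with the 2 GiB value chosen purely as an example:

```
# /etc/modprobe.d/zfs.conf -- cap the ZFS ARC (value is in bytes)
options zfs zfs_arc_max=2147483648
```

    The same knob can also be changed at runtime by writing a byte value to /sys/module/zfs/parameters/zfs_arc_max.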

    • Official Post

    A rule of thumb is 1GB RAM for every 1TB of disk.

    That general rule applies only if dedup is turned on. By default, in OMV and when using the ZFS plugin, dedup is off.

    Look at the answers in -> this thread, particularly the second answer, where the admin was running a 30TB pool on 16GB of RAM for years.

    • Official Post

    It will run on 2GB, but not well.

    What makes you think that? I suggest you check the OpenZFS documentation for the minimum requirements for this file system.


    A rule of thumb is 1GB RAM for every 1TB of disk.

    This is not correct.

  • It will run on 2GB, but not well. A rule of thumb is 1GB of RAM for every 1TB of disk. So if you don't have that much NVMe storage, then it would indeed be possible to use ZFS on the ME Mini. There are also some ZFS tweaks to make it use less RAM, but those are above my pay grade.

    Broad statements about how ZFS uses RAM that repeat common misconceptions are bound to draw criticism. This thread is worth reading by anyone who wants to improve their understanding of the role of the ZFS ARC: https://discourse.practicalzfs…zfs-linux-in-general/2420



    I wouldn't be too quick to point the finger at Crucial NVMes. Beelink themselves sell these units with a pre-installed Crucial 2TB P3 Plus. The number of complaints about Crucial NVMes may simply reflect their popularity, not any inherent problem. What seems most important is that the sum of the power draw of the NVMes remains within the unit's overall power envelope.


    There's the additional question of whether some Linux kernels (stable Debian) handle power control on the ASM chip correctly. wiz101 elected to use a Zabbly kernel and has a stable NVMe RAID (MD RAID?). We'll never know if, as chente suggested, installing a 6.14 Proxmox kernel would have led to a stable Beelink system.


    This thread has other tips for those who cannot use other kernels, re: BIOS version, kernel boot params, and various BIOS settings: https://forums.truenas.com/t/u…in-truenas-scale/47306/83

  • Krisbee

    Correct, mine runs MD RAID.

    What I read on the ASM chip was that it had something to do with the link speed switching between two values, so the data-link layer would not become active on some of the ports. This has been addressed in newer kernels. I did not test it with the Proxmox kernels; I could not find any details on whether this was fixed, and in what kernel version, so my guess was to take the highest kernel version I could easily get working correctly with OMV and then test whether it stayed stable.


    Of course, you have to make sure the total power usage stays within the 45 watts. I had also removed the wireless board in order to save power (plus WiFi+Bluetooth are unnecessary on a NAS device, IMHO). Compared with my QNAP NAS, the load on my UPS has dropped by 19.5 W, and that's without the disks spinning, so overall I am happy with the result.

