Posts by wiz101

the statement SYMLINK+="md/md0" should create the subdir /dev/md (with the md0 symlink inside it).


what happens if you create an md folder in /dev and create a symlink to /dev/md0 inside /dev/md/?


so:

Code
mkdir /dev/md    # create subdir
cd /dev/md       # change into subdir
ln -s /dev/md0   # create symlink


    test if that solves it.

if so, create a symlink to /dev/md1 in /dev/md too

(ln -s /dev/md1, run inside /dev/md)
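A minimal sketch of the manual workaround above, run against a scratch directory standing in for /dev so it is safe to try anywhere (on the real system you would run the same mkdir/ln commands against /dev as root; the touch lines just fake the device nodes):

```shell
# Scratch dir stands in for /dev; touch fakes the md block devices
DEV=$(mktemp -d)
touch "$DEV/md0" "$DEV/md1"

mkdir -p "$DEV/md"                 # create the md subdir
ln -s "$DEV/md0" "$DEV/md/md0"     # symlink for the first array
ln -s "$DEV/md1" "$DEV/md/md1"     # and for the second

ls -l "$DEV/md"                    # both symlinks should be listed
```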


hope this helps. The issue is that your RAID and mine exist as /dev/mdX, while mdadm --monitor --scan --oneshot looks in /dev/md for the devices. So if the udev rule did not create the symlink, maybe doing so by hand will solve it.

    if you look in the /dev/ folder, do you have a subfolder /dev/md ?


    On mine I have the device file in /dev/, so /dev/md0 exists there. In the /dev/md subfolder there's a symbolic link to /dev/md0, so /dev/md/md0 points to /dev/md0.


    How does your /dev folder look?

that is strange. I have removed the statements I added to my config and reloaded the udev rules; this is the output now:


    Code
     mdadm --monitor --scan --oneshot                 
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0

when I revert my config and reload the udev rules, the error is gone here.


    what happens if you do this:


udevadm control --reload-rules && udevadm trigger   # reload udev rules

    then

    mdadm --monitor --scan --oneshot?

    there's a workaround for this.


in /etc/udev/rules.d you will find 99-openmediavault-md-raid.rules. If you have updated to Openmediavault 8.0.4, look for these lines:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"

add a comma and SYMLINK+="md/md0" at the end so it will look like this:


    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192", SYMLINK+="md/md0"

    then look for this:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      IMPORT{program}="import_env /etc/default/openmediavault", \
      ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"

again add a comma and SYMLINK+="md/md0" at the end so that it looks like this:

    Code
    ACTION=="add|change", \
      SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
      IMPORT{program}="import_env /etc/default/openmediavault", \
      ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}", SYMLINK+="md/md0"


you can test it with mdadm --monitor --scan --oneshot. If you don't make this change you will get the DeviceDisappeared error.


Since you have two RAID sets, I think you have to add , SYMLINK+="md/md1" too, but as said: add the above first, test with mdadm --monitor --scan --oneshot, and if you then only get a message about md1, add SYMLINK+="md/md1" as well.
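If you prefer not to edit the rules file by hand, the two additions can also be made with sed. This is a hedged sketch: it works on a sample copy under /tmp (recreating just the two rules quoted above) so you can check the result before touching the real /etc/udev/rules.d/99-openmediavault-md-raid.rules, which you should back up first.

```shell
# Sample copy of the two rules; on the real system, operate on a backup
# copy of /etc/udev/rules.d/99-openmediavault-md-raid.rules instead.
cat > /tmp/md-raid.rules <<'EOF'
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  IMPORT{program}="import_env /etc/default/openmediavault", \
  ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"
EOF

# Append , SYMLINK+="md/md0" to the closing line of each rule
sed -i \
  -e 's|ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192"$|&, SYMLINK+="md/md0"|' \
  -e 's|ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}"$|&, SYMLINK+="md/md0"|' \
  /tmp/md-raid.rules

grep -c 'SYMLINK+="md/md0"' /tmp/md-raid.rules   # prints 2
```

After installing the edited file, reload the rules with udevadm control --reload-rules && udevadm trigger and retest with mdadm --monitor --scan --oneshot. Note that an OMV update may regenerate this file and undo the edit.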


I found the solution here: https://discourse.ubuntu.com/t…red-newarray-alerts/56076

    more background info here: https://linux.debian.bugs.dist…ssage-every-day-from-cron

    only had one niggle, got this email:


    Code
    /etc/cron.daily/openmediavault-mdadm:
    mdadm: DeviceDisappeared event detected on md device /dev/md/md0
    mdadm: NewArray event detected on md device /dev/md0

It turned out mdadm --monitor --scan --oneshot gave the same output. I checked my md array; nothing was wrong with it. After googling I found this workaround:

    edit /etc/udev/rules.d/99-openmediavault-md-raid.rules


Code
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="add|change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="8192", SYMLINK+="md/md0"


I added the SYMLINK+="md/md0" statement and now I don't get DeviceDisappeared event notifications anymore.


found the solution here: https://discourse.ubuntu.com/t…red-newarray-alerts/56076

    more background info here: https://linux.debian.bugs.dist…ssage-every-day-from-cron

I can only confirm this. This morning I saw OMV8 was released, so I upgraded my Beelink ME mini. I had made an image of the boot disk just to be sure. No issues to report.


Last week I did the upgrade on my VM install with the release candidate to see what would happen; that was also a smooth upgrade.

    Krisbee

    Correct, mine runs MD Raid.

What I read about the ASM chip was that it had something to do with the link speed switching between two values, so the data link layer would not come up on some of the ports. This has been addressed in newer kernels. I did not test it with the Proxmox kernels; I could not find any details on whether this was fixed, or in which kernel version, so my plan was to take the highest kernel version I could easily get working correctly with OMV and then test whether it would stay stable.


Of course you have to make sure the total power usage stays within the 45 watts. I also removed the wireless board to save power (plus WiFi+Bluetooth are unnecessary on a NAS device, imho). Compared with my QNAP NAS, the load on my UPS has dropped by 19.5 W, and that's without the disks spinning, so overall I am happy with the result.

Ok, it seems it is as I expected: there's a bug in the default kernel that is shipped with Debian (6.12.38).


    See this: https://lkml.iu.edu/hypermail/linux/kernel/2201.0/00333.html



What I did is the following:


Code
sudo curl -fsSL https://pkgs.zabbly.com/key.asc -o /etc/apt/keyrings/zabbly.asc


Code
vi /etc/apt/sources.list.d/zabbly-kernel-stable.sources


Put this in zabbly-kernel-stable.sources:

Code
Enabled: yes
Types: deb
URIs: https://pkgs.zabbly.com/kernel/stable
Suites: bookworm
Components: main
Architectures: amd64
Signed-By: /etc/apt/keyrings/zabbly.asc


Code
apt update
apt install linux-zabbly
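After rebooting, you can check whether the zabbly kernel is actually the one running (standard commands; nothing zabbly-specific is assumed beyond the version string the package uses):

```shell
# Check which kernel is actually booted; after installing linux-zabbly
# and rebooting, the version string should contain "zabbly".
running=$(uname -r)
echo "running kernel: $running"
case "$running" in
  *zabbly*) echo "zabbly kernel active" ;;
  *)        echo "not running a zabbly kernel (reboot, or check GRUB)" ;;
esac
```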


Inspiration came from this website: https://fostips.com/install-li…15-in-debian-12-bookworm/


Be sure to have the kernel plug-in installed so you can check that the zabbly kernel gets selected as the bootable kernel. My Beelink ME Mini has been rock solid for more than 48 hrs, and I have written 2.5 TB to my RAID-5 set (in the end I opted for RAID-5 as I am familiar with md RAID).


My Beelink ME mini runs with kernel 6.15.10-zabbly (just updated to 6.15.10; I started off with 6.15.9), and updates come in as regular updates.


Even though it was 31 °C in the room, the NVMe temps stayed below 56 °C during reads and writes. Overall I am happy with the little device, and it is so much more responsive compared to my QNAP.

The Lexar sticks are DRAM-less. This means their performance will be even worse in RAID 5, since they have to use system RAM for cache. It will be interesting to see your power consumption and performance numbers.

I'll see what happens when it is here. Mind you, all NVMe slots have one PCIe lane except slot 6, which has two, so the NVMe drives will never run at max performance anyway, but this setup will hopefully use less power and perform better than my old Atom-based QNAP.

I am going to try with 5 NVMe drives in it; it should arrive tomorrow, and hopefully this weekend there's time to tinker with it. The NM790 NVMe drives use way less power:

    Lexar NM790 2 TB Review
    The Lexar NM790 2 TB offers fantastic performance at outstanding pricing. The 2 TB model sells for just $110, which makes it the most affordable high-end SSD…
    www.techpowerup.com


That's mainly why I chose them. But you are right, the power unit feels a bit on the light side. I also intend to remove the wireless board to save some power, as I won't use it anyway.

    wiz101 Quacksalber Could you recommend some model that you have tested?

I am starting out with 4 Lexar NM790 NVMe drives. In idle these take less power than most other NVMe drives, the rated endurance is 1.5 PB written per disk, and Lexar gives a five-year warranty. In one of my laptops I have a Lexar N620 drive, and in my low-power Proxmox server there's also a Lexar N620 1 TB drive.


You are right that you can get a bigger spinning disk for what an NVMe costs, but less noise and less power were my incentives to move to NVMe instead of spinning disks. The speed gain was also an incentive: my QNAP does not saturate the network, while I expect my new setup to do that easily. I have already bought the 4 Lexars, so once the Beelink arrives I am good to go. It will run RAID-5 for redundancy, so slightly less than 6 TB usable space, which should be enough for what I want; if it turns out it is not enough I can always add more disks.
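For reference, RAID-5 keeps one disk's worth of capacity for parity, so usable space is (number of disks − 1) × disk size; with four 2 TB drives that is roughly 6 TB before filesystem overhead, matching the "slightly less than 6 Tb" figure:

```shell
# RAID-5 usable capacity: one disk's worth goes to parity
disks=4
size_tb=2
echo "usable: $(( (disks - 1) * size_tb )) TB"   # prints: usable: 6 TB
```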


As for reliability, I am not too worried about that; after all, that's why it is set up in RAID, and I also make backups of the data, so if a drive fails, that should not immediately turn into data loss.

I have two QNAP NAS devices: one to hold the data, one for backups (switched off and only powered on when doing backups). I have ordered a Beelink ME Mini. That's an Intel N150-based 6-NVMe NAS device with 12 GB RAM and 64 GB eMMC. My NVMe disks will arrive today, and hopefully the Beelink will arrive in 1.5 weeks. It will run OMV and will replace my main NAS. The first thing I will do is remove the WiFi 6 card, as I will not use it for my NAS.

yes, I created both from the plugin, once with and once without .img. Without it, it does not show anymore. Anyway, now I know how to get it working.


    thanks again Ryecoaaron