Which version of macOS / Mac OS X are you using?
Posts by StreetBall
-
-
votdev, thanks for sharing the info; I was not aware of that.
-
Nothing to report. Therefore... Problem solved

-
Backup before you begin, anyway.
-
Business-grade desktop PCs and workstations usually have very good long-term reliability, but do check how much a new, used, or refurbished power supply would cost.
While it is probably the most efficient option, considering your use case I'd vote against the i3-10100. It could be enough today but fall short in the future; it delivers more threads than the i5-9500, but the i5 has two more physical cores.
I don't know whether SMT will be disabled for security reasons on OMV/PVE in the future, but more physical cores and bigger caches help reduce container and VM hiccups.
Intel Xeon E-2174G vs i5-9500 vs i3-10100 [cpubenchmark.net] by PassMark Software
This is only ONE benchmark. The workstation is not better than the i5-9500, except probably in reliability and PSU quality.
-
I've managed to solve it by another route,
Would you kindly share how you resolved your need?
Maybe it's a "good enough" approach for other people!

-
cubemin, are both of your installations ARM-based?
-
Most of the time, updates on OMV are smooth. I recently moved from 7 to 8 and had only a minor hiccup with an array that the system was convinced had disappeared and reappeared...
-
Thanks for the details.
Before running omv-salt deploy run mdadm I made sure that the raid definitions on my system were correct and consistent.
This is the output of mdadm --detail --scan -vv
Code
/dev/md/wdred:
           Version : 1.2
     Creation Time : Thu Jun 10 12:56:47 2021
        Raid Level : raid1
        Array Size : 1953382464 (1862.89 GiB 2000.26 GB)
     Used Dev Size : 1953382464 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Jan 11 07:39:22 2026
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : myhostname:wdred (local to host myhostname)
              UUID : 67117265:66ed20fd:913b6886:90dfdfc5

and this is the current output of mdadm --monitor --scan --oneshot
No unwanted messages this morning about array detection; the system is working "as before" the update to OMV 8. That is sometimes not fine, because if I reboot the computer the raid disappears, but I suspect that is mostly down to the time the mainboard waits to populate the SATA channels (current guess: after powering off and on, the raid reappears working and untouched, needing no fsck, rebuild, or such).
Your post helped me get rid of the tedious message about an array that had not actually disappeared.
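For reference, the "correct and consistent" check above can be scripted; a minimal sketch, where the two sample strings stand in for the real outputs of mdadm --detail --scan and /etc/mdadm/mdadm.conf:

```shell
#!/bin/sh
# Sketch: compare the UUIDs that `mdadm --detail --scan` reports with the
# ARRAY lines declared in /etc/mdadm/mdadm.conf before deploying changes.
# uuids_of: extract the UUID=... token from every ARRAY line on stdin.
uuids_of() { awk '/ARRAY/ { for (i = 1; i <= NF; i++) if ($i ~ /^UUID=/) print $i }'; }

# Sample strings (the UUID is the one from this thread); on the real system:
#   scan_out=$(mdadm --detail --scan); conf_out=$(cat /etc/mdadm/mdadm.conf)
scan_out="ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5"
conf_out="ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5"

if [ "$(printf '%s' "$scan_out" | uuids_of)" = "$(printf '%s' "$conf_out" | uuids_of)" ]; then
    result="consistent"
else
    result="mismatch"
fi
echo "$result"
```

If the two lists differ, regenerating the ARRAY lines in mdadm.conf before the deploy is usually the safer move.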
-
Updated the kernel, and now md0 is... /dev/md127. Which is "fine": I'm not attached to the array name in any way, but I find it somewhat... strange.
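As far as I can tell (an assumption on my side, not something verified on this box), the md127 fallback name usually shows up when the array is not pinned in /etc/mdadm/mdadm.conf inside the initramfs. A sketch of the usual cure, reusing the array name and UUID that appear elsewhere in this thread:

```
# /etc/mdadm/mdadm.conf (Debian/OMV default path; adjust if different)
# Pin the array so it assembles as /dev/md0 instead of the /dev/md127 fallback.
ARRAY /dev/md0 metadata=1.2 name=myhostname:wdred UUID=67117265:66ed20fd:913b6886:90dfdfc5
```

After editing, rebuild the initramfs (update-initramfs -u on Debian) and reboot; otherwise the copy of the file baked into the initramfs keeps winning.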
-
WD9895, is your current installation on an SD card or on some other "first drive" media?
-
An update on this "issue" is posted here; I'm waiting until tomorrow morning to see if anything changes.
-
Well, the hint from dunno helped.
I still had an issue during reboot (the array disappeared, but that "happened before" and might be a hardware issue due to the mainboard or cables). After rebooting,
this is the output of mdadm --monitor --scan --oneshot
I also edited /etc/udev/rules.d/99-openmediavault-md-raid.rules, which now looks like this:
Code
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  ENV{OMV_MD_STRIPE_CACHE_SIZE}="8192", SYMLINK+="md/md0"
ACTION=="add|change", \
  SUBSYSTEM=="block", KERNEL=="md*", TEST=="md/stripe_cache_size", \
  IMPORT{program}="import_env /etc/default/openmediavault", \
  ATTR{md/stripe_cache_size}="$env{OMV_MD_STRIPE_CACHE_SIZE}", SYMLINK+="md/md0"

But the workaround of adding SYMLINK+="md/md0" and creating the directory+symlink in /dev/ did not survive a reboot.
-
For security and compatibility reasons, keeping the BIOS updated to the latest version (before the OEM pulls the plug, as HP is doing) is usually considered best practice.
-
I'm sorry, I read more than one of these shell instructions and the post by Volker (who should be votdev here?).
So, as an incapable Linux user: while I can read the solution provided in the blog post, I don't know which steps I should take to analyze my current status, edit the configuration files (or filesystem) correctly, and then resolve this issue.
Code
root@myhost:~# mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5

Please consider telling me what I should post on the forum.
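In case it helps: a sketch of the read-only diagnostics usually requested in md-raid threads, collected into one pasteable report. Commands that are unavailable on a given system are skipped (their error message is captured) instead of aborting the script.

```shell
#!/bin/sh
# Collect the usual md-raid diagnostics into one report for a forum post.
# All commands are read-only; nothing here modifies the array or the config.
report=""
for cmd in "cat /proc/mdstat" "mdadm --detail /dev/md0" \
           "cat /etc/mdadm/mdadm.conf" "blkid"; do
    # Run each command, keeping stderr so missing tools/files are visible too.
    report="$report
### $cmd
$($cmd 2>&1)"
done
echo "$report"
```

Pasting that output (inside a Code block) usually gives helpers everything they need in one shot.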
-
The OS drive is a 32GB Sandisk USB 3.2 drive.
If this is your first deployment of OMV, IMVHO this choice is far from the best possible.
The mainboard has 6 SATA connectors and 2 M.2 connectors, so even the smallest, oldest, cheapest NVMe drive would be a much better choice for performance and long-term stability, in my opinion.
I can understand that a cheap USB drive leaves you more room for ZFS and RAID1 configurations, but I don't see that as valuable enough to accept such a higher-risk SPOF.
-
what does it matter?
It saves a ton of data transfer, and it can be selected and searched, if anyone is willing to invest time in solving your issue.
I had a similar Intel mainboard, with the same network card, that behaved erratically on my test server. However, that was RHEL 7 and a far different kernel, so my solution (which actually worked) could in no way fit your current environment.
Learn to ask good questions in a proper way, to allow people to dedicate time to your issues.
-
Thanks for the hint; however, the suggested modification did not solve the issue for the OP.
Maybe moderators could merge the two topics?
-
Why not mount your drive via a shell script?
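A minimal sketch of what such a script could look like; the UUID and mount point below are placeholders, not values from this thread. The script only builds and prints the commands, so they can be reviewed before running them for real:

```shell
#!/bin/sh
# Sketch: mount a data drive by filesystem UUID from a script.
# UUID and MNT are hypothetical placeholders; replace with your own values
# (find the real UUID with `blkid`).
UUID="0000-0000"
MNT="/srv/mydisk"

# Build the commands first so they can be inspected or logged.
mkdir_cmd="mkdir -p $MNT"
mount_cmd="mount UUID=$UUID $MNT"
echo "$mkdir_cmd"
echo "$mount_cmd"

# On the real machine, execute instead of printing, skipping an
# already-mounted target:
#   mkdir -p "$MNT"
#   mountpoint -q "$MNT" || mount "UUID=$UUID" "$MNT"
```

Mounting by UUID rather than by /dev/sdX keeps the script working even when the kernel enumerates the disks in a different order.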
-
You're correct about my question on the distro, thanks for sharing.
IMVHO the CPU is efficient, but its single-thread performance is a bit on the "short" side. Yes, it has 8 physical cores, but an i5-3470 that is 3 years older has more than six times the single-thread performance, while having 4 cores and delivering roughly 2.2 times the overall performance.
With these premises, combining ZFS (CPU-dependent) with adapter bonding (CPU-dependent) and network transfer, I don't think this system could sustain much more than 2 GbE transfer speeds (and not using SMB/Samba would probably gain 5 to 15% in performance). Intel NICs are far better than other brands at offloading the CPU, but ZFS, while capable of taking advantage of RAM caching, I/O caching, and fast-storage caching, still needs to manage parity and disk writes (while those delegated to the HBA, which in IT mode mostly snores waiting for real tasks, cost little).
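A back-of-envelope sketch of that 2 GbE guess: the ~10% TCP/IP overhead figure is my own assumption, and the SMB penalty is taken as roughly the midpoint of the 5-15% range mentioned above.

```shell
#!/bin/sh
# Rough throughput budget for a ~2 Gbit/s bonded link (integer arithmetic).
link_mbit=2000                    # bonded link budget, Mbit/s
wire=$((link_mbit * 90 / 100))    # minus ~10% TCP/IP overhead (assumption)
smb=$((wire * 90 / 100))          # minus ~10% SMB overhead (midpoint of 5-15%)
echo "raw link:  $((link_mbit / 8)) MB/s"
echo "after TCP: $((wire / 8)) MB/s"
echo "after SMB: $((smb / 8)) MB/s"
```

So roughly 200 MB/s of useful SMB throughput would be the ceiling before ZFS and CPU limits even enter the picture, which is why testing the real setup first matters.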
IMVHO step 0 should be to create a test system with the array and a 2-card bonded connection, then run thorough tests on that setup. A write cache would reduce time spent waiting for the drives, but in your environment, after the first rClone run, the impact on subsequent runs could be negligible. Unless you're considering multi-replica writing (last month, last trimester, last semester, and such)... Writing the milestones through an SSD cache could boost global performance, but the cache drive would be under really tough wear load.