Posts by StreetBall

    Usually business-grade desktop PCs and workstations have really good long-term reliability, but be sure to check how much a new, used or refurbished power supply costs.


    While it's probably the most efficient option, considering your use I'd vote against the i3 10100. It could be enough today but could fall short in the future: it delivers more threads than the i5 9500, but the i5 has 2 more physical cores.

    I don't know if in the future SMT will be disabled for security reasons on OMV/PVE, but more physical cores and bigger caches help reduce container and VM hiccups.
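    If you ever want to check or force the SMT state yourself, the kernel exposes it via sysfs. A hedged sketch (the control file exists on kernels >= 4.19 and may be absent in some containers; writing requires root):

```shell
# Read the current SMT state (on / off / forceoff / notsupported / notimplemented);
# fall back to "unknown" if the sysfs file is absent (old kernel, some containers)
state=$(cat /sys/devices/system/cpu/smt/control 2>/dev/null || echo unknown)
echo "SMT state: $state"

# To disable SMT at runtime (root required):
#   echo off > /sys/devices/system/cpu/smt/control
# Or permanently, boot with the "nosmt" kernel parameter.
```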


    Intel Xeon E-2174G vs i5-9500 vs i3-10100 [cpubenchmark.net] by PassMark Software


    This is only ONE benchmark. The workstation is not better than the i5 9500, except probably in reliability and PSU grade.

    Thanks for the details.

    Before running omv-salt deploy run mdadm I made sure that the RAID definitions in my system were correct and consistent.

    This is the output of mdadm --detail --scan -vv

    and this is the current output of mdadm --monitor --scan --oneshot

    Code
    mdadm: NewArray event detected on md device /dev/md127
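    For anyone retracing that consistency check: one hedged way to verify the definitions is to pull the UUID out of the `mdadm --detail --scan` output and grep for it in /etc/mdadm/mdadm.conf. The ARRAY line below is the one posted later in this thread; adapt the device and UUID to your system:

```shell
# Example ARRAY line as printed by `mdadm --detail --scan` (taken from this thread)
scan_line='ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5'

# Extract the UUID field from the line
uuid=$(printf '%s\n' "$scan_line" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
echo "$uuid"

# On the real system, check it is present in the config mdadm uses at boot:
#   grep "$uuid" /etc/mdadm/mdadm.conf
```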

    No unwanted messages in the morning about array detection; the system is working "as before" the update to OMV8. That is sometimes not fine, because if I reboot the computer the RAID disappears, but it's most likely the time the mainboard waits to populate the SATA channels (current guess; after powering off, the RAID re-appears working and untouched, not needing a fsck, rebuild or such).


    Your post helped me solve the tedious message about an array that was not actually disappearing.

    Well, hint of dunno helped.

    While I was still having an issue during reboot (the array disappeared, but it "happened before" and might be a hardware issue due to the mainboard or cables), after

    Code
    omv-salt deploy run mdadm
    update-initramfs -u
    reboot

    this is the output of mdadm --monitor --scan --oneshot

    Code
    mdadm: NewArray event detected on md device /dev/md0
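    After a reboot, a quick hedged check that the array came back under the expected name (device node /dev/md0 from this thread; /proc/mdstat may be absent if the md module isn't loaded):

```shell
# Show the kernel's view of all md arrays, or say why it's unavailable
mdstat=$(cat /proc/mdstat 2>/dev/null || echo "no /proc/mdstat (md module not loaded?)")
echo "$mdstat"

# Confirm the device node exists
ls -l /dev/md0 2>/dev/null || echo "/dev/md0 not present"
```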

    I also edited /etc/udev/rules.d/99-openmediavault-md-raid.rules

    but the "workaround" of adding SYMLINK+="md/md0" and creating the directory+symlink in /dev/ did not survive a reboot.
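    For reference, a hedged sketch of how a persistent rule could look. The filename 99-local-md.rules is my own invented example, kept separate so OMV doesn't overwrite it when it regenerates its managed rules file; the match keys are standard udev syntax, but I haven't verified this on OMV8. udev normally creates the md/ subdirectory for the symlink itself, which is why a hand-made directory in /dev/ doesn't need to survive:

```
# /etc/udev/rules.d/99-local-md.rules (hypothetical local file, kept separate
# from the OMV-managed 99-openmediavault-md-raid.rules)
SUBSYSTEM=="block", KERNEL=="md0", ACTION=="add|change", SYMLINK+="md/md0"
```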

    I'm sorry, I read more than one of these shell instructions and the post by volker (who should be voldev here?)


    So, as an incapable Linux user: while I can read the solution provided in the blog post, I don't know which steps I should take to analyze my current status, to correctly edit the configuration files (or filesystem) and then resolve this issue.

    Code
    root@myhost:~#  mdadm --detail --scan
    ARRAY /dev/md0 metadata=1.2 UUID=67117265:66ed20fd:913b6886:90dfdfc5


    Please consider telling me what I should post on the forum.
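    While waiting for an answer on what to post: a hedged sketch of a collection script with the usual RAID diagnostics (the commands are standard Linux tools; the raid-report.txt filename is just my example, and errors are silenced so a missing tool doesn't abort the run):

```shell
# Collect the usual RAID diagnostics into one file to paste on the forum
{
  uname -a
  lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT 2>/dev/null
  cat /proc/mdstat 2>/dev/null
  mdadm --detail --scan 2>/dev/null
  cat /etc/mdadm/mdadm.conf 2>/dev/null
} > raid-report.txt 2>/dev/null
wc -l raid-report.txt
```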

    The OS drive is a 32GB Sandisk USB 3.2 drive.

    If this is your first deployment of OMV, IMVHO this choice is far from the best possible.

    The mainboard has 6 SATA connectors and 2 M.2 connectors, so even the smallest, oldest and cheapest NVMe drive would be a far better choice for performance and long-term stability, in my opinion.

    I can understand and relate that a cheap USB drive might leave you bigger opportunities for ZFS and RAID1 configurations, but I don't see that as valuable enough to accept this higher-risk SPOF.

    what does it matter?

    It saves a ton of data transfer, and text can be selected and searched, if anyone is willing to invest time in solving your issue.


    I had a similar Intel mainboard, which had the same network card, and it behaved erratically on my test server. However, that was RHEL7 and a far different kernel, so my solution (which actually worked) could in no way fit your current environment.


    Learn to ask good questions in a proper way, to allow people to dedicate time to your issues.

    You're correct about my question on the distro, thanks for sharing.


    IMVHO the CPU is efficient, but its single-thread performance is a bit on the "short" side. OK, it has 8 physical cores, but the 3-years-older i5 3470 has more than six times the single-thread performance, while having 4 cores and delivering roughly 2.2 times the overall performance.


    With these premises, combining ZFS (CPU dependent) with adapter bonding (CPU dependent) and network transfer, I don't think this system could sustain much more than 2GbE ethernet transfer speeds (not using SMB/Samba will probably increase performance by 5 to 15%). Intel NICs are far better than other brands at offloading the CPU, but ZFS, while capable of taking advantage of RAM caching, I/O caching and fast-storage caching, still needs to manage parity and disk writes in software (the HBA, in IT mode, mostly snores waiting for a real task).
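    To put a rough number on that claim (my arithmetic, not part of the original post): 2 Gbit/s is 250 MB/s in decimal units before protocol framing, and applying the quoted 5 to 15% SMB overhead gives the usable window:

```shell
# 2 Gbit/s link speed, divided by 8 bits per byte (decimal MB/s, framing ignored)
raw=$((2000 / 8))            # 250 MB/s
low=$((raw * 85 / 100))      # assuming 15% SMB overhead
high=$((raw * 95 / 100))     # assuming 5% SMB overhead
echo "raw=${raw} MB/s, usable=${low}-${high} MB/s"
```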


    IMVHO step 0 should be to create a test system with the array and a 2-card bonded connection, then do thorough tests on that setup. Adding a write cache would waste less time waiting for the drives, but in your environment, after the first rClone run... the impact could be negligible on subsequent ones. Unless you're considering multi-replica writing (last month, last trimester, last semester, and such)... writing the milestones with an SSD cache could boost the global performance, but the cache drive would be under really tough wear load.