I suspect the hottest ones are not idling.
You are not revealing what software & settings are used to control the discs' "idle" state.
Is the temperature reading confirmed via an external sensor (infrared laser thermometer)?
Anyway, you might get better answers in a disc hardware or vendor-related forum.
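If you want to check both points from the command line, here is a minimal sketch; /dev/sda is just an example device, and hdparm plus smartmontools need to be installed:

Code:
    # does the drive report itself as spun down right now?
    sudo hdparm -C /dev/sda        # prints "active/idle" or "standby"
    # what power-management (APM) level is currently set, if the drive supports APM?
    sudo hdparm -B /dev/sda
    # cross-check the temperature the drive itself reports via SMART
    sudo smartctl -A /dev/sda | grep -i temp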
Re "use an lxc to run pihole": is there perhaps a pointer to a doc for getting it set up this way?
You can also use an lxc to run pihole.
Good to hear! Would you have details on memory consumption (which seems the most limiting factor on SBCs)?
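I am not aware of an official OMV doc for this; here is a minimal sketch, assuming an LXD host that is already initialised (the container name "pihole" and the ubuntu:22.04 image are just examples):

Code:
    # create an unprivileged container for Pi-hole
    lxc launch ubuntu:22.04 pihole
    # run the official Pi-hole installer inside the container (interactive)
    lxc exec pihole -- bash -c "apt-get update && apt-get install -y curl && curl -sSL https://install.pi-hole.net | bash"
    # rough idea of the memory footprint afterwards
    lxc exec pihole -- free -m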
Code:
    Untagged: docker.io/filebrowser/filebrowser:latest
    Deleted: 6d86360308fdcc35e06984a791b482e40cea7b8fd72de312eee6f3dfa445ef6a
    Setting up openmediavault (6.4.0-3) ...
    Creating configuration database ...
    Migrating configuration database ...
    Running migration conf_6.4.0
    INFO: The node 'sharedfoldersnapshotlifecycle' already exists at XPath '/config/system'.
    INFO: The node 'enable' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'retentionperiod' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limitcustom' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limithourly' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limitdaily' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limitweekly' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limitmonthly' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    INFO: The node 'limityearly' already exists at XPath '/config/system/sharedfoldersnapshotlifecycle'.
    Setting up Salt environment ...
I'm not sure how to fix it; any help gratefully received. According to the update area, I'm fully up to date.
Please be specific: what exactly do you want to fix?
The "INFO:" lines above do not indicate an error!
I read somewhere (or thought I did) that not all PCIe SATA controllers worked with OMV?
HW compatibility does not depend on OMV but on the underlying OS, Debian 11, and its HW drivers.
PCIe SATA is the most compatible class of HW from my view.
You can avoid that possibility completely by using an LSI SAS HBA card instead
From my view I wouldn't fully confirm this statement, because these PCIe cards require much more vendor support for driver maintenance, and for some LSI SAS HBA HW, supported drivers are no longer (or not yet) available, especially in combination with non-x86 CPUs.
See https://github.com/geerlingguy…=is%3Aissue+is%3Aopen+LSI
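A quick way to check whether the running Debian kernel has actually bound a driver to a given controller, regardless of vendor:

Code:
    # list SATA/SAS/RAID controllers and the kernel driver in use for each
    lspci -nnk | grep -iA3 -E 'sata|sas|raid'
    # for LSI HBAs, the mpt3sas driver messages show up in the kernel log
    dmesg | grep -i mpt3sas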
45-ish MB/s
No required technical details have been provided, so let me guess that a 2.5" HDD connected via USB is involved.
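To narrow it down, measure the raw drive speed and the filesystem write speed separately; a sketch, assuming the drive is /dev/sda and the data filesystem is mounted under an OMV-style path (replace the placeholder):

Code:
    # raw sequential read speed of the drive itself, bypassing the filesystem
    sudo hdparm -t /dev/sda
    # sequential write through the filesystem (1 GiB test file; replace the mount path)
    dd if=/dev/zero of=/srv/dev-disk-by-uuid-XXXX/testfile bs=1M count=1024 conv=fdatasync status=progress
    rm /srv/dev-disk-by-uuid-XXXX/testfile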
What cables do you plan to buy?
Strange how this plugin has been around 9 years without people needing this.
There have been several posts from users in the past that I remember. Do I need to dig them out in order to be heard?
it should be done
Well, I was looking for a reliable success-or-failure check. How would a "should" help to get there?
I used "dd full disk" too (using dd to clone the entire drive to a compressed image file) on my RPi to safeguard a planned upgrade, but the window closed after a long time, so I don't know whether it completed successfully.
Is there a way to see a log or check the compressed image file for completion?
I've opened image file with 7-zip on Windows and that succeeded.
Is that a good enough indicator for a good image?
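Opening it in 7-Zip only proves the archive headers are readable. A more complete check, assuming the backup is a gzip-compressed image (file name and source device below are placeholders):

Code:
    # verify the whole gzip stream decompresses without errors
    gzip -t backup.dd.gz && echo "gzip stream OK"
    # compare the uncompressed size against the size of the source device
    gunzip -c backup.dd.gz | wc -c
    blockdev --getsize64 /dev/mmcblk0   # size of the SD card the image was taken from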
Hi Wusa,
I'd suggest you look that up in the reference documentation of e.g. mdadm.
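For a quick look at the current RAID state before diving into the docs, a sketch (assuming the array is /dev/md0):

Code:
    # RAID status as the kernel sees it
    cat /proc/mdstat
    # detailed state of a specific array
    sudo mdadm --detail /dev/md0
    # the reference documentation itself
    man mdadm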
I'd open an issue on GitHub to get developer attention.
The issue will need several screenshots to show the problem clearly!
STATUS_LOGON_FAILURE
Given the error code, I'd assume permissions don't come into play yet.
Questions to ask:
What user login is used for connecting to the OMV server share on your Linux devices?
Is that login configured on the OMV6 server side? (A quick way to test it from a Linux client is sketched below.)
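A minimal sketch for testing the login from a Linux client with smbclient; host name, user name and share name are placeholders:

Code:
    # list the shares the server offers to this user
    smbclient -L //omv.local -U bob
    # try to open the share itself; a wrong password or unknown user ends in NT_STATUS_LOGON_FAILURE
    smbclient //omv.local/myshare -U bob -c 'ls'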
This thread should be unpinned.
Why? Please elaborate on your reasoning.
Note: NTFS is not recommended!
One of my OMV 6 instances seems to be having this problem intermittently.
Are you on latest OMV and OS patch versions?
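To check (and update) from the command line, a sketch:

Code:
    # installed openmediavault package version
    dpkg -l openmediavault | tail -n 1
    # Debian base version
    cat /etc/debian_version
    # pull in all pending updates (omv-upgrade is OMV's wrapper around apt)
    sudo omv-upgrade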
Unfortunately I couldn't find anything suitable despite an intensive search on the net.
Did you limit your search to this forum?
I used the search string "site:forum.openmediavault.org +webdav +6" and instantly got this hit.
I suggest posting there.
I used the Raspberry Pi 3B and OMV 4 for this. It worked perfectly until the Raspberry Pi said goodbye.
A workaround should be to use the SD card from the Pi 3 in the Pi 4.
So the whole setup should still work.
Any SDK?
I would suggest opening an enhancement issue on GitHub first and getting the main developer's agreement on the "route of action".
This issue can be used for specific development-related questions afterwards.
is this already in omv6?
Wrong question, because OMV is fully dependent on Debian to package and include new versions of applications in its repository.
Is Samba 4.17 available on Debian 11? No, but maybe some of the improvements will make it into 4.13 via backports.
https://packages.debian.org/se…uite=bullseye&section=all will show what version is available.
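Without leaving the server, you can also query apt directly; a sketch (bullseye-backports has to be enabled as a source for backported versions to show up):

Code:
    # which Samba version the configured Debian sources offer
    apt-cache policy samba
    # list every available samba package version across suites
    apt list -a samba 2>/dev/null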
Samba release notes https://www.samba.org/samba/history/samba-4.17.0.html mention the root cause
"
SMB Server performance improvements
-----------------------------------
The security improvements in recent releases
(4.13, 4.14, 4.15, 4.16), mainly as protection against symlink races,
caused performance regressions for meta data heavy workloads.
With 4.17 the situation improved a lot again:
- Pathnames given by a client are divided into dirname and basename. The amount of syscalls to validate dirnames is reduced to 2 syscalls (openat, close) per component. On modern Linux kernels (>= 5.6) smbd makes use of the openat2() syscall with RESOLVE_NO_SYMLINKS, in order to just use 2 syscalls (openat2, close) for the whole dirname.
- Contended path based operations used to generate a lot of unsolicited wakeup events causing thundering herd problems, which led to massive latencies for some clients. These events are now avoided in order to provide stable latencies and much higher throughput of open/close operations.
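If you want to observe the described syscall pattern on your own server, a rough sketch (assuming strace is installed, a client is currently connected, and <PID> is the smbd child process serving that client):

Code:
    # find the smbd process that serves the connected client
    sudo smbstatus -p
    # count path-related syscalls for ~10 seconds; openat2 only appears with Samba >= 4.17 on kernel >= 5.6
    sudo timeout 10 strace -f -c -e trace=openat,openat2,close -p <PID>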