Posts by getName()

    If you transmit over TCP, maybe look at netstat -s and see if there is a loss rate.
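    For example (the grep pattern may vary between netstat implementations):
    netstat -s | grep -i retrans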
    iperf benchmarks the network connection, so it is completely independent of the HDDs.
    The PC you connect with, does it have a USB Ethernet interface?
    Edit: did not see that it's an iMac. So it clearly looks like something broken in your network.


    If you have a laptop, connect it directly to the NAS and measure with iperf. Then move to the next segment on the way to your Mac until speeds drop.
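    A minimal run could look like this (assuming iperf3; 192.168.1.10 is a made-up address for the NAS):
    # on the NAS: start the server
    iperf3 -s
    # on the laptop: run a TCP throughput test against the NAS
    iperf3 -c 192.168.1.10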

    The iperf test clearly shows that Ethernet is the limiting factor. So now we know what to look at.
    The results also show that it is not Fast Ethernet but Gbit; it clearly looks like something like a bad cable, bad shielding or something along those lines.
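    To verify what the link actually negotiated, something like this helps (Linux only; eth0 is a placeholder interface name):
    ethtool eth0 | grep -i speed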

    It is kind of common for Docker containers to run as root. The environment variables you named are used by a startup script inside the container; that only works if the devs implemented it exactly that way.
    A lot of containers even require the user to be root. You may want to have a look at the OpenShift docs for workarounds in those containers; OpenShift does not allow root in Docker, so this is a common task there.
    If you want to try, start Docker with the --user flag, but whether you need further changes depends on the container. You will probably run into permission issues inside the container. If the container is not started in privileged mode, I don't see too much of an issue with going root inside Docker, depending on the specific container used, of course.
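    A minimal sketch of that (image name and IDs are placeholders):
    # run the container as an unprivileged uid:gid instead of root
    docker run --user 1000:1000 some/image
    # if it writes to a volume, the mounted path must be writable by that uid
    docker run --user 1000:1000 -v /srv/data:/data some/image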

    This combination of IoT equipment that can only be configured insecurely plus an open root share is the biggest invitation you can extend. Please do this only (!) if the network is completely self-contained, with no internet access. Anything else I would classify as endangering the public.
    Assuming everything is isolated, I would run a container under a subdomain that exposes the root share instead of doing it in openmediavault. FTP can also be configured securely so that there are user-specific directories that are shared as root. I have seen that on the user side, but I have not really worked with FTP in too long to still know the config.
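    If you want to try the FTP route, a sketch of how the per-user directories can look with vsftpd (an assumption on my part, untested; paths are placeholders, check the vsftpd docs before relying on it):
    # /etc/vsftpd.conf (excerpt)
    chroot_local_user=YES
    user_sub_token=$USER
    local_root=/srv/ftp/$USER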

    I am afraid we need a lot more information if you want to get help.
    What kind of hardware is the NAS running on?
    What kind of drives are we talking about?
    How are the drives connected to the NAS?
    How do you share the filesystem? CIFS or NFS?
    The next step is to test drive and network separately. Do you know how to use the shell on the NAS?
    iperf for network and iozone for drive benchmarks would be a good way to start.
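    For a first pass, something like this (run from the NAS shell; /mnt/data is a placeholder for where the share is mounted):
    # network: server on the NAS, client on the other machine
    iperf3 -s
    # drive: write and read tests with direct IO on a 1 GB test file
    iozone -e -I -a -s 1g -r 1m -i 0 -i 1 -f /mnt/data/iozone.tmp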

    Well, then adjust vm.swappiness to 0 if you want to benchmark. When I started to collect benchmark results for various SBCs I also took swapping into account, but with zram the influence is often negligible.
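    For reference, setting it (needs root; the first form is lost on reboot):
    # apply immediately
    sysctl vm.swappiness=0
    # or make it persistent (file location may differ per distro)
    echo 'vm.swappiness=0' >> /etc/sysctl.conf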

    Right, in most cases it is negligible, but there are some other cases. It is of course of no value for the use cases here. Swappiness is good if I have administration rights, but most users benchmarking code on a cluster don't have that. The only thing that works for them is to pre-touch all the data so it gets back into RAM. But then again this may influence the L3 cache.

    In this configuration, the rebuild chance is 94% with 10^-15 error rate disks.

    This assumes that disk failures are completely independent. Since in reality there is at least one coupling mode in most setups, namely the age and usage time of the disks, not even talking about temperature and vibrations, the probability of disk failures during a rebuild increases significantly.
    In RAIDs that don't face heavy usage, there is also a good chance that another disk is already broken but was not recognized before. I have personally witnessed many disk failures during rebuilds. Statistics with coupling are way more complex, as you need the exact differential coupling equations, which is pretty hard most of the time. That is the sole reason it usually gets neglected and misleading statistical figures are presented.
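    For reference, the idealized independent-failure math behind figures like that 94% (the 8 TB read volume is a made-up example):
    P(no read error during rebuild) = (1 - p)^n ≈ e^(-n*p)
    with p = 10^-15 per bit and n = 8 TB = 6.4*10^13 bits:
    P ≈ e^(-0.064) ≈ 0.94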

    Actually a versioned copy on a second host would be way better to protect against accidentally deleted or simply corrupted data. It does not help to have a backup when the copy gets overwritten every time, corruption included.
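    A minimal sketch of such a versioned copy with rsync (host and paths are placeholders):
    # changed and deleted files from each run end up in a dated directory on the backup host
    rsync -a --delete --backup --backup-dir=/backups/$(date +%F) /data/ backuphost:/backups/current/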

    Offline short tests ended with a read error.
    This might be quite bad, or the SMART tests have problems with spin-downs (I have seen that before).
    Deactivate all power saving for the disk and reboot. Unmount the drive and run a long SMART test on it. If it fails again, throw the HDD in the bin.
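    The usual commands for that (the device name is a placeholder):
    # turn off APM on drives that support it (255 = disabled)
    hdparm -B 255 /dev/sdX
    # start the long self-test, then read the results once it is finished
    smartctl -t long /dev/sdX
    smartctl -a /dev/sdX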

    Did I understand correctly that the target medium is a thumb drive? Then these would be totally expected speeds for long copies.
    Your network seems fine.
    You can test the drive using dd.
    dd if=/dev/zero of=path/inside/thumbdrive/test.img bs=100M count=20 oflag=direct
    I am confident this will show the write speed at which you saturated during long copies. It will create a 2 GB file test.img, which you can delete afterwards.