Posts by myscha

    I get nearly the same messages, but no problems so far.


    There is no way to adjust the CPU voltage in the UEFI, only the RAM voltage, and that is already set to the minimum; it allows only 1.5V and 1.35V.

    myscha: Do you use your setup headless? 21W for 2 drives with this board is moderate, but could be lower. Did you deactivate the sound chip in the UEFI? Do you use DDR3L SO-DIMMs? What's your power supply? Does the UEFI offer a voltage adjustment?

    No, it's not headless. I also installed XFCE, but about 99.5% of the time nobody will be using it.
    Sound is active.
    The DIMMs are low-power ones (2 pcs.).
    The power supplies came with my Chenbro ES34069: an FSP180-ABA for AC => 19V and an internal PCB for 19V => 12V/5V/3.3V.
    I'll check the UEFI settings later.


    How or where did you get that error message? I've never noticed it anywhere.

    Yes, it works: see Nginx Plugin Vhost Website auf Port 80 in einem Unterordner installieren. That thread mentions part-db, but it also works with Baikal.


    EDIT:


    This is my version for Baikal. It differs slightly from the one for part-db, but either should work.

    I have configured OMV 4 to force SSL usage, and I'm running Baikal to sync calendars to some Android 4.2/4.4 smartphones. Those old Android versions don't work with the default ssl_ciphers, so I have to add additional ciphers in /etc/nginx/openmediavault-webgui.d/security.conf.
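
    For reference, the extended ssl_ciphers line could look like this; the exact cipher names to append are an assumption and depend on what the old clients actually negotiate:

    Code
    # in /etc/nginx/openmediavault-webgui.d/security.conf
    # the appended CBC ciphers are an assumption for old Android clients
    ssl_ciphers 'HIGH:!aNULL:!MD5:AES128-SHA:AES256-SHA';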


    In addition to this, I'm running some PHP services, and I have set up the paths and usage as described there. To make this work, another include directive has to be added to /usr/share/openmediavault/mkconf/nginx.d/10webgui.


    With every update my manual additions are deleted, so I'd like to know if there is any way to make such Nginx customizations persistent.

    By watching the variable 'sockets-enqueued' in /proc/fs/nfsd/pool_stats (as described in knfsd-stats.txt), I found out that 8 threads are not enough and result in tens of thousands of enqueued sockets. When I increased the number of threads to 64, the number of enqueued sockets remained constant.
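
    If you want to check this yourself, a sketch (on Debian, the persistent thread count lives in /etc/default/nfs-kernel-server):

    Bash
    # watch the per-pool counters; sockets-enqueued should stay constant
    cat /proc/fs/nfsd/pool_stats

    # raise the thread count at runtime
    rpc.nfsd 64

    # make it persistent on Debian: set RPCNFSDCOUNT=64 in
    # /etc/default/nfs-kernel-server and restart nfs-kernel-server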


    I then ran more benchmarks, but I still don't know why NFS is so much slower:

    Protocol    Rate [MB/s]    Export options
    SMB/CIFS    108.97
    NFS         84.51          rw,async,no_subtree_check,all_squash,anonuid=1028,anongid=100
    NFS         82.06          rw,subtree_check,secure,async
    NFS         72.16          rw,subtree_check,secure



    The test file was about 3.5GB in size; each test was run 3 times and the results were averaged. The file was copied using rsync, as dd is not able to work over SMB/CIFS. You can find the test script below.


    The NFS client options were always the same:
    rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.178.35,local_lock=none,addr=192.168.178.2
    where most values are defaults on my machine (Arch Linux).
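
    A minimal sketch of the test procedure (not necessarily the original script; the paths are hypothetical):

    Bash
    #!/bin/bash
    # Sketch only: copy a ~3.5GB test file three times over an
    # already-mounted share; rsync prints the achieved transfer rate.
    # /tmp/testfile and /mnt/share are hypothetical paths.
    for i in 1 2 3; do
        rsync -v --progress /tmp/testfile /mnt/share/
        sync
        rm /mnt/share/testfile
    done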

    Hello guys,


    I've built my own NAS system based on:
    - ASRock J4205-ITX
    - 8GB DDR3 RAM
    - 64GB Samsung 470 as system disk
    - 2TB WD Red
    - 3TB WD Red
    - Debian Stretch 9.3
    - OMV Arrakis 4.0.16-1


    Now I'd like to share some folders with some Linux machines (Arch, Ubuntu MATE & Xubuntu). As I have no Windows machines and therefore no real need for SMB shares, I'd prefer to share them via NFS. But unfortunately, NFS performance seems quite poor according to some benchmarks I've made:


    Protocol                                  Speed [MB/s]
    SMB/CIFS                                  115.27
    NFS3                                      71.38
    NFS4, sync                                72.08
    NFS4, async                               66.63
    NFS4, several options (see code below)    67.99


    While performing the tests, neither the server nor the client was doing anything else demanding.


    The client was a Lenovo T61 running Arch on a 256GB Samsung 750 disk, which may be close to the limit, as the T61 only offers SATA I interfaces.
    All software on server and client was up to date.
    NFS ran with 8 threads and the export options were rw,subtree_check,secure (see the example below).
    SMB/CIFS was shared with the OMV default options.
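
    For reference, the corresponding /etc/exports entry would look something like this; the exported path and the client network are assumptions:

    Code
    # hypothetical /etc/exports entry with the options used above
    /srv/dev-disk-by-label-vol1/shared 192.168.178.0/24(rw,subtree_check,secure)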


    Are there options that could speed up NFS transfers, or is SMB really faster?


    Here's another update as it works now:


    Let's assume your custom PHP services go to /srv/dev-disk-by-label-vol1/www.


    Create two additional folders following the Nginx convention: /srv/dev-disk-by-label-vol1/www/sites-available and /srv/dev-disk-by-label-vol1/www/sites-enabled.
    Create your configs in /srv/dev-disk-by-label-vol1/www/sites-available.
    Create a symlink in /srv/dev-disk-by-label-vol1/www/sites-enabled to enable a service, as shown below.
    This is the common Nginx mechanism, just in custom folders.
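
    For example, with a hypothetical config named 'baikal':

    Bash
    # enable a service by symlinking its config (names are examples)
    ln -s /srv/dev-disk-by-label-vol1/www/sites-available/baikal \
          /srv/dev-disk-by-label-vol1/www/sites-enabled/baikal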


    A single config in /srv/dev-disk-by-label-vol1/www/sites-available may look like this:
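
    (A sketch along the lines of the part-db examples further down; the location names and the PHP-FPM socket path are assumptions:)

    Code
    # sketch only; paths and socket path are assumptions
    location /baikal/ {
        alias /srv/dev-disk-by-label-vol1/www/baikal/;
        index index.php;
    }

    location ~ ^/baikal/(.+\.php)$ {
        alias /srv/dev-disk-by-label-vol1/www/baikal/$1;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include fastcgi_params;
    }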

    But I think a different PHP-FPM socket should be used.


    To load the additional configurations, /srv/dev-disk-by-label-vol1/www/sites-enabled must be included in the OMV Nginx server configuration (/etc/nginx/sites-enabled/openmediavault-webgui) like this:

    Code
    server {
        # stock configuration here

        include /srv/dev-disk-by-label-vol1/www/sites-enabled/*;
    }

    This is also the main drawback of this method: the include line has to be re-added manually after each update of the OMV Nginx plugin. If you don't use the plugin, this shouldn't be a problem.


    Now you can open the OMV web interface at omv.local in the browser and the service at omv.local/part-db. There's no more need to add DNS entries or work with different ports.



    EDIT:
    To make the include persistent, you have to edit /usr/share/openmediavault/mkconf/nginx.d/10webgui at about line 127:

    Code
    -b \
      -o "    include ${OMV_NGINX_SITE_WEBGUI_INCLUDE_DIR}/*.conf;" -n \
      -o "    include /srv/dev-disk-by-label-vol1/www/sites-enabled/*;" -n \
      -o "}" \

    The sites-enabled include is the line that gets added here (as line 127).

    I have a version that works, at least for the PHP files:


    It's based on /etc/nginx/openmediavault-webgui.d/openmediavault-mysql-management-site.conf, but unfortunately it doesn't work for the CSS and JS files located in the corresponding subfolders. Nginx still looks for them in /var/www/openmediavault/js instead of /srv/dev-disk-by-label-vol1/www/part-db/js.


    How could I fix this?

    Thanks for the link.


    So for every service I add that way, I'd also have to add a DNS entry, is that correct?



    Would it also be possible to get something like

    Code
    location /part-db/ {
       alias /srv/dev-disk-by-label-vol1/www/part-db/;
       fastcgi_index index.php;
    }

    to work? This way no DNS entries would have to be added.



    With the above I see

    Code
    2018/01/02 09:52:13 [error] 6571#6571: *1 open() "/srv/dev-disk-by-label-vol1/www/part-db/css/omv.css" failed (2: No such file or directory), client: 192.168.144.235, server: openmediavault-webgui, request: "GET /part-db/css/omv.css HTTP/1.1", host: "nas.home.", referrer: "http://nas.home./part-db/"

    in the error log (/var/log/nginx/openmediavault-webgui_error.log). This makes sense, as there is no file omv.css anywhere on my system. But what is going wrong here?



    It works when I open http://nas.home.:8080/part-db with the following server config.
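
    A sketch of such a dedicated server block on port 8080, with the PHP-FPM socket path as an assumption (not necessarily the exact config used):

    Code
    # sketch only; the socket path is an assumption
    server {
        listen 8080;
        root /srv/dev-disk-by-label-vol1/www;
        index index.php;

        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
            fastcgi_param SCRIPT_FILENAME $request_filename;
            include fastcgi_params;
        }
    }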

    I started the server about 3 hours ago.

    Recently I installed the above-mentioned OMV version on a fresh installation of Debian Stretch 9.3 (firmware-9.3.0-amd64-netinst.iso).


    When I applied the configuration after creating a new NFS share, I got the following message for the first time.


    I got nearly the same message after creating and applying a shared folder. Since then, OMV shows a changed configuration all the time, but when I click the 'Apply' button, all I get after a while is a message box telling me that there was an error, with no error details any more. When I close the box by clicking 'OK', the 'configuration changed' banner appears again.


    Does anyone know this problem, or can anyone tell me how to find its cause?