Posts by Capusjon

    For anybody having this issue and coming here for a fix, I managed to get it working eventually:


    It turned out to be a FastCGI timeout in LetsEncrypt or Nextcloud; I'm not really sure which one, because I edited both nginx.conf files to allow a longer FastCGI timeout.


    nginx.conf, http section:

    http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 3600;
        types_hash_max_size 2048;

        # Timeout tryouts
        #proxy_connect_timeout 600;
        #proxy_send_timeout 600;
        #proxy_read_timeout 600;
        #send_timeout 600;
        fastcgi_read_timeout 3600;
        fastcgi_send_timeout 3600;
        #fastcgi_connect_timeout 300;
        #client_header_timeout 300;
        #client_body_timeout 300;

        # ... rest of the http section unchanged
    }



    I put the two uncommented fastcgi settings in the conf file and the timeouts after 1 minute disappeared. Maybe there is a better way; I hope someone comes along and can point it out to me. For me this seems to work, at least.
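    Distilled down, the two directives that made the difference are below (a sketch of just the relevant fragment; values are in seconds):

    ```nginx
    # inside the http { } block of nginx.conf
    fastcgi_read_timeout 3600;  # how long nginx waits for PHP-FPM to send response data
    fastcgi_send_timeout 3600;  # how long nginx waits between writes when sending the request to PHP-FPM
    ```

    Both timeouts apply between two successive read/write operations, not to the whole transfer, so 3600 is generous even for very large files.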

    Thanks for the video, made it really easy to set up!


    Had to make a change to Nextcloud.subdomain.conf to allow bigger file downloads, but after that everything seems to work just fine.


    I still have a small problem though:
    Whenever I transfer a big file from "SMB/CIFS A" to "SMB/CIFS B", I get a timeout in the Android app, or a "could not move FILENAME" error when trying via the web interface.
    The transfer does complete eventually though, so it seems to do the job in spite of the errors.


    Letsencrypt/NGINX error log:


    2020/03/03 10:05:33 [error] 392#392: *1878 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "GET ///user/recordings.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "XXX.XXX.XXX.XXX"


    2020/03/03 11:18:52 [error] 392#392: *3369 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 5.101.0.209, server: _, request: "GET /index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.xxx.xxx.xxx", referrer: "http://xxx.xxx.xxx.xxx:80/index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP"


    Nextcloud/NGINX error log:
    2020/03/03 11:55:20 [error] 352#352: *4854 upstream timed out (110: Operation timed out) while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "MOVE /remote.php/dav/files/Colin/Discovery/Downloads/21%20Jump%20Street%202012%20BluRay.720p.DTS.x264-CHD HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "host.hosting.host"


    Any clue as to what I might have missed so I can fix this as well?

    Hi all,


    I used the YouTube guide from Technodad to get the installation up and running. Most of it went by in a breeze, easy to set up.
    I did find a couple of points that I had to change for my setup to run as intended:


    Nextcloud.subdomain.conf <- Had to change this file to allow for bigger uploads and downloads:

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_nextcloud nextcloud;
        # was: proxy_max_temp_file_size 2040m;
        proxy_max_temp_file_size 0;

        proxy_pass https://$upstream_nextcloud:443;
    }


    With this setting in place, the buffer file when sharing a link can get as big as needed. (Keep in mind that this will 'grow' your Docker image, so you need enough space on the disk where Docker runs, or you need to map the tmp folder of the image to another drive.)
    This was actually the only thing I needed to change to get bigger files (movies, series, etc) to download.


    I still have a problem though:
    Whenever I transfer a big file from "SMB/CIFS A" to "SMB/CIFS B", I get a timeout in the Android app, or a "could not move FILENAME" error when trying via the web interface.
    The transfer does complete eventually though, so most things seem to be working.


    Letsencrypt/NGINX error log:


    2020/03/03 10:05:33 [error] 392#392: *1878 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "GET ///user/recordings.php HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "XXX.XXX.XXX.XXX"


    2020/03/03 11:18:52 [error] 392#392: *3369 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 5.101.0.209, server: _, request: "GET /index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "xxx.xxx.xxx.xxx", referrer: "http://xxx.xxx.xxx.xxx:80/index.php?s=/Index/\think\app/invokefunction&function=call_user_func_array&vars[0]=md5&vars[1][]=HelloThinkPHP"


    Nextcloud/NGINX error log:
    2020/03/03 11:55:20 [error] 352#352: *4854 upstream timed out (110: Operation timed out) while reading response header from upstream, client: XXX.XXX.XXX.XXX, server: _, request: "MOVE /remote.php/dav/files/Colin/Discovery/Downloads/21%20Jump%20Street%202012%20BluRay.720p.DTS.x264-CHD HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "host.hosting.host"


    Any clue as to what I might have missed so I can fix this as well?


    Kind regards,
    Capusjon

    Hi all!
    Been lurking on this forum for the past 3 weeks now, getting my feet wet in OMV and all its greatness and quirks :)
    Really happy so far with the setup. As a long-time Windows user I had to do some googling and trial and error (installation 4 atm) to get everything set up.


    After reading all kinds of posts on getting Google Drive working in OMV, I found this docker: https://github.com/mitcdh/docker-google-drive-ocamlfuse It seems this docker makes it possible to mount Google Drive on the system, which I think is interesting because of the flexibility and endless backups.


    I enabled the Drive API in the Google developers console and got my ID, key and verification code. I put them all in the Docker UI as a host and used my Docker PGID and PUID in the Docker GUI, but then I'm at a loss..
    The log told me /mnt/gdrive must be an existing location, but what to do about this is unclear.
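    For anyone hitting the same error: "/mnt/gdrive must be an existing location" suggests the mount point has to exist on the host before the container starts. A sketch of what I'd try from the command line (the image name, env-var names and flags here are my assumptions; double-check them against the project's README):

    ```shell
    #!/bin/sh
    # Create the mount point on the host first -- the log says it must already exist.
    mkdir -p /mnt/gdrive

    # Manual container start. FUSE mounts inside Docker generally need the fuse
    # device plus the SYS_ADMIN capability, and a shared volume mount so the
    # FUSE filesystem is visible on the host.
    if command -v docker >/dev/null 2>&1; then
      docker run -d --name gdrive \
        --device /dev/fuse --cap-add SYS_ADMIN \
        -e CLIENT_ID="your-client-id" \
        -e CLIENT_SECRET="your-client-secret" \
        -v /mnt/gdrive:/mnt/gdrive:shared \
        mitcdh/google-drive-ocamlfuse
    fi
    ```

    The if-guard is only there so the sketch is copy-paste safe on a machine without Docker; on the OMV box you can run the docker run line directly, or translate the same device, capability and volume settings into the Docker GUI.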


    What do I do to get this working or will my efforts be futile?


    I know this may seem like a lot of effort for 15 GB, but my login from work grants me unlimited storage, so I would really like to get this working :)


    Hope you guys can help me out!


    Kind regards,
    Capusjon