Laravel Forge download limit (1024MB) - php

When trying to download a file larger than 1024MB from the server via PHP with "return response()->download(...);", the download stops at 1024MB.
When downloading from public_html not via PHP, larger files can be downloaded.
Nginx error:
2016/09/15 13:23:58 [error] 5801#5801: *198201 readv() failed (104: Connection reset by peer) while reading upstream, client: xx.xx.xx.xx, server: xxxxxx.com, request: "GET /test HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "xxxxxx.com"
Using: Laravel Forge, Laravel 5.1, Nginx, PHP 7.

Increasing fastcgi_max_temp_file_size in Nginx solved the problem.
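For reference, the directive belongs in the site's Nginx PHP location block; a minimal sketch (the location pattern and socket path are assumptions based on the setup described above):

```nginx
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
    include fastcgi_params;

    # Nginx buffers slow FastCGI responses to a temp file capped at
    # fastcgi_max_temp_file_size (default 1024m), which matches the
    # observed 1024MB cutoff. Raise the cap, or set it to 0 to disable
    # buffering to disk entirely.
    fastcgi_max_temp_file_size 4096m;
}
```

With the value at 0, Nginx passes the response to the client synchronously instead of spooling it to disk, which also avoids the limit.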

Related

How do I enable php error log on Laravel Homestead

I'm using Laravel Homestead to develop a laravel 5.2 site.
It's working fine, but recently I started getting the utterly useless Laravel / nginx 502 Bad Gateway error, which could mean anything from a php error to a server misconfiguration.
Looking in the Homestead box's /etc/php5/fpm/php-fpm.conf, PHP error logging is active and set to log to error_log = /var/log/php5-fpm.log
This log file is empty however, and I can't find any info on how to enable basic PHP error logging.
The basic laravel log at storage/logs/laravel.log is getting written to but it has no information on these 502 errors.
From the /var/log/nginx/myapp.log file:
2017/03/21 17:50:42 [error] 5239#0: *12 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.10.1, server: replication-interface.app, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "myapp.app"
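Two settings commonly decide whether PHP-FPM workers write anything to that log at all; a hedged sketch using the Homestead paths above (the exact pool file name may differ on your box):

```ini
; /etc/php5/fpm/pool.d/www.conf
; Without this, stdout/stderr from worker processes (including the
; fatal errors behind a 502) is discarded instead of reaching
; /var/log/php5-fpm.log.
catch_workers_output = yes

; /etc/php5/fpm/php.ini (the FPM SAPI's php.ini, not the CLI one)
log_errors = On
error_log = /var/log/php5-fpm.log
```

After changing either file, restart the php5-fpm service so the pool picks up the new settings.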

Getting the following error in PHP: "PDOException - SQLSTATE[HY000] [1040] Too many connections"

We are using the Nginx web server with PHP and MySQL 5.6 Amazon RDS server.
While accessing a page of our app we are getting the following error.
PHP message:
[1040] PDOException - SQLSTATE[HY000] [1040] Too many connections -" while reading response header from upstream, client: 127.0.0.1, server: localhost, request: "GET /manager/applications.php HTTP/1.0", upstream: "fastcgi://127.0.0.1:9000", host: "
We have checked the max_connections parameter in MySQL and it is set to 600. There are only 2 active connections on our MySQL RDS server, and we have plenty of RAM and disk space.
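A few diagnostic queries can confirm whether the 600-connection cap is really being hit (illustrative; run them against the RDS instance):

```sql
-- Configured cap (reported as 600 above)
SHOW VARIABLES LIKE 'max_connections';

-- Connections open right now, and the high-water mark since restart.
-- A Max_used_connections near 600 means the app is leaking or holding
-- connections (e.g. persistent PDO connections) despite the low count
-- seen at the moment of checking.
SHOW STATUS LIKE 'Threads_connected';
SHOW STATUS LIKE 'Max_used_connections';

-- Who is holding them
SHOW FULL PROCESSLIST;
```

If the high-water mark is far below 600, the 1040 error may instead come from a different limit, such as a per-user connection cap or an RDS parameter-group value overriding the one checked.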

Sylius: 504 Gateway Time-out on dev environment with Standard-Edition

I've followed the installation steps here: http://docs.sylius.org/en/latest/book/installation.html
Everything is OK until I try to access /app_dev.php.
I'm running it on Vagrant v1.8.1 with an Ubuntu 14.04 box, memory=1024 and cpus=2. The physical host is a MacBook Pro on OS X 10.11.3 (i5 2.7 GHz, 8 GB RAM).
I even tried installing the apcu extension and raising memory_limit in the php.ini file to 512 MB, but nothing changed.
The Nginx log says:
2016/03/16 11:31:06 [error] 1292#0: *1 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.10.1, server: test.dev, request: "GET /app_dev.php/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "test.dev", referrer: "http://test.dev/app_dev.php/login"
The application log doesn't show any errors. I also tried reprovisioning the VM and installing Sylius from scratch, but I always get the same behavior.
I think it's related to the host configuration but don't know where to start.
After many attempts with different box configurations I finally got Sylius working: I simply removed and reinstalled the vagrant-bindfs plugin.
It's still really slow and uses a lot of memory (about 60MB); I will dig further into the NFS configuration for Vagrant on OS X and then update this answer.
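If the dev front controller still times out (Symfony's dev-mode cache warmup can take longer than Nginx's 60-second default), raising the FastCGI read timeout for the PHP location is a common workaround; a sketch, with the location pattern assumed from a typical Symfony vhost:

```nginx
location ~ ^/(app|app_dev|config)\.php(/|$) {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;

    # Default is 60s. The 504 above is Nginx giving up on PHP-FPM,
    # not PHP failing, so give slow dev requests more headroom.
    fastcgi_read_timeout 300;
}
```

This treats the symptom rather than the slowness itself, but it lets the first (slow) cache-warming request complete instead of being cut off.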

Raspbian owncloud nginx [error] 2667#0: *4 connect() failed (111: Connection refused) while connecting to upstream

I am using a Raspberry Pi to host an ownCloud server. It uses nginx, and when I configured the Pi with this tutorial, the page came up with a 502 Bad Gateway error. I checked the logs and found this:
2015/10/22 05:18:03 [error] 2667#0: *4 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.101, server: 192.168.1.102, request: "GET /favicon.ico HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.1.102", referrer: "https://192.168.1.102/"
EDIT: Never mind this question anymore; I have moved on to a different platform. However, I am still curious as to what this problem is.
I tried to fix it with several other solutions found in posts on this site like here and here, as well as ensuring that PHP was installed, but none worked. I am stumped, as I am a relative newbie to Linux and know nothing about nginx. Any help is appreciated. Thanks in advance.
php-fpm is not running; you need to start it.
If you have added php-fpm as a service: service php-fpm restart
Otherwise run {PHP_PATH}/sbin/php-fpm, where {PHP_PATH} is the --with-prefix=/path/ you gave when you ran ./configure for PHP.
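To confirm the diagnosis before restarting anything, you can check whether something is actually listening on the upstream address from the error (commands are illustrative; the service name varies by distro):

```
# Is a php-fpm process running at all?
ps aux | grep '[p]hp-fpm'

# Is anything bound to the address Nginx proxies to (127.0.0.1:9000)?
sudo netstat -tlnp | grep 9000

# On Raspbian the service is typically named php5-fpm:
sudo service php5-fpm restart
```

"Connection refused" on connect() to the upstream almost always means nothing is listening there, as opposed to "reset by peer", which means the upstream died mid-request.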

PHP-FPM Xdebug causes SIGSEGV

Enabling Xdebug on an nginx server with PHP-FPM causes a SIGSEGV error to be logged in /var/log/nginx/error.log:
2015/07/26 20:30:37 [error] 28452#28452: *326 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 192.168.50.1, server: mydomain.dev, request: "POST /admin/soap/add/95 HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "mydomain.dev", referrer: "http://mydomain.dev/admin/soap/add/95"
When I disable Xdebug, the errors are gone.
The script causing this error sends a SOAP request to an external service.
In my opinion this might be some kind of misconfiguration, e.g. exceeding a maximum connection limit or something like that, but I have no idea how to find it.
The server I'm working on is a Vagrant box: https://github.com/Varying-Vagrant-Vagrants/VVV.
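As a first isolation step you can toggle the extension's settings rather than removing it outright; the paths and values below are assumptions for a VVV-style PHP 5 box:

```ini
; /etc/php5/fpm/conf.d/20-xdebug.ini
zend_extension = xdebug.so

; Keep the extension loaded but turn off automatic stack collection,
; to see whether the crash is in Xdebug's trace machinery (Xdebug 2.x
; setting):
xdebug.default_enable = 0

; If the SOAP handling recurses deeply, the crash may be stack depth;
; raising the limit is worth a try:
xdebug.max_nesting_level = 512
```

If disabling collection removes the segfault, upgrading to the latest Xdebug 2.x is the usual longer-term fix, since a number of crash bugs were fixed across that series; remember to restart php5-fpm after each change.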
