I can't figure out why my nginx/PHP-FPM setup is crashing each time I try to use curl:
Code snippet to reproduce the crash:
$request = new \Buzz\Message\Request('GET', '/', 'https://google.com');
$response = new \Buzz\Message\Response();
$client = new \Buzz\Client\Curl();
$client->send($request, $response);
application log
2015/12/29 11:42:30 [error] 213#0: *416 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: dev-fr.local.xxxx.com, request: "GET /login/check-vkontakte?code=xxxxx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dev-ru.local.xxxx.com", referrer: "https://dev-ru.local.xxxx.com/"
/var/log/system.log
Dec 29 11:40:54 Alains-MacBook-Pro.local ReportCrash[75875]: Saved crash report for php-fpm[75864] version 0 to /Users/alain/Library/Logs/DiagnosticReports/php-fpm_2015-12-29-114054_Alains-MacBook-Pro.crash
crash report (beginning)
Process: php-fpm [75865]
Path: /usr/local/Cellar/php56/5.6.15/sbin/php-fpm
Identifier: php-fpm
Version: 0
Code Type: X86-64 (Native)
Parent Process: php-fpm [75858]
Responsible: php-fpm [75858]
User ID: 501
Date/Time: 2015-12-29 11:42:30.733 +0100
OS Version: Mac OS X 10.10.5 (14F1021)
Report Version: 11
Anonymous UUID: 1DC6CEB0-0479-4A5E-FFD2-E48BA3961196
Sleep/Wake UUID: A704AF01-8AE8-44D2-BBF3-DED65D834B0B
Time Awake Since Boot: 29000 seconds
Time Since Wake: 3000 seconds
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000110
I've searched for hours and always given up, but now I'm tired of this and really want to fix it.
Complete crash report: http://ninsuo.com/crash-nginx-xxxx.html
My phpinfo(): http://ninsuo.com/phpinfo-xxxx.html
A better approach is to reinstall PHP with:
brew install --with-fpm --with-homebrew-curl --with-homebrew-openssl --with-imap --with-homebrew-libxslt --without-snmp php56
and then reinstall curl with:
brew install --with-openssl curl
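After reinstalling, it's worth restarting php-fpm and double-checking that PHP is actually linked against the Homebrew curl and OpenSSL rather than the system libraries. A rough check (paths are the usual Homebrew ones and may differ on your machine):
php -i | grep -i "curl"          # cURL version PHP was built against
php -i | grep -i "ssl version"   # should mention OpenSSL, not SecureTransport
otool -L $(brew --prefix php56)/sbin/php-fpm | grep -iE "curl|ssl"
If the output still points at /usr/lib/libcurl (the OS X system library), the rebuild didn't pick up the Homebrew versions.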
I experienced the same problem earlier and fixed it by running php-fpm as root.
I also found links which confirm this solution: First link, Second link. Hope it helps you too.
I know this may seem similar to many questions already out there, but I've read dozens and I am still experiencing the same issue. My code works fine on my local machine, and my Raspberry Pi was configured by the exact same setup script, but it doesn't work on the Pi.
My OS is Raspbian 10 (Buster) with kernel version 5.10.103-v7l+. I am running PHP 8.1, along with the other packages in the following command: sudo apt install -y php8.1-common php8.1-cli php8.1-fpm php8.1-xml php8.1-curl
php -m lists curl, and both php -i and phpinfo() show that cURL is installed and enabled:
cURL support => enabled
cURL Information => 7.64.0
I know cURL is installed and enabled, so how do I get my PHP to recognize and use the package? It works perfectly on my local Linux box, but on the Pi it just doesn't. I have restarted my server and my entire machine, and neither made a difference. I have run sudo apt update && sudo apt upgrade and everything is at the most recent version.
The 'problematic' snippet of code on my server is the following:
require "vendor/autoload.php";
use Symfony\Component\Panther\Client;
$options = [
"--headless",
"--window-size=1200,1100",
"--no-sandbox",
"--disable-gpu",
"--disable-dev-shm-usage",
];
$query = urlencode($_GET["q"]);
$url ="/search?hl=en&tbo=d&site=&source=hp&q=".$query;
$client = Client::createChromeClient("/usr/lib/chromium-browser/chromedriver", $options, [], "http://www.google.com");
$client->request("GET", $url);
My full error log is:
*168 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to undefined function Facebook\WebDriver\Remote\curl_init() in /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:190
Stack trace:
#0 /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php(100): Facebook\WebDriver\Remote\HttpCommandExecutor->__construct()
#1 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/ProcessManager/ChromeManager.php(75): Facebook\WebDriver\Remote\RemoteWebDriver::create()
#2 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(117): Symfony\Component\Panther\ProcessManager\ChromeManager->start()
#3 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(521): Symfony\Component\Panther\Client->start()
#4 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(273): Symfony\Component\Panther\Client->get()
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
Any and all help would be appreciated; I've been struggling with this one for a while. I would just avoid using curl_init() if I could, but unfortunately, as you can see, the function is called in a library that my project depends on.
This line from the error log indicates what the problem most likely is:
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
You've got PHP 8.0 here (the upstream is php8.0-fpm.sock), but you've installed cURL for PHP 8.1. You need to check your FPM or server configuration.
I think you're using Nginx. If that's correct, then this article might be helpful. Specifically, you'll want to look in the following places (a sketch follows the list):
/etc/nginx/sites-available to check which socket is used
/etc/php/VERSION/fpm/pool.d to check which sockets are available
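A minimal sketch of how to check and fix this, assuming a standard Debian/Raspbian layout (your site file name and socket paths may differ):
# which socket does nginx pass PHP requests to?
grep -r "fastcgi_pass" /etc/nginx/sites-available/
# which sockets does PHP-FPM actually provide?
ls /run/php/
# in the site's server block, point fastcgi_pass at the PHP 8.1 socket, e.g.
#   fastcgi_pass unix:/run/php/php8.1-fpm.sock;
# then reload nginx:
sudo systemctl reload nginx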
This only happens on Google Chrome and Chromium with a fresh install of Laravel.
The page shows blank and in the console it says:
(failed) net::ERR_INCOMPLETE_CHUNKED_ENCODING
Instead of the default hello view, which says “You have arrived.”
My server is Debian Wheezy with ISPConfig, Apache 2.2 and PHP 5.4
Does anybody know how I can fix this?
Had the same problem on an Ubuntu 14.04 Vagrant box running nginx. The site is a Laravel 5 app that one day surprisingly started throwing those errors.
After reading this comment:
https://github.com/barryvdh/laravel-debugbar/issues/262#issuecomment-74385850
I've checked my /var/log/nginx/vagrant.com-error.log.1 and saw:
[crit] 1020#0: *774 open() "/var/lib/nginx/fastcgi/3/03/0000000033" failed (13: Permission denied) while reading upstream, client: 192.168.56.1, server: 192.168.56.102.xip.io, request: "GET /_debugbar/assets/javascript?1423122680 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "my-host", referrer: "http://url/that/fails"
Double-checked my Vagrant box nginx lib directory permissions with ll /var/lib/:
drwxr-xr-x 7 root root 4096 feb 9 11:28 nginx/
... which internally was using the www-data user: ll /var/lib/nginx/
drwx------ 12 www-data root 4096 may 5 13:32 fastcgi/
So I ran:
chown -R www-data:www-data /var/lib/nginx
And the error in Chrome disappeared.
Just posting the solution here for visibility, even though all the credit should go to the linked original comment.
I had the exact same problem that you have. I found a work-around over here on this forum thread:
http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
The code used by the person who provided the workaround: http://laravel.io/bin/eyyDj#4,7
The gist of it is to explicitly tell Chrome how much data to expect for every request, so it doesn't have to chunk the data.
I'm seeing reports that upgrading to PHP 5.5 also fixes this problem, but not all of us can have that kind of control over our servers.
Edit: It looks as if blindly applying this work-around causes errors on redirects. This is the code that I'm now using:
App::after(function($request, $response) {
    // Fixes a strange issue with Chrome. Should theoretically be removable
    // after upgrading PHP to 5.5 from 5.4
    if ($response instanceof Illuminate\Http\Response) {
        $response->header('Content-Length', strlen($response->getOriginalContent()));
    }
});
Note that JSON responses are a separate type and may still have the chunking issue, so this solution may need to evolve somewhat to accommodate that.
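For example, one way it could be extended (an untested sketch, assuming Laravel 4's Illuminate\Http\JsonResponse class) is to measure the rendered body instead of the original content:
App::after(function($request, $response) {
    // Hypothetical variant: cover plain and JSON responses alike by
    // measuring the rendered body rather than the original content.
    if ($response instanceof Illuminate\Http\Response
        || $response instanceof Illuminate\Http\JsonResponse) {
        $response->header('Content-Length', strlen($response->getContent()));
    }
});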
I'm setting up Continuous Integration and I'm wondering if everything should take so damn long.
My build has been running for over a day now and it still isn't finished.
It is a normal Laravel app with around 20 controllers, so some time is expected, but over a day?
My config is fairly simple in my opinion:
build_settings:
    ignore:
        - "vendor"
setup:
    composer:
        action: "install"
test:
    php_mess_detector:
        allow_failures: true
    php_code_sniffer:
        standard: "PSR2"
    php_cpd:
        allow_failures: true
    php_docblock_checker:
        allowed_warnings: 10
        skip_classes: true
    php_loc:
        directory: "src"
No errors, only the (by now) pesky status "Pending"
When I check the logs I get this error:
2016/01/28 08:01:32 [error] 6702#0: *4 FastCGI sent in stderr: "PHP message: PHP Fatal error: Class 'PHPCI\Controller' not found in /var/www/vendor/block8/b8framework/b8/Application.php on line 93" while reading response header from upstream, client: someipaddress, server: green.somedomain.com, request: "GET /assets/js/plugins/datepicker/locales/bootstrap-datepicker.en.js HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "green.somedomain.com", referrer: "http://green.somedomain.com/build/view/5"
I did composer update / install and I also added the following rule to the nginx configuration:
fastcgi_param SCRIPT_NAME index.php;
My question is, is this normal? Is my config good? Am I forgetting something?
You haven't set up the build runner as part of setting up PHPCI. The web interface merely creates the build and displays the results; you need to run the command-line tool to actually run the builds.
There are three ways to set this up:
(New in 1.7 beta) PHPCI Worker w/ beanstalkd.
Install beanstalkd
Use supervisord (or similar) to run /path/to/phpci/console phpci:worker (see the example supervisord config after this list)
(Recommended for 1.6 and below) PHPCI Daemon: https://www.phptesting.org/wiki/Run-Builds-Using-a-Daemon
(Fallback option) Cron: https://www.phptesting.org/wiki/Run-Builds-Using-Cron
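For example, a minimal supervisord program entry for the worker could look like this (the PHP binary, PHPCI path, user, and log paths are placeholders; adjust them to your installation):
[program:phpci-worker]
command=/usr/bin/php /path/to/phpci/console phpci:worker
directory=/path/to/phpci
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/phpci-worker.log
stderr_logfile=/var/log/phpci-worker.err.log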
Our application runs in a Docker container on AWS:
Operating system: Ubuntu 14.04.2 LTS (Trusty Tahr)
Nginx version: nginx/1.4.6 (Ubuntu)
Memcached version: memcached 1.4.14
PHP version: PHP 5.5.9-1ubuntu4.11 (cli) (built: Jul 2 2015 15:23:08)
System Memory: 7.5 GB
We get blank pages and, less frequently, a 404 error. While checking the logs, I found that the PHP child process is being killed; memory is mostly used by the memcached and php-fpm processes, leaving very little free memory.
memcached is configured to use 2 GB of memory.
Here is the PHP-FPM www.conf:
pm = dynamic
pm.max_children = 30
pm.start_servers = 9
pm.min_spare_servers = 4
pm.max_spare_servers = 14
rlimit_files = 131072
rlimit_core = unlimited
Error logs
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] WARNING: [pool www] child 259 exited on signal 11 (SIGSEGV - core dumped) after 1339.412219 seconds from start
/var/log/nginx/error.log
2015/07/29 14:37:09 [error] 141#0: *2810 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x.x.x.x, server: _, request: "GET /suggestions/business?q=Selectfrom HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com", referrer: "http://example.com/"
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] NOTICE: [pool www] child 375 started
/var/log/nginx/php5-fpm.log:[29-Jul-2015 14:37:56] WARNING: [pool www] child 290 exited on signal 11 (SIGSEGV - core dumped) after 1078.606356 seconds from start
Coredump
Core was generated by php-fpm: pool www.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f41ccaea13a in memcached_io_readline(memcached_server_st*, char*, unsigned long, unsigned long&) () from /usr/lib/x86_64-linux-gnu/libmemcached.so.10
dmesg
[Wed Jul 29 14:26:15 2015] php5-fpm[12193]: segfault at 7f41c9e8e2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:28:26 2015] php5-fpm[12211]: segfault at 7f41c966b2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:29:16 2015] php5-fpm[12371]: segfault at 7f41c9e972da ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:36 2015] php5-fpm[12469]: segfault at 7f41c96961e9 ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:43 2015] php5-fpm[12142]: segfault at 7f41c9e6c2bd ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:07 2015] php5-fpm[11917]: segfault at 7f41c9dd22bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:54 2015] php5-fpm[12083]: segfault at 7f41c9db72bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
While googling for this same issue, and trying hard to find a solution that was related neither to sessions (I had ruled that out) nor to bad PHP code (I have several websites running precisely the same version of WordPress, and none of them has issues... except for one), I came upon an answer saying that a possible solution involved removing some buggy extension (usually memcache/d, but it could be something else).
Since I had this same site working flawlessly on one Ubuntu server, when switching to a newer server, I immediately suspected that it was the migration from PHP 5.5 to 7 that caused the problem. It was just strange because no other website was affected. Then I remembered that another thing was different on this new server: I had also installed New Relic. This is both an extension and a small server that runs in the background and sends a lot of analytics data to New Relic for processing. Allegedly, it's a PHP 5 extension, but, surprisingly, it loads well on PHP 7, too.
Now here comes the tricky bit. At some point, I had installed W3 Total Cache for the WordPress installation of that particular website. Later I saw that the performance of that server was so stellar that W3TC was unnecessary, so I switched to a much simpler configuration and uninstalled W3TC. That's all very nice, but... I forgot that I had also turned on New Relic inside W3TC (allegedly, it adds some extra analytics data to be sent to New Relic). When uninstalling W3TC, something was probably left in the New Relic configuration on my server which was still attempting to send data through the W3TC interface (assuming W3TC has an interface... I really have no idea how it works at that level), and, because that specific bit of code was missing, the php-fpm handler for that website would fail some of the time. Not all the time, because I assume that, in most cases, nginx was serving static pages. Or maybe php-fpm, set to 'recycle' after 100 calls or so, would crash on stop. Whatever exactly was happening, it was definitely related to New Relic: as soon as I removed the New Relic extension from PHP, that website went back to working normally.
Because this is such a specific scenario, I'm just writing this as an answer, in the remote chance that someone in the future googles for the exact problem.
In my case it was related to Zend Debugger/Xdebug. It forwards some TCP packets to the IDE (PhpStorm), which was not listening on that port (debugging was off). The solution is to either disable these extensions or enable listening on the debugging port in the IDE.
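For illustration, with Xdebug 2.x (the setting names changed in Xdebug 3), disabling the automatic debugger connection in php.ini looks roughly like this:
; stop Xdebug from connecting to the IDE on every request
xdebug.remote_enable = 0
xdebug.remote_autostart = 0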
I had this problem after installing xdebug, adding some properties to /etc/php/7.1/fpm/php.ini and restarting nginx. This is running on a Homestead Laravel box.
Simply restarting the php7.1-fpm service solved it for me.
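On a Homestead (Ubuntu) box that is usually something like:
sudo systemctl restart php7.1-fpm
# or, on older init systems:
sudo service php7.1-fpm restart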
It can happen if PHP is unable to write the session information to a file. By default the location is /var/lib/php/session. You can change it using the session.save_path configuration directive (or the session_save_path() function).
See also: phpMyAdmin having problems on nginx and php-fpm on RHEL 6
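A rough sketch of the two usual fixes, assuming the default path and a www-data service user (both vary by distribution):
; php.ini or the FPM pool config: point sessions at a writable directory
session.save_path = "/var/lib/php/session"
# and make sure the PHP-FPM user can actually write there
sudo chown -R www-data:www-data /var/lib/php/session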
In my case it was Xdebug. After uninstalling it, it got back to normal.
In my case, it was caused by the New Relic PHP Agent. Therefore, for a specific function that caused a crash, I added this code to disable New Relic:
if (function_exists('newrelic_ignore_transaction')) {
newrelic_ignore_transaction();
}
Refer to: https://discuss.newrelic.com/t/how-to-disable-a-specific-transaction-in-php-agent/42384/2
In our case it was caused by Guzzle + New Relic. The New Relic Agent changelog mentions a Guzzle fix in version 7.3, but even using 8.0 didn't work, so there is still something wrong. For us this was happening only in two of our scripts that were using Guzzle. We found two solutions:
Set newrelic.guzzle.enabled = false in newrelic.ini (see the snippet below). You will lose data in the External Services tab this way, but you might not need it anyway.
Downgrade the New Relic Agent to version 6.x, which somehow also works.
If you are reading this after they've released something newer than version 8.0, you could also try updating the New Relic Agent to the latest version; maybe they have fixed it by then.
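The first option is a one-line change in the agent's ini file (its location varies by distribution), followed by a PHP-FPM restart so the agent picks it up:
; newrelic.ini
newrelic.guzzle.enabled = false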
In my case I had deactivated the buffering function ob_start("buffer"); in my code ;)
A possible problem is PHP 7.3 + Xdebug. Please change Xdebug 2.7.0beta1 to Xdebug 2.7.0rc1 or the latest version of Xdebug.
For some reason, removing profile from the modes in my xdebug.ini fixes it for me.
i.e. change
xdebug.mode=debug,develop,profile
to
xdebug.mode=debug,develop