I am working on a Symfony2 project using Doctrine and MongoDB. Everything had been working great until today, when I uncovered a problem.
Queries that return one or more records/documents are apparently causing the PHP process to die. To be more specific, this only happens with queries that return "file" results. I do not get a PHP error, and no errors are logged to the Apache error log.
When I hit the URL that causes this query to run, I get net::ERR_EMPTY_RESPONSE in Chrome. I can output content with echo 'test'; exit; just before the query, and I see the content in my browser. If I put the same line right after the query, I get the empty response error.
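A simplified sketch of the controller code (identifiers simplified for this post; the criteria match the mongod log further below):

$dm = $this->get('doctrine_mongodb.odm.document_manager');

// echo 'test'; exit; // placed here, 'test' reaches the browser

$file = $dm->getRepository('ProjectBundle:File')->findOneBy(array(
    'commonId'  => new \MongoId($commonId),
    'meta.size' => 'large',
));

echo 'test'; exit; // placed here instead, Chrome reports ERR_EMPTY_RESPONSE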
I have a development environment set up on my computer, which includes the LAMP stack. However, I have it configured to connect to my remote MongoDB instance. I have no problems when querying for files using my local setup. The various service versions are slightly different between my computer and the server. Based on this observation, it seems as if it is not a MongoDB service issue, but maybe a PHP extension issue?
I should add that I am able to successfully store files using the services on my server. But I can only query/retrieve the data on my local setup.
Is any log content generated when PHP dies like this?
I am running the following service versions:
OS: Ubuntu 12.04 LTS
Apache: 2.2.22
PHP: 5.3.10-1ubuntu3.1
Mongo PHP extension: 1.2.10
MongoDB-10gen: 2.0.5
Any help would be greatly appreciated. I have tried everything I know and have yet to find any clue as to what is actually causing this to happen.
--
My model looks like this:
<?php

namespace Project\Bundle\Document;

use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;
use Symfony\Component\Validator\Constraints as Assert;

/**
 * @MongoDB\Document
 */
class File
{
    /**
     * @MongoDB\Id(strategy="auto")
     */
    protected $id;

    /**
     * @MongoDB\ObjectId
     * @MongoDB\Index
     * @Assert\NotBlank
     */
    protected $userId;

    /**
     * @MongoDB\ObjectId
     * @MongoDB\Index
     */
    protected $commonId;

    /**
     * @MongoDB\File
     */
    public $file;

    /**
     * @MongoDB\String
     */
    public $mimeType;

    /**
     * @MongoDB\Hash
     */
    public $meta;

    // ... getters / setters ...
}
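For context, reading a stored file back out goes through the mapped property; with the legacy mongo driver, the @MongoDB\File field is hydrated as a MongoGridFSFile. A minimal sketch (assuming $document is a loaded File):

// Sketch: returning stored GridFS content; hydrating this document
// is the step that appears to crash.
header('Content-Type: ' . $document->mimeType);
echo $document->file->getBytes();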
I turned on verbose logging for the MongoDB server, and the query appears to run fine:
Wed May 9 20:04:29 [conn1] query dbdev.File.files query: { $query: { commonId: ObjectId('4fab01396bd985c215000000'), meta.size: "large" }, $orderby: {} } ntoreturn:1 nreturned:1 reslen:258 0ms
Wed May 9 20:04:29 [conn1] end connection 127.0.0.1:42087
Wed May 9 20:04:30 [DataFileSync] flushing mmap took 0ms for 5 files
Wed May 9 20:04:30 [DataFileSync] flushing diag log
Wed May 9 20:04:30 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 0ms
Wed May 9 20:04:30 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms
Wed May 9 20:04:30 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms
Wed May 9 20:04:30 [clientcursormon] mem (MB) res:46 virt:997 mapped:160
UPDATE
I used strace to find the following segmentation fault in Apache:
en("/opt/dev/app/cache/dev/doctrine/odm/mongodb/Hydrators/ProjectBundleDocumentFileHydrator.php", O_RDONLY) = 28
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
mmap(NULL, 2462, PROT_READ, MAP_SHARED, 28, 0) = 0x7fa3ae356000
munmap(0x7fa3ae356000, 2462) = 0
close(28) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
chdir("/etc/apache2") = 0
rt_sigaction(SIGSEGV, {SIG_DFL, [], SA_RESTORER|SA_INTERRUPT, 0x7fa3b3ce4cb0}, {SIG_DFL, [], SA_RESTORER|SA_RESETHAND, 0x7fa3b3ce4cb0}, 8) = 0
kill(5020, SIGSEGV) = 0
rt_sigreturn(0x139c) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
Process 5020 detached
This sounds like a bug which I fixed on May 3rd. I would suggest you try the latest version from GitHub (the v1.2 branch!). It also helps if you include the "mongodb" section of your phpinfo() output. If you still have an issue, please file a bug report with a small reproducible script at http://jira.mongodb.org/browse/PHP.
Related
I'm using Laravel in a project and I want to use broadcasting with laravel-echo-server and Redis. I have set up both in Docker containers. Output below:
Redis
redis_1 | 1:C 27 Sep 06:24:35.521 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 27 Sep 06:24:35.577 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 27 Sep 06:24:35.577 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 27 Sep 06:24:35.635 * Running mode=standalone, port=6379.
redis_1 | 1:M 27 Sep 06:24:35.635 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 27 Sep 06:24:35.635 # Server initialized
redis_1 | 1:M 27 Sep 06:24:35.635 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 27 Sep 06:24:35.636 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 27 Sep 06:24:35.715 * DB loaded from disk: 0.079 seconds
redis_1 | 1:M 27 Sep 06:24:35.715 * Ready to accept connections
A few warnings but nothing breaking.
laravel-echo-server
laravel-echo-server_1 | L A R A V E L E C H O S E R V E R
laravel-echo-server_1 |
laravel-echo-server_1 | version 1.3.1
laravel-echo-server_1 |
laravel-echo-server_1 | ⚠ Starting server in DEV mode...
laravel-echo-server_1 |
laravel-echo-server_1 | ✔ Running at localhost on port 6001
laravel-echo-server_1 | ✔ Channels are ready.
laravel-echo-server_1 | ✔ Listening for http events...
laravel-echo-server_1 | ✔ Listening for redis events...
laravel-echo-server_1 |
laravel-echo-server_1 | Server ready!
laravel-echo-server_1 |
laravel-echo-server_1 | [6:29:38 AM] - dG0sLqG9Aa9oVVePAAAA joined channel: office-dashboard
The client seems to join the channel without any problems.
However, if I kick off an event, laravel-echo-server doesn't receive it.
I did a bit of research and found something about a queue worker. So I decided to run that (php artisan queue:work) and see if that did anything. According to the docs it should run only the first task in the queue and then exit (as opposed to queue:listen). And sure enough, it began processing the event I kicked off earlier. But it didn't stop and kept going until I killed it:
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
etc.
The following output showed in the redis container:
redis_1 | 1:M 27 Sep 06:39:01.562 * 10000 changes in 60 seconds. Saving...
redis_1 | 1:M 27 Sep 06:39:01.562 * Background saving started by pid 19
redis_1 | 19:C 27 Sep 06:39:01.662 * DB saved on disk
redis_1 | 19:C 27 Sep 06:39:01.663 * RDB: 2 MB of memory used by copy-on-write
redis_1 | 1:M 27 Sep 06:39:01.762 * Background saving terminated with success
Now either I made so many API calls that the queue really is this massive, or something is going wrong. Additionally, laravel-echo-server didn't show any output after the jobs were 'processed'.
I have created a hook in my Model which kicks off the event:
public function __construct(array $attributes = []) {
parent::__construct($attributes);
parent::created(function( $model ){
//event(new CompanyCreated($model));
});
parent::updated(function( $model ){
event(new CompanyUpdated($model));
});
parent::deleted(function( $model ){
event(new CompanyDeleted($model));
});
}
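As an aside, Eloquent model event listeners are conventionally registered once in the model's static boot() method; registering them in the constructor adds another copy of each listener every time a model is instantiated. A sketch of the conventional form (same events, just moved):

// Register model event hooks once, when the model class boots
protected static function boot()
{
    parent::boot();

    static::updated(function ($model) {
        event(new CompanyUpdated($model));
    });

    static::deleted(function ($model) {
        event(new CompanyDeleted($model));
    });
}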
Then this is the event it kicks off:
class CompanyUpdated implements ShouldBroadcast {

    use Dispatchable, InteractsWithSockets, SerializesModels;

    public $company;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(Company $company) {
        $this->company = $company;
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return Channel|array
     */
    public function broadcastOn() {
        return new Channel('office-dashboard');
    }
}
And finally, this is the code on the front-end that's listening for the event:
window.Echo.channel('office-dashboard')
.listen('CompanyUpdated', (e) => {
console.log(e.company.name);
});
.env file:
BROADCAST_DRIVER=redis
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=redis
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
Why isn't the event passed to laravel-echo-server? Anything I'm missing or forgetting?
It started working out of the blue.
For me, @jesuisgenial's comment pointed me in the right direction.
You can easily tell whether the client is not subscribing by checking the WS (WebSockets) filter under the Network tab in Chrome's developer tools.
Without 1 second delay (no subscribe event):
Echo.private(`users.${context.getters.getUserId}`)
    .listen('.conversations.new_message', function (data) {
        console.log(data.message);
    });
With 1 second delay (subscribe event present):
setTimeout(() => {
    Echo.private(`users.${context.getters.getUserId}`)
        .listen('.conversations.new_message', function (data) {
            console.log(data.message);
        });
}, 1000);
I have been using qdPM v9.0 (version taken from qdpm/core/apps/qdPM/config/app.yml) on a CentOS 6.7 server with PHP 7.0.6, Apache 2.2.x, and MariaDB 5.5.x for over a year now without any issues.
It seems to be using legacy Symfony 1.4.
I tried to install Let's Encrypt SSL certificates and this upgraded Apache/httpd to 2.2.15, no change in PHP or MariaDB versions.
Upon restarting httpd after SSL certificate installation, suddenly I get 500 Internal Server Error and the httpd error log shows:
...
[Wed Aug 09 14:55:22 2017] [error] [client x.x.x.x] Empty response header name, aborting request
[Wed Aug 09 14:55:32 2017] [error] [client x.x.x.x] Empty response header name, aborting request
...
Also, this is not a misconfiguration of SSL/Apache, because other apps on other sub-domains continue to work fine, both with and without Let's Encrypt SSL certificates.
Google does not help, except for a German discussion that suggests using PHP 5.3:
https://www.php.de/forum/webentwicklung/php-frameworks/1508593-installation-symfony-framework
Symfony 1 only works with PHP 5.3 at most... That's why I told you to get Symfony 3!!!
I cleared the cache several times.
I removed all Let's Encrypt SSL configuration as well as restored old self-signed SSL certificates and restored Apache configuration to earlier working state.
And because we take backups daily, I even restored the entire code backup from a few hours before.
This should definitely have worked.
I still get the same error and no clue / hint as to how to debug it.
The Symfony logging documentation is for its current version, not for 1.4.
What could have caused this issue?
How do I enable debugging so I could find where the error "Empty response header name" is being created so that maybe I can patch it?
I modified the function and it works (PHP 7.0+):
.../core/lib/vendor/symfony/lib/response/sfWebResponse.class.php on line 407
/**
 * Retrieves a normalized Header.
 *
 * @param string $name Header name
 *
 * @return string Normalized header
 */
protected function normalizeHeaderName($name)
{
    $out = [];
    array_map(function ($record) use (&$out) {
        $out[] = ucfirst(strtolower($record));
    }, explode('-', $name));

    return implode('-', $out);
}
This version works fine too:
/**
 * Retrieves a normalized Header.
 *
 * @param string $name Header name
 *
 * @return string Normalized header
 */
protected function normalizeHeaderName($name)
{
    return preg_replace_callback(
        '/\-(.)/',
        function ($matches) { return '-'.strtoupper($matches[1]); },
        strtr(ucfirst(strtolower($name)), '_', '-')
    );
}
I was able to trace the problem down to the sent headers in core/lib/vendor/symfony/lib/response/sfWebResponse.class.php on line 357; something is wrong with the values.
qdPM 9.0 was running fine for over a year on PHP 7, until an Apache 2 update for Ubuntu 16.04 came along.
However, I found the issue:
E_WARNING: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in
.../core/lib/vendor/symfony/lib/response/sfWebResponse.class.php on line 409
but I cannot get the old line:
return preg_replace('/\-(.)/e', "'-'.strtoupper('\\1')", strtr(ucfirst(strtolower($name)), '_', '-'));
converted so that it works. As far as I understand, it should uppercase the letter following each dash. But I can't get it to work with preg_replace_callback:
return preg_replace_callback('\-(.)', function($m) { return '-'.strtoupper($m[1]); } , strtr(ucfirst(strtolower($name)), '_', '-'));
the anonymous function won't get called at all. I have removed the preg_replace completely and now it is working fine. Maybe we will get an update here on how to solve it correctly.
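For what it's worth, the callback most likely never fires because the pattern above is missing its delimiters: '\-(.)' is not a valid PCRE pattern, so preg_replace_callback() emits a warning and returns null. With delimiters added, it reproduces the old /e behaviour:

// Same replacement as the old /e line, with pattern delimiters added
return preg_replace_callback(
    '/\-(.)/',
    function ($m) { return '-'.strtoupper($m[1]); },
    strtr(ucfirst(strtolower($name)), '_', '-')
);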
When I run PHPUnit and it involves testing a class I wrote which logs something via PHP's error_log() or via Monolog (to a custom file), the logged content is dumped on the command line.
Even in its simplest form I can reproduce the issue. I'm not even sure it is an issue; the bottom line is that I want to turn off the dumping of every logged item on the command line. I've been banging my head against a wall for a while now, but I can't seem to figure it out. How do I turn this off?
Facts:
every item I log is shown on the CLI
before running the test, ini_get('error_log') returns an empty string
I'm on nginx (1.10.1) + php-fpm (using PHP 7.0.7)
related issue, but not helpful: Stop log messages from appearing in the command line
An example:
class OutputOnCliTest extends PHPUnit_Framework_TestCase
{
    /** @test */
    public function assertAndLogSomething()
    {
        $this->assertEquals(true, true);
        error_log('I do not want to show this on the command line');
    }

    /** @test */
    public function assertAndLogSomethingElse()
    {
        $logger = new \Monolog\Logger('loggerName');
        // no need to set a Monolog handler for this example, just want to log right away
        $logger->info('I do not want to show this either');
        $this->assertEquals(true, true);
    }
}
CLI:
$ ls -la
-rw-r--r-- 1 user group 1150 Dec 7 18:47 phpunit.xml
drwxr-xr-x 4 user group 136 Dec 7 17:44 tests
drwxr-xr-x 17 user group 578 Dec 7 10:05 vendor
$ ./vendor/bin/phpunit
PHPUnit 5.7.2 by Sebastian Bergmann and contributors.
I do not want to show this on the command line
.[2016-12-07 16:53:58] loggerName.INFO: I do not want to show this either
. 2 / 2 (100%)
Time: 24 ms, Memory: 4.00MB
OK (2 tests, 2 assertions)
$
phpunit.xml
<phpunit bootstrap="vendor/autoload.php">
    <testsuites>
        <testsuite name="Unit-Tests">
            <directory>./tests</directory>
        </testsuite>
    </testsuites>
</phpunit>
Can someone please help me out so I can stop banging my head? :-)
Maybe more of a design solution, but I'd mock the Logger object so it doesn't actually log anything, except in the logger test itself (and if you are using Monolog, like I do, you don't need to test that anyway).
If you actually do want to test whether output to the browser (or somewhere else) was sent, you could try this:
/**
 * @preserveGlobalState disabled
 * @runInSeparateProcess
 */
public function testOutputCanSend()
{
    ob_start();      // buffer anything written to the output stream
    // Do some stuff here that outputs directly, $logger->addInfo() etc.
    ob_end_clean();  // discard the buffer (or inspect it via ob_get_clean())
}
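If the goal is simply to keep error_log() output off the console, another option (an untested sketch; the log path is arbitrary) is to point error_log at a file for the duration of the test run, either via an <ini> element in phpunit.xml or in a bootstrap file:

<?php
// tests/bootstrap.php (sketch): send error_log() output to a file instead of stderr
ini_set('error_log', sys_get_temp_dir() . '/phpunit-error.log');

require __DIR__ . '/../vendor/autoload.php';

Note this only covers error_log(); the Monolog line shows up because a logger with no handlers falls back to a php://stderr StreamHandler, so pushing an explicit handler (even a NullHandler) should silence that one.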
I am trying to set up functional tests on my CentOS server using Selenium Server and PHPUnit.
When I run the tests, I get an error on the command line:
PHPUnit_Extensions_Selenium2TestCase_WebDriverException:
Unable to connect to host vmdev-pando-56 on port 7055 after 45000 ms.
Firefox console output: Error: no display specified
I've been doing research for more than three days and I couldn't find a solution. I read many posts, including ones on Stack Overflow. As far as I can tell, everything is properly set up, and yet I am experiencing the same problem as many other people, and the solutions that work for them do not seem to work for me.
This is my setup:
OS: CentOS 6.5 x86, command line only (no GUI)
PHP: 5.6
Phpunit: 3.7, although I also tried with 5.3
Selenium standalone web server: 2.53, downloaded from here, although I also tried with 2.9
Xvfb system: xorg-x11-server-Xvfb
Firefox: 38.0.1, although I also tried with 38.7
I also set DISPLAY to :99 in my bash profile (export DISPLAY=:99).
This is what I do to set up the environment:
First, I launch the Xvfb system: /usr/bin/Xvfb :99 -ac -screen 0 1280x1024x24 &
Then I launch the Selenium server: /usr/bin/java -jar /usr/lib/selenium/selenium-server-standalone-2.53.0.jar &
I launch Firefox: firefox & (although I know this is not necessary, but just in case)
All three processes are running in the background.
At this point, I know that Firefox is operative, as well as the X buffer. I can run the command firefox http://www.stackoverflow.com & and then take a snapshot of the buffer by executing import -window root /tmp/buffer_snapshot.png, which gives something like this: (screenshot omitted)
I did, of course, receive a warning on the terminal: Xlib: extension "RANDR" missing on display ":99", but I have read countless times that this is not a problem.
Anyway, this is where the problem begins.
I've written a rather simple functional test (please note that my other, non-functional tests work just fine, so in that respect the environment seems to be properly configured):
<?php

namespace My\APP\BUNDLE\Tests\Functional\MyTest;

use PHPUnit_Extensions_Selenium2TestCase;

class HelloWorldTest extends PHPUnit_Extensions_Selenium2TestCase {

    protected function setUp() {
        $this->setBrowser('firefox');
        $this->setHost('localhost');
        $this->setPort(4444);
        $this->setBrowserUrl('http://www.stackoverflow.com');
    }

    public function testTitle() {
        $this->url('/');
        $this->assertEquals("1", "1");
    }
}
And when I run the test by issuing phpunit HelloWorldTest.php, I get the following error:
PHPUnit_Extensions_Selenium2TestCase_WebDriverException:
Unable to connect to host vmdev-pando-56 on port 7055
after 45000 ms. Firefox console output:
Error: no display specified
Checking the log file generated by Selenium, I found the following (interesting) lines:
21:55:46.135 INFO - Creating a new session for Capabilities [{browserName=firefox}]
[...]
java.util.concurrent.ExecutionException:
org.openqa.selenium.WebDriverException:
java.lang.reflect.InvocationTargetException
Build info: version: '2.53.0',
revision: '35ae25b',
time: '2016-03-15 17:00:58'
System info: host: 'vmdev-pando-56',
ip: '127.0.0.1',
os.name: 'Linux',
os.arch: 'i386',
os.version: '2.6.32-431.el6.i686',
java.version: '1.7.0_99'
Driver info: driver.version: unknown
[...]
(The file contains the complete stack trace dump, and the original message of no display specified)
No errors in the Xvfb log file.
At this point I have no clue what I am doing wrong.
Can anyone help?
Thanks a lot
A reason for the Unable to connect error is that the version of Selenium Server does not know how to work with the version of Firefox you have installed. Selenium standalone server 2.53 is the latest and greatest, and selenium-firefox-driver is also 2.53, but Firefox 38 is old. I am running Firefox 45.0.1 with Selenium 2.53.
Our application runs in a Docker container on AWS:
Operating system: Ubuntu 14.04.2 LTS (Trusty Tahr)
Nginx version: nginx/1.4.6 (Ubuntu)
Memcached version: memcached 1.4.14
PHP version: PHP 5.5.9-1ubuntu4.11 (cli) (built: Jul 2 2015 15:23:08)
System Memory: 7.5 GB
We get blank pages and, less frequently, a 404 error. While checking the logs, I found that the php-fpm child process is killed, and it seems that memory is mostly used by the memcached and php-fpm processes, with very little free memory.
memcached is configured to use 2 GB of memory.
Here is the php-fpm www.conf:
pm = dynamic
pm.max_children = 30
pm.start_servers = 9
pm.min_spare_servers = 4
pm.max_spare_servers = 14
rlimit_files = 131072
rlimit_core = unlimited
Error logs
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] WARNING: [pool www] child 259 exited on signal 11 (SIGSEGV - core dumped) after 1339.412219 seconds from start
/var/log/nginx/error.log
2015/07/29 14:37:09 [error] 141#0: *2810 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x.x.x.x, server: _, request: "GET /suggestions/business?q=Selectfrom HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com", referrer: "http://example.com/"
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] NOTICE: [pool www] child 375 started
/var/log/nginx/php5-fpm.log:[29-Jul-2015 14:37:56] WARNING: [pool www] child 290 exited on signal 11 (SIGSEGV - core dumped) after 1078.606356 seconds from start
Coredump
Core was generated by `php-fpm: pool www'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00007f41ccaea13a in memcached_io_readline(memcached_server_st*, char*, unsigned long, unsigned long&) () from /usr/lib/x86_64-linux-gnu/libmemcached.so.10
dmesg
[Wed Jul 29 14:26:15 2015] php5-fpm[12193]: segfault at 7f41c9e8e2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:28:26 2015] php5-fpm[12211]: segfault at 7f41c966b2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:29:16 2015] php5-fpm[12371]: segfault at 7f41c9e972da ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:36 2015] php5-fpm[12469]: segfault at 7f41c96961e9 ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:43 2015] php5-fpm[12142]: segfault at 7f41c9e6c2bd ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:07 2015] php5-fpm[11917]: segfault at 7f41c9dd22bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:54 2015] php5-fpm[12083]: segfault at 7f41c9db72bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
While googling for this same issue, and trying hard to find a solution that was not related to sessions (I have ruled that out) nor to bad PHP code (I have several websites running precisely the same version of WordPress, and none have issues... except for one), I came upon an answer saying that a possible solution involved removing some buggy extension (usually memcache/d, but it could be something else).
Since I had this same site working flawlessly on one Ubuntu server, when switching to a newer server I immediately suspected that the migration from PHP 5.5 to 7 caused the problem. It was just strange that no other website was affected. Then I remembered that another thing was different on this new server: I had also installed New Relic. This is both an extension and a small daemon that runs in the background and sends a lot of analytics data to New Relic for processing. Allegedly it's a PHP 5 extension, but, surprisingly, it loads well on PHP 7, too.
Now here comes the tricky bit. At some point, I had installed W3 Total Cache for the WordPress installation of that particular website. Subsequently, I saw that the performance of that server was so stellar that W3TC was unnecessary, and I stuck to a much simpler configuration, so I could uninstall W3TC. That's all very nice, but... I forgot that I had also turned on New Relic support in W3TC (allegedly, it adds some extra analytics data to be sent to New Relic). When uninstalling W3TC, 'something' was probably left behind in the New Relic configuration on my server which was still attempting to send data through the W3TC interface (assuming W3TC has an interface... I really have no idea how it works at that level), and, because that specific bit of code was missing, the php-fpm handler for that website would fail... some of the time. Not all the time, because I'm assuming that, in most cases, nginx was sending static pages back. Or maybe php-fpm, set to 'recycle' after 100 calls or so, would crash on stop. Whatever exactly was happening, it was definitely related to New Relic: as soon as I removed the New Relic extension from PHP, that website went back to working normally.
Because this is such a specific scenario, I'm just writing this as an answer, on the off chance that someone in the future googles for the exact same problem.
In my case it was related to Zend Debugger/Xdebug. It forwards some TCP packets to the IDE (PhpStorm), which was not listening on that port (debugging was off). The solution is to either disable these extensions or enable listening on the debugging port in the IDE.
I had this problem after installing xdebug, adding some properties to /etc/php/7.1/fpm/php.ini and restarting nginx. This is running on a Homestead Laravel box.
Simply restarting the php7.1-fpm service solved it for me.
It can happen if PHP is unable to write the session information to a file. By default this is /var/lib/php/session. You can change it using the session_save_path configuration directive.
phpMyAdmin having problems on nginx and php-fpm on RHEL 6
In my case it was Xdebug. After uninstalling it, everything went back to normal.
In my case, it was caused by the New Relic PHP Agent. Therefore, for a specific function that caused a crash, I added this code to disable New Relic:
if (function_exists('newrelic_ignore_transaction')) {
newrelic_ignore_transaction();
}
Refer to: https://discuss.newrelic.com/t/how-to-disable-a-specific-transaction-in-php-agent/42384/2
In our case it was caused by Guzzle + New Relic. The New Relic Agent changelog mentions that version 7.3 contained a Guzzle fix, but even using 8.0 didn't work, so there is still something wrong. In our case this was happening only in two of our scripts that use Guzzle. We found two solutions:
Set newrelic.guzzle.enabled = false in newrelic.ini. You will lose data in the External Services tab this way, but you might not need it anyway.
Downgrade the New Relic Agent to version 6.x, which somehow also works.
If you are reading this after they've released something newer than version 8.0, you could also try updating the New Relic Agent to the latest version; maybe they have fixed it.
In my case, the culprit was the output buffering call ob_start("buffer"); I deactivated it in my code and the problem went away. ;)
A possible cause is PHP 7.3 + Xdebug. Upgrade Xdebug 2.7.0beta1 to Xdebug 2.7.0rc1 or the latest version of Xdebug.
For some reason, removing profile from the modes in my xdebug.ini fixed it for me.
i.e. change
xdebug.mode=debug,develop,profile
to
xdebug.mode=debug,develop