After php upgrade pcntl_fork causing "errno=32 Broken pipe" - php

I recently upgraded from PHP 5.4.26 to 5.4.28. After the upgrade I am getting this notice:
Notice: Unknown: send of 6 bytes failed with errno=32 Broken pipe in Unknown on line 0
whenever I run the following code:
<?php
$tasks = array(
    '1' => array(),
    '2' => array(),
);
ini_set('display_errors', true);

class RedisClass {
    private $redis;

    public function __construct()
    {
        $this->redis = new Redis();
        $this->redis->connect('localhost', 6379);
    }
}

$redis = new RedisClass();

foreach ($tasks as $index => $task) {
    $pid = pcntl_fork();
    // This is a child
    if ($pid == 0) {
        echo "Running " . $index . " child in " . getmypid() . "\n";
        break;
    }
}

switch ($pid) {
    case -1:
        die('could not fork');
    case 0:
        // do the child code
        break;
    default:
        while (pcntl_waitpid(0, $status) != -1) {
            $status = pcntl_wexitstatus($status);
            echo "Child completed with status $status\n";
        }
        echo "Child Done (says: " . getmypid() . ")";
        exit;
}
If I fork only one child, I do not get the PHP notice. If I fork more than one child, I get the notice for every child except the first.
Does anyone have any clues as to what is going on here?
I am assuming it is trying to close the Redis connection multiple times, but this is code I have been running for at least four months without any issues.
It only started displaying these notices after the upgrade to 5.4.28.
I have looked at the PHP change logs but I cannot see anything that I think may explain this issue.
Should I report this to PHP as a bug?
UPDATE:
Looks like it "may" be a Redis issue. I am using phpredis; I tested the same code with a MySQL connection instead of loading Redis and I do not get the error.
class MysqlClass {
    private $mysqli;

    public function __construct()
    {
        $this->mysqli = mysqli_init(); // This is not the droid you are looking for
        $this->mysqli->real_connect('IP_ADDRESS',
                                    'USER_NAME',
                                    'PASSWORD');
    }
}

$mysql = new MysqlClass();
$mysql = new MysqlClass();

The problem here is that you do not reconnect to Redis in the child process. As Michael said, you do not have an active connection from the second child onwards. The MySQL example would fail the same way if you actually ran some queries in the children.
I have had exactly this problematic behaviour with the "MySQL server has gone away" error, and also with Redis.
The solution is to create a new connection to MySQL and to Redis in the child. If you have a singleton that manages the MySQL/Redis connection, make sure to reset its instance as well (that was also a problem for me).
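A minimal sketch of that fix, assuming phpredis and a Redis server on localhost:6379 as in the question: each child opens its own connection and closes it before exiting, instead of inheriting and then tearing down the parent's socket.

```php
<?php
// Parent's connection: used only by the parent.
$parentRedis = new Redis();
$parentRedis->connect('localhost', 6379);

foreach (array('1' => array(), '2' => array()) as $index => $task) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        // Child: open a fresh connection rather than reusing the inherited one.
        $childRedis = new Redis();
        $childRedis->connect('localhost', 6379);
        $childRedis->set("task:$index:pid", getmypid()); // the child's work
        $childRedis->close(); // close only the child's own connection
        exit(0);
    }
}

// Parent reaps all children; its own connection is still intact.
while (pcntl_waitpid(0, $status) !== -1);
```

Giving each child its own connection keeps the child's shutdown from sending bytes on the socket the parent (or a sibling) still holds, which is what produces the broken-pipe notice.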

Related

Connection management in MongoDB using PHP

I am using PHP 7.2 on a website hosted on Amazon. I have code similar to this that writes a record to MongoDB:
Database connection class:
class Database {
    private static $instance;
    private $managerMongoDB;

    private function __construct() {
        # Singleton private constructor
    }

    public static function getInstance() {
        if (!self::$instance) {
            self::$instance = new Database();
        }
        return self::$instance;
    }

    function writeMongo($collection, $record) {
        if (empty($this->managerMongoDB)) {
            $this->managerMongoDB = new MongoDB\Driver\Manager(DB_MONGO_HOST ? DB_MONGO_HOST : null);
        }
        $writeConcern = new MongoDB\Driver\WriteConcern(MongoDB\Driver\WriteConcern::MAJORITY, 1000);
        $bulk = new MongoDB\Driver\BulkWrite();
        $bulk->insert($record);
        try {
            $result = $this->managerMongoDB->executeBulkWrite(
                DB_MONGO_NAME . '.' . $collection, $bulk, $writeConcern
            );
        } catch (MongoDB\Driver\Exception\BulkWriteException $e) {
            // Not important
        } catch (MongoDB\Driver\Exception\Exception $e) {
            // Not important
        }
        return $result->getInsertedCount() > 0;
    }
}
Execution:
Database::getInstance()->writeMongo($tableName, $dataForMongo);
The script is working as intended and the records are added in MongoDB.
The problem is that the connections are not being closed at all, and once there have been 500 inserts (500 is the connection limit of MongoDB on our server) it stops working. If we restart php-fpm, the connections are reset and we can insert 500 more records.
The connection is reused during the request, but we have requests coming from 100s of actual customers.
As far as I can see there is no way to manually close the connections. Is there something I'm doing wrong? Is there some configuration that needs to be done on the driver? I tried setting socketTimeoutMS=1000&wTimeoutMS=1000&connectTimeoutMS=1000 in the connection string but the connections keep staying alive.
You are creating a client instance every time the function is invoked, and never closing it, which would produce the behavior you are seeing.
If you want to create the client instance in the function, close it in the same function.
Alternatively create the client instance once for the entire script and use the same instance in all of the operations done by that script.
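A minimal sketch of the latter option, reusing the Database singleton from the question (DB_MONGO_HOST and DB_MONGO_NAME are the question's constants): the Manager is created exactly once per script, in the constructor, so every writeMongo() call shares the same underlying client rather than deciding per call whether to build one.

```php
class Database {
    private static $instance;
    private $managerMongoDB;

    private function __construct() {
        // Create the client once for the whole script; all writes reuse it.
        $this->managerMongoDB = new MongoDB\Driver\Manager(DB_MONGO_HOST ? DB_MONGO_HOST : null);
    }

    public static function getInstance() {
        if (!self::$instance) {
            self::$instance = new Database();
        }
        return self::$instance;
    }

    function writeMongo($collection, $record) {
        $writeConcern = new MongoDB\Driver\WriteConcern(MongoDB\Driver\WriteConcern::MAJORITY, 1000);
        $bulk = new MongoDB\Driver\BulkWrite();
        $bulk->insert($record);
        $result = $this->managerMongoDB->executeBulkWrite(
            DB_MONGO_NAME . '.' . $collection, $bulk, $writeConcern
        );
        return $result->getInsertedCount() > 0;
    }
}
```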

Integration test that intentionally creates a deadlock (PHP)

I'm trying to test a class MyConnection I've created that extends Doctrine\DBAL\Connection and handles deadlocks (or lock waits) by detecting and resolving them.
Therefore I need to create a deadlock in my test, in order to confirm that the handling works as it should.
The idea is to create a temporary table in the database and try to access it from two connections at once.
The challenge is that I need to do this without using multiple threads. (Threads in PHP are created with pthreads, which needs PHP 7.2+, and I have 7.1; upgrading is not an option for now.)
What I've already tried is using multiple connections to the database. However, this created a lock wait instead. The problems with the lock wait were that:
1. it makes the integration test too slow
2. I can't delete the temporary table I create in my test. Even though the test runs on a test database, it's good practice to clean up after your tests
I have also tried forking my process. The problems with that were:
1. it still creates a lock wait instead
2. I had to exit the child process and have the parent handle everything and make the assertions. But since the deadlock happens in the child, I can't catch the errors thrown in the child from the parent
The idea in the code below is that when my executeQuery fails to handle the deadlock, it won't write 'I am handling the deadlock or the lock wait' to the logger; therefore I put in that shouldReceive('critical') expectation, which will throw an error when things don't work out. Otherwise the test should pass.
Class extending Doctrine\DBAL\Connection
class MyConnection extends Connection
{
    public function executeQuery($query, array $params = [], $types = [], QueryCacheProfile $qcp = null)
    {
        try {
            return parent::executeQuery($query, $params, $types, $qcp);
        } catch (\Doctrine\DBAL\DBALException $exception) {
            $this->logger->critical('I am handling the deadlock or the lock wait');
        }
    }
}
Integration Test
class DeadlockCreationTest extends Test
{
    const SERIALIZE_TRANSACTION = 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;';
    const START_TRANSACTION = 'START TRANSACTION;';
    const SELECT_QUERY = 'SELECT * FROM %s.innodb_deadlock_maker WHERE a = %d;';
    const UPDATE_QUERY = 'UPDATE %s.innodb_deadlock_maker SET a=%d WHERE a <> %d';

    public function setUp()
    {
        parent::setUp();
        $this->currentDbConnection = $this->getConnection();
        $this->newDbConnection = $this->getNewIdenticalDBConnection($this->currentDbConnection);
        $this->logger = Mockery::mock(Logger::class);
        $this->currentDbConnection->executeQuery(
            'CREATE TABLE IF NOT EXISTS ' . $this->dbName .
            '.innodb_deadlock_maker(b INT PRIMARY KEY AUTO_INCREMENT, a INT NOT NULL) engine=innodb;'
        );
        $this->currentDbConnection->executeQuery(
            'insert into ' . $this->dbName . '.innodb_deadlock_maker(a) values(0), (1);'
        );
    }

    public function testCreateDeadlock()
    {
        $pid = pcntl_fork();
        if ($pid === -1) {
            $this->fail('Could not fork child process');
        }
        if ($pid === 0) {
            $this->process = 'child';
            // In child process
            try {
                $this->logger->shouldReceive('critical')
                    ->with('I am handling the deadlock or the lock wait');
                $this->newDbConnection->setLogger($this->logger);
                $this->newDbConnection->executeQuery(self::SERIALIZE_TRANSACTION);
                $this->newDbConnection->executeQuery(self::START_TRANSACTION);
                $this->newDbConnection->executeQuery(sprintf(self::SELECT_QUERY, $this->dbName, 1));
                $this->newDbConnection->executeUpdate(sprintf(self::UPDATE_QUERY, $this->dbName, 1, 1));
            } catch (\Doctrine\DBAL\DBALException $exception) {
                if (stripos($exception->getMessage(), 'try restarting transaction') !== false) {
                    exit(123);
                }
                exit(100); // Our exit code to indicate a different error occurred
            }
            exit(0); // Our exit code to indicate no error occurred
        }
        $this->process = 'parent';
        // Otherwise in parent process:
        $this->currentDbConnection->executeQuery(self::SERIALIZE_TRANSACTION);
        $this->currentDbConnection->executeQuery(self::START_TRANSACTION);
        $this->currentDbConnection->executeQuery(sprintf(self::SELECT_QUERY, $this->dbName, 0));
        $this->currentDbConnection->executeUpdate(sprintf(self::UPDATE_QUERY, $this->dbName, 0, 0));
        pcntl_waitpid($pid, $status); // Wait for forked child to exit
        $childExitCode = pcntl_wexitstatus($status);
        switch ($childExitCode) {
            case 0:
                echo "\nNo error occurred, or deadlock occurred in child but was handled correctly - success!\n";
                break;
            case 100:
                $this->fail('A different error occurred');
                break;
            case 123:
                $this->fail('Deadlock occurred in child and was not handled correctly');
                break;
            default:
                $this->fail('Child exited with unexpected exit code');
        }
    }

    public function tearDown()
    {
        $this->newDbConnection->executeQuery('DROP TABLE ' . $this->dbName . '.innodb_deadlock_maker;');
        $this->newDbConnection->close();
        parent::tearDown();
    }
}
What I ideally need is to create a deadlock instead of a lock wait.
I also need to somehow confirm that my Connection class worked as it should.
Another issue I have with the code above is that I get the error message PDO::exec(): MySQL server has gone away, and I don't know if and how I can stop that.
PS: The idea of the test is based on this article

PHP pthreads: the lack of resource error

Just got PHP pthreads error
pthreads has detected that the multihread could not be started, the system lacks the necessary resources or the system-imposed limit would be exceeded
and
Cannot initialize zend_mm storage [win32]
in my script...
The PHP code looks like this:
class multihread extends Worker {
    public $result;

    function __construct($e) {
        $this->e = $e;
    }

    public function run() {
        $this->result = file($this->e);
    }
}

$threads = 15;
do {
    for ($i = 1; $i <= $threads; $i++) {
        if (empty($thread[$i])) {
            $e = generate_e($i);
            if ($e == false) {
                echo "Warning: no more job for e. exiting A"; exit;
            }
            echo "Starting new thread $i \n";
            $thread[$i] = new multihread($e);
            $thread[$i]->start(PTHREADS_INHERIT_NONE);
        } elseif ($thread[$i]->isWorking() === false) {
            if ($thread[$i]->result === false) {
                echo "ERROR: Something wrong with thread $i, exit.";
                exit;
            }
            $thread[$i]->shutdown();
            $e = generate_e($i);
            if ($e == false) {
                echo "Warning: no more job exiting B"; exit;
            }
            $thread[$i] = new multihread($e);
            $thread[$i]->start(PTHREADS_INHERIT_NONE);
        }
    }
    usleep(100);
} while (1);
So the script opens a new thread, starts it, then closes it with shutdown(), and does this in a loop. It worked like a charm, but after 16,000+ opened/closed threads I got this error. It seems some resources stay locked? How do I fix this?
You must either detach() the thread before it ends, or call join() on it to wait until it has ended.
Missing one or the other leads to the thread's resources never being freed, and thus the system runs out of resources, as observed.
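A minimal sketch of the join() variant applied to the loop from the question: before replacing a finished worker, join it and drop the reference so its resources can actually be released.

```php
// Inside the for loop, when the worker has finished its job:
if ($thread[$i]->isWorking() === false) {
    $thread[$i]->join();   // wait until the thread has really ended; frees its resources
    unset($thread[$i]);    // drop the reference so the object can be destroyed
    // ... then create and start the replacement worker as before
}
```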

PHP - MongoClient connections stacking up, not closing

I've got a class that pulls and aggregates data from a MongoDB. This is all working fine: I have multiple queries being run under the same connection (in the same class). However, every time I refresh the page, a new connection is opened. I can see this in my mongod output:
[initandlisten] connection accepted from 127.0.0.1:46770 #12 (6 connections now open)
I see this count up and up with every page refresh. This should be fine, I think; however, the connections never seem to close.
After a while, the connections / locks take up too much memory in Mongo, and I can no longer run the queries.
This dev environment is on a 32-bit version of Mongo, so I don't know if this is only happening because of that. (Our prod env is 64-bit, and I cannot change the dev env right now.)
My question is: Should I be closing the connection after all the queries have been made for a particular user? How should I be handling the connection pool?
The service class:
class MongoService {
    protected $mongoServer;
    protected $mongoUser;
    protected $mongoPassword;
    protected $conn;

    public function setSpokenlayerMongoServer($mongoServer)
    {
        $this->mongoServer = $mongoServer;
    }

    public function setSpokenlayerMongoUser($mongoUser)
    {
        $this->mongoUser = $mongoUser;
    }

    public function setSpokenlayerMongoPassword($mongoPassword)
    {
        $this->mongoPassword = $mongoPassword;
    }

    public function setServiceConnection($conn)
    {
        $this->conn = $conn;
    }

    public function connect()
    {
        try {
            $this->conn = $this->getMongoClient();
        } catch (Exception $e) {
            /* Can't connect to MongoDB! */
            // logException($e);
            die("Can't do anything :(");
        }
    }

    public function getDatabase($name)
    {
        if (!isset($this->conn)) {
            $this->connect();
        }
        return $this->conn->$name;
    }

    protected function getMongoClient($retry = 3)
    {
        $connectString = "mongodb://" . $this->mongoUser . ":" . $this->mongoPassword . "@" . $this->mongoServer;
        try {
            return new MongoClient($connectString);
        } catch (Exception $e) {
            /* Log the exception so we can look into why mongod failed later */
            // logException($e);
        }
        if ($retry > 0) {
            return $this->getMongoClient(--$retry);
        }
        throw new Exception("I've tried several times getting MongoClient.. Is mongod really running?");
    }
}
And in the service class where the queries are:
protected function collection()
{
    if (!isset($this->collection)) {
        $this->collection = $this->db()->selectCollection($this->collectionName);
    }
    return $this->collection;
}
And then a query is done like so:
$results = $this->collection()->aggregate($ops);
Is this correct behavior?
Known issue if you're using Azure IaaS, possible issue on other platforms.
One option would be to change the Mongo client configuration (the snippet below is the .NET driver's syntax; the same idea applies to other drivers):
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(5);
This would kill all idle connections after 5 minutes.
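If you are on the legacy PHP driver (as the MongoClient class in the question suggests), you can also release a client's connections yourself once a request's queries are done, via MongoClient::close(). A minimal sketch, with placeholder credentials, database, and collection names:

```php
// Placeholder URI, database, and collection names.
$client = new MongoClient('mongodb://USER:PASSWORD@localhost');
$results = $client->selectDB('mydb')->selectCollection('stats')->aggregate($ops);

// Explicitly close this client's connection once the work is finished,
// instead of waiting for the worker process to be torn down.
$client->close();
```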

What is "New transaction is not allowed" error in PHP and SQLSRV driver for?

I'm working on a web application written in PHP that uses SQL Server 2008. To connect to the database, I use Microsoft's SQLSRV driver. In one part of this application, I have to use SQL transactions. As Microsoft suggests, I did it exactly as described in this article. The main process in my code follows these steps:
1. Start a SQL transaction.
2. Send information to PHP files through jQuery and check the result sent via JSON.
3. Roll back if the result was false; go to the next query if it was true.
4. Commit the transaction if no error occurred and all results were OK.
// This is my pseudo code
if (sqlsrv_begin_transaction($sqlsrv->sqlsrvLink) === true) {
    $firstQuery = sqlsrv_query($stmt1);
    if (!$firstQuery) {
        sqlsrv_rollback();
    } else {
        $nextQuery = sqlsrv_query($stmt2);
        if (!$nextQuery) {
            sqlsrv_rollback();
        } else {
            sqlsrv_commit();
        }
    }
} else {
    print_r(sqlsrv_errors()); // Here is where I get the error below.
}
The problem I have is this error:
[Microsoft][SQL Server Native Client 10.0][SQL Server] New transaction is not allowed because there are other threads running in the session
I'm using SQLSRV driver ver.2.
What is this error for? How can I solve it?
I included my own sqlsrv class in the first part of index.php; it contains the methods below:
function __construct($dbServerName, $dbUsername, $dbPassword, $dbName)
{
    $connectionInfo = array("Database" => $dbName, "CharacterSet" => "UTF-8");
    $this->sqlsrvLink = sqlsrv_connect($dbServerName, $connectionInfo);
    if ($this->sqlsrvLink === false) {
        $this->sqlsrvError = sqlsrv_errors();
    }
}

function __destruct()
{
    sqlsrv_close($this->sqlsrvLink);
}
I think you should set MultipleActiveResultSets to true when you connect to SQL Server:
$conn = sqlsrv_connect('127.0.0.1', array(
    'Database' => 'Adventureworks',
    'MultipleActiveResultSets' => true, // MARS ENABLED
));
http://php.net/manual/de/ref.pdo-sqlsrv.connection.php
From your error, it seems like $nextQuery = sqlsrv_query($stmt2); is starting a new transaction in the same session. Can you commit $firstQuery before starting the second?
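If enabling MARS is not possible, the usual alternative is to make sure each statement is fully consumed or freed before the next query runs in the same session. A minimal sketch, with placeholder SQL and the $conn connection resource returned by sqlsrv_connect():

```php
if (sqlsrv_begin_transaction($conn) === true) {
    $stmt1 = sqlsrv_query($conn, "UPDATE t1 SET x = 1"); // placeholder query
    if ($stmt1 === false) {
        sqlsrv_rollback($conn);
    } else {
        sqlsrv_free_stmt($stmt1); // free the statement before issuing the next one
        $stmt2 = sqlsrv_query($conn, "UPDATE t2 SET y = 2"); // placeholder query
        if ($stmt2 === false) {
            sqlsrv_rollback($conn);
        } else {
            sqlsrv_free_stmt($stmt2);
            sqlsrv_commit($conn);
        }
    }
}
```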
