I have a simple queue worker based on the standard AMQP class in PHP. It works with RabbitMQ as the server. I have a Queue class that initializes the AMQP connection with RabbitMQ. Everything works fine with the code below:
$queue = new Queue('myQueue');
while ($envelope = $queue->getEnvelope()) {
    $command = unserialize($envelope->getBody());
    if ($command instanceof QueueCommand) {
        try {
            if ($command->execute()) {
                $queue->ack($envelope->getDeliveryTag());
            }
        } catch (Exception $exc) {
            // an error occurred so do some processing to deal with it
        }
    }
}
However, I wanted to fork the queue command execution, but in that case the queue loops endlessly over the first command. I can't acknowledge to RabbitMQ that the message was received with $queue->ack(). My forked version (simplified to a single child for testing's sake) looks like this:
$queue = new Queue('myQueue');
while ($envelope = $queue->getEnvelope()) {
    $command = unserialize($envelope->getBody());
    if ($command instanceof QueueCommand) {
        $pid = pcntl_fork();
        if ($pid) {
            // parent process
            // wait for child
            pcntl_waitpid($pid, $status, WUNTRACED);
            if ($status > 0) {
                // an error occurred so do some processing to deal with it
            } else {
                // remove Command from queue
                $queue->ack($envelope->getDeliveryTag());
            }
        } else {
            // child process
            try {
                if ($command->execute()) {
                    exit(0);
                }
            } catch (Exception $exc) {
                exit(1);
            }
        }
    }
}
Any help will be appreciated...
I finally solved the problem! I had to run the ack command from the child process; it works this way.
This is the correct code:
$queue = new Queue('myQueue');
while ($envelope = $queue->getEnvelope()) {
    $command = unserialize($envelope->getBody());
    if ($command instanceof QueueCommand) {
        $pid = pcntl_fork();
        if ($pid) {
            // parent process
            // wait for child
            pcntl_waitpid($pid, $status, WUNTRACED);
            if ($status > 0) {
                // an error occurred so do some processing to deal with it
            } else {
                // success
            }
        } else {
            // child process
            try {
                if ($command->execute()) {
                    $queue->ack($envelope->getDeliveryTag());
                    exit(0);
                }
            } catch (Exception $exc) {
                exit(1);
            }
            exit(1); // execute() returned false: exit so the child never re-enters the loop
        }
    }
}
Regarding the following code: how can I jump to finally without throwing an exception in PHP?
try {
    $db = DataSource::getConnection();
    if (some condition here is TRUE) {
        // go to finally without throwing an exception
    }
    $stmt = $db->prepare($sql);
    $stmt->saveMyData();
} catch (Exception $e) {
    die($e->getMessage());
} finally {
    $db = null;
}
Please don't do this, but here's an option:
try {
    if (TRUE) {
        goto ugh;
    }
    echo "\ndid not break";
    ugh:
} catch (Exception $e) {
    echo "\ndid catch";
} finally {
    echo "\ni'm so tired";
}
I strongly encourage against using a goto. It's just really easy for code to get sloppy and confusing when you use goto.
I'd recommend:
try {
    if (TRUE) {
        echo "\nThat's better";
    } else {
        echo "\ndid not break";
    }
} catch (Exception $e) {
    echo "\ndid catch";
} finally {
    echo "\ni'm so tired";
}
You just wrap the rest of the try into an else in order to skip it.
Another option could be to declare a finally function, call that, and return.
// I'm declaring it as a variable, so as not to clutter the declared methods.
// If you had one method across scripts, naming it `function doFinally(){}` could work well.
$doFinally = function(){};
try {
    if (TRUE) {
        $doFinally();
        return;
    }
    echo "\ndid not break";
} catch (Exception $e) {
    echo "\ndid catch";
} finally {
    $doFinally();
}
If you needed to continue the script, you could declare $doFinally something like:
$doFinally = function ($reset = FALSE) {
    static $count = 0;
    if ($reset === TRUE) {
        $count = 0;
        return;
    }
    if ($count > 0) {
        return; // cleanup already ran for this try/catch
    }
    $count++;
    // ...the actual "finally" cleanup goes here...
};
Then after the finally block, you could call $doFinally(TRUE) to reset it for the next try/catch
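For example, a self-contained sketch of that reset pattern; the $log array here is just a hypothetical stand-in for real cleanup work:

```php
<?php
// Hypothetical guard: runs its cleanup once per try/catch, resettable.
$log = array();
$doFinally = function ($reset = FALSE) use (&$log) {
    static $count = 0;
    if ($reset === TRUE) {
        $count = 0; // ready for the next try/catch
        return;
    }
    if ($count > 0) {
        return; // cleanup already ran for this try/catch
    }
    $count++;
    $log[] = 'cleanup'; // stand-in for the real cleanup work
};

try {
    $doFinally(); // early bail-out path: cleanup runs here...
} catch (Exception $e) {
} finally {
    $doFinally(); // ...so the guard makes this call a no-op
}
$doFinally(TRUE); // reset before the next try/catch
```

After the reset, a later try/finally pair would run the cleanup exactly once again.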
I have a PHP 7 CLI daemon which serially parses JSON with a file size over 50 MB. I'm trying to save every 1000 entries of parsed data to MySQL using a separate process created with pcntl_fork(), and for ~200k rows it works fine.
Then I get pcntl_fork(): Error 35.
I assume this happens because MySQL insertion becomes slower than parsing, so more and more forks accumulate until CentOS 6.3 can't handle them any more.
Is there a way to catch this error to resort to single-process parsing and saving? Or is there a way to check child process count?
Here is the solution I arrived at, based on Sander Visser's comment. The key part is checking the number of live child processes and falling back to the current process when there are too many of them:
class serialJsonReader {
    const MAX_CHILD_PROCESSES = 50;

    private $child_processes = []; // will store alive child PIDs

    private function flushCachedDataToStore() {
        // resort to a single process
        if (count($this->child_processes) > self::MAX_CHILD_PROCESSES) {
            $this->checkChildProcesses();
            $this->storeCollectedData(); // main work here
        }
        // use as many processes as possible
        else {
            $pid = pcntl_fork();
            if (!$pid) {
                $this->storeCollectedData(); // main work here
                exit();
            } elseif ($pid == -1) {
                die('could not fork');
            } else {
                $this->child_processes[] = $pid;
                $this->checkChildProcesses();
            }
        }
    }

    private function checkChildProcesses() {
        if (count($this->child_processes) > self::MAX_CHILD_PROCESSES) {
            foreach ($this->child_processes as $key => $pid) {
                $res = pcntl_waitpid($pid, $status, WNOHANG);
                // if the process has already exited
                if ($res == -1 || $res > 0) {
                    unset($this->child_processes[$key]);
                }
            }
        }
    }
}
I have this transaction and try/catch statement that checks that each $_POST['count'] value is not less than 0. The problem: the rollback only happens when the if statement is false, and it doesn't undo the earlier updates, which is incorrect.
$this->db->trans_begin();
foreach ($_POST['ID'] as $val => $r) {
    // Gets Count
    $this->db->select('count');
    $this->db->from('table1');
    $this->db->where('table_id', $r);
    $oc = $this->db->get()->row('count');
    $nc = ($oc - $_POST['count'][$val]);

    // Gets Total
    $this->db->select('cost');
    $this->db->from('table1');
    $this->db->where('table_id', $r);
    $ot = $this->db->get()->row('cost');
    $total = ($ot + $total);

    try {
        if ($nc > 0) {
            // Updates Quantity
            $process = array(
                'current_count' => $nc,
            );
            $this->db->where('product_id', $rm);
            $this->db->update('products', $process);
        }
    } catch (Exception $e) {
        $this->db->trans_rollback();
        $this->session->set_flashdata('error', 'Production Failed. 1 or more Raw Materials required for production is insufficient.');
        exit;
    }
}
$this->db->trans_commit();
There are insert and update statements afterwards, but what I want is to stop the whole process if an exception is caught; apparently the exit; statement doesn't do so.
There are several ways to do this. Perhaps the easiest is this.
Model:
// I'm making the function up because you don't show it
public function do_something() {
    // all code the same up to here...
    if ($nc > 0) {
        // Updates Count
        $process = array('count' => $nc);
        $this->db->where('table_id', $r);
        $this->db->update('table1', $process);
    } else {
        $this->db->trans_rollback();
        throw new Exception('Model whats_its_name has nothing to process');
    }
}
The catch block in the controller will also pick up any exception that the database class(es) might throw. (Do they throw any exceptions? I don't know.)
Controller
try {
    $this->some_model->do_something();
} catch (Exception $e) {
    // catch the exception thrown in do_something()
    // additional handling here
}
All that said, it might be wise to check $_POST['count'] for the appropriate values before you call $this->some_model->do_something().
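A minimal sketch of such a pre-check (the helper name countsAreValid() is made up; the field names come from the question):

```php
<?php
// Hypothetical pre-validation before calling the model:
// every submitted count must be present, numeric, and not negative.
function countsAreValid(array $post): bool {
    if (!isset($post['ID'], $post['count'])) {
        return false;
    }
    foreach ($post['ID'] as $val => $r) {
        $count = isset($post['count'][$val]) ? $post['count'][$val] : null;
        if (!is_numeric($count) || $count < 0) {
            return false;
        }
    }
    return true;
}
```

In the controller you could then call countsAreValid($_POST) and skip the model call (and show the flash message) when it returns false, instead of relying on exit; inside the loop.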
I'm trying to retrieve database values using a PHP daemon and fork (using pcntl_fork()) an instance for each id that was retrieved.
Each fork is supposed to do some work and then alter the database value, so it won't be retrieved again.
However, when I fork a child and let it sleep for 10 seconds, for example (a realistic processing time), the MySQL connection seems to time out. How do I prevent this from happening? The try/catch doesn't seem to prevent the error.
#!/usr/bin/php
<?php
ini_set('memory_limit', '256M');
gc_enable();

function sig_handler($signo) {
    global $child;
    switch ($signo) {
        case SIGCHLD:
            echo "SIGCHLD received\n";
            $child--;
    }
}

// install signal handler for dead children
pcntl_signal(SIGCHLD, "sig_handler");

global $PIDS; $PIDS = array();
global $maxforks; $maxforks = 5;
global $child; $child = 1;
global $boot; $boot = true;

date_default_timezone_set('Europe/Brussels');

// parse command line arguments
if ($argc > 0) {
    foreach ($argv as $arg) {
        $args = explode('=', $arg);
        switch ($args[0]) {
            case '--log':
                $log = $args[1];
                break;
            case '--msgtype':
                $msgtype = $args[1];
                break;
        } // end switch
    } // end foreach
} // end if

// daemonize
$daemon_start = date('j/n/y H:i', time());
$pid = pcntl_fork();
if ($pid == -1) {
    return 1; // error
} else if ($pid) {
    return 0; // parent exits
} else {
    while (true) {
        try {
            $host = 'localhost';
            $dbname = 'bla';
            $dbuser = 'bla';
            $dbpass = 'bla';
            $db = new PDO('mysql:host=' . $host . ';dbname=' . $dbname . ';charset=utf8', $dbuser, $dbpass, array(PDO::ATTR_TIMEOUT => 2));
            //$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_WARNING);
        } catch (PDOException $e) {
            echo $e->getMessage();
        }
        $read_messages = $db->query("SELECT * blablabla");
        while ($read_message_row = $read_messages->fetch(PDO::FETCH_ASSOC)) {
            $id = $read_message_row['id'];
            $pid1 = pcntl_fork();
            if ($pid1 == -1) {
                die('could not fork');
            } else { // START ELSE COULD FORK
                $PIDS[$pid1] = $pid1; // KEEP TRACK OF SPAWNED PIDS
                if ($pid1) {
                    // parent
                    if ($child++ >= $maxforks) {
                        pcntl_wait($status);
                        $child++;
                    }
                    echo "Forking child with PID $pid1 for $id.\n";
                    // PARENT THREAD: $ch is a copy that we don't need in this thread
                    // fork the child
                } else {
                    include_once "test_worker.php";
                } // end child thread
            } // if-else-forked
        }
    }
}
?>
The solution is simple: connect (or reconnect) after forking.
A forked process is an exact mirror of its parent and shares its resource handles. When one of your forked processes closes the DB connection, it is closed for all other processes in the tree, even in the middle of a query. Then you get errors like "MySQL server has gone away" or "Lost connection to MySQL server during query".
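A minimal sketch of that pattern (the DSN, credentials, and table/column names are placeholders; the point is that each child opens its own PDO connection instead of reusing the parent's handle):

```php
<?php
// Placeholder credentials -- substitute your real DSN, user, and password.
function connectDb(): PDO {
    return new PDO('mysql:host=localhost;dbname=bla;charset=utf8', 'bla', 'bla',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
}

// Hypothetical per-row work done by a child process.
function processRow(int $id): void {
    // Never reuse the parent's handle here -- it is the same socket.
    $db = connectDb(); // fresh connection owned by this child only
    $db->exec('UPDATE messages SET done = 1 WHERE id = ' . $id);
    // this connection closes when the child exits, leaving the parent's intact
}

function runWorkers(array $ids): void {
    foreach ($ids as $id) {
        $pid = pcntl_fork();
        if ($pid == -1) {
            die('could not fork');
        } elseif ($pid == 0) {
            processRow($id); // child: do the work on its own connection
            exit(0);
        }
        // parent: keep looping and forking
    }
    while (pcntl_waitpid(-1, $status) > 0); // reap all children
}
```

The parent can keep its own connection for the SELECT that produces $ids; only the children must open fresh ones after pcntl_fork().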
Don't use pcntl_fork() while you have database handles open. The database handle ends up shared by both processes, leaving the connection in an inconsistent state.
In fact, avoid using pcntl_fork() at all if you can help it. Code using it tends to be very fragile, and it will not work correctly outside the command-line SAPI.
I have this code to retrieve some counter values from a copy machine.
foreach ($sett as $key => $value) {
    if (intval(str_replace("INTEGER: ", "", snmpget($ip, "public", $base . $value["MIB"])))) {
        $c = intval(str_replace("INTEGER: ", "", snmpget($ip, "public", $base . $value["MIB"])));
        $error = false;
    } else {
        $c = 0;
        $error = true;
    }
    $counters = array_push_assoc($counters, ucwords($key), array("total" => $c, "code" => $value["code"]));
}
Everything works like a charm, but the problem is that when a machine is down and the code cannot make an SNMP GET, the whole script fails.
First I want to check that the connection to the device is alive and then retrieve the counters with snmpget().
Is there any solution you guys can offer me?
Thanks!
The snmpget() function returns FALSE if it fails to retrieve the object.
See docs: http://www.php.net/manual/en/function.snmpget.php
You should do a check for this within your code, for example:
try {
    foreach ($sett as $key => $value) {
        $snmpReturn = snmpget($ip, "public", $base . $value["MIB"]);
        if ($snmpReturn === false) {
            // Do something to handle the failed SNMP request.
            throw new Exception("Failed to execute the SNMP request to the machine.");
        } else {
            if (intval(str_replace("INTEGER: ", "", $snmpReturn))) {
                $c = intval(str_replace("INTEGER: ", "", $snmpReturn));
                $error = false;
            } else {
                $c = 0;
                $error = true;
            }
            $counters = array_push_assoc($counters, ucwords($key), array("total" => $c, "code" => $value["code"]));
        }
    }
} catch (Exception $e) {
    // Handle the exception; maybe kill the script because it failed?
}