How to make a worker run continuously in PHP on Heroku?

I am creating a PHP application on Heroku which automatically inserts a record into the DB every 10 - 20 seconds. To achieve that I created a Procfile and declared a worker process inside it. While deploying the application to Heroku, an error turned up when the worker ran. I am new to Heroku and PHP. Can anyone help me understand what this means, how I can resolve it, and how to make the worker run continuously so it keeps inserting records into the database?
This is the line my Procfile contains, which is in the root of the project:
worker: php /app/worker/db_worker.php
My db_worker.php:
<?php
include_once "./db_config.php";
$conn = $connection;

$number_1 = rand(1, 50);
$number_2 = rand(60, 500);
$insertQuery = "INSERT INTO random_numbers (num_1, num_2) VALUES ('" . $number_1 . "', '" . $number_2 . "')";
$result = mysqli_query($conn, $insertQuery);
if ($result) {
    echo 'Details saved';
} else {
    echo 'Failed to save details';
}
?>

The simplest solution would be to run your operations in a loop. For example:
<?php
// include_once only ever loads the config the first time, so do the
// setup before the loop and reuse the connection.
include_once "./db_config.php";
$conn = $connection;

while (true) {
    $number_1 = rand(1, 50);
    $number_2 = rand(60, 500);
    $insertQuery = "INSERT INTO random_numbers (num_1, num_2) VALUES ('" . $number_1 . "', '" . $number_2 . "')";
    $result = mysqli_query($conn, $insertQuery);
    if ($result) {
        echo 'Details saved';
    } else {
        echo 'Failed to save details';
    }
    // recommended: do some memory cleanup here
    // ...
    // wait for 5 seconds
    sleep(5);
}
But there's a problem. PHP was not designed with very long-running processes in mind, and memory leaks can build up over a long run. If this doesn't work well, you can instead keep the script as a one-shot and trigger it with a scheduler service like cron-job.org.
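For long runs, here is a sketch of the same loop with some basic memory hygiene, assuming (as in the question) that db_config.php defines $connection as an open mysqli link. The statement is prepared once, and the process exits after a fixed number of iterations so the platform can start it fresh; Heroku restarts a worker dyno whose process has exited.
<?php
include_once "./db_config.php";
$conn = $connection;

// Prepare once; mysqli binds parameters by reference, so re-assigning
// $number_1 / $number_2 below is enough to change the inserted values.
$number_1 = 0;
$number_2 = 0;
$stmt = mysqli_prepare($conn, "INSERT INTO random_numbers (num_1, num_2) VALUES (?, ?)");
mysqli_stmt_bind_param($stmt, "ii", $number_1, $number_2);

for ($i = 0; $i < 10000; $i++) { // arbitrary recycle threshold
    $number_1 = rand(1, 50);
    $number_2 = rand(60, 500);
    echo mysqli_stmt_execute($stmt) ? "Details saved\n" : "Failed to save details\n";
    gc_collect_cycles(); // reclaim cyclic garbage on long runs
    sleep(5);
}

// Exit cleanly and let the platform restart the worker with a fresh heap.
mysqli_stmt_close($stmt);
mysqli_close($conn);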

Related

SQLite3::exec failed

Below is my code. $sqlite->exec never returns true. I used echo to output the insert statement just above this line of code; the statement is correct, and it inserts fine when executed directly on the command line. But using $sqlite->exec does not work.
static function addContent($ja, $tc, $sc, $comment) {
    $ja = trim($ja);
    $tc = trim($tc);
    $sc = trim($sc);
    $sqlite = new SQLite3('/home/bitnami/htdocs/converter');
    if ($sqlite) {
        $query = "insert into hanzi values (null, '$ja', '$tc', '$sc', '$comment')";
        echo $query . '<br />';
        $result = $sqlite->exec($query);
        if ($result) { echo '$result is OK'; } else { echo '$result is not OK'; }
    }
    return $result;
}
There is no problem with the database connection, that is, the result of $sqlite = new SQLite3('/home/bitnami/htdocs/converter'); is correct, otherwise the echo below could not be executed.
This code was working fine on a CentOS server before, and now I have moved it to a LAMP environment on AWS Lightsail.
Other parts of the code that use the select statement $sqlite->query($query) can be executed.
I used .log to collect the log, but there is no record of execution failure in the log.
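One way to see why exec() returns false is to ask SQLite3 itself; a small diagnostic sketch (the database path is the one from the question, the inserted values are placeholders):
$sqlite = new SQLite3('/home/bitnami/htdocs/converter');
$sqlite->enableExceptions(true); // make failures throw instead of silently returning false

try {
    $sqlite->exec("insert into hanzi values (null, 'a', 'b', 'c', 'd')");
} catch (Exception $e) {
    // lastErrorMsg() reports messages such as "attempt to write a readonly
    // database", a typical symptom after moving servers when the web-server
    // user cannot write to the database file or to the directory holding it
    // (SQLite needs to create a journal file next to the database).
    echo 'SQLite error: ' . $sqlite->lastErrorMsg();
}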

PHP while loop break after sent

I started a PHP script with an infinite loop and I can't stop it. I deleted the file on the server, but it still keeps inserting data into the database. Is there any way to deactivate it?
$q = "INSERT INTO `gancxadebebi`
      ($keys)
      VALUES
      ($values)";
$var = 0;
while (true) {
    sleep(1);
    $r = mysqli_query($c, $q);
    if ($r) {
        echo 'success';
    } else {
        echo 'error';
    }
    ++$var;
    echo $var;
    if ($var == 100000) {
        break;
    }
}
I'm guessing this is a development machine. If that is the case, just restart the web server to terminate the PHP script. If you are using Apache, that would look like:
sudo apache2ctl restart
Don't run this command on a production machine.
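Note that deleting the file doesn't help because PHP has already compiled the script into memory. A defensive pattern for future scripts is to have the loop watch for its own source file and stop when it disappears; a minimal sketch:
while (true) {
    // __FILE__ was resolved at startup; once the file is deleted from
    // the server this check fails and the loop ends itself.
    clearstatcache(true, __FILE__); // don't trust PHP's cached stat result
    if (!file_exists(__FILE__)) {
        break;
    }
    // ... do the inserts ...
    sleep(1);
}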

How to use multi-threading with this

I've been reading up on multi-threading with PHP, but I'm having a tough time integrating it into my command-line PHP script.
I read multithreading
and multithread foreach.
But I'm really not sure. Any thoughts on how to apply multi-threading here? The reason I need multi-threading is that telnet takes forever (see the shell script). But I can't write to my DB concurrently ($stmt2). I'm looping through my list of devices with $stmt->fetch.
Maybe I should do something like run task specifically, with just the telnet/shell script call in the task, like that example:
$task = new class extends Thread {
    private $response;

    public function run()
    {
        $content = file_get_contents("http://google.com");
        preg_match("~<title>(.+)</title>~", $content, $matches);
        $this->response = $matches[1];
    }
};
$task->start() && $task->join();
var_dump($task->response); // string(6) "Google"
But I'm getting this error when I try to add that to my code below:
PHP Parse error: syntax error, unexpected T_CLASS in /opt/IBM/custom/NAC_Dslam/calix_swVerThreaded.php on line 100
this is the line:
$task = new class ...
My script looks like this:
$stmt = $mysqli->prepare("SELECT ip, model FROM TableD WHERE vendor = 'Calix' AND model IN ('C7','E7') AND sw_ver IS NULL LIMIT 6000"); //AND ping_reply IS NULL AND software_version IS NULL
$stmt->bind_result($ip, $model); //list of ip's
if (!$stmt->execute()) {
    //err
}
$stmt2 = $mysqli2->prepare("UPDATE TableD SET sw_ver = ?
                            WHERE vendor = 'Calix'
                            AND ip = ?");
$stmt2->bind_param("ss", $software, $ip);

while ($stmt->fetch()) {
    //initializing var's
    if (pingAddress($ip) == "alive") { //Ones that don't ping are dead to us.
        ///////this is the part that takes forever and should be multi-threaded/////
        //Call shell script to telnet to calix dslam and get version for that ip
        if ($model == "C7") {
            $task = new class extends Thread {
                private $itsOutput;
                public function run()
                {
                    // takes forever: telnet in the shell script, can't be fixed;
                    // each time I call this script it's a different ip
                    exec("./calix_C7_swVer.sh $ip", $itsOutput);
                }
            };
            $task->start() && $task->join();
            var_dump($task->itsOutput); //should be returned output above //takes forever to telnet
            //$output = $task->itsOutput;
            $output2 = array_reverse($output, true);
            if (!(preg_grep("/DENY/", $output2))) {
                $found = preg_grep("/COMPLD/", $output2);
                $ind = key($found);
                $version = explode(",", $output[$ind + 1]);
                if (strlen($version[3]) >= 1) { //if sw ver came back in an acceptable size
                    $software = $version[3];
                    $software = trim($software, '"'); //trim double quote (usually is there)
                    print "sw ver after trim: " . $software . "\n";
                    if (!$stmt2->execute()) { //write sw version to netcool db
                        $tempErr = "Failed to insert into dslam_elements_nac: " . $stmt2->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                    if (!$stmtX->execute()) { //commit it
                        $tempErr = "Failed to commit dslam_elements_nac: " . $stmtX->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                } //we got a version back
                else { //version not retrieved
                    //error processing
                } //didn't get sw ver
            } //not deny
        } //c7
        elseif ($model == "E7") {
            exec("./calix_E7_swVer.sh $ip", $output);
            $output2 = array_reverse($output, true);
            if (!(preg_grep("/DENY/", $output2))) {
                $found = preg_grep("/yes/", $output2);
                $ind = key($found);
                $version = explode(" ", $output[$ind]);
                if (strlen($version[5]) >= 1) { //if sw ver came back in an acceptable size
                    $software = $version[5];
                    print "sw ver after trim: " . $software . "\n";
                    if (!$stmt2->execute()) { //write sw version to netcool db
                        $tempErr = "Failed to insert into dslam_elements_nac: " . $stmt2->error;
                        printf($tempErr . "\n"); //show mysql execute error if exists
                        $err->logThis($tempErr);
                    }
                    if (!$stmtX->execute()) { //commit it
                        //err processing
                    }
                } //we got a version back
                else { //version not retrieved
                    //handle it
                } //didn't get sw ver
            } //not deny
        } //e7
    } //alive
} //while
Update
I'm trying this (pcntl_fork), but it doesn't seem to be quite what I need: when a child does sleep(30), which I think is similar to my shell script call, the other processes don't continue and start the next one.
<?php
declare(ticks = 1);
$max = 10;
$child = 0;
$res = array("aabc", "bcd", "cde", "eft", "ggg", "hhh", "iii", "jjj", "kkk", "lll", "mmm", "nnn", "ooo", "ppp", "qqq",
             "aabc", "bcd", "cde", "eft", "ggg", "hhh", "iii", "jjj", "kkk", "lll", "mmm", "nnn", "ooo", "ppp", "qqq");

function sig_handler($signo) {
    global $child;
    switch ($signo) {
        case SIGCHLD:
            //echo "SIGCHLD received\n";
            // clean up zombies
            $pid = pcntl_waitpid(-1, $status, WNOHANG);
            $child -= 1;
            //exit;
    }
}
pcntl_signal(SIGCHLD, "sig_handler");

//$website_scraper = new scraper();
foreach ($res as $r) {
    while ($child >= $max) {
        sleep(5); //echo " - sleep $child\n";
        //pcntl_waitpid(0, $status);
    }
    $child++;
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Could not fork\n");
    } elseif ($pid) {
        // we're in the parent fork, don't do anything
    } else {
        //example of what a child process could do:
        print "child process stuff \n";
        sleep(30);
        //$website_scraper->scraper("http://foo.com");
        exit;
    }
    while (pcntl_waitpid(0, $status) != -1) { // this blocking wait is what keeps children from running concurrently
        $status = pcntl_wexitstatus($status);
        echo "child $status completed \n";
    }
    print "did stuff \n";
}
?>
I've been reading up on multi-threading with PHP
Don't. PHP threading has very limited utility, as it cannot be used in a web server environment. It can only be used in command-line scripts.
The author of the PHP pthreads extension has written:
pthreads v3 is restricted to operating in CLI only: I have spent many years trying to explain that threads in a web server just don't make sense, after 1,111 commits to pthreads I have realised that my advice is going unheeded.
So I'm promoting the advice to hard and fast fact: you can't use pthreads safely and sensibly anywhere but CLI.
If you need to communicate with multiple network devices in parallel, consider using stream_select to perform asynchronous I/O, or running multiple PHP processes as part of a worker queue to manage the connections.
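To make the update above concrete: the blocking pcntl_waitpid call inside the foreach is what serializes the children. A minimal process-pool sketch along those lines, assuming the CLI pcntl extension; the shell script and device list stand in for the question's telnet calls:
<?php
$maxChildren = 10;
$children = array();
$ips = array('10.0.0.1', '10.0.0.2', '10.0.0.3'); // hypothetical device list

foreach ($ips as $ip) {
    // Reap any finished children without blocking.
    while (($pid = pcntl_waitpid(-1, $status, WNOHANG)) > 0) {
        unset($children[$pid]);
    }
    // Only block when the pool is full, so children run concurrently.
    if (count($children) >= $maxChildren) {
        $pid = pcntl_waitpid(-1, $status);
        unset($children[$pid]);
    }
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("Could not fork\n");
    } elseif ($pid === 0) {
        // Child: run the slow telnet wrapper for one device, then exit.
        exec("./calix_C7_swVer.sh " . escapeshellarg($ip), $output);
        // ... parse $output and hand the result back to the parent (e.g. via
        // a temp file or pipe) so that only the parent writes to the DB ...
        exit(0);
    }
    $children[$pid] = $ip; // parent: remember the running child
}
// Wait for the stragglers; returns -1 once no children remain.
while (pcntl_waitpid(-1, $status) > 0) {
    // nothing to do, just reaping
}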

Gearman with multiple servers and php workers

I'm having a problem with Gearman workers running on multiple servers which I can't seem to solve.
The problem occurs when a worker server is taken offline, rather than the worker process being cancelled: it causes all other worker processes to error and fail.
Example with just 1 client and 2 workers:
Client:
$client = new GearmanClient ();
$client->addServer ('192.168.1.200');
$client->addServer ('192.168.1.201');
$job = $client->do ('generate_tile', serialize ($arrData));
Worker:
$worker = new GearmanWorker();
$worker->addServer('192.168.1.200');
$worker->addServer('192.168.1.201');
$worker->addFunction('generate_tile', 'generate_tile');

while (1) {
    if (!$worker->work()) {
        switch ($worker->returnCode()) {
            default:
                echo "Error: " . $worker->returnCode() . ': ' . $worker->error() . "\n";
                break;
        }
    }
}

function generate_tile($job) { ... }
The worker code is being run on 2 separate servers. When every server is up and running both workers execute jobs as expected. When one of the worker processes is cancelled, the other worker executes all jobs as expected.
However, when the server with the cancelled worker process is shutdown and taken completely offline, requests to the client script hang and the remaining worker process does not pick up any jobs.
I get the following set of errors from the remaining worker process:
Error: 46: gearman_con_wait:timeout reached
Error: 46: gearman_con_wait:timeout reached
Error: 4: gearman_con_flush:write:110
Error: 46: gearman_con_wait:timeout reached
Error: 4: gearman_con_flush:write:113
Error: 4: gearman_con_flush:write:113
Error: 4: gearman_con_flush:write:113
....
When I start up the other server, without starting the worker process on it, the remaining worker process immediately jumps into life and executes any remaining jobs.
It seems clear to me that I need some code in the worker process to cope with any servers that may be offline, however I cannot see how to do this.
Many thanks,
Andy
Our tests with multiple Gearman servers show that if the last server in the list (192.168.1.201 in your case) is taken down, the workers stop executing in the way you are describing. (Also, the workers grab jobs from the last server; they process jobs on .200 only if there are no jobs on .201.)
It seems that this is a bug with the linked list in the Gearman server, which has reportedly been fixed multiple times, yet the bug persists in all available versions of Gearman. Sorry, I know that's not a solution, but we had the same problem and didn't find a solution. (If someone can provide a working solution for this problem, I'd gladly award a large bounty.)
Further to @Darhazer's comment above: we found the same thing and solved it like this:
// Gearman workers show a strong preference for servers at the end of a list so randomize the order
$worker = new GearmanWorker();
$s2 = explode(",", Configure::read('workers.servers'));
shuffle($s2);
$servers = implode(",", $s2);
$worker->addServers($servers);
We run 6 to 10 workers at any time, and expire them after they've completed x requests.
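The "expire after x requests" part of that is straightforward; a sketch of a worker that exits after a fixed number of jobs so a process supervisor (supervisord, for example) can start a fresh one, with an arbitrary threshold:
$worker = new GearmanWorker();
$worker->addServer('192.168.1.200');
$worker->addFunction('generate_tile', 'generate_tile');

$jobsDone = 0;
$maxJobs = 100; // arbitrary recycle threshold
while ($jobsDone < $maxJobs) {
    if ($worker->work()) { // blocks until a job arrives
        $jobsDone++;
    }
}
// Exit; the supervisor restarts a worker with a clean slate.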
I use this class, which keeps track of which jobs work on which servers. It hasn't been thoroughly tested; I just wrote it now. I've pasted an edited version, so there might be a typo or some such, but otherwise it appears to solve the issue.
<?php
class MyGearmanClient {
    static $server = "server1,server2,server3";
    static $server_array = false;
    static $workingServers = false;
    static $gmclient = false;
    static $timeout = 5000;
    static $defaultTimeout = 5000;

    static function randomServer() {
        return self::$server_array[rand(0, count(self::$server_array) - 1)];
    }

    static function getServer($job = false) {
        if (self::$server_array == false) {
            self::$server_array = explode(",", self::$server);
            self::$workingServers = array();
        }
        $serverList = array();
        if ($job) {
            if (array_key_exists($job, self::$workingServers)) {
                foreach (self::$server_array as $server) {
                    if (array_key_exists($server, self::$workingServers[$job])) {
                        if (self::$workingServers[$job][$server]) {
                            $serverList[] = $server;
                        }
                    } else {
                        $serverList[] = $server;
                    }
                }
                if (count($serverList) == 0) {
                    # All servers have failed, need to insert all the servers again and retry.
                    $serverList = self::$workingServers[$job] = self::$server_array;
                }
                return $serverList[rand(0, count($serverList) - 1)];
            } else {
                return self::randomServer();
            }
        } else {
            return self::randomServer();
        }
    }

    static function serverWorked($server, $job) {
        self::$workingServers[$job][$server] = $server;
    }

    static function serverFailed($server, $job) {
        self::$workingServers[$job][$server] = false;
    }

    static function Connect($server = false, $job = false) {
        if ($server) {
            self::$server = self::getServer();
        }
        self::$gmclient = new GearmanClient();
        self::$gmclient->setTimeout(self::$timeout);
        # add the default job server
        self::$gmclient->addServer($server = self::getServer($job));
        return $server;
    }

    static function Destroy() {
        self::$gmclient = false;
    }

    static function Client($name, $vars, $timeout = false) {
        if (is_int($timeout)) {
            self::$timeout = $timeout;
        } else {
            self::$timeout = self::$defaultTimeout;
        }
        do {
            $server = self::Connect(false, $name);
            $value = self::$gmclient->do($name, $vars);
            $return_code = self::$gmclient->returnCode();
            if (!$value) {
                $error_message = self::$gmclient->error();
                if ($return_code == 47) {
                    self::serverFailed($server, $name);
                    if (count(self::$server_array) > 1) {
                        // ADDED SINGLE SERVER LOOP AVOIDANCE
                        // echo "Timeout on server $server, trying another server...\n";
                        continue;
                    } else {
                        return false;
                    }
                }
                echo "ERR: $error_message ($return_code)\n";
            }
            # printf("Worker has returned\n");
            $short_value = substr($value, 0, 80);
            switch ($return_code) {
                case GEARMAN_WORK_DATA:
                    echo "DATA: $short_value\n";
                    break;
                case GEARMAN_SUCCESS:
                    self::serverWorked($server, $name);
                    break;
                case GEARMAN_WORK_STATUS:
                    list($numerator, $denominator) = self::$gmclient->doStatus();
                    echo "Status: $numerator/$denominator\n";
                    break;
                case GEARMAN_TIMEOUT:
                    // self::Connect();
                    // Fall through
                default:
                    echo "ERR: $error_message " . self::$gmclient->error() . " ($return_code)\n";
                    break;
            }
        } while ($return_code != GEARMAN_SUCCESS);
        $rv = unserialize($value);
        return $rv["rv"];
    }
}
# Example usage:
# $rv = MyGearmanClient::Client("Function", $args);
?>
Since addServer from the Gearman client is not working properly, this code chooses a job server randomly and, if the connection fails, tries the next one; this way you can balance the load.
// job servers
$jobservers = array('192.168.1.1', '192.168.1.2');

// prepare gearman client
$gmclient = new GearmanClient();

// shuffle job servers (deliver jobs equally by server)
shuffle($jobservers);

// add job servers
foreach ($jobservers as $jobserver) {
    // add random jobserver
    $gmclient->addServer($jobserver);
    // check server state; if ok, end foreach
    if (@$gmclient->ping('ping')) break;
    // if the connection fails, reset the client
    $gmclient = new GearmanClient();
}
Solution tested and working ok.
$client = new GearmanClient();
if (!$client->addServer("11.11.65.73", 4730)) {
    $client->addServer("11.11.65.79", 4730);
}

MySQL connection from a daemon written in PHP

I have written a daemon to fetch some stuff from MySQL and make some curl requests based on the info from MySQL. Since I'm fluent in PHP, I've written this daemon in PHP using System_Daemon from PEAR.
This works fine, but I'm curious about the best approach for connecting to MySQL. It feels weird to create a new MySQL connection every couple of seconds; should I try a persistent connection? Any other input? Keeping potential memory leaks to a minimum is of the essence...
I cleaned up the script, attached below. I removed the MySQL stuff for now, using a dummy array to keep this unbiased:
#!/usr/bin/php -q
<?php
require_once "System/Daemon.php";

System_Daemon::setOption("appName", "smsq");
System_Daemon::start();

$runningOkay = true;
while (!System_Daemon::isDying() && $runningOkay) {
    $runningOkay = true;
    if (!$runningOkay) {
        System_Daemon::err('smsq() produced an error, ' .
            'so this will be my last run');
    }
    $messages = get_outgoing();
    $messages = call_api($messages);
    #print_r($messages);
    System_Daemon::iterate(2);
}
System_Daemon::stop();

function get_outgoing() { # get 10 rows from a mysql table
    # dummy code, this should come from mysql
    $messages = array();
    for ($i = 0; $i < 5; $i++) {
        $message = new stdClass();
        $message->msisdn = '070910507' . $i;
        $message->text = 'nr' . $i;
        $messages[] = $message;
    }
    return $messages;
}

function call_api($messages = array()) {
    if (count($messages) <= 0) {
        return false;
    } else {
        foreach ($messages as $message) {
            $message->curlhandle = curl_init();
            curl_setopt($message->curlhandle, CURLOPT_URL, 'http://yadayada.com/date.php?text=' . $message->text);
            curl_setopt($message->curlhandle, CURLOPT_HEADER, 0);
            curl_setopt($message->curlhandle, CURLOPT_RETURNTRANSFER, 1);
        }
        $mh = curl_multi_init();
        foreach ($messages as $message) {
            curl_multi_add_handle($mh, $message->curlhandle);
        }
        $running = null;
        do {
            curl_multi_exec($mh, $running);
        } while ($running > 0);
        foreach ($messages as $message) {
            $message->api_response = curl_multi_getcontent($message->curlhandle);
            curl_multi_remove_handle($mh, $message->curlhandle);
            unset($message->curlhandle);
        }
        curl_multi_close($mh);
    }
    return $messages;
}
Technically, if it's a daemon, it runs in the background and doesn't stop until you ask it to. There is no need to use a persistent connection in that case, and you probably shouldn't: I'd expect the connection to close when I kill the daemon.
Basically, you should open a connection on startup and close it on shutdown, and that's about it. However, you should put some error trapping in there in case the connection drops unexpectedly while the daemon is running, so that it either shuts down gracefully (logging the connection drop somewhere) or retries the connection later.
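A minimal sketch of that connect-once / reconnect-on-drop pattern with mysqli; the host, credentials, and the work inside the loop are placeholders:
function connectDb() {
    $conn = mysqli_connect('localhost', 'user', 'secret', 'mydb');
    if (!$conn) {
        throw new RuntimeException('Connect failed: ' . mysqli_connect_error());
    }
    return $conn;
}

$conn = connectDb(); // open once at daemon startup

while (!System_Daemon::isDying()) {
    // Cheap liveness probe; reconnect if the server went away.
    if (!@mysqli_query($conn, 'SELECT 1')) {
        System_Daemon::err('MySQL connection lost, reconnecting');
        $conn = connectDb();
    }
    // ... fetch rows and fire the curl requests ...
    System_Daemon::iterate(2);
}
mysqli_close($conn); // close on shutdown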
Maybe just add mysql_pconnect before the while statement, but I don't know anything about PHP daemons...
