I have a problem where, in a PHP application, Gearman jobs are sometimes passed to more than one worker. I have reduced the code needed to reproduce it to a single file. Now I am not sure whether this is a bug in Gearman, a bug in the PECL library, or in my own code.
Here is the code to reproduce the error:
#!/usr/bin/php
<?php

// Try 'standard', 'exception' or 'exception-sleep'.
$sWorker = 'exception';

// Detect run mode "client" or "worker".
if (!isset($argv[1]))
    $sMode = 'client';
else
    $sMode = 'worker-' . $sWorker;

$sLogFilePath = __DIR__ . '/log.txt';

switch ($sMode) {

    case 'client':

        // Remove all queued test jobs and quit if there are test workers running.
        prepare();

        // Init the Gearman client.
        $Client = new GearmanClient;
        $Client->addServer();

        // Empty the log file.
        file_put_contents($sLogFilePath, '');

        // Start some worker processes.
        $aPids = array();
        for ($i = 0; $i < 100; $i++)
            $aPids[] = exec('php ' . __FILE__ . ' worker > /dev/null 2>&1 & echo $!');

        // Start some jobs. Also try doHigh(), doBackground() and
        // doBackgroundHigh();
        for ($i = 0; $i < 50; $i++)
            $Client->doNormal('test', $i);

        // Wait a second (when running jobs in background).
        // sleep(1);

        // Prepare the log file entries.
        $aJobs = array();
        $aLines = file($sLogFilePath);
        foreach ($aLines as $sLine) {
            list($sTime, $sPid, $sHandle, $sWorkload) = $aAttributes = explode("\t", $sLine);
            $sWorkload = trim($sWorkload);
            if (!isset($aJobs[$sWorkload]))
                $aJobs[$sWorkload] = array();
            $aJobs[$sWorkload][] = $aAttributes;
        }

        // Remove all jobs that have been passed to only one worker, as expected.
        foreach ($aJobs as $sWorkload => $aJob) {
            if (count($aJob) === 1)
                unset($aJobs[$sWorkload]);
        }

        echo "\n\n";
        if (empty($aJobs))
            echo "No job has been passed to more than one worker.";
        else {
            echo "These jobs have been passed to a worker more than once:\n";
            foreach ($aJobs as $sWorkload => $aJob) {
                echo "\nJob #" . $sWorkload . ":\n";
                foreach ($aJob as $aAttributes)
                    echo " $aAttributes[2] (Worker PID: $aAttributes[1])\n";
            }
        }
        echo "\n\n";

        // Kill all started workers.
        foreach ($aPids as $sPid)
            exec('kill ' . $sPid . ' > /dev/null 2>&1');

        break;

    case 'worker-standard':

        $Worker = new GearmanWorker;
        $Worker->addServer();
        $Worker->addFunction('test', 'logJob');
        $bShutdown = false;
        while ($Worker->work())
            if ($bShutdown)
                continue;

        break;

    case 'worker-exception':

        try {
            $Worker = new GearmanWorker;
            $Worker->addServer();
            $Worker->addFunction('test', 'logJob');
            $bShutdown = false;
            while ($Worker->work())
                if ($bShutdown)
                    throw new \Exception;
        } catch (\Exception $E) {
        }

        break;

    case 'worker-exception-sleep':

        try {
            $Worker = new GearmanWorker;
            $Worker->addServer();
            $Worker->addFunction('test', 'logJob');
            $bShutdown = false;
            while ($Worker->work()) {
                if ($bShutdown) {
                    sleep(1);
                    throw new \Exception;
                }
            }
        } catch (\Exception $E) {
        }

        break;
}

function logJob(\GearmanJob $Job)
{
    global $bShutdown, $sLogFilePath;
    $sLine = microtime(true) . "\t" . getmypid() . "\t" . $Job->handle() . "\t" . $Job->workload() . "\n";
    file_put_contents($sLogFilePath, $sLine, FILE_APPEND);
    $bShutdown = true;
}

function prepare()
{
    $rGearman = fsockopen('127.0.0.1', '4730', $iErrno, $sErrstr, 3);
    $aBuffer = array();
    fputs($rGearman, 'STATUS' . PHP_EOL);
    stream_set_timeout($rGearman, 1);
    while (!feof($rGearman))
        if ('.' . PHP_EOL !== $sLine = fgets($rGearman, 128))
            $aBuffer[] = $sLine;
        else
            break;
    fclose($rGearman);

    $bJobsInQueue = false;
    $bWorkersRunning = false;
    foreach ($aBuffer as $sFunctionLine) {
        list($sFunctionName, $iQueuedJobs, $iRunningJobs, $iWorkers) = explode("\t", $sFunctionLine);
        if ('test' === $sFunctionName) {
            if (0 != $iQueuedJobs)
                $bJobsInQueue = true;
            if (0 != $iWorkers)
                $bWorkersRunning = true;
        }
    }

    // Exit if there are workers running.
    if ($bWorkersRunning)
        die("There are some Gearman workers running that have registered a 'test' function. Please stop these workers and run again.\n\n");

    // If there are test jobs in the queue, start a worker that eats up the jobs.
    if ($bJobsInQueue) {
        $sPid = exec('gearman -n -w -f test > /dev/null 2>&1 & echo $!');
        sleep(1);
        exec("kill $sPid > /dev/null 2>&1");
        // Repeat this method to make sure all jobs are removed.
        prepare();
    }
}
When you run this code on the command line it should output "No job has been passed to more than one worker.", but instead it always outputs a list of jobs that have been passed to more than one worker. The error does not appear if you set $sWorker = 'standard'; or 'exception-sleep'.
It would help me a lot if you could run the code and tell me whether you are able to reproduce the error, or whether I have a bug in the code.
I had exactly the same issue with Gearman 0.24 and PECL lib 1.0.2, and was able to reproduce the error with your script every time.
An older version of Gearman (0.14 I think) used to work fine.
Upgrading Gearman to 0.33 fixed the issue.
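If you want to confirm which versions you are running before and after the upgrade, the PECL extension can report both; a small sketch (gearman_version() ships with pecl/gearman):

<?php
// Print the libgearman version the extension is linked against,
// and the PECL extension version itself.
echo 'libgearman: ' . gearman_version() . "\n";       // e.g. 0.33
echo 'pecl/gearman: ' . phpversion('gearman') . "\n"; // e.g. 1.0.2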
Related
So I've been developing in PHP for a bit now and have been trying to troubleshoot why exec() with redirection of standard I/O still hangs the main thread.
exec(escapeshellcmd("php ".getcwd()."/orderProcessing.php" . " " . $Prepared . " " . escapeshellarg(35). "> /dev/null 2>&1 &"));
The code provided is what I've been trialing with, and I can't seem to get it not to hang until the other script completes. The reason I am doing this is that in this particular case, processing an order can take up to 30 seconds, and I don't want the user on the frontend to wait for that.
Can I only spawn child processes with php-fpm?
Is there something I have misconfigured?
Am I just misunderstanding how child processes work?
Setup is:
Centos 7 with Apache HTTPD and PHP 8.1.11
Any help appreciated!
Yes, it is possible in PHP. With the proc_open() function you can start a command without waiting for it to finish. Via the opened stream handles you can track the process status and check on it continuously. For example:
$max = 5;
$streams = [];
$command = 'php ' . __DIR__ . '/../../process runSomeProcessCommand';

while (true) {
    echo 'Working: ' . count($streams) . "\n";
    for ($i = 0; $i < $max; $i++) {
        if (!isset($streams[$i])) {
            $descriptor = [
                0 => ['pipe', 'r'],
                1 => ['pipe', 'w'],
                2 => ['pipe', 'w']
            ];
            echo "proc opened $i \n";
            $streams[$i]['proc'] = proc_open($command . ":$i", $descriptor, $pipes);
            $streams[$i]['pipes'] = $pipes;
            $streams[$i]['ttl'] = time();
            usleep(200000);
        } else {
            $info = proc_get_status($streams[$i]['proc']);
            if ($info['running'] === false) {
                echo "Finished $i. TTL: (" . (time() - $streams[$i]['ttl']) . " sec.). Response: " . stream_get_contents($streams[$i]['pipes'][1]) . "\n";
                fclose($streams[$i]['pipes'][1]);
                if ($error = stream_get_contents($streams[$i]['pipes'][2])) {
                    echo "Error for $i. Error: $error" . PHP_EOL;
                }
                // Close stderr unconditionally, not only when there was an error.
                fclose($streams[$i]['pipes'][2]);
                proc_close($streams[$i]['proc']);
                unset($streams[$i]);
            } else {
                echo "Running process (PID {$info['pid']}) - $i\n";
            }
        }
    }
    sleep(3);
}
This keeps five processes running at the same time, none of which waits for the others.
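As a side note on the original exec() line: escapeshellcmd() escapes shell metacharacters such as > and &, so the redirection and the trailing & reach the script as literal arguments and exec() waits for it to finish. A minimal sketch of the detached variant (escaping each argument individually; $Prepared is taken from the question):

$cmd = 'php ' . escapeshellarg(getcwd() . '/orderProcessing.php')
    . ' ' . escapeshellarg($Prepared)
    . ' ' . escapeshellarg(35)
    . ' > /dev/null 2>&1 &'; // leave the redirection and & unescaped
exec($cmd);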
I have a CSV file containing about 3,500 users' data. I want to import these users into my database and send them an email saying that they are registered.
I have the following code:
public function importUsers()
{
    $this->load->model('perk/engagement_model');
    $this->load->model('user/users_model');
    $this->load->model('acl/aclUserRoles_model');

    $allengagements = $this->engagement_model->getAll();
    $filename = base_url() . 'assets/overdracht_users.csv';
    $file = fopen($filename, "r");

    $count = 0;
    $totalImported = 0;
    $importFails = array();
    $mailFails = array();

    while (($mappedData = fgetcsv($file, 10000, ";")) !== FALSE) {
        $count++;
        // Skip first line because it is the header.
        if ($count > 1) {
            if (!empty($mappedData[0])) {
                $email = $mappedData[0];
                $user = $this->users_model->getByEmail($email);
                if (!$user) {
                    $user = new stdClass();
                    $user->email = $mappedData[0];
                    $user->first_name = $mappedData[1];
                    $user->family_name = $mappedData[2];
                    $user->address_line1 = $mappedData[3];
                    $user->address_postal_code = $mappedData[4];
                    $user->address_city = $mappedData[5];
                    $user->address_country = 'BE';
                    $user->volunteer_location = $mappedData[5];
                    $user->volunteer_location_max_distance = 50;
                    $user->phone = $mappedData[6];
                    if (!empty($mappedData[7])) {
                        $user->birthdate = $mappedData[7] . "-01-01 00:00:00";
                    } else {
                        $user->birthdate = null;
                    }
                    // Initialise so a row without a match does not inherit
                    // the engagement of the previous row.
                    $engagement = null;
                    foreach ($allengagements as $eng) {
                        if ($eng->description == $mappedData[8]) {
                            $engagement = $eng->engagement_id;
                        }
                    }
                    $user->engagement = $engagement;
                    if (!empty($mappedData[9])) {
                        $date_created = str_replace('/', '-', $mappedData[9]);
                        $date_created = date('Y-m-d H:i:s', strtotime($date_created));
                    } else {
                        $date_created = date('Y-m-d H:i:s');
                    }
                    $user->created_at = $date_created;
                    if (!empty($mappedData[10])) {
                        $date_login = str_replace('/', '-', $mappedData[10]);
                        $date_login = date('Y-m-d H:i:s', strtotime($date_login));
                    } else {
                        $date_login = null;
                    }
                    $user->last_login = $date_login;
                    $user->auth_level = 1;
                    $user->is_profile_public = 1;
                    $user->is_account_active = 1;

                    $combinedname = $mappedData[1] . $mappedData[2];
                    $username = str_replace(' ', '', $combinedname);
                    if (!$this->users_model->isUsernameExists($username)) {
                        $uniqueUsername = $username;
                    } else {
                        $counter = 1;
                        while ($this->users_model->isUsernameExists($username . $counter)) {
                            $counter++;
                        }
                        $uniqueUsername = $username . $counter;
                    }
                    $user->username = $uniqueUsername;

                    $userid = $this->users_model->add($user);
                    if (!empty($userid)) {
                        $totalImported++;
                        // Add the user to the volunteer group in ACL.
                        $aclData = [
                            'userID' => $userid,
                            'roleID' => 1,
                            'addDate' => date('Y-m-d H:i:s')
                        ];
                        $this->aclUserRoles_model->add($aclData);

                        // Registration mail to volunteer.
                        $mail_data['name'] = $user->first_name . ' ' . $user->family_name;
                        $mail_data['username'] = $user->username;
                        $this->email->from(GENERAL_MAIL, 'Test');
                        $this->email->to($user->email);
                        //$this->email->bcc(GENERAL_MAIL);
                        $this->email->subject('Test');
                        $message = $this->load->view('mail/register/registration', $mail_data, TRUE);
                        $this->email->message($message);
                        $mailsent = $this->email->send();
                        if (!$mailsent) {
                            array_push($mailFails, $mappedData);
                        }
                    } else {
                        array_push($importFails, $mappedData);
                    }

                    if ($count % 50 == 0) {
                        var_dump("count is " . $count);
                        var_dump("we are sleeping");
                        $min = 20;
                        $max = 40;
                        $randSleep = rand($min, $max);
                        sleep($randSleep);
                        var_dump("end of sleep (which is " . $randSleep . " seconds long)");
                    }
                    var_dump($user);
                } else {
                    array_push($importFails, $mappedData);
                }
            }
        }
    }

    var_dump("Total number of rows in the file (including header): " . $count);
    var_dump("Total number imported into the database: " . $totalImported);
    var_dump("Total number of failed imports into the database: " . count($importFails));
    var_dump("These failed: ");
    var_dump($importFails);
}
If I do not add the users to the database or send out a mail, and just var_dump() the $user, I can see all 3,500+ users being correctly created as PHP objects (so they should be able to be inserted correctly).
The problem is that I want to add a random sleep of between 20 and 40 seconds after every 50 mails that I send.
So I started doing some testing, and after commenting out the insert and mail code I ran the script, noticing that after some amount (not 50 at all) it just stops for a bit, then continues and shows me the var_dumps from the if case at the bottom, as shown in the screenshots below.
The first screenshot shows the code stopping for a bit (note that I am only var_dumping stuff; I am not adding anything to the database or sending out an email yet).
The second screenshot shows what happens after the script reaches 200: the script just completely stops from that point on. I have tried this 3 times, and every single time it stops exactly at 200.
What is happening here??
Most likely you're hitting default PHP limits. Temporarily remove the default limits with:
ini_set('memory_limit',-1);
set_time_limit(0);
then rerun the script and check your output.
There can be multiple reasons, but to find the exact one, enable error output with:
ini_set('display_errors',1);
ini_set('display_startup_errors',1);
and if limits are not the problem, the errors will tell you more.
HINT: Using logs is still better than outputting errors to visitors, but my guess is you're testing this on your own machine.
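If you prefer the log route, a minimal sketch (the log path is an assumption):

ini_set('display_errors', 0);
ini_set('log_errors', 1);
ini_set('error_log', __DIR__ . '/import_errors.log'); // hypothetical path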
Update, 2013-09-12:
I've dug a bit deeper into systemd and its journal, and I've stumbled upon this, which states:
systemd-journald will forward all received log messages to the AF_UNIX SOCK_DGRAM socket /run/systemd/journal/syslog, if it exists, which may be used by Unix syslog daemons to process the data further.
As per the manpage, I set up my environment to also have syslog underneath, and tweaked my code accordingly:
define('NL', "\n\r");

$log = function ()
{
    if (func_num_args() >= 1)
    {
        $message = call_user_func_array('sprintf', func_get_args());
        echo '[' . date('r') . '] ' . $message . NL;
    }
};

$syslog = '/var/run/systemd/journal/syslog';
$sock = socket_create(AF_UNIX, SOCK_DGRAM, 0);
$connection = socket_connect($sock, $syslog);

if (!$connection)
{
    $log('Couldn\'t connect to ' . $syslog);
}
else
{
    $log('Connected to ' . $syslog);
    $readables = array($sock);
    socket_set_nonblock($sock);
    while (true)
    {
        $read = $readables;
        $write = $readables;
        $except = $readables;
        $select = socket_select($read, $write, $except, 0);
        $log('Changes: %d.', $select);
        $log('-------');
        $log('Read: %d.', count($read));
        $log('Write: %d.', count($write));
        $log('Except: %d.', count($except));
        if ($select > 0)
        {
            if ($read)
            {
                foreach ($read as $readable)
                {
                    $data = socket_read($readable, 4096, PHP_BINARY_READ);
                    if ($data === false)
                    {
                        $log(socket_last_error() . ': ' . socket_strerror(socket_last_error()));
                    }
                    else if (!empty($data))
                    {
                        $log($data);
                    }
                    else
                    {
                        $log('Read empty.');
                    }
                }
            }
            if ($write)
            {
                foreach ($write as $writable)
                {
                    $data = socket_read($writable, 4096, PHP_BINARY_READ);
                    if ($data === false)
                    {
                        $log(socket_last_error() . ': ' . socket_strerror(socket_last_error()));
                    }
                    else if (!empty($data))
                    {
                        $log($data);
                    }
                    else
                    {
                        $log('Write empty.');
                    }
                }
            }
        }
    }
}
This apparently only sees (selects) changes on the write sockets. Something here might be wrong, so I attempted to read from them; no luck (nor should there be):
[Thu, 12 Sep 2013 14:45:15 +0300] Changes: 1.
[Thu, 12 Sep 2013 14:45:15 +0300] -------
[Thu, 12 Sep 2013 14:45:15 +0300] Read: 0.
[Thu, 12 Sep 2013 14:45:15 +0300] Write: 1.
[Thu, 12 Sep 2013 14:45:15 +0300] Except: 0.
[Thu, 12 Sep 2013 14:45:15 +0300] 11: Resource temporarily unavailable
Now, this drives me a little nuts. The syslog documentation says this should be possible. What is wrong with the code?
Original:
I had a working prototype, by simply:
while (true)
{
    exec('journalctl -r -n 1 | more', $result, $exit);
    // do stuff
}
But this feels wrong and consumes far too many system resources; then I found out that journald has sockets.
I have attempted to connect to and read from the given sockets:
AF_UNIX, SOCK_DGRAM: /var/run/systemd/journal/socket
AF_UNIX, SOCK_STREAM: /var/run/systemd/journal/stdout
With /var/run/systemd/journal/socket, socket_select sees 0 changes. With /var/run/systemd/journal/stdout I always (every loop) get 1 change, with 0 bytes of data.
This is my "reader":
<?php
define('NL', "\n\r");

$journal = '/var/run/systemd/journal/socket';
$jSTDOUT = '/var/run/systemd/journal/stdout';
$journal = $jSTDOUT;

$sock = socket_create(AF_UNIX, SOCK_STREAM, 0);
$connection = @socket_connect($sock, $journal);

$log = function ($message)
{
    echo '[' . date('r') . '] ' . $message . NL;
};

if (!$connection)
{
    $log('Couldn\'t connect to ' . $journal);
}
else
{
    $log('Connected to ' . $journal);
    $readables = array($sock);
    while (true)
    {
        $read = $readables;
        // socket_select() takes its arguments by reference, so use real
        // variables rather than inline assignments.
        $write = NULL;
        $except = NULL;
        if (socket_select($read, $write, $except, 0) < 1)
        {
            continue;
        }
        foreach ($read as $read_socket)
        {
            $data = @socket_read($read_socket, 1024, PHP_BINARY_READ);
            if ($data === false)
            {
                $log('Couldn\'t read.');
                socket_shutdown($read_socket, 2);
                socket_close($read_socket);
                $log('Server terminated.');
                break 2;
            }
            $data = trim($data);
            if (!empty($data))
            {
                $log($data);
            }
        }
    }
    $log('Exiting.');
}
Since there is never any data in the read socket(s), I assume I'm doing something wrong.
Question, idea:
My goal is to read the messages and, upon some of them, execute a callback.
Could anyone point me in the right direction for programmatically reading journal messages?
The sockets under /run/systemd/journal/ won't work for this – …/socket and …/stdout are actually write-only (i.e. used for feeding data into the journal), while the …/syslog socket is not supposed to be used by anything other than a real syslogd, not to mention journald does not send any metadata over it. (In fact, the …/syslog socket doesn't even exist by default – syslogd must actually listen on it, and journald connects to it.)
The official method is to read directly from the journal files, and use inotify to watch for changes (which is the same thing journalctl --follow and even tail -f /var/log/syslog use in place of polling). In a C program, you can use the functions from libsystemd-journal, which will do the necessary parsing and even filtering for you.
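In PHP, the inotify half of that recipe could look something like this (a hedged sketch assuming the PECL inotify extension is installed and a persistent journal lives under /var/log/journal; the file parsing itself is the hard part and is left out):

$fd = inotify_init();
// inotify is not recursive; in practice you would watch the
// machine-id subdirectory that holds the actual .journal files.
inotify_add_watch($fd, '/var/log/journal', IN_MODIFY | IN_CREATE);
while (true) {
    $events = inotify_read($fd); // blocks until the journal files change
    // ...re-read the changed journal files (or re-query journalctl) here...
}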
In other languages, you have three choices: call the C library; parse the journal files yourself (the format is documented); or fork journalctl --follow which can be told to output JSON-formatted entries (or the more verbose journal export format). The third option actually works very well, since it only forks a single process for the entire stream; I have written a PHP wrapper for it (see below).
Recent systemd versions (v193) also come with systemd-journal-gatewayd, which is essentially an HTTP-based version of journalctl; that is, you can get a JSON or journal-export stream at http://localhost:19531/entries. (Both gatewayd and journalctl even support server-sent events for accessing the stream from HTML 5 webpages.) However, due to obvious security issues, gatewayd is disabled by default.
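For completeness, pulling entries from gatewayd out of PHP could look like this (a sketch assuming gatewayd has been enabled and listens on its default port 19531):

$context = stream_context_create([
    'http' => ['header' => "Accept: application/json\r\n"],
]);
// The response contains the journal entries JSON-formatted.
echo file_get_contents('http://localhost:19531/entries', false, $context);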
Attachment: PHP wrapper for journalctl --follow
<?php
/* © 2013 Mantas Mikulėnas <grawity#gmail.com>
 * Released under the MIT Expat License <https://opensource.org/licenses/MIT>
 */

/* Iterator extends Traversable {
       void rewind()
       boolean valid()
       void next()
       mixed current()
       scalar key()
   }

   calls: rewind, valid==true, current, key
          next, valid==true, current, key
          next, valid==false
*/

class Journal implements Iterator {
    private $filter;
    private $startpos;
    private $proc;
    private $stdout;
    private $entry;

    static function _join_argv($argv) {
        return implode(" ",
            array_map(function($a) {
                return strlen($a) ? escapeshellarg($a) : "''";
            }, $argv));
    }

    function __construct($filter=[], $cursor=null) {
        $this->filter = $filter;
        $this->startpos = $cursor;
    }

    function _close_journal() {
        if ($this->stdout) {
            fclose($this->stdout);
            $this->stdout = null;
        }
        if ($this->proc) {
            proc_close($this->proc);
            $this->proc = null;
        }
        $this->entry = null;
    }

    function _open_journal($filter=[], $cursor=null) {
        if ($this->proc)
            $this->_close_journal();
        $this->filter = $filter;
        $this->startpos = $cursor;
        $cmd = ["journalctl", "-f", "-o", "json"];
        if ($cursor) {
            $cmd[] = "-c";
            $cmd[] = $cursor;
        }
        $cmd = array_merge($cmd, $filter);
        $cmd = self::_join_argv($cmd);
        $fdspec = [
            0 => ["file", "/dev/null", "r"],
            1 => ["pipe", "w"],
            2 => ["file", "/dev/null", "w"],
        ];
        $this->proc = proc_open($cmd, $fdspec, $fds);
        if (!$this->proc)
            return false;
        $this->stdout = $fds[1];
    }

    function seek($cursor) {
        $this->_open_journal($this->filter, $cursor);
    }

    function rewind() {
        $this->seek($this->startpos);
    }

    function next() {
        $line = fgets($this->stdout);
        if ($line === false)
            $this->entry = false;
        else
            $this->entry = json_decode($line);
    }

    function valid() {
        return ($this->entry !== false);
        /* null is valid, it just means next() hasn't been called yet */
    }

    function current() {
        if (!$this->entry)
            $this->next();
        return $this->entry;
    }

    function key() {
        if (!$this->entry)
            $this->next();
        return $this->entry->__CURSOR;
    }
}

$a = new Journal();
foreach ($a as $cursor => $item) {
    echo "================\n";
    var_dump($cursor);
    //print_r($item);
    if ($item)
        var_dump($item->MESSAGE);
}
I am learning how to run iMacros from my PHP scripts, so that the PHP script calls an iMacros browser session and passes in any variables I have (URL and macro name, for example). The iMacros session then runs the macro; after the macro is done running, it passes the resulting HTML page back to the PHP script and closes itself. In an ideal world, anyway.
Here is the iMacros calling script:
<?php
require 'src/iimfx.class.php';
$iim = new imacros();
$vars = array();
$iim->play($vars,'grab_data.iim');
?>
But when I run this script from cmd.exe [command line] on WAMP, I get this:
New imacros session started!
Using Proxy: MY_PROXY_IP:MY_PROXY_PORT
-runner -fx -fxProfile default
--------------------------------------------------------
Setting Value IP => MY_PROXY_IP
Setting Value port => MY_PROXY_PORT
Playing Macro proxy.iim
--------MACRO ERROR!-------------------
ERROR: Browser was not started. iimInit() failed?
--------------------------------------------------------
Playing Macro grab_google.iim
--------MACRO ERROR!-------------------
ERROR: Browser was not started. iimInit() failed?
P.S. MY_PROXY_IP and MY_PROXY_PORT are replaced with actual numbers both in error messages above and iimfx.class.php.
And here is the code for iimfx.class.php:
<?php
class imacros {

    function __construct($proxyip = 'MY_PROXY_IP', $proxyport = 'MY_PROXY_PORT', $silent = false, $noexit = false) {
        echo "--------------------------------------\nNew imacros session started!\nUsing Proxy: $proxyip:$proxyport\n";
        $this->proxyip = $proxyip;
        $this->proxyport = $proxyport;
        if (empty($this->proxyip))
            echo "NO PROXY!!\n";
        $this->noexit = $noexit;
        $this->fso = new COM('Scripting.FileSystemObject');
        $this->fso = NULL;
        $this->iim = new COM("imacros");
        $toexec = "-runner -fx -fxProfile default";
        if ($silent === true)
            $toexec .= " -silent";
        if ($noexit === true)
            $toexec .= " -noexit";
        echo $toexec . "\n";
        $this->iim->iimInit($toexec);
        if (!empty($this->proxyip)) {
            $dvars['IP'] = $this->proxyip;
            $dvars['port'] = $this->proxyport;
            $this->play($dvars, 'proxy.iim');
        }
    }

    function __destruct() {
        if ($this->noexit === false)
            $this->iim->iimExit();
    }

    // Note: a required parameter may not follow an optional one,
    // so $immvars no longer has a default value here.
    function play($immvars, $macro) {
        echo "--------------------------------------------------------\n";
        if (is_array($immvars)) {
            foreach ($immvars as $key => $value) {
                echo "Setting Value $key => $value\n";
                $this->iim->iimSet("-var_" . $key, $value);
            }
        }
        echo "Playing Macro $macro\n";
        $s = $this->iim->iimPlay($macro);
        if ($s > 0) {
            echo "Macro successfully played!\n";
        } else {
            echo "--------MACRO ERROR!-------------------\n ERROR: " . $this->getLastError() . "\n";
        }
        return $s;
    }

    // This function retrieves extracts in your iMacros script if you have any.
    function getLastExtract($num) {
        return $this->iim->iimGetLastExtract($num);
    }

    // Returns the last error :)
    function getLastError() {
        return $this->iim->iimGetLastError();
    }

    // Enables/disables images.
    function setImages($images = 1) { // 1 = on, 2 = off
        $dvars['images'] = $images;
        $this->play($dvars, 'images.iim');
    }

    // Enables or disables AdBlock Plus.
    function enableABP($status = true) {
        $dvars['status'] = $status;
        $this->play($dvars, 'abp.iim');
    }
}
?>
Is there something I am missing here?
I have iimRunner.exe running during all of this [started manually before running the script] and I have iMacros Browser V8+.
Also, my grab_data.iim and all the other required .iim files are in the same place as the PHP script that is trying to call and execute them.
Any kind of help and/or a steer in the right direction would be greatly appreciated!!
Thanks in advance.
You must start iimRunner before starting the script =)
http://wiki.imacros.net/iimRunner
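If you want the PHP script itself to take care of that, something like this could work (a sketch; the install path is an assumption, and popen() with "start" is just one way to launch a detached process on Windows):

$iimRunner = 'C:\\Program Files\\iMacros\\iimRunner.exe'; // hypothetical path
pclose(popen('start "" ' . escapeshellarg($iimRunner), 'r')); // launch detached
sleep(2); // give iimRunner a moment to come up before iimInit()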
I have gone through all the similar questions and nothing fits the bill.
I am running a big script which ran under cron on an old server but failed on the new one, so I am working on it and testing it in the browser.
I have two functions: one pulls properties from the database and runs them through another, which converts the price into 4 currencies and, if the value is different, updates the row. The functions are as follows:
<?php
function convert_price($fore_currency, $aft_currency, $amount)
{
    echo "going into convert<br/>";
    $url = "http://www.currency.me.uk/remote/ER-ERC-AJAX.php?ConvertFrom=" . $fore_currency .
        "&ConvertTo=" . $aft_currency . "&amount=" . $amount;
    // Fetch once and reuse the result instead of hitting the URL twice.
    $result = file_get_contents($url);
    if ($result === false || !is_numeric(trim($result))) {
        //echo "Failed on convert<br/>";
        return false;
    } else {
        //echo "Conversion done";
        return (int)$result;
    }
}

function run_conversion($refno = '', $output = false)
{
    global $wpdb;
    $currencies = array("USD", "GBP", "TRY", "EUR");
    $q = "SELECT * FROM Properties";
    $q .= (!empty($refno)) ? " WHERE Refno='" . $refno . "'" : "";
    // Note: $q contains no placeholders, so it is passed to get_results() directly.
    $rows = $wpdb->get_results($q, ARRAY_A);
    //$wpdb->show_errors();
    echo "in Run Conversion " . "<br/>";
    foreach ($rows as $row) {
        echo "In Row <br/>";
        foreach ($currencies as $currency) {
            if ($currency != $row['Currency'] && $row['Price'] != 0) {
                $currfield = $currency . "_Price";
                $newprice = convert_price($row['Currency'], $currency, $row['Price']);
                echo "Old Price Was " . $row['Price'] . " New Price Is " . $newprice . "<br/>";
                if ($newprice) {
                    if ($row[$currfield] != $newprice) {
                        $newq = "UPDATE Properties SET DateUpdated = '" . date("Y-m-d h:i:s") . "', " .
                            $currfield . "=" . $newprice . " WHERE Refno='" . $row['Refno'] . "'";
                        $newr = $wpdb->query($newq);
                        // Report failure based on the query result, not on $output.
                        if ($newr === false) {
                            echo "query failed " . $wpdb->print_error();
                        } elseif ($output) {
                            echo $newq . " executed <br/>";
                        }
                    } else {
                        if ($output) {
                            echo "No need to update " . $row['Refno'] . " with " . $newprice . "<br/>";
                        }
                    }
                } else {
                    echo "Currency conversion failed";
                }
            }
        }
    }
}
?>
I then run the process from a separate file, for the sake of cron, like so:
require($_SERVER['DOCUMENT_ROOT'] . "/functions.php"); // page containing functions
run_conversion('',true);
If I limit the MySQL query to 100 properties it runs fine and I get all the output in a nice stream. But when I try to run it in full, the script completes (I can see from the rows updated in the db) but there is no output. I have tried upping the memory allowance, but no joy. Any ideas gratefully received. Also, I get a 500 error when changing from Apache module to CGI. Any ideas on that also welcome, as ideally I would like to switch the site to FastCGI.
Most likely you're running out of memory, probably in your DB layer. Your best bet is to unset your iterative values at the end of each loop:
foreach ($rows as $row) {
    ...
    foreach ($currencies as $currency) {
        ...
        unset($currency, $newr, $currfield, $newprice);
    }
    unset($row);
}
This will definitely help.
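On top of that, WordPress's wpdb keeps the last result set in memory (and, if SAVEQUERIES is enabled, a log of every query), so it can also help to flush that inside the loop, for example:

foreach ($rows as $row) {
    // ...
    $wpdb->flush();           // free the cached last result set
    $wpdb->queries = array(); // only relevant if SAVEQUERIES is defined
}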
As another answer suggests, you may be running out of memory; but if you make an HTTP call every time the loop runs, and you have 100+ rows, this may also cause the problem. Try limiting it to 5 conversions, for example.
If this is indeed the problem, I would split the process so that each cron job updates a chunk of the rows instead of everything.
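A minimal sketch of that chunking idea (the LIMIT/OFFSET scheme and the chunk-index argument are assumptions, not part of the original code; it presumes the same functions.php bootstrap as above):

// Call as e.g. "php convert.php 0", "php convert.php 1", ... from separate cron entries.
$chunkSize = 250;
$offset = isset($argv[1]) ? (int)$argv[1] * $chunkSize : 0;
$q = "SELECT * FROM Properties LIMIT " . $chunkSize . " OFFSET " . $offset;
$rows = $wpdb->get_results($q, ARRAY_A);
// ...then run the conversion loop over just this chunk...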