Forking processes while mysql locked - php

I am working on a shell script in CakePHP that processes a queue of items in my MySQL database. To speed up the process I am using pcntl_fork() like so:
$pids = array();
for ($i = 0; $i < count($queue); $i++) {
    $pids[$i] = pcntl_fork();
    if (!$pids[$i]) {
        # child process: do the work here
        exit();
    }
}
While this code is executing, another run of the shell script may start before the current script has had time to delete the items from the queue. I was guarding against that with a named MySQL lock, like so:
$this->Queue->query("SELECT GET_LOCK('".$this->mysqlLock."', ".$this->mysqlLockTime.") AS 'GetLock'");
This implementation does not work; it gives me the error "General error: MySQL server has gone away". This happens because the forked children share the parent's single MySQL connection, and when one of them exits, the connection is broken for all of them. It is a known pitfall of forking in PHP rather than a flaw in fork itself.
My question is: is there a better way to lock this process until it finishes, and then release the lock so the other shell scripts can execute?

I have found that the best solution is to ensure the program never runs a second instance of itself. I will be using semaphores in PHP to lock the program.
Here is a short tutorial on semaphores:
http://www.re-cycledair.com/php-dark-arts-semaphores
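For reference, a minimal sketch of that semaphore approach, assuming the sysvsem extension (ftok()/sem_get()/sem_acquire()/sem_release()) is available; the queue-processing body is just a placeholder:

<?php
// Derive a System V IPC key from this script's path and create a
// semaphore that at most one process may hold at a time.
$key = ftok(__FILE__, 'q');
$sem = sem_get($key, 1);

// Non-blocking acquire: if another instance already holds the lock,
// exit instead of waiting (the $non_blocking flag needs PHP 5.6.1+).
if (!sem_acquire($sem, true)) {
    exit("Another instance is already processing the queue.\n");
}
try {
    // process the queue here
} finally {
    sem_release($sem); // let the next shell script run
}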

Related

Start and terminate a background python script with PHP and a timer

What I have to do is a bit complicated.
I have a Python script and I want to run it from PHP, but in the background. I read somewhere that to run a Python script in the background I can use PHP's exec("script.py") without waiting for a return value, and so far no problem.
First question:
I have to stop this looping script with another PHP command. How do I do that?
Second question:
I have to implement a server-side timer that stops the script when the time is up.
I found this code:
<?php
$timer = 60 * 5; // seconds
$timestamp_file = 'end_timestamp.txt';

if (!file_exists($timestamp_file)) {
    file_put_contents($timestamp_file, time() + $timer);
}

$end_timestamp = file_get_contents($timestamp_file);
$current_timestamp = time();
$difference = $end_timestamp - $current_timestamp;

if ($difference <= 0) {
    echo 'time is up, BOOOOOOM';
    // execute your function here
    // reset timer by writing new timestamp into file
    file_put_contents($timestamp_file, time() + $timer);
} else {
    echo $difference . 's left...';
}
?>
From this answer.
Also, is there a way to implement it in a MySQL database? (Integrating it with the script stop is not a problem.)
That's actually pretty simple. You can use a memory object caching system; I would recommend memcached. Memory objects from memcached can be accessed from literally anywhere in your system. The only requirement is that the language can connect to the memcached backend server (PHP can, Python can, etc.).
Answer to your first question:
Create a variable called stopme with the value 0 on the memcached server.
Connect from your Python script to memcached and poll the stopme variable continuously. Let's say the Python script keeps running as long as stopme has the value 0.
In order to stop your script from PHP, make a connection from your PHP script to the memcached server and set stopme to 1.
The Python script sees the updated value on its next poll and exits.
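A minimal sketch of the PHP side of this, assuming the Memcached extension and a memcached server on localhost:11211 (the stopme key is from the answer above):

<?php
// Connect to the memcached server that both scripts share.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Initial state: let the Python script keep running.
$mc->set('stopme', 0);

// Later, from the "stop" PHP script: flip the flag. The Python script
// polls this key and exits when it reads 1.
$mc->set('stopme', 1);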
Answer to your second question:
It could be done as explained in my previous answer, by reading shared variables, but I would also like to mention that you could use a cronjob to kill a running script.

Interesting thing happened while writing data into redis within a php loop

I wrote a PHP script to pull data from one server (let's call it Server A) to the other (Server B). The data on Server A is a Redis list storing all the operating commands that need to be written to Server B, such as:
["setex",["session:xxxx",604800,"xxxx"]]
["set",["uid:xxx","xxxxx"]]
["pipeline",[]]
["set",["uid:xxx","xxxxx"]]
["hIncrBy",["Signin:xxxx","totalTimes",1]]
["pipeline",[]]
....
My PHP code is:
while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    list($op, $params) = json_decode($line, true);
    $r = call_user_func_array(array($redis['server_b'], $op), $params);
    $i++;
}
The weird thing is, when call_user_func_array() executes a Redis command incorrectly, none of the remaining commands in the queue get written correctly to Server B.
I was stuck on this problem for almost a week looking for answers. After thousands of tests I found that if I remove the "bad commands" that cannot be executed correctly, such as the ["pipeline",[]] row, all the other commands are inserted properly. That reminded me of Redis transactions: maybe there is some mechanism whereby, when a command executes improperly, all the commands after it are treated as a transaction. So I added an exec() call inside the while loop:
while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    list($op, $params) = json_decode($line, true);
    $r = call_user_func_array(array($redis['server_b'], $op), $params);
    $redis['server_b']->exec(); // this is the significant update
    $i++;
}
Then my problem was solved!
My question is: can anybody explain the Redis mechanism at work here? Is my assumption correct?
Your library is probably using transactions to implement pipelining, for whatever reason. pipeline is not an actual Redis command; see http://redis.io/commands
Just strip out all pipeline commands with empty arguments, or call ->exec() whenever a pipeline was issued before.
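A sketch of the first suggestion, stripping the client-side pipeline entries before replaying (this reuses the $redis handles from the question; the skip logic is the only addition):

while ($i < 1000) {
    $line = $redis['server_a']->rpop('sync:op');
    if ($line === false) {
        break; // queue drained
    }
    list($op, $params) = json_decode($line, true);

    // "pipeline" is a construct of the PHP client, not a real Redis
    // command; replaying it puts the connection into a buffered,
    // transaction-like mode, so skip those entries entirely.
    if (strcasecmp($op, 'pipeline') === 0) {
        $i++;
        continue;
    }

    call_user_func_array(array($redis['server_b'], $op), $params);
    $i++;
}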

php never ending loop

I need a function that executes by itself in PHP, without the help of cron. I have come up with the following code that works well for me, but since it is a never-ending loop, will it cause any problems for my server or script? If so, could you give me some suggestions or alternatives, please? Thanks.
$interval = 60; // minutes
set_time_limit(0);

while (1) {
    $now = time();
    # do the routine job, trigger a php function and what not.
    sleep($interval * 60 - (time() - $now));
}
We have used an infinite loop in a live system environment to wait for incoming SMS and then process it. We found that running it this way made the server resource-intensive over time, and we had to restart the server to free up memory.
Another issue we encountered: when you execute a script with an infinite loop from your browser, it will continue to run even if you hit the stop button, unless you restart Apache.
while (1) { // infinite loop
    // write code to insert text to a file
    // the file size will continue to grow
    // even when you click 'stop' in your browser
}
The solution is to run the PHP script as a daemon on the command line. Here's how:
nohup php myscript.php &
The & puts your process in the background.
Not only did we find this method to be less memory-intensive, but you can also kill it without restarting Apache by running the following command:
kill processid
Edit: As Dagon pointed out, this is not really the true way of running PHP as a 'daemon', but the nohup command can be considered the poor man's way of running a process as a daemon.
You can use the time_sleep_until() function. It returns TRUE on success or FALSE on failure:
$interval = 60; // minutes
set_time_limit(0);
$sleep = time() + $interval * 60; // the next wake-up timestamp

while (1) {
    if (time() < $sleep) {
        // the loop pauses until the timestamp it was told to sleep to,
        // and continues once it has finished sleeping
        time_sleep_until($sleep);
    }
    # do the routine job, trigger a php function and what not.
    $sleep = time() + $interval * 60; // schedule the next wake-up
}
There are many ways to create a daemon in PHP, and there have been for a very long time.
Just running something in the background isn't enough. If it tries to print something and the console is closed, for example, the program dies.
One method I have used on Linux is pcntl_fork() in a php-cli script, which basically splits your script into two PIDs. Have the parent process kill itself, and have the child process fork itself again. Again have the parent process kill itself. The child process will now be completely divorced and can happily hang out in the background doing whatever you want it to do.
$i = 0;
do {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Could not fork, exiting.\n");
    } else if ($pid != 0) {
        // We are the parent
        die("Level $i forking worked, exiting.\n");
    } else {
        // We are the child.
        ++$i;
    }
} while ($i < 2);
// This is the daemon child, do your thing here.
Unfortunately, this model has no way to restart itself if it crashes, or if the server is rebooted. (This can be resolved through creativity, but...)
To get the robustness of respawning, try an Upstart script (if you are on Ubuntu). Here is a tutorial, but I have not yet tried this method.
while(1) means an infinite loop. If you want to end it, you should break out on a condition, e.g.:
$interval = 60; // minutes, as in the question
while (1) { // infinite loop
    $now = time();
    # do the routine job, trigger a php function and what not.
    sleep($interval * 60 - (time() - $now));
    if ($condition) break; // it will break out when $condition is true
}

How many php scripts can my server run simultaneously?

I have a script accessible to the end user that makes the following call:
exec("php orderWatcher.php $insertedId > /dev/null &");
In orderWatcher.php I do some operations that take a long time:
if (checkSomeStuff()) {
    sleep(60);
}
makeOtherStuff();
I'm aware that I can have as many PHP scripts running as there are users requesting them, but I'm not sure whether that remains true when I make an exec() call, since (according to my understanding) this executes a shell-like command on the system.
Furthermore, suppose I perform these tests (they have been modified to keep them relevant to the question; they actually have a lot more meaning than this):
class OrderPlacerResultOrders extends UnitTestCase {
    function testSimple() {
        exec("php orderWatcher.php $insertedId > /dev/null &");
        // Wait for exec to finish
        sleep(65);
        $this->assertTrue(orderWatcherWorked(1));
        // No problem here
    }

    function testComplex() {
        for ($i = 0; $i < 100; ++$i) {
            exec("php orderWatcher.php $insertedId > /dev/null &");
        }
        // Wait a really long time
        sleep(1000);
        for ($i = 0; $i < 100; ++$i) {
            $this->assertTrue(orderWatcherWorked($i));
            // Failure around the 17th case
        }
    }
}
The tests aren't the main point; they made me question the following:
How many exec() calls to a PHP script can be made and handled by the server?
If there is a limited amount, does it make a difference that the exec() is requested by two instances of the script (as in two different web users calling the script that makes the exec() call)? Or is it the same as being called from the same script (as in the tests)?
PS: Couldn't think of any tags besides php; if you think of one, please tag the question.
Calling a command via exec() or via a URL doesn't change the number of scripts that can run at the same time.
The number of scripts a server can run simultaneously depends on its memory, along with a number of other factors.
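If the failures around the 17th process are memory-related, one mitigation is to throttle how many background processes you spawn at once. A rough sketch under that assumption (the batch size and sleep are made-up numbers to tune):

// Spawn the watchers in batches instead of all at once, so the server
// never has to hold 100 PHP processes in memory simultaneously.
$batchSize = 10;
for ($i = 0; $i < 100; ++$i) {
    exec("php orderWatcher.php $insertedId > /dev/null &");
    if (($i + 1) % $batchSize === 0) {
        sleep(70); // give each batch time to get past its sleep(60)
    }
}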

Speeding up a PHP App

I have a list of data that needs to be processed. The way it works right now is this:
A user clicks a process button.
The PHP code takes the first item that needs to be processed, takes 15-25 secs to process it, moves on to the next item, and so on.
This takes way too long. What I'd like instead is this:
The user clicks the process button.
A PHP script takes the first item and starts to process it.
Simultaneously another instance of the script takes the next item and processes it.
And so on, so around 5-6 items are being processed simultaneously and we get 6 items processed in 15-25 secs instead of just one.
Is something like this possible?
I was thinking of using cron to launch an instance of the script every second. All items that need to be processed will be flagged as such in the MySQL database, so whenever an instance is launched through cron, it will simply take the next item flagged to be processed and remove the flag.
Thoughts?
Edit: To clarify something, each 'item' is stored as a separate row in a MySQL database table. Whenever processing starts on an item, it is flagged as being processed in the DB, so each new instance will simply grab the next row that is not being processed and process it. Hence I don't have to supply the items as command-line arguments.
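As a sketch of how a cron-launched instance could claim the next row safely, the flag can be flipped in a single atomic UPDATE so two instances never grab the same item (table and column names here are hypothetical):

<?php
// Hypothetical schema: items(id, status, claimed_by, ...).
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pid = getmypid();

// Claim exactly one pending row in one atomic statement.
$db->exec("UPDATE items SET status = 'processing', claimed_by = $pid
           WHERE status = 'pending' LIMIT 1");

// Fetch the row this process just claimed, if any.
$row = $db->query("SELECT * FROM items
                   WHERE status = 'processing' AND claimed_by = $pid")
          ->fetch(PDO::FETCH_ASSOC);

if ($row) {
    // ... process the item ...
    $db->exec("UPDATE items SET status = 'done' WHERE id = " . (int) $row['id']);
}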
Here's one solution; not the greatest, but it will work fine on Linux:
Split the processing PHP into a separate CLI script in which:
The command line inputs include `$id` and `$item`
The script writes its PID to a file in `/tmp/$id.$item.pid`
The script echoes results to stdout as XML or something else that can be read into PHP
When finished the script deletes the `/tmp/$id.$item.pid` file
Your master script (presumably on your webserver) would do:
`exec("nohup php myprocessing.php $id $item > /tmp/$id.$item.xml &");` for each item
Poll the `/tmp/$id.$item.pid` files until all are deleted (a sleep/check loop is enough)
If they are never deleted, kill all the processing scripts and report failure
If successful, read from `/tmp/$id.$item.xml` to format/output to the user
Delete the XML files if you don't want to cache them for later use
A backgrounded, nohup-started application will run independently of the script that started it.
This interested me sufficiently that I decided to write a POC.
test.php
<?php
$dir = realpath(dirname(__FILE__));
$start = time();

// Time in seconds after which we give up and kill everything
$timeout = 25;

// The unique identifier for the request
$id = uniqid();

// Our "items" which would be supplied by the user
$items = array("foo", "bar", "0xdeadbeef");

// We exec a nohup command that is backgrounded, which returns immediately
foreach ($items as $item) {
    exec("nohup php proc.php $id $item > $dir/proc.$id.$item.out &");
}

echo "<pre>";

// Run until timeout or all processing has finished
$running = array();
while (time() - $start < $timeout) {
    echo (time() - $start), " seconds\n";
    clearstatcache(); // Required since PHP caches file_exists results
    $running = array();
    foreach ($items as $item) {
        // If the pid file still exists the process is still running
        if (file_exists("$dir/proc.$id.$item.pid")) {
            $running[] = $item;
        }
    }
    if (empty($running)) break;
    echo implode(',', $running), " running\n";
    flush();
    sleep(1);
}

// Clean up if we timed out
if (!empty($running)) {
    clearstatcache();
    foreach ($items as $item) {
        // Kill anything still running (i.e. anything that still has a pid file)
        if (file_exists("$dir/proc.$id.$item.pid")
            && $pid = file_get_contents("$dir/proc.$id.$item.pid")) {
            posix_kill((int) $pid, 9);
            unlink("$dir/proc.$id.$item.pid");
            // Would want to log this in the real world
            echo "Failed to process: ", $item, " pid ", $pid, "\n";
        }
        // Delete the useless data
        unlink("$dir/proc.$id.$item.out");
    }
} else {
    echo "Successfully processed all items in ", time() - $start, " seconds.\n";
    foreach ($items as $item) {
        // Grab the processed data and delete the file
        echo(file_get_contents("$dir/proc.$id.$item.out"));
        unlink("$dir/proc.$id.$item.out");
    }
}
echo "</pre>";
?>
proc.php
<?php
$dir = realpath(dirname(__FILE__));
$id = $argv[1];
$item = $argv[2];

// Write out our pid file
file_put_contents("$dir/proc.$id.$item.pid", posix_getpid());

for ($i = 0; $i < 80; ++$i) {
    echo $item, ':', $i, "\n";
    usleep(250000);
}

// Remove our pid file to say we're done processing
unlink("$dir/proc.$id.$item.pid");
?>
Put test.php and proc.php in the same folder on your server, load test.php, and enjoy.
You will of course need nohup (Unix) and the PHP CLI to get this to work.
Lots of fun, I may find a use for it later.
Use an external work queue like beanstalkd, to which your PHP script writes a bunch of jobs. You then have as many worker processes as you like pulling jobs from beanstalkd and processing them as fast as possible; you can spin up as many workers as you have memory/CPU for. Your job body should contain as little information as possible, maybe just the IDs you hit the DB with. beanstalkd has a slew of client APIs and itself has a very basic API; think memcached.
We use beanstalkd to process all of our background jobs, and I love it. It's easy to use and very fast.
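For illustration, a worker sketch using the Pheanstalk client; the constructor and method names below follow one client version and should be treated as assumptions to check against the version you install:

<?php
require 'vendor/autoload.php';

// Producer (the "process" button handler): enqueue just an ID.
$queue = new Pheanstalk\Pheanstalk('127.0.0.1');
$queue->useTube('items')->put(json_encode(array('id' => 42)));

// Worker (run 5-6 of these processes in parallel):
$worker = new Pheanstalk\Pheanstalk('127.0.0.1');
$worker->watch('items');
while (true) {
    $job = $worker->reserve();                  // blocks until a job arrives
    $data = json_decode($job->getData(), true);
    // ... hit the DB with $data['id'] and do the 15-25 second step ...
    $worker->delete($job);                      // acknowledge completion
}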
There is no multithreading in PHP; however, you can use fork.
php.net:pcntl-fork
Or you could execute a system() command and start another process which is multithreaded.
Can you implement threading in JavaScript on the client side? It seems to me I've seen a JavaScript library (from Google, perhaps?) that implements it. Google it and I'm sure you'll find something. I've never done it, but I know it's possible. Anyway, your client-side JavaScript could activate (via Ajax) a PHP script once for each item, in separate threads. That might be easier than trying to do it all on the server side.
-don
If you are running a high-traffic PHP server you are INSANE if you do not use the Alternative PHP Cache: http://php.net/manual/en/book.apc.php . You do not have to make code modifications to run APC.
Another useful technique that can work along with APC is using the Smarty template system, which allows you to cache output so that pages do not have to be rebuilt.
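APC also exposes a user cache that could hold already-processed results, which is relevant to the caching suggestion at the end of this thread. A minimal sketch, assuming the classic apc extension (under APCu the functions are apcu_store()/apcu_fetch()); processItem() is a hypothetical stand-in for the expensive step:

<?php
// Cache the result of the expensive 15-25 second processing step.
$key = 'processed:item:42'; // hypothetical key scheme
$result = apc_fetch($key, $success);
if (!$success) {
    $result = processItem(42);      // hypothetical processing function
    apc_store($key, $result, 3600); // keep it for an hour
}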
To solve this problem, I've used two different products: Gearman and RabbitMQ.
The benefit of putting your jobs into some sort of queuing software like Gearman or Rabbit is that if you have multiple machines, they can all participate in processing items off the queue(s).
Gearman is easier to set up, so I'd suggest poking around with it a bit first. If you find you need something more heavy-duty in terms of queue robustness, look into RabbitMQ.
http://www.danga.com/gearman/
http://pear.php.net/package/Net_Gearman (PEAR library)
You can use pcntl_fork() and family to fork a process; however, you may need something like IPC to communicate back to the parent process that the child process (the one you forked) is finished.
You could have them write to shared memory, like via memcache or a DB.
You could also have the child process write the completed data to a file that the parent process keeps checking: as each child process completes, the file is created/written/updated, and the parent process can grab the results, one at a time, and throw them back to the caller/client.
The parent's job is to control the queue, to make sure the same data isn't processed twice, and also to sanity-check the children (better to kill a runaway process and start over, etc.).
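A small sketch of that parent/child pattern using pcntl_fork() and pcntl_waitpid(); processItem() is a hypothetical stand-in for the real work:

<?php
$items = array(1, 2, 3, 4, 5);
$children = array();

foreach ($items as $item) {
    $pid = pcntl_fork();
    if ($pid == -1) {
        die("Could not fork\n");
    } elseif ($pid == 0) {
        // Child: do the heavy work, exit non-zero on failure.
        exit(processItem($item) ? 0 : 1);
    }
    $children[$pid] = $item; // parent records which child owns which item
}

// Parent: reap every child and sanity-check its exit status.
foreach ($children as $pid => $item) {
    pcntl_waitpid($pid, $status);
    if (pcntl_wexitstatus($status) !== 0) {
        echo "Item $item failed, re-queue it\n";
    }
}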
Something else to keep in mind: on Windows platforms you are going to be severely limited; I don't think you even have access to the pcntl_* functions unless you compiled PHP with support for them.
Also, can you cache the data once it's been processed, or is it unique data every time? That would surely speed things up.
