What I have to do is a bit complicated.
I have a Python script and I want to run it from PHP, in the background. I read somewhere that to run a Python script in the background I have to use PHP's exec('script.py') so that it runs without waiting for a return value: so far, no problem.
First question:
I need to stop this looping script with another PHP command; how do I do that?
Second question:
I need to implement a server-side timer that stops the script when the time is up.
I found this code:
<?php
$timer = 60*5; // seconds
$timestamp_file = 'end_timestamp.txt';
if (!file_exists($timestamp_file)) {
    file_put_contents($timestamp_file, time() + $timer);
}
$end_timestamp = file_get_contents($timestamp_file);
$current_timestamp = time();
$difference = $end_timestamp - $current_timestamp;

if ($difference <= 0) {
    echo 'time is up, BOOOOOOM';
    // execute your function here
    // reset timer by writing new timestamp into file
    file_put_contents($timestamp_file, time() + $timer);
} else {
    echo $difference . 's left...';
}
?>
From this answer.
Also, is there a way to implement this with a MySQL database? (Integrating it with the script stop is not a problem.)
That's actually pretty simple. You can use a memory object caching system; I would recommend memcached. Memory objects in memcached can be accessed from literally anywhere in your system. The only requirement is that the language can connect to the memcached backend server (PHP can, Python can, etc.).
Answer to your first question:
Create a variable called stopme with the value 0 on the memcached server.
Connect from your Python script to the memcached server and poll the variable stopme on every loop iteration. The Python script keeps running as long as stopme has the value 0.
In order to stop your script from PHP, make a connection from your PHP script to the memcached server and set stopme to 1.
The Python script picks up the updated value on its next read and exits.
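A minimal sketch of the PHP side, assuming the PECL Memcached extension and a memcached server on localhost:11211 (the key name stopme follows the steps above):

<?php
// Connect to the memcached server (assumed host/port).
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

// Initialise the flag once so the worker may run.
$mc->add('stopme', 0);

// Later, from the "stop" PHP script: tell the worker to exit.
$mc->set('stopme', 1);

The Python side mirrors this: read stopme with any memcached client at the top of the loop and break out as soon as it becomes 1.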
Answer to your second question:
It could be done as explained in my answer above, by reading a shared variable, but I would also like to mention that you could use a cron job to kill the running script.
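For example, a crontab entry along these lines (the script name is an assumption) would kill the worker at a fixed time every day:

# kill the Python worker at 02:00 (assumed script name)
0 2 * * * pkill -f script.py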
Related
I made a PHP CLI script which runs in a loop. I run it from the server's terminal (Windows). This server also functions as a PHP web server (XAMPP).
The PHP CLI script deals with hardware I/O (responding to and driving a microcontroller board through a serial port), so it is always running.
What I'm trying to accomplish is a web-based app (PHP CGI) that controls that CLI script, like sending some command to make it do something.
What I've tried
I have tried using a kind of temporary JSON file, whose contents are generated by the CGI script.
The CLI script then reads that file on every loop. If there is a change in the JSON (a timestamp), the script uses the data inside the JSON to act accordingly, and then stores that timestamp to compare against the JSON on the next loop.
But this causes a huge load on the server, and the CLI script becomes much slower, which affects the responsiveness of the microcontroller.
The PHP CLI loop is something like this:
<?php
$lastTimestamp = 0;
while (true) {
    // read json file
    $json = json_decode(file_get_contents("temp.json"));
    if ($lastTimestamp < $json->stamp) {
        // do something with $json->data
        ...
        // update $lastTimestamp
        $lastTimestamp = $json->stamp;
    }
    // rest of microcontroller logic here
    ...
}
And the temp.json file is something like:
{
    "stamp": 1557653475,
    "data": {
        "nd_a": 1,
        "nd_b": 1,
        "nd_c": 0
    }
}
So the question is: how do I interact with the already-running PHP CLI script from the CGI script without using the above method? I'm hoping for a better way that doesn't affect server load and performance.
Edit: I also tried using a database in place of the JSON file, but the performance is still not good.
That approach is inefficient. The simplest alternative is to make a cron job that executes the CLI script.
If you're using a Linux system, here is how to set up a cron job (on Ubuntu):
https://help.ubuntu.com/community/CronHowto
Then, in your CLI script, remove the loop; each time the job runs it will read temp.json and compare the two timestamps.
To keep $lastTimestamp between runs, I would suggest creating a text file right next to your CLI script and storing the value of $lastTimestamp in it.
Then, only if the timestamp retrieved from the JSON differs from the timestamp in the text file do you overwrite the text file with the new timestamp.
You can use file_put_contents or fopen/fwrite to write to the file.
So your script would look like the following:
<?php
$lastTimeStampFilePath = "lastTimeStampFile.txt";
// read the stored timestamp (0 if the file doesn't exist yet)
$lastTimestamp = (int) @file_get_contents($lastTimeStampFilePath);
// read json file
$json = json_decode(file_get_contents("temp.json"));
if ($lastTimestamp < $json->stamp) {
    // do something with $json->data
    ...
    // update $lastTimestamp
    file_put_contents($lastTimeStampFilePath, $json->stamp);
}
// rest of microcontroller logic here
Maybe configure the job to run every 2 mins or 5 mins, depending on your requirements.
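For instance, a crontab entry along these lines (the PHP binary and script paths are assumptions) runs it every 2 minutes; note that standard cron cannot run more often than once a minute:

*/2 * * * * /usr/local/bin/php /path/to/cli-script.php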
I have a cron job on my hosting server that is supposed to execute a PHP script every 30 minutes. The script scrapes a CSV file and updates that data in a database (I'm using MySQL for that). When executed manually the script works; it takes about 45 seconds to finish the process.
But when the cron job runs, I get this message:
Could not open input file.
At first I thought this might be because the script takes over 30 seconds to execute, so I set the max execution time at the beginning of the PHP script:
ini_set('max_execution_time', 300);
But again the same message came up.
This is my cron job:
/usr/local/bin/php /home/emmaein/domain.com/folder/script.php?token=d8cn3j
P.S.: the token GET variable is sort of a password so the script can't be executed by just anybody. Basically my PHP script looks like this:
if ($_GET['token'] === 'd8cn3j') {
    // open csv
    // get data
    // update db
} else {
    exit('I see you >:)');
}
When PHP is not running inside a web server, you can't access $_GET variables (there's no such thing). Worse, the CLI binary treats script.php?token=d8cn3j as a literal file name, which is exactly why cron reports "Could not open input file." Instead, you should use command-line arguments:
<?php
if ($argc > 1 && $argv[1] === 'd8cn3j') {
    // Do stuff
}
And then your crontab becomes:
/usr/local/bin/php /home/emmaein/domain.com/folder/script.php d8cn3j
The concept of $argc (the number of arguments) and $argv (an array of the arguments) is fairly standard among CLI programs, and is documented on PHP's website.
I am unable to understand how to run a simple PHP script in FastCGI mode. I am learning both Perl and PHP, and I got the Perl version of the FastCGI counter example below to work as expected.
Perl FastCGI counter:
#!/usr/bin/perl
use FCGI;

my $count = 0;
while (FCGI::accept() >= 0) {
    print("Content-type: text/html\r\n\r\n",
          "<title>FastCGI Hello! (Perl)</title>\n",
          "<h1>FastCGI Hello! (Perl)</h1>\n",
          "Request number ", ++$count,
          " running on host <i>$ENV{'SERVER_NAME'}</i>");
}
Searching for something similar in PHP, I found talk about "fastcgi_finish_request", but I have no clue how to accomplish the counter example in PHP. Here is what I tried:
<?php
header("Content-Type: text/html");
$counter++;
echo "Counter: $counter ";
// http://www.php.net/manual/en/intro.fpm.php
fastcgi_finish_request(); // If you remove this line, the browser has to wait 5 seconds
sleep(5);
?>
Perl is not PHP. That doesn't mean you can't often interchange things and port code between the two, but when it comes to runtime environments, there are bigger differences that you can't just swap around.
FastCGI operates at the request/protocol level, which is fully abstracted away by the PHP runtime, so you don't have as much control in PHP as you would in Perl with use FCGI;
Therefore, you can't just port that code.
Besides that, fastcgi_finish_request is totally unrelated to the Perl code; you must have confused it with something else or thrown it in to give it a try. It's not really useful in the context of this counter example.
PHP and HTTP are stateless.
All data is only relevant for the current, ongoing request.
If you need to save state, consider storing the data in a cookie, session, cache, or database.
So the implementation of this "counter" example will be different for Perl and PHP.
Your use of fastcgi_finish_request won't give you the functionality you expect from Perl.
What it is for: think of a long-running calculation where you output data in the middle.
With fastcgi_finish_request, the data already produced is pushed to the browser while the long-running task keeps running.
The FastCGI connection and the PHP request are opened together. Normally the connection stays open until PHP finishes, and then the FastCGI connection is closed, unless you hit PHP's execution timeout or FastCGI's connection timeout first. fastcgi_finish_request handles the case where the FastCGI connection to the browser is closed BEFORE PHP finishes execution.
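A minimal sketch of that pattern (requires PHP-FPM, where fastcgi_finish_request is available; the log path is an assumption):

<?php
echo "This response reaches the browser immediately.";

// Flush the response and close the connection to the client.
fastcgi_finish_request();

// Everything below still runs server-side after the browser stops waiting.
sleep(5); // stand-in for a long-running task
file_put_contents('/tmp/after-response.log', date('c') . " done\n", FILE_APPEND); // assumed log path
?>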
Simple Hit Counter Example for PHP
<?php
$hit_count = @file_get_contents('count.txt'); // read count from file
$hit_count++;                                 // increment hit count by 1
echo $hit_count;                              // display
@file_put_contents('count.txt', $hit_count);  // store the new hit count
?>
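Note that this read-increment-write sequence is racy under concurrent requests. A sketch of the same counter with an exclusive lock (assuming count.txt is writable by the web server):

<?php
$fp = fopen('count.txt', 'c+'); // open read/write, create if missing
if ($fp !== false && flock($fp, LOCK_EX)) {
    $hit_count = (int) stream_get_contents($fp); // read current count
    $hit_count++;
    ftruncate($fp, 0);                           // replace the old value
    rewind($fp);
    fwrite($fp, (string) $hit_count);
    flock($fp, LOCK_UN);
    echo $hit_count;
}
if ($fp !== false) {
    fclose($fp);
}
?>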
Honestly, that's not even how you should do it using Perl either.
Instead, I'd recommend using CGI::Session to track session information:
#!/usr/bin/perl
use strict;
use warnings;

use CGI;
use CGI::Carp qw(fatalsToBrowser);
use CGI::Session;

my $q = CGI->new;
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
print $session->header();

# Page View Count
my $count = 1 + ($session->param('count') // 0);
$session->param('count' => $count);

# HTML
print qq{<html>
<head><title>Hello! (Perl)</title></head>
<body>
<h1>Hello! (Perl)</h1>
<p>Request number $count running on host <i>$ENV{SERVER_NAME}</i></p>
</body>
</html>};
Alternatively, if you really want to go barebones, you could keep a local file as demonstrated in: I still don't get locking. I just want to increment the number in the file. How can I do this?
I've been completely unsuccessful finding an answer to this question. Hopefully someone here can help.
I have a PHP script (a WordPress template, to be specific) that automatically imports and processes images when a user hits it. The problem is that the image processing takes up a lot of memory, particularly if multiple users are accessing the template at the same time and initiating the image processing. My server crashed multiple times because of this.
My solution to this was to not execute the image-processing function if it was already running. Before the function started running, I would check a database entry named image_import_running to see if it was set to false. If it was, the function then ran. The very first thing the function did was set image_import_running to true. Then, after it was all finished, I set it back to false.
It worked great -- in theory. The site hasn't crashed since, I can tell you that. But there are two major problems with it:
1. If the user closes the page while it's loading, the script never finishes processing the images and therefore never sets image_import_running back to false. The template will never process images again until the flag is manually set to false.
2. If the script times out while it's processing images -- and that's a strong possibility if there are many images in the queue -- you have essentially the same problem as No. 1: the script never gets to the point where it sets image_import_running back to false.
To handle No. 1 (the first one of the two problems I realized), I added ignore_user_abort(true) to the script. Did it work? I don't know, because No. 2 is still an issue. That's where I'm stumped.
If I could ask the server whether the script was running or not, I could do something like this:
if($import_running && $script_not_running) {
$import_running = false;
}
But how do I set that $script_not_running variable? Beats me.
I've shared this entire story with you just in case you have some other brilliant solution.
Try using
ignore_user_abort(true); the script will continue to run even if the user leaves the page and closes the browser.
You might also want to store a number instead of true/false in the DB record, and set a maximum number of processes that can run together.
As others have suggested, it would be best to move the image processing out of the request itself.
As an interim "fix", store a timestamp alongside image_import_running when a processing job begins (e.g., image_import_commenced). This is a very crude mechanism, but if you know the maximum time that a job can run before timing out, the script can check whether that period of time has elapsed.
e.g., if image_import_running is still true but the current time is more than 10 minutes since image_import_commenced, run the processing anyway.
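A rough sketch of that check, assuming WordPress's get_option/update_option since the question mentions a WordPress template (the option names follow the question; the 10-minute ceiling is the assumption above):

<?php
$running   = get_option('image_import_running');
$commenced = (int) get_option('image_import_commenced');
$max_age   = 10 * 60; // assume no legitimate job runs longer than 10 minutes

if (!$running || (time() - $commenced) > $max_age) {
    update_option('image_import_running', true);
    update_option('image_import_commenced', time());
    // ... import and process the images ...
    update_option('image_import_running', false);
}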
What about setting a transient with an expiry time that would throttle the operation?
if (!get_transient('import_running')) {
    set_transient('import_running', true, 30); // set a 30-second transient on the import
    run_the_import_function();
}
I would rather store the jobs in the database, flag them as pending, and set up a cron job that executes the processing one job at a time.
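A hedged sketch of such a cron-driven worker, assuming a jobs table with id and status columns (all names and credentials hypothetical):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'); // assumed DSN

// Take the oldest pending job; safe as long as only one worker runs at a time.
$job = $pdo->query("SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1")
           ->fetch(PDO::FETCH_ASSOC);
if ($job) {
    // ... process the job ...
    $stmt = $pdo->prepare("UPDATE jobs SET status = 'done' WHERE id = ?");
    $stmt->execute([$job['id']]);
}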
For me, I just use this simple idea with a text file, for example run.txt.
At the top of the script, use:

if (file_get_contents('run.txt') != 'run') { // here the script will work
    $file = fopen('run.txt', 'w+');
    fwrite($file, 'run');
    fclose($file);
} else {
    exit(); // if it finds 'run' in run.txt, the script will stop
}

And add this at the end of your script file:

$file = fopen('run.txt', 'w+');
fwrite($file, ''); // clears the 'run' word for the next try ;)
fclose($file);

This checks whether the script is already running by looking at the contents of run.txt:
if the word 'run' is in run.txt, the script will not run.
Running a cron job would definitely be a better solution. The idea of storing the URL in a table is a good one.
To answer the original question, you can run a ps auxwww command with exec (see this page: How to get list of running php scripts using PHP exec()?) and move your function into a separate PHP file.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
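Putting the pieces together, a small sketch (myfunction.php stands for the separate file suggested above):

<?php
// Only start the worker if no instance of myfunction.php is running.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
if (count($output) === 0) {
    // Backgrounded with nohup so this request returns immediately.
    exec("nohup php myfunction.php > /dev/null 2>&1 &");
}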
Just add the following at the top of your script:
<?php
// Ensures a single instance of the script runs at a time.
$fileName = basename(__FILE__);
$output = shell_exec("ps -ef | grep -v grep | grep $fileName | wc -l");
// The count includes this process and the shell spawned by shell_exec,
// so anything above 2 means another instance is already running.
if ($output > 2) {
    echo "Already running - $fileName\n";
    exit;
}

// Your php script code.
?>
I have a list of data that needs to be processed. The way it works right now is this:
A user clicks a process button.
The PHP code takes the first item that needs to be processed, takes 15-25 secs to process it, moves on to the next item, and so on.
This takes way too long. What I'd like instead is that:
The user clicks the process button.
A PHP script takes the first item and starts to process it.
Simultaneously another instance of the script takes the next item and processes it.
And so on, so that around 5-6 items are being processed simultaneously and we get 6 items processed in 15-25 secs instead of just one.
Is something like this possible?
I was thinking of using cron to launch an instance of the script every second. All items that need to be processed are flagged as such in the MySQL database, so whenever an instance is launched through cron, it simply takes the next item flagged for processing and removes the flag.
Thoughts?
Edit: To clarify something, each 'item' is stored in a MySQL database table as a separate row. Whenever processing starts on an item, it is flagged as being processed in the DB, so each new instance simply grabs the next row that is not being processed and processes it. Hence I don't have to supply the items as command-line arguments.
Here's one solution; it's not the greatest, but it will work fine on Linux:
Split the processing into a separate PHP CLI script in which:
The command line inputs include `$id` and `$item`
The script writes its PID to a file in `/tmp/$id.$item.pid`
The script echos results as XML or something that can be read into PHP to stdout
When finished the script deletes the `/tmp/$id.$item.pid` file
Your master script (presumably on your webserver) would do:
`exec("nohup php myprocessing.php $id $item > /tmp/$id.$item.xml &");` for each item (the trailing & backgrounds each call)
Poll the `/tmp/$id.$item.pid` files until all are deleted (sleep/check poll is enough)
If they are never deleted kill all the processing scripts and report failure
If successful, read from `/tmp/$id.$item.xml` to format/output the results to the user
Delete the XML files if you don't want to cache for later use
A backgrounded, nohup-started application will run independently of the script that started it.
This interested me sufficiently that I decided to write a POC.
test.php
<?php
$dir = realpath(dirname(__FILE__));
$start = time();

// Time in seconds after which we give up and kill everything
$timeout = 25;
// The unique identifier for the request
$id = uniqid();
// Our "items" which would be supplied by the user
$items = array("foo", "bar", "0xdeadbeef");

// We exec a nohup command that is backgrounded which returns immediately
foreach ($items as $item) {
    exec("nohup php proc.php $id $item > $dir/proc.$id.$item.out &");
}

echo "<pre>";
// Run until timeout or all processing has finished
while (time() - $start < $timeout) {
    echo (time() - $start), " seconds\n";
    clearstatcache(); // Required since PHP will cache for file_exists
    $running = array();
    foreach ($items as $item) {
        // If the pid file still exists the process is still running
        if (file_exists("$dir/proc.$id.$item.pid")) {
            $running[] = $item;
        }
    }
    if (empty($running)) break;
    echo implode(',', $running), " running\n";
    flush();
    sleep(1);
}

// Clean up if we timed out
if (!empty($running)) {
    clearstatcache();
    foreach ($items as $item) {
        // Kill process of anything still running (i.e. that has a pid file)
        if (file_exists("$dir/proc.$id.$item.pid")
            && $pid = file_get_contents("$dir/proc.$id.$item.pid")) {
            posix_kill($pid, 9);
            unlink("$dir/proc.$id.$item.pid");
            // Would want to log this in the real world
            echo "Failed to process: ", $item, " pid ", $pid, "\n";
        }
        // delete the useless data
        unlink("$dir/proc.$id.$item.out");
    }
} else {
    echo "Successfully processed all items in ", time() - $start, " seconds.\n";
    foreach ($items as $item) {
        // Grab the processed data and delete the file
        echo file_get_contents("$dir/proc.$id.$item.out");
        unlink("$dir/proc.$id.$item.out");
    }
}
echo "</pre>";
?>
proc.php
<?php
$dir = realpath(dirname(__FILE__));
$id = $argv[1];
$item = $argv[2];

// Write out our pid file
file_put_contents("$dir/proc.$id.$item.pid", posix_getpid());

for ($i = 0; $i < 80; ++$i) {
    echo $item, ':', $i, "\n";
    usleep(250000);
}

// Remove our pid file to say we're done processing
unlink("$dir/proc.$id.$item.pid");
?>
Put test.php and proc.php in the same folder on your server, load test.php, and enjoy.
You will of course need nohup (Unix) and the PHP CLI to get this to work.
Lots of fun, I may find a use for it later.
Use an external work queue like beanstalkd that your PHP script writes a bunch of jobs to. You then have as many worker processes pulling jobs from beanstalkd and processing them as fast as possible, and you can spin up as many workers as you have memory/CPU for. Your job body should contain as little information as possible, maybe just some IDs that you hit the DB with. beanstalkd has a slew of client APIs, and itself has a very basic protocol, think memcached.
We use beanstalkd to process all of our background jobs, and I love it. It's easy to use and very fast.
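For illustration, a producer/worker pair sketched with the Pheanstalk client library (an assumption: the answer doesn't name a client, and method names vary between Pheanstalk versions; this follows the 4.x style):

<?php
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');

// Producer: push a tiny job body, just an ID to look up in the DB.
$pheanstalk->useTube('items');
$pheanstalk->put(json_encode(['item_id' => 42]));

// Worker (normally a separate long-running CLI process):
$pheanstalk->watch('items');
$job = $pheanstalk->reserve();
$data = json_decode($job->getData(), true);
// ... fetch row $data['item_id'] from the DB and process it ...
$pheanstalk->delete($job);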
There is no multithreading in PHP; however, you can fork processes.
php.net:pcntl-fork
Or you could execute a system() command and start another process which is multithreaded.
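A minimal fork sketch (CLI only; requires the pcntl extension, so not an option on Windows):

<?php
$items = array('a', 'b', 'c');

foreach ($items as $item) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child: process a single item, then exit so it doesn't fork further.
        echo "processing $item in pid " . posix_getpid() . "\n";
        sleep(1); // stand-in for the real 15-25s of work
        exit(0);
    }
    // Parent: keep looping, forking one child per item.
}

// Parent: reap all children before finishing.
while (pcntl_waitpid(0, $status) > 0);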
Can you implement threading in JavaScript on the client side? It seems to me I've seen a JavaScript library (from Google, perhaps?) that implements it. Google it and I'm sure you'll find something. I've never done it, but I know it's possible. Anyway, your client-side JavaScript could activate (via AJAX) a PHP script once for each item, in parallel. That might be easier than trying to do it all on the server side.
-don
If you are running a high-traffic PHP server, you are INSANE if you do not use the Alternative PHP Cache: http://php.net/manual/en/book.apc.php . You do not have to make code modifications to run APC.
Another useful technique that can work alongside APC is the Smarty template system, which lets you cache output so that pages do not have to be rebuilt.
To solve this problem, I've used two different products: Gearman and RabbitMQ.
The benefit of putting your jobs into queuing software like Gearman or Rabbit is that if you have multiple machines, they can all participate in processing items off the queue(s).
Gearman is easier to set up, so I'd suggest poking around with it a bit first. If you find you need something more heavy-duty in terms of queue robustness, look into RabbitMQ.
http://www.danga.com/gearman/
http://pear.php.net/package/Net_Gearman (PEAR library)
You can use pcntl_fork() and family to fork a process - however you may need something like IPC to communicate back to the parent process that the child process (the one you fork'd) is finished.
You could have them write to shared memory, like via memcache or a DB.
You could also have the child processes write their completed data to files that the parent process keeps checking: as each child completes, its file is created/written/updated, and the parent can grab them one at a time and hand them back to the caller/client.
The parent's job is to control the queue: to make sure the same data isn't processed twice, and to sanity-check the children (better to kill a runaway process and start over, etc.).
Something else to keep in mind: on Windows platforms you are going to be severely limited; I don't think you even have access to the pcntl_ functions unless you compiled PHP with support for them.
Also, can you cache the data once it's been processed, or is it unique data every time? That would surely speed things up.