Creating a scheduled task without a cron job - PHP

A scheduled task needs to be created, but it's not possible to use a cron job (there is a warning from the hosting provider that "running the cron job more than once within a 45-minute period is an infraction of their rules and could result in shutting down the account").
A PHP script (which inserts data from a txt file into a MySQL database) should be executed every minute, i.e. this link should be called: http://www.myserver.com/ImportCumulusFile.php?type=dayfile&key=letmein&table=Dayfile&file=./data/Jan10log.txt
Is there any other way?

There are multiple ways of doing repetitive jobs. Some of the ways that I can think of right away are:
Using https://www.setcronjob.com/
Use an external site like this to fire off your URL at set intervals.
Using a meta refresh. More here. You'd have to open the page and leave it running.
JavaScript/Ajax refresh. Similar to the above example.
Setting up a cron job. Most shared hosts do provide a way to set up cron jobs. Have a look at the cPanel of your hosting account.

If you have shell access you could execute a PHP script via the shell.
Something like this would be an endless loop that sleeps 60 seconds, executes your script, collects garbage and repeats until the end of time:
while (true) {
    sleep(60);
    // ... your script here, e.g.:
    // include 'ImportCumulusFile.php';
    gc_collect_cycles(); // collect garbage before the next iteration
}
Or you could do a "poor man's cron" with Ajax or a meta refresh. I've done it before. Basically, you just place a redirect with either JavaScript or HTML's meta refresh at the beginning of your script. Access this script from your browser and just leave it open; it'll refresh every 60 seconds, just like a cron job.
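For illustration, a minimal sketch of such a "poor man's cron" page; it simply calls the import URL from the question once per page load and reloads itself every 60 seconds. Leave this page open in a browser tab:
<?php
// poormans-cron.php - leave this open in a browser tab
// call the import URL from the question once per page load
file_get_contents('http://www.myserver.com/ImportCumulusFile.php?type=dayfile&key=letmein&table=Dayfile&file=./data/Jan10log.txt');
?>
<html>
<head>
<!-- reload this page every 60 seconds, like a one-minute cron -->
<meta http-equiv="refresh" content="60">
</head>
<body>Last run: <?php echo date('H:i:s'); ?></body>
</html>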
Yet another alternative to a cron job would be a bash script such as:
#!/bin/bash
while :
do
sleep 60
wget http://127.0.0.1/path/to/cronjob.php -O Temp --delete-after
done
All this being said, you probably will get caught by the host and get terminated anyway.
So your best solution:
Go and sign up for a 5-10 dollar a month VPS, and say goodbye to shared hosting and hello to running your own little server.
If you do this, you can even stop using crappy PHP and use Facebook's HHVM instead and enjoy its awesome performance.

There's a free service at
http://cron-job.org
that lets you set up a nice little alternative.

Option A
An easy way to realize it would be to create a file or database entry containing the next execution time of your PHP script:
<?php
// crons.php
return [
    'executebackup.php' => 1507979485, // next run as a Unix timestamp
    'sendnewsletter.php' => 1507999485
];
?>
And on every request made by your visitors you check the current time, and if it's higher than the stored time you include your PHP script:
<?php
// cronpixel.php
$crons = @include 'cache/crons.php';
foreach ($crons as $script => $time) {
    if ($time < time()) {
        // create lock to avoid race conditions
        $lock = 'cache/' . md5($script) . '.lock';
        if (file_exists($lock) || !mkdir($lock)) {
            continue;
        }
        // start your php script
        include($script);
        // now update crons.php
        $crons[ $script ] += 86400; // tomorrow
        file_put_contents('cache/crons.php', '<?php return ' . var_export($crons, true) . '; ?' . '>');
        // finally delete lock
        rmdir($lock);
    }
}
header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
// image data
$im = imagecreate(1, 1);
$blk = imagecolorallocate($im, 0, 0, 0);
imagecolortransparent($im, $blk);
// image output
header("Content-type: image/gif");
imagegif($im);
// free memory
imagedestroy($im);
?>
Note: It will rarely be called at the exact second, because you do not know when a visitor will open your page (maybe 2 seconds later). So it makes sense not to set the new time for the next day by adding 86400 seconds; instead use mktime.
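For example, to schedule the next run for a fixed time the following day rather than "now + 86400 seconds" (a small illustrative sketch; 03:00 is an arbitrary choice):
// schedule the next run for 03:00 tomorrow instead of += 86400
$crons[$script] = mktime(3, 0, 0, (int) date('n'), (int) date('j') + 1);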
Option B
This is a little project I realized in the past that is similar to @r3wt's idea, but it covers race conditions and works at exact times, like a cron job would in a scheduler, without hitting the max_execution_time. And it works most of the time without the need to resurrect it (as is done through visitors in Option A).
Explanation:
The script writes a lock file (to avoid race conditions) for the 15th, 30th, 45th and 60th second of a minute:
// cron monitoring
foreach ($allowed_crons as $cron_second) {
    $cron_filename = 'cache/' . $cron_second . '_crnsec_lock';
    // start missing cron requests
    if (!file_exists($cron_filename)) {
        cron_request($cron_second);
    }
    // restart interrupted cron requests
    else if (filemtime($cron_filename) + 90 < time()) {
        rmdir($cron_filename);
        cron_request($cron_second);
    }
}
Every time a lock file is missing the script creates it and uses sleep() to reach the exact second:
if (file_exists($cron_filename) || !mkdir($cron_filename)) {
    return;
}
// add one minute if necessary
$date = new DateTime();
$cron_date = new DateTime();
$cron_date->setTime($cron_date->format('H'), $cron_date->format('i'), $sec);
$diff = $date->diff($cron_date);
if ($diff->invert && $diff->s > 0) {
    $cron_date->setTime($cron_date->format('H'), $cron_date->format('i') + 1, $sec);
}
$diff = $date->diff($cron_date);
// we use sleep() as time_sleep_until() starts one second too early (https://bugs.php.net/bug.php?id=69044)
sleep($diff->s);
After waking up again, it sends a request to itself through fopen():
// note: filter_input returns the unchanged SERVER var (http://php.net/manual/de/function.filter-input.php#99124)
// note: filter_var is insecure (http://www.d-mueller.de/blog/why-url-validation-with-filter_var-might-not-be-a-good-idea/)
$url = 'http' . isSecure() . '://' . filter_input(INPUT_SERVER, 'HTTP_HOST', FILTER_SANITIZE_URL) . htmlspecialchars($request_uri, ENT_QUOTES, 'UTF-8');
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 1.0
    )
));
// note: returns "failed to open stream: HTTP request failed!" because timeout < time_sleep_until
if ($fp = @fopen($url, 'r', false, $context)) {
    fclose($fp);
}
rmdir($cron_filename);
By that it calls itself infinitely and you are able to define different starting times:
if (isset($_GET['cron_second'])) {
    $cron_second = (int) $_GET['cron_second']; // cast so the strict comparisons below work
    if ($cron_second === 0 && !(date('i') % 15)) {
        mycron('every 15 minutes');
    }
    if ($cron_second === 0 && !(date('i') % 60)) {
        mycron('every hour');
    }
}
Note: It produces 5760 requests per day (4 per minute). Not much, but a cron job uses far fewer resources. If your max_execution_time is high enough you could change it to calling itself only once per minute (1440 requests/day).

I understand that this question is a bit old, but I stumbled on it a week ago with this very question, and the best and most secure option we found was using a Web Service.
Our context:
We have our system in both shared hosting and private clouds.
We need a script to be activated once a month (there are plans to create more schedules and to allow users to create some predetermined actions).
Our system provides access to many clients, so when anyone uses the system it calls a Web Service via Ajax and doesn't care about the response (after all, everything is logged in our database and must run without user interaction).
What we've done is:
1 - An Ajax call is made upon access to any major screen.
2 - The Web Service reads a schedule table in our database and calls whatever needs calling.
3 - To avoid many stacked Web Service calls we check the datetime, with an interval of 10 minutes, before actually performing any actions.
That's also a way to distribute the load, and the schedules don't affect the system's user interaction.
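A minimal sketch of what such a check could look like (the table layout, column names and connection details are assumptions, not the poster's actual schema):
<?php
// called via Ajax from any major screen; the caller ignores the response
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
foreach ($pdo->query("SELECT id, script, last_run FROM schedule") as $job) {
    // skip if the job ran within the last 10 minutes
    if (time() - strtotime($job['last_run']) < 600) {
        continue;
    }
    // claim the job first, so stacked calls don't run it twice
    $claim = $pdo->prepare("UPDATE schedule SET last_run = NOW() WHERE id = ? AND last_run = ?");
    $claim->execute([$job['id'], $job['last_run']]);
    if ($claim->rowCount() === 1) {
        include $job['script']; // run the scheduled action and log to the database
    }
}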


Long PHP script runs multiple times

I have a products database that synchronizes with product data every morning.
The process is very clear:
Get all products from database by query
Loop through all products, and get an XML from the other server by product_id
Update data from xml
Log the changes to file.
If I query a low amount of items, but limiting it to 500 random products for example, everything goes fine. But when I query all products, my script SOMETIMES goes on the fritz and starts looping multiple times. Hours later I still see my log file growing and products being added.
I checked everything I could think of, for example:
Are variables used twice without overwriting each other?
Does the function call itself?
Does it happen with a low amount of products too? No.
The script is called using a cronjob; are the settings OK? (Yes)
The reason that makes it especially weird is that it sometimes goes right, and sometimes it doesn't. Could this be some memory problem?
EDIT
wget -q -O /dev/null http://example.eu/xxxxx/cron.php?operation=sync
It's in webmin, called at a specific hour and minute.
Code is hundreds of lines long...
Thanks
You have:
max_execution_time disabled. Your script won't end until the process is complete, for as long as that takes.
memory_limit disabled. There is no limit to how much data can be stored in memory.
500 records were completed without issues. This indicates that the script completes its process before the next cronjob iteration. For example, if your cron runs every hour, then the 500 records are processed in less than an hour.
If you have a cronjob that is going to process a large amount of records, then consider adding a lock mechanism to the process. Only allow the script to run once, and start again when the previous process is complete.
You can create a script lock as part of a shell script before executing your PHP script. Or, if you don't have access to your server, you can use a database lock within the PHP script, something like this.
class ProductCronJob
{
    protected $lockValue;

    public function run()
    {
        // Obtain a lock
        if ($this->obtainLock()) {
            // Run your script if you have valid lock
            $this->syncProducts();
            // Release the lock on complete
            $this->releaseLock();
        }
    }

    protected function syncProducts()
    {
        // your long running script
    }

    protected function obtainLock()
    {
        $time = new \DateTime;
        $timestamp = $time->getTimestamp();
        $this->lockValue = $timestamp . '_syncProducts';
        $db = JFactory::getDbo();
        $lock = [
            'lock' => $this->lockValue,
            'timemodified' => $timestamp
        ];
        // lock = '0' indicates that the cronjob is not active.
        // Update #__cronlock set lock = '', timemodified = '' where name = 'syncProducts' and lock = '0'
        // $result = $db->updateObject('#__cronlock', $lock, 'id');
        // $lock = SELECT * FROM #__cronlock where name = 'syncProducts';
        if ($lock !== false && (string)$lock !== (string)$this->lockValue) {
            // Currently there is an active process - can't start a new one
            return false;
            // You can return false as above or add extra logic as below
            // Check the current lock age - how long it has been running for
            // $diff = $timestamp - $lock['timemodified'];
            // if ($diff >= 25200) {
            //     // The current script has been active for 7 hours.
            //     // You can change 25200 to any number of seconds you want.
            //     // Here you can send a notification email to the site administrator.
            //     // ...
            // }
        }
        return true;
    }

    protected function releaseLock()
    {
        // Update #__cronlock set lock = '0' where name = 'syncProducts'
    }
}
Your script is running for quite some time (~45 min) and wget thinks it's "timing out" since you don't return any data. By default wget has a 900 s read timeout and a retry count of 20. So first you should probably change your wget command to prevent this:
wget --tries=0 --timeout=0 -q -O /dev/null http://example.eu/xxxxx/cron.php?operation=sync
Now removing the timeout could lead to other issues, so instead you could send data from your script (and flush, to force the web server to send it) to make sure wget doesn't think the script "timed out"; something every 1000 loops or so. Think of this as a progress bar...
Just keep in mind that you will hit an issue when the run time gets close to your period, as 2 crons will run in parallel. You should optimize your process and/or maybe add a lock mechanism.
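A small sketch of that "progress output" idea, reusing the loop variable from the question's code below (the 1000-loop interval is arbitrary):
$i = 0;
foreach ($productsToSync as $product) {
    // ... sync one product ...
    if (++$i % 1000 === 0) {
        echo '.';   // any output will do; it just keeps the HTTP connection alive
        flush();    // push it through PHP's output buffers to the web server
    }
}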
I see two possibilities:
- cron calls the script much more often
- the script somehow takes too long
You can try to estimate the time a single iteration of the loop takes. This can be done with time(). Perhaps the result is surprising, perhaps not. You can probably get the number of results too. Multiply the two and you will have an estimate of how long the process should take.
$productsToSync = $db->loadObjectList();
and
foreach ($productsToSync AS $product) {
It seems you load every result into an array. This won't work for huge databases, because obviously a million rows won't fit in memory. You should just get one result at a time. With MySQL there are methods that fetch one row at a time from the resource; I hope yours allows the same.
I also see you execute another query on each iteration of the loop. This is something I try to avoid. Perhaps you can move this to after the first query has ended and do all of those in one big query? On the other hand, that may bite my first suggestion.
Also, if something goes wrong, try to be paranoid when debugging. Measure as much as you can. Time as much as you can when it's a performance issue. Put the timings in your log file. Usually you will find the bottleneck.
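A hedged sketch of the "one row at a time" suggestion above using mysqli (the query and connection details are placeholders, not the asker's code):
$mysqli = new mysqli('localhost', 'user', 'pass', 'shop');
// MYSQLI_USE_RESULT streams rows instead of buffering the whole result set in memory
$result = $mysqli->query('SELECT * FROM products', MYSQLI_USE_RESULT);
while ($product = $result->fetch_assoc()) {
    // ... fetch the XML for this product and update it ...
}
$result->free();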
I solved the problem myself. Thanks for all the replies!
My MySQL timed out, that was the problem. As soon as I added:
ini_set('mysql.connect_timeout', 14400);
ini_set('default_socket_timeout', 14400);
to my script, the problem stopped. I really hope this helps someone. I'll upvote all the locking answers, because those were very helpful!

PHP ends script execution automatically

Firstly I want to give you the basic idea of what I am trying to do:
I'm trying to make a free web hosting service do some work for me. I've created one PHP page and a MySQL db. The basic idea behind my PHP page is a while loop with a $shutdown condition, and a counter inside the while loop to track whether the code is running or not:
<?php
/*
Connect to database etc. etc
*/
$shutdown = false;
// Main loop
while (!$shutdown)
{
    // Check for user shutdown request
    $strq = "SELECT * FROM TB_Shutdown;";
    $result = mysql_query($strq);
    $row = mysql_fetch_array($result);
    if ($row[0] == "true")
    {
        $shutdown = true; // I know this statement is useless but nevermind
        break;
    }
    // Increase counter
    $strq = "SELECT * FROM TB_Counter;";
    $result = mysql_query($strq);
    $row = mysql_fetch_array($result);
    if (intval($row[0]) == 60)
    {
        // Reset counter
        $strq = "UPDATE TB_Counter SET value = 0";
        $result = mysql_query($strq);
        /*
        I have some code to do some work here, it's not important, just curl stuff
        */
    }
    else
    {
        // Increase counter
        $strq = "UPDATE TB_Counter SET value = " . (intval($row[0]) + 1);
        $result = mysql_query($strq);
    }
    /*
    I have some code to do some work here, it's not important, just curl stuff
    */
    // Sleep
    sleep(1);
}
?>
And I have a check.php which returns the value from TB_Counter.
The problem is: I'm tracking the TB_Counter table every second. It stops after a while. If I close my web browser (which I called my main while-loop PHP page from) it stops after about 2 minutes. If not, after 5-7 minutes I get a "connection has been reset" error in the browser and the loop stops.
What should I do to make my loop last forever?
You need to allow PHP to execute completely. There is an option in the php.ini file:
max_execution_time = 30
This sets the maximum time in seconds a script is allowed to run
before it is terminated by the parser. This helps prevent poorly
written scripts from tying up the server. The default setting is 30.
When running PHP from the command line the default setting is 0.
The function set_time_limit:
Set the number of seconds a script is allowed to run. If this is
reached, the script returns a fatal error. The default limit is 30
seconds or, if it exists, the max_execution_time value defined in the
php.ini.
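For the asker's loop page, a minimal sketch of how this could be applied at the top of the script (combined here with ignore_user_abort, which other answers in this thread also mention):
set_time_limit(0);        // no execution time limit for this script
ignore_user_abort(true);  // keep running even if the browser tab is closed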
To check if PHP is running in safe mode, you can use this:
echo $phpinfo['PHP Core']['safe_mode'][0]
If it is going to be a huge process, you can consider running it via cron as a cronjob. A small explanation of it:
Cron is very simply a Linux module that allows you to run commands at predetermined times or intervals. In Windows, it’s called Scheduled Tasks. The name Cron is in fact derived from the same word from which we get the word chronology, which means order of time.
Using Cron, a developer can automate such tasks as mailing ezines that might be better sent during an off-hour, automatically updating stats, or the regeneration of static pages from dynamic sources. Systems administrators and Web hosts might want to generate quota reports on their clients, complete automatic credit card billing, or similar tasks. Cron has something for everyone!
Read more about Cron
You could use the PHP function set_time_limit().
You should not handle this from a browser. Running a cron every minute, doing the checks you need, would be a better solution.
Next to that, why would you update every second? Just write down a timestamp so you know when a request was made.
Making something run forever is not doable. More important is to ensure that your business process keeps running. So maybe it would be wise to put your business case here; you seem to need to count seconds and do something within a minute, but it's not totally clear. So what do you need to do?

PHP parent/child timing communication

I am working on an app which will allow me to login to a remote telnet server and monitor statistics. The problem is that the telnet server has a minimum refresh rate of 10 seconds, and the refresh rate varies slightly depending on server load (the report itself has this refresh rate, not the server). I need the front-end of this system to refresh more often than every 10 seconds (5 seconds minimum). I have somewhat been able to accomplish this by forking, but eventually the timings synchronize (it's a gradual process which I am assuming is due to remote server loads).
Child code (I have removed quite a bit of code relating to how the app logs in and accesses the report, but it is basically key-presses through 7 menus to get to the page which refreshes; I can include it if needed):
// 1 - Telnet Handshaking and initial screen load
$sHandshaking = '<IAC><WILL><OPT_BINARY><IAC><DO><OPT_BINARY><IAC><WILL><OPT_TSPEED><IAC><SB><OPT_TSPEED><IS>38400,38400<IAC><SE>';
// 2-4 - Keypresses to get to report (login, menus, etc. - Removed)
// Loop and cache
while(1){
    // 4 - View Report
    $oVT100->listen();
    $reference = $temp;
    $screen = $oVT100->getScreenFull();
    if(empty($screen)){
        Failed:
        echo "FAILED";
        file_put_contents($outFile,array('html'=>"<div class=\"header\"><font color='red'>Why are things always breaking?!</font></div>"));
        goto restartIt; // If screen does not contain valid report, assume logout and start at top
    }else{
        $screen = parseReport($oVT100->getScreenFull());
        $temp = json_decode($screen);
        // Check old report file, if different save, else sleep a bit
        $currentFile = file_get_contents($outFile);
        if($screen !== $currentFile){
            file_put_contents($outFile,$screen);
            sleep(5);
        }else{
            sleep(1);
            //usleep(500000);
        }
    }
}
As you can see from the child's code above, it logs in and then loops the report indefinitely. It writes the screen out to a cache file (if it is different from the existing file). As a dirty workaround for the refresh issue, I wrote a parent to fork children:
$pids = array();
for($i = 0; $i < 2; $i++) {
    sleep(5);
    $pids[$i] = pcntl_fork();
    if(!$pids[$i]) {
        require_once(dirname(__FILE__).'/checkQueue.php');
        exit();
    }
}
for($i = 0; $i < 2; $i++) {
    pcntl_waitpid($pids[$i], $status, WUNTRACED);
}
I immediately noticed the timing offset and had to spawn three children instead of two to stay under 5 seconds, but the timing worked for a few days. Eventually, all the children were updating the file at the same time and I had to restart the parent. What would be the best solution to monitor the child processes so that I maintain a refresh interval of less than five seconds?
[EDIT]
This is not a running log, it is a file which contains current call statistics for a call center. The data in the cache file needs to be no less than 5 seconds old at any time, but eventually all of the children sync and write to the log at nearly the same time. The issue isn't really with the local file, it is inconsistent remote server response times which eventually leads to the child processes running getting their report at the same time.
I'm hoping there's a better solution, but here's how I solved it - I feel kind of stupid for not having thought of this in the beginning:
$currentFile = file_get_contents($outFile);
if($screen !== $currentFile){
    if(time()-filemtime($outFile) > 2){
        file_put_contents($outFile,$screen);
        sleep(5);
    }else{
        echo "Failed, sleeping\r";
        sleep(2);
    }
}else{
    sleep(1);
    //usleep(500000);
}
Basically, I check the write time of the cache file, and if it was written less than 2 seconds ago, I sleep and re-execute the loop. I was somewhat hoping there would be a solution contained within the parent app, but I guess this works...

Run PHP Task Asynchronously

I work on a somewhat large web application, and the backend is mostly in PHP. There are several places in the code where I need to complete some task, but I don't want to make the user wait for the result. For example, when creating a new account, I need to send them a welcome email. But when they hit the 'Finish Registration' button, I don't want to make them wait until the email is actually sent, I just want to start the process, and return a message to the user right away.
Up until now, in some places I've been using what feels like a hack with exec(). Basically doing things like:
exec("doTask.php $arg1 $arg2 $arg3 >/dev/null 2>&1 &");
Which appears to work, but I'm wondering if there's a better way. I'm considering writing a system which queues up tasks in a MySQL table, and a separate long-running PHP script that queries that table once a second, and executes any new tasks it finds. This would also have the advantage of letting me split the tasks among several worker machines in the future if I needed to.
Am I re-inventing the wheel? Is there a better solution than the exec() hack or the MySQL queue?
I've used the queuing approach, and it works well as you can defer that processing until your server load is idle, letting you manage your load quite effectively if you can partition off "tasks which aren't urgent" easily.
Rolling your own isn't too tricky, here's a few other options to check out:
GearMan - this answer was written in 2009, and since then GearMan looks like a popular option, see comments below.
ActiveMQ if you want a full blown open source message queue.
ZeroMQ - this is a pretty cool socket library which makes it easy to write distributed code without having to worry too much about the socket programming itself. You could use it for message queuing on a single host - you would simply have your webapp push something to a queue that a continuously running console app would consume at the next suitable opportunity
beanstalkd - only found this one while writing this answer, but looks interesting
dropr is a PHP based message queue project, but hasn't been actively maintained since Sep 2010
php-enqueue is a recently (2017) maintained wrapper around a variety of queue systems
Finally, a blog post about using memcached for message queuing
Another, perhaps simpler, approach is to use ignore_user_abort - once you've sent the page to the user, you can do your final processing without fear of premature termination, though this does have the effect of appearing to prolong the page load from the user perspective.
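A rough sketch of that approach for the registration example (the response text, the $newUserAddress variable and the mail() call are illustrative placeholders, not the poster's code):
ignore_user_abort(true);                      // keep running after the client disconnects
ob_start();
echo 'Registration complete';                 // the response the user actually sees
header('Connection: close');
header('Content-Length: ' . ob_get_length());
ob_end_flush();
flush();                                      // the browser is done waiting at this point
mail($newUserAddress, 'Welcome!', 'Thanks for registering.'); // the slow part runs afterwards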
When you just want to execute one or several HTTP requests without having to wait for the response, there is a simple PHP solution, as well.
In the calling script:
$socketcon = fsockopen($host, 80, $errno, $errstr, 10);
if($socketcon) {
$socketdata = "GET $remote_house/script.php?parameters=... HTTP 1.1\r\nHost: $host\r\nConnection: Close\r\n\r\n";
fwrite($socketcon, $socketdata);
fclose($socketcon);
}
// repeat this with different parameters as often as you like
On the called script.php, you can invoke these PHP functions in the first lines:
ignore_user_abort(true);
set_time_limit(0);
This causes the script to continue running without time limit when the HTTP connection is closed.
Another way to fork processes is via curl. You can set up your internal tasks as a webservice. For example:
http://domain/tasks/t1
http://domain/tasks/t2
Then in your user accessed scripts make calls to the service:
$service->addTask('t1', $data); // post data to URL via curl
Your service can keep track of the queue of tasks with MySQL or whatever you like; the point is that it's all wrapped up within the service and your script is just consuming URLs. This frees you up to move the service to another machine/server if necessary (i.e. easily scalable).
Adding HTTP authorization or a custom authorization scheme (like Amazon's web services) lets you open up your tasks to be consumed by other people/services (if you want), and you could take it further and add a monitoring service on top to keep track of queue and task status, for example:
http://domain/queue?task=t1
http://domain/queue?task=t2
http://domain/queue/t1/100931
It does take a bit of set-up work but there are a lot of benefits.
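A minimal sketch of what such an addTask() call might do under the hood, using curl; the class name, endpoint and payload are illustrative assumptions, not an existing API:
class TaskService {
    public function addTask(string $task, array $data): void {
        // fire-and-mostly-forget POST to the internal task webservice
        $ch = curl_init("http://domain/tasks/" . $task);
        curl_setopt($ch, CURLOPT_POST, true);
        curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT_MS, 500); // don't make the user wait long
        curl_exec($ch);
        curl_close($ch);
    }
}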
If it is just a question of offloading expensive tasks, and php-fpm is supported, why not use the fastcgi_finish_request() function?
This function flushes all response data to the client and finishes the request. This allows time-consuming tasks to be performed without leaving the connection to the client open.
It's not true asynchronicity, but you use it this way:
Run all your main code first.
Execute fastcgi_finish_request().
Do all the heavy stuff.
Once again, php-fpm is needed.
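As a short sketch of that flow (the echo and mail() calls are placeholders):
echo 'Registration complete';         // 1. main response
fastcgi_finish_request();             // 2. response is sent and the connection closes (php-fpm only)
mail($address, 'Welcome!', $body);    // 3. the heavy/slow work happens afterwards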
I've used Beanstalkd for one project, and planned to again. I've found it to be an excellent way to run asynchronous processes.
A couple of things I've done with it are:
Image resizing - and with a lightly loaded queue passing off to a CLI-based PHP script, resizing large (2mb+) images worked just fine, but trying to resize the same images within a mod_php instance was regularly running into memory-space issues (I limited the PHP process to 32MB, and the resizing took more than that)
near-future checks - beanstalkd has delays available to it (make this job available to run only after X seconds) - so I can fire off 5 or 10 checks for an event, a little later in time
I wrote a Zend-Framework based system to decode a 'nice' url, so for example, to resize an image it would call QueueTask('/image/resize/filename/example.jpg'). The URL was first decoded to an array(module,controller,action,parameters), and then converted to JSON for injection to the queue itself.
A long running cli script then picked up the job from the queue, ran it (via Zend_Router_Simple), and if required, put information into memcached for the website PHP to pick up as required when it was done.
One wrinkle I did also put in was that the cli-script only ran for 50 loops before restarting, but if it did want to restart as planned, it would do so immediately (being run via a bash-script). If there was a problem and I did exit(0) (the default value for exit; or die();) it would first pause for a couple of seconds.
Here is a simple class I coded for my web application. It allows for forking PHP scripts and other scripts. Works on UNIX and Windows.
class BackgroundProcess {
    static function open($exec, $cwd = null) {
        if (!is_string($cwd)) {
            $cwd = @getcwd();
        }
        @chdir($cwd);
        if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
            $WshShell = new COM("WScript.Shell");
            $WshShell->CurrentDirectory = str_replace('/', '\\', $cwd);
            $WshShell->Run($exec, 0, false);
        } else {
            exec($exec . " > /dev/null 2>&1 &");
        }
    }

    static function fork($phpScript, $phpExec = null) {
        $cwd = dirname($phpScript);
        @putenv("PHP_FORCECLI=true");
        if (!is_string($phpExec) || !file_exists($phpExec)) {
            if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
                $phpExec = str_replace('/', '\\', dirname(ini_get('extension_dir'))) . '\php.exe';
                if (@file_exists($phpExec)) {
                    BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
                }
            } else {
                $phpExec = exec("which php-cli");
                if ($phpExec[0] != '/') {
                    $phpExec = exec("which php");
                }
                if ($phpExec[0] == '/') {
                    BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
                }
            }
        } else {
            if (strtoupper(substr(PHP_OS, 0, 3)) == 'WIN') {
                $phpExec = str_replace('/', '\\', $phpExec);
            }
            BackgroundProcess::open(escapeshellarg($phpExec) . " " . escapeshellarg($phpScript), $cwd);
        }
    }
}
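Example usage of the class above (the script path is illustrative):
// fire off a PHP script in the background and return immediately
BackgroundProcess::fork('/var/www/app/doTask.php');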
PHP HAS multithreading, it's just not enabled by default; there is an extension called pthreads which does exactly that.
You'll need PHP compiled with ZTS though (thread safe).
Links:
Examples
Another tutorial
pthreads PECL Extension
UPDATE: since PHP 7.2 the parallel extension comes into play
Tutorial/Example
reference manual
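For completeness, a minimal sketch of a pthreads worker (it assumes the pthreads extension on a ZTS build; the task body is illustrative):
class EmailTask extends Thread {
    private $address;
    public function __construct($address) {
        $this->address = $address;
    }
    public function run() {
        // runs in its own thread
        mail($this->address, 'Welcome!', 'Thanks for registering.');
    }
}
$task = new EmailTask('user@example.com');
$task->start(); // returns immediately; the thread does the work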
This is the same method I have been using for a couple of years now and I haven't seen or found anything better. As people have said, PHP is single threaded, so there isn't much else you can do.
I have actually added one extra level to this and that's getting and storing the process id. This allows me to redirect to another page and have the user sit on that page, using AJAX to check if the process is complete (process id no longer exists). This is useful for cases where the length of the script would cause the browser to timeout, but the user needs to wait for that script to complete before the next step. (In my case it was processing large ZIP files with CSV like files that add up to 30 000 records to the database after which the user needs to confirm some information.)
I have also used a similar process for report generation. I'm not sure I'd use "background processing" for something such as an email, unless there is a real problem with a slow SMTP. Instead I might use a table as a queue and then have a process that runs every minute to send the emails within the queue. You would need to be wary of sending emails twice or other similar problems. I would consider a similar queueing process for other tasks as well.
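A hedged sketch of the pid-tracking idea from earlier in this answer (the script name is illustrative; the existence check needs the posix extension):
// launch the task in the background and capture its pid ($! in sh)
$pid = (int) shell_exec("php doTask.php > /dev/null 2>&1 & echo $!");
// store $pid (session, database, ...); an AJAX-polled endpoint can then check:
$stillRunning = posix_kill($pid, 0); // signal 0 only tests whether the process still exists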
It's a great idea to use cURL as suggested by rojoca.
Here is an example. You can monitor text.txt while the script is running in the background:
<?php
function doCurl($begin)
{
    echo "Do curl<br />\n";
    $url = 'http://'.$_SERVER['SERVER_NAME'].$_SERVER['REQUEST_URI'];
    $url = preg_replace('/\?.*/', '', $url);
    $url .= '?begin='.$begin;
    echo 'URL: '.$url.'<br>';
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($ch);
    echo 'Result: '.$result.'<br>';
    curl_close($ch);
}

if (empty($_GET['begin'])) {
    doCurl(1);
}
else {
    while (ob_get_level())
        ob_end_clean();
    header('Connection: close');
    ignore_user_abort(true);
    ob_start();
    echo 'Connection Closed';
    $size = ob_get_length();
    header("Content-Length: $size");
    ob_end_flush();
    flush();

    $begin = $_GET['begin'];
    $fp = fopen("text.txt", "w");
    fprintf($fp, "begin: %d\n", $begin);
    for ($i = 0; $i < 15; $i++) {
        sleep(1);
        fprintf($fp, "i: %d\n", $i);
    }
    fclose($fp);

    if ($begin < 10)
        doCurl($begin + 1);
}
?>
There is a PHP extension called Swoole.
Although it might not be enabled, it is available on my hosting and can be enabled at the click of a button.
Worth checking out. I haven't had time to use it yet, as I was searching here for info when I stumbled across it and thought it worth sharing.
Unfortunately PHP does not have any kind of native threading capabilities. So I think in this case you have no choice but to use some kind of custom code to do what you want to do.
If you search around the net for PHP threading stuff, some people have come up with ways to simulate threads on PHP.
If you set the Content-Length HTTP header in your "Thank You For Registering" response, then the browser should close the connection after the specified number of bytes are received. This leaves the server side process running (assuming that ignore_user_abort is set) so it can finish working without making the end user wait.
Of course you will need to calculate the size of your response content before rendering the headers, but that's pretty easy for short responses (write output to a string, call strlen(), call header(), render string).
This approach has the advantage of not forcing you to manage a "front end" queue, and although you may need to do some work on the back end to prevent racing HTTP child processes from stepping on each other, that's something you needed to do already, anyway.
If you don't want the full-blown ActiveMQ, I recommend considering RabbitMQ. RabbitMQ is lightweight messaging that uses the AMQP standard.
I recommend also looking into php-amqplib - a popular AMQP client library for accessing AMQP-based message brokers.
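A small sketch of publishing a task with php-amqplib (queue name, credentials and payload are placeholders):
use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();
$channel->queue_declare('tasks', false, true, false, false); // durable queue
$channel->basic_publish(new AMQPMessage(json_encode(['type' => 'welcome_email', 'user_id' => 42])), '', 'tasks');
$channel->close();
$connection->close();
// a separate worker process consumes 'tasks' via $channel->basic_consume(...)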
Spawning new processes on the server using exec(), or directly on another server using curl, doesn't scale all that well. If you go for exec you are basically filling your server with long-running processes which could be handled by other, non-web-facing servers, and using curl ties up another server unless you build in some sort of load balancing.
I have used Gearman in a few situations and I find it better for this sort of use case. I can use a single job queue server to handle queuing of all the jobs needing to be done by the server and spin up worker servers, each of which can run as many instances of the worker process as needed, and scale the number of worker servers up as needed and spin them down when not needed. It also lets me shut down the worker processes entirely when needed, and queues the jobs up until the workers come back online.
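A brief sketch of that client/worker split with the Gearman PHP extension (the function name and payload are illustrative):
// web request side: queue the job and return immediately
$client = new GearmanClient();
$client->addServer(); // defaults to 127.0.0.1:4730
$client->doBackground('send_welcome_email', json_encode(['user_id' => 42]));

// worker side: a long-running CLI process, possibly on another machine
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('send_welcome_email', function (GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    // ... send the email for $data['user_id'] ...
});
while ($worker->work());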
I think you should try this technique. It will help you call as many pages as you like; all pages will run at once, independently, without waiting for each page's response (asynchronously).
cornjobpage.php //mainpage
<?php
post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue");
//post_async("http://localhost/projectname/testpage.php", "Keywordname=testValue2");
//post_async("http://localhost/projectname/otherpage.php", "Keywordname=anyValue");
//call as many as pages you like all pages will run at once independently without waiting for each page response as asynchronous.
?>
<?php
/*
* Executes a PHP page asynchronously so the current page does not have to wait for it to finish running.
*
*/
function post_async($url,$params)
{
    $post_string = $params;
    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    $out  = "GET ".$parts['path']."?$post_string"." HTTP/1.1\r\n"; // you can use POST instead of GET if you like
    $out .= "Host: ".$parts['host']."\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: ".strlen($post_string)."\r\n";
    $out .= "Connection: Close\r\n\r\n";
    fwrite($fp, $out);
    fclose($fp);
}
?>
testpage.php
<?php
echo $_REQUEST["Keywordname"]; // case1 Output > testValue
?>
PS: if you want to send URL parameters in a loop, then follow this answer: https://stackoverflow.com/a/41225209/6295712
PHP is a single-threaded language, so there is no official way to start an asynchronous process with it other than using exec or popen. There is a blog post about that here. Your idea for a queue in MySQL is a good idea as well.
Your specific requirement here is for sending an email to the user. I'm curious as to why you are trying to do that asynchronously since sending an email is a pretty trivial and quick task to perform. I suppose if you are sending tons of email and your ISP is blocking you on suspicion of spamming, that might be one reason to queue, but other than that I can't think of any reason to do it this way.
