I'm running a foreach loop in PHP which takes longer to execute than my maximum execution time of 30 seconds. The loop sends individual emails to users.
Instead of running cron jobs every 30 seconds and creating queues for records, is it unethical to just restart the timeout counter inside the loop using set_time_limit(30)?
$i = 0; // start count from 0
foreach ($users as $user):
    // limit emails sent
    if (++$i > 100) break; // stop the loop once 100 emails have been sent
    set_time_limit(30);    // restart timeout counter
    send_email($user);     // send email to user
endforeach;
I'm new to this, but with the code above I think I'm giving each email 30 seconds to complete, while also breaking the loop once 100 emails have been sent so the script doesn't run forever.
Update: set_time_limit(0) goes against my host's TOS. I had assumed that restarting the timeout counter would restart the whole script, much like cron would.
Running set_time_limit() inside a foreach loop both creates and solves a few problems at the same time.
I see the greatest pro of this solution in making sure that no single request takes more than 30 seconds (and when you have a full queue I believe it's even desirable to cut off every script that runs that long).
The problem it brings is that not all jobs will necessarily be executed. You may hit a problem in the middle of the job queue and everything after that point will fail.
I would go with this:
# crontab
0,30 * * * * php /path/to/your/script.php
And I would keep using your script as it is.
If you need to execute the jobs as fast as possible, I'd create a bash script that keeps executing the PHP script (without any timeout between runs) until it finishes with exit(0) (all jobs executed successfully), or until it prints "Done!" or whatever you like.
Example bash script:
#!/bin/bash
# Note that false sets $? to 1
false
while [ $? -ne 0 ]; do
    php /path/to/your/script.php >> log.log
done
And if you need to make sure no two instances will run at the same time, google one of these (just off the top of my head):
.pid file
mysql LOCK TABLE
PS: If you use one of these methods, make sure that your script can cope if it crashes in the middle.
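For example, a minimal sketch of the lock-file idea using the flock(1) utility around the bash wrapper above (the lock file path /tmp/mailer.lock is only an illustrative choice):
#!/bin/bash
# Run the PHP script only if no other instance holds the lock;
# -n makes flock give up immediately instead of waiting.
flock -n /tmp/mailer.lock -c "php /path/to/your/script.php >> log.log" \
    || echo "Another instance is already running, skipping."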
Just disable the time limit altogether at the start of your script:
set_time_limit(0);
If your host's TOS preclude unlimited scripts, they're almost certainly going to object to you resetting the timer to achieve the same effect. The only real choices would be to send the emails in parallel, or to move to a different host.
So, I'm requesting data from an API.
So far, my API key is limited to:
10 requests every 10 seconds
500 requests every 10 minutes
Basically, I want to request a specific value from every game the user has played.
That are, for example, about 300 games.
So I have to make 300 requests with my PHP. How can I slow them down to observe the rate limit?
(It can take time, site does not have to be fast)
I tried sleep(), which resulted in my script crashing. Any other ways to do this?
I suggest setting up a cron job that executes every minute, or even better use Laravel scheduling rather than using sleep or usleep to imitate a cron.
Here is some information on both:
https://laravel.com/docs/5.1/scheduling
http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
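As a rough illustration of the Laravel route, assuming a hypothetical artisan command games:fetch that requests the next batch of games within the rate limit, the scheduler entry could look something like this (a sketch, not a drop-in solution):
<?php
// Fragment of app/Console/Kernel.php (Laravel 5.1 task scheduling).
// 'games:fetch' is an assumed artisan command that requests the next
// batch of games within the API rate limit and stores the results.
namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected function schedule(Schedule $schedule)
    {
        $schedule->command('games:fetch')
                 ->everyMinute()          // run once per minute
                 ->withoutOverlapping();  // skip if the previous run hasn't finished
    }
}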
This sounds like a perfect use for the set_time_limit() function. This function allows you to specify how long your script can execute, in seconds. For example, if you say set_time_limit(45); at the beginning of your script, then the script will run for a total of 45 seconds. One great feature of this function is that you can allow your script to execute indefinitely (no time limit) by saying: set_time_limit(0);.
You may want to write your script using the following general structure:
<?php
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);
// Define constant for how much time must pass between batches of connections:
define('TIME_LIMIT', 10); // Seconds between batches of API requests
$tLast = 0;
while( /* Some condition to check if there are still API connections that need to be made */ ){
    if( time() >= ($tLast + TIME_LIMIT) ){ // Check whether TIME_LIMIT seconds have passed since the last connection batch
        // TIME_LIMIT seconds have passed since the last batch of connections
        /* Use cURL multi to make 10 asynchronous connections to the API */
        // Once all of those connections are made and processed, save the current time:
        $tLast = time();
    }else{
        // TIME_LIMIT seconds have not yet passed
        // Calculate the number of seconds remaining until TIME_LIMIT seconds have passed:
        $timeDifference = $tLast + TIME_LIMIT - time();
        sleep( $timeDifference ); // Sleep for the calculated number of seconds
    }
} // END WHILE-LOOP
/* Do any additional processing, computing, and output */
?>
Note: In this code snippet, I am also using the ignore_user_abort() function. As noted in the comment on the code, this function just allows the script to ignore a user abort, so if the user closes the browser (or connection) while your script is still executing, the script will continue retrieving and processing the data from the API anyway. You may want to disable that in your implementation, but I will leave that up to you.
Obviously this code is very incomplete, but it should give you a decent understanding of how you could possibly implement a solution for this problem.
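As a hedged sketch of the "cURL multi" placeholder above — the endpoint URL and the $gameIds array are assumptions, not part of the original answer — one batch of up to 10 parallel requests could look roughly like this:
<?php
// Sketch: fire up to 10 API requests in parallel with curl_multi.
// $gameIds and the endpoint URL are illustrative placeholders.
$batch = array_splice($gameIds, 0, 10); // take the next 10 ids
$mh = curl_multi_init();
$handles = array();
$responses = array();
foreach ($batch as $id) {
    $ch = curl_init("https://api.example.com/games/" . urlencode($id));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$id] = $ch;
}
// Run all handles until every transfer is finished.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);
foreach ($handles as $id => $ch) {
    $responses[$id] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);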
Don't slow the individual requests down.
Instead, you'd typically use something like Redis to keep track of requests per IP or per user. Once the limit is hit for a time period, reject (with an HTTP 429 status code, perhaps) until the count resets.
http://redis.io/commands/INCR coupled with http://redis.io/commands/expire would easily do the trick.
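A minimal sketch of that INCR + EXPIRE pattern, assuming the phpredis extension and the 10-requests-per-10-seconds window from the question (the key name and $userId are illustrative):
<?php
// Sketch of a fixed-window rate limiter with Redis INCR + EXPIRE.
// Requires the phpredis extension; key name and limits are illustrative.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$key   = 'ratelimit:' . $userId . ':' . floor(time() / 10); // 10-second window
$count = $redis->incr($key);   // atomically bump the counter for this window
if ($count === 1) {
    $redis->expire($key, 10);  // let the key die after the window passes
}
if ($count > 10) {
    http_response_code(429);   // Too Many Requests
    exit('Rate limit exceeded, try again shortly.');
}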
I've got a CronJob scheduled to run every 7 minutes. The script runs a loop over a number of users and sends email via SMTP or does some API calls via curl.
Thus, most of the execution time is apparently spent outside the realm tracked by max_execution_time on Linux. I was experiencing hangs in my script that were always fixed by restarting it (I'm still looking for the cause of the hangs, but haven't found it so far).
Because the script sometimes still ran for 30 minutes even with set_time_limit set to 6 minutes, I now check microtime(true) after each round of the loop and break out of it if it has been running for more than 6 minutes.
Still, the script sometimes runs for 37 minutes (even though I can see that emails, each of which maps to one round of the loop, are still going out).
The only trick I have left is pcntl_fork. I am reluctant to employ it because of platform dependence, and because I figured that a loop plus microtime(true) should also track time spent outside the process, and I'd like to understand why that isn't the case here.
The max_execution_time is used for limiting script execution time. However this does not affect system calls. See the manpage.
You can also use set_time_limit() to properly set the maximum amount of time a script can run. This only changes the time_limit for the script in scope. Have you tried this as well?
You can find another question that has already been asked which may help slightly at this link.
I ended up going outside PHP to get the time; this is simple, works, and doesn't require me to learn to deal with the pcntl functions.
$time_comp = microtime(true); // save start time

// Run this function to see how much time has passed (according to MySQL)
function check_time_against_mysql($dbh, $time_comp)
{
    $time = $dbh->prepare("SELECT UNIX_TIMESTAMP() - :created AS time_in_seconds");
    $time->bindValue(":created", $time_comp);
    $time->execute() or die("fail time");
    $time_passed = $time->fetch(PDO::FETCH_ASSOC);
    return floatval($time_passed['time_in_seconds']);
}
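For example, in the sending loop this helper might be used roughly like this (the 360-second budget and the send_email() call are just illustrative placeholders):
// Sketch: break out of the loop once roughly 6 minutes have passed.
foreach ($users as $user) {
    if (check_time_against_mysql($dbh, $time_comp) > 360) {
        break; // stop here; the next cron run picks up where this one left off
    }
    send_email($user); // placeholder for the real SMTP / API work
}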
I'll be using the social networking software, Elgg, for an organization that needs to send mass emails to specific groups when they need to. The number of emails can range from 10-1000 depending on the group. Web host only allows 500 emails per hour, so I need to throttle the script to send one email every 8 seconds.
I'm using PHPmailer with Elgg. PHPmailer says that I should use these two scripts (code below) in conjunction with each other in order to throttle the mailing. I know how I'm going to use the code in the mailing script, I'm just unsure about a couple things.
1) I don't really understand the purpose of the safe mode check
2) After looking up set_time_limit, it looks like I should set this to an amount of time that allows all potential emails to be sent, whether it's 10 or 1000? Or is it a maximum of 30 seconds per loop iteration in case it needs to time out?
3) How should I set this to get what I need?
Links to PHPmailer describing code:
http://phpmailer.worxware.com/index.php?pg=tip_ext
http://phpmailer.worxware.com/index.php?pg=tip_pause
<?php
/* The following code snippet will set the maximum execution time
 * of your script to 300 seconds (5 minutes)
 * Note: set_time_limit() does not work with safe_mode enabled
 */
$safeMode = ( @ini_get("safe_mode") == 'On' || @ini_get("safe_mode") === 1 ) ? TRUE : FALSE;
if ( $safeMode === FALSE ) {
    set_time_limit(300); // Sets maximum execution time to 5 minutes (300 seconds)
    // ini_set("max_execution_time", "300"); // this does the same as "set_time_limit(300)"
}
echo "max_execution_time " . ini_get('max_execution_time') . "<br>";
/* If you are using a loop to execute your mailing list (example: from a database),
 * put the command in the loop
 */
while (1==1) {
    set_time_limit(30); // sets (or resets) maximum execution time to 30 seconds
    // .... put code to process in here
    if (1!=1) {
        break;
    }
}
?>
and
<?php
/* Note: set_time_limit() does not work with safe_mode enabled */
while (1==1) {
    set_time_limit(30); // sets (or resets) maximum execution time to 30 seconds
    // .... put code to process in here
    usleep(1000000); // sleep for 1 million microseconds - will not work with Windows servers / PHP4
    // sleep(1); // sleep for 1 second (use with Windows servers / PHP4)
    if (1!=1) {
        break;
    }
}
?>
Safe mode is deprecated as of PHP 5.3 and removed in PHP 5.4, so if your install is relatively recent, it's a moot point: http://php.net/manual/en/ini.sect.safe-mode.php#ini.safe-mode
Doing a set_time_limit() will reset the counter, so as long as your code reaches the set_time_limit() call in less time than the limit was set previously (e.g. gets there in 29 seconds, leaving 1 second on clock), the code will reset the timer and get another 30 seconds. However, since you don't want your code to be racy, you should simply disable the time limit entirely.
Personally, I wouldn't dump out one email every 8 seconds. I'd blast out the 500 we're allowed, then have a scheduled job fire up the script once an hour and resume from where the last blast left off. This will make things a bit bursty for the mail server, but potentially more efficient in the long run, as it can batch together emails for the same recipient domains. E.g. all @aol.com mails in the group of 500 can go together, rather than forcing the server to connect to AOL multiple times to deliver individual mails.
As well, if you're batching like this, a server failure will only be 'bad' during the few seconds when the script's actually running and building emails. The rest of the time the PHP script won't even be running, and it'll be up to the smtp server to do its thing.
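A rough sketch of that hourly batch approach, assuming a hypothetical emails table with a sent_at column (the table, columns, and send_email() helper are illustrative, not an actual Elgg/PHPMailer API):
<?php
// Sketch: send up to 500 unsent emails per run; cron fires this hourly.
// Grouping by recipient domain keeps mails to the same host together.
$batch = $dbh->query(
    "SELECT id, address, subject, body FROM emails
     WHERE sent_at IS NULL
     ORDER BY SUBSTRING_INDEX(address, '@', -1)
     LIMIT 500"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($batch as $row) {
    if (send_email($row)) { // placeholder for the PHPMailer call
        $mark = $dbh->prepare("UPDATE emails SET sent_at = NOW() WHERE id = :id");
        $mark->execute(array(':id' => $row['id']));
    }
}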
I might not be of quick and particular help, but I would consider an asynchronous approach.
This involves storing the task to send an email in a queue and having workers which process those tasks.
The simplest way is to just store emails in the database and have a cronjob running on the server which sends the emails in batches.
The better (but more complex) solution would be to use some sort of message queue system, like zeromq or the heavy-weight rabbitmq.
The last and maybe most comfortable option from the top of my head is to use a web service like MailChimp or Postmark.
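For the database-queue variant, the producer side could be as simple as inserting one row per recipient; a cron-driven worker then drains the table (the table and column names here are assumptions):
<?php
// Sketch: enqueue one email job per group member instead of sending inline.
// A cron-driven worker later reads and empties the email_queue table.
$insert = $dbh->prepare(
    "INSERT INTO email_queue (address, subject, body, created_at)
     VALUES (:address, :subject, :body, NOW())"
);
foreach ($groupMembers as $member) {
    $insert->execute(array(
        ':address' => $member['email'],
        ':subject' => $subject,
        ':body'    => $body,
    ));
}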
I have a php script that needs to run once every 5 minutes. Currently I'm using a cron job to run it (and it works great) but my host only allows a minimum time of 15 minutes.
So my question is, can I use visitors to trigger the running of a php script every 5 minutes. I can easily just record the last time it ran, and re-run it based on elapsed time.
However, I'm worried about race conditions. It is important that the script only gets run once every 5 minutes.
My script takes about 60 seconds to run. It writes to a couple of files during this time. If the script ran more than once it would corrupt those files. Also, if I get no visitors for 10 minutes, then running once when the next visitor arrives is fine.
Is there some standard way to accomplish this task?
Thanks!
Have you considered just having your script run an infinite loop with a sleep to wait 5 minutes between iterations?
for (;;)
{
    perform_actions();
    sleep(300);
}
Alternatively, you could have a file (for example, is_running), and get an exclusive lock on it at the start of your script which is released at the end. At least this way you will not do anything destructive.
You could also combine these two solutions.
$fp = fopen("is_running", "r+");
/* is it already running? */
if (! flock($fp, LOCK_EX | LOCK_NB)) return;
for (;;)
{
    perform_actions();
    sleep(300);
}
And then have the cron job still run every 15 minutes. If the process is still running, it will just bail out, otherwise it will relaunch and resume updating every 5 minutes.
Lame answer for a lame situation (the ISP, not the poster). Schedule 12 cron jobs, all calling the same script, each running once per hour, but calling at a different 5 minute mark.
00 * * * * root echo "run at :00 of every hour"
05 * * * * root echo "run at :05 of every hour"
10 * * * * root echo "run at :10 of every hour"
etc until :55. But I stand by my original comment - find a new ISP :)
If you cannot do what @Brandon suggested, I would recommend approaching this in the same way I did when writing a daemon in PHP (not the best solution, but I was practically forced to do it).
In my case as well the script accessed a (log) file, did processing on it, and afterwards inserted the results into the database. To ensure that I didn't have two instances running at the same time, I created a "status" file on which the script acquired a lock; if it was not able to do so, it failed gracefully.
$fh = fopen('status_file', 'w');
/**
 * LOCK_NB is required because otherwise your script would stall until
 * a lock is acquired, queueing up a bunch of scripts.
 */
if (!flock($fh, LOCK_EX | LOCK_NB)) {
    exit(1); // our job is done here
}
The answer to whether visitors can start this script is yes. You can run the script when visitors enter a page. You would want to store the start time, but also a flag for whether it is currently running; this should avoid any possibility of conflict when trying to update your data. I'd also add an additional field for a mail warning, which you can use if the runtime passes what you would consider the maximum acceptable time. The script can then send you a warning email that it has been running beyond that maximum. I've personally kept these statuses in a database, but they can also be stored in files.
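A hedged sketch of that idea, using an assumed single-row script_status table (columns is_running and last_run) and an atomic UPDATE so two visitors cannot both claim the run:
<?php
// Sketch: let a visitor-triggered request claim the 5-minute job atomically.
// Table script_status(id, is_running, last_run) is an assumed schema.
$claimed = $dbh->exec(
    "UPDATE script_status
     SET is_running = 1, last_run = NOW()
     WHERE id = 1
       AND is_running = 0
       AND last_run < NOW() - INTERVAL 5 MINUTE"
);
if ($claimed === 1) {       // exactly one request wins the race
    run_the_job();          // placeholder for the ~60-second task
    $dbh->exec("UPDATE script_status SET is_running = 0 WHERE id = 1");
}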
I need to create a script in PHP which performs permutations of numbers. But PHP has an execution time limit set to 60 seconds. How can I run the script so that, if it needs more than 60 seconds, it is not interrupted by the server? I know I can change the maximum execution time limit in PHP, but I want to hear another option that does not require knowing the script's execution time in advance.
A friend suggested that I sign in and log out frequently from the server, but I have no idea how to do this.
Any advice is welcome. An example code would be useful.
Thanks.
First I need to enter a number, let's say 25. After this the script is launched and it needs to do the following: for every number <= 25 it will create a file with the numbers generated at the current stage; for the next number it will open the previously created file and will create another file based on the lines of the opened file, and so on. Because this takes too long, I need to avoid the script being interrupted by the server.
@emanuel:
I guess when your friend suggested that you "sign in and log out frequently from the server", he or she must have meant "split your script's computation into x pieces of work and run them separately".
For example, you can execute this script 150 times to compute 150! (factorial) and show the result:
// script name: calc.php
<?php
session_start();
if (!isset($_SESSION['times'])) {
    // first run: start the counter at 1 and the running product at 1
    $_SESSION['times']  = 1;
    $_SESSION['result'] = 1;
    header('Location: calc.php'); // kick off the chain of reloads
} elseif ($_SESSION['times'] < 150) {
    $_SESSION['times']++;
    $_SESSION['result'] = $_SESSION['result'] * $_SESSION['times'];
    header('Location: calc.php');
} elseif ($_SESSION['times'] == 150) {
    echo "The Result is: " . $_SESSION['result'];
    die();
}
?>
BTW (@Davmuz), you can only use the set_time_limit() function on Apache servers; it's not a valid function on Microsoft IIS servers.
set_time_limit(0)
You could try to put the calls you want to make in a queue, which you serialize to a file (or a memory cache?) after each operation is done. Then you could use a cron daemon to execute this queue every sixty seconds, so it keeps doing the work until the task is finished.
The drawbacks of this approach are problems with adding to the queue, file locking and the like, and if you need the results immediately it can prove troublesome. If you are adding stuff to a DB, it might work out. Also, it is not very efficient.
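A minimal sketch of that idea, assuming the queue is an array of pending work items serialized to queue.dat and a hypothetical process_item() doing the actual computation; cron would re-run this script every minute:
<?php
// Sketch: drain a file-serialized queue in ~50-second slices; cron re-runs it.
$queueFile = 'queue.dat';
$queue = file_exists($queueFile) ? unserialize(file_get_contents($queueFile)) : array();
$start = time();
while (!empty($queue) && (time() - $start) < 50) { // stay under the 60-second limit
    $item = array_shift($queue);
    process_item($item); // placeholder for the permutation work
}
file_put_contents($queueFile, serialize($queue), LOCK_EX); // save what's left for the next run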
Use set_time_limit(0), but you have to disable safe_mode:
http://php.net/manual/en/function.set-time-limit.php
I suggest using a fixed limit (set_time_limit(300)), because if there is a problem in the script (endless loops or memory leaks) it cannot become a source of further problems.
Web servers like Apache also have a maximum time limit of 300 seconds, so you have to change that as well. If you want to build a Comet application, it may be better to choose a web server other than Apache that can handle long request times.
If you need a long execution time for a heavy algorithm, you can also implement a parallel processing: http://www.google.com/#q=php+parallel+processing
Or store the input data and compute it with another external script run via cron or whatever else.