I am trying to send out a large number of emails using PHP in a Symfony 3.4 application. Currently the script stops because it hits PHP's execution time limit, which of course I need to work around.
My idea is to put all emails to be sent into a table email_queue and then send them after they are saved in the queue.
So I save all these emails, display a page with a progress bar, and make an AJAX call which sends emails until just under the time limit, reports back how many it sent (to update the progress bar), and, if something is left in the queue, is called again until everything is sent.
So far, so good. Now I try to measure the execution time of the AJAX script part so I can stop sending before I reach the 30-second time limit.
For some reason the measurement is always much smaller than 30 seconds, but before the measured execution time even reaches 2 seconds, the script stops for having hit the time limit. Why is that?
This is how I do my measurement:
$executionTimes = [];
$executionPoints = [];
$timeStart = $_SERVER['REQUEST_TIME_FLOAT'];
foreach ($mailQueue as $mail) {
    // do some mail configuration
    // send the email
    $now = microtime(true);
    $executionTime = $now - $timeStart;
    $executionTimes[] = $executionTime;
    $executionPoints[] = $now;
    if ($executionTime >= 10) {
        break;
    }
}
// return data
Using $_SERVER['REQUEST_TIME_FLOAT'] I thought I'd get the real execution time, including all the Symfony overhead. Is that not correct?
Here is the result of one run:
{
    "timeStart": 1525702394.248,
    "executionPoints": [
        1525702394.863065,
        1525702394.866609,
        1525702394.870812,
        1525702394.874702,
        1525702394.878718,
        1525702394.882434,
        1525702394.886418,
        1525702394.890428,
        1525702394.894365,
        1525702394.899119
    ],
    "executionTimes": [
        0.6150650978088379,
        0.6186091899871826,
        0.622812032699585,
        0.626702070236206,
        0.6307179927825928,
        0.6344339847564697,
        0.6384181976318359,
        0.6424281597137451,
        0.6463651657104492,
        0.6511189937591553
    ]
}
So the last execution measurement was under 0.7 seconds, yet that is where script execution stopped.
I did a lot of research already but I just can't figure out why my script is doing things it shouldn't. Any ideas would be highly appreciated.
Related
I have a products database that synchronizes product data every morning.
The process is very clear:
Get all products from the database with a query
Loop through all products and fetch an XML from the other server by product_id
Update the data from the XML
Log the changes to a file.
If I query a small number of items, limiting it to 500 random products for example, everything goes fine. But when I query all products, my script SOMETIMES goes on the fritz and starts looping multiple times. Hours later I still see my log file growing and products being added.
I checked everything I could think of, for example:
Are variables used twice without overwriting each other?
Does the function call itself?
Does it happen with a small number of products too? (No)
The script is called using a cronjob; are the settings OK? (Yes)
The thing that makes it especially weird is that it sometimes goes right and sometimes it doesn't. Could this be some memory problem?
EDIT
wget -q -O /dev/null http://example.eu/xxxxx/cron.php?operation=sync
It's set up in Webmin, called at a specific hour and minute.
Code is hundreds of lines long...
Thanks
You have:
max_execution_time disabled. Your script won't end until the process is complete, however long that takes.
memory_limit disabled. There is no limit on how much data can be stored in memory.
500 records were completed without issues. This indicates that the script completes its process before the next cronjob iteration. For example, if your cron runs every hour, then the 500 records are processed in less than an hour.
If you have a cronjob that is going to process a large number of records, then consider adding a lock mechanism to the process: only allow the script to run once, and start again when the previous process is complete.
You can create the lock as part of a shell script before executing your PHP script. Or, if you don't have that kind of access to your server, you can use a database lock within the PHP script, something like this (a simpler file-based alternative follows the class):
class ProductCronJob
{
    protected $lockValue;

    public function run()
    {
        // Obtain a lock
        if ($this->obtainLock()) {
            // Run your script if you have a valid lock
            $this->syncProducts();
            // Release the lock on completion
            $this->releaseLock();
        }
    }

    protected function syncProducts()
    {
        // your long-running script
    }

    protected function obtainLock()
    {
        $time = new \DateTime;
        $timestamp = $time->getTimestamp();
        $this->lockValue = $timestamp . '_syncProducts';

        $db = JFactory::getDbo();

        $lock = [
            'lock'         => $this->lockValue,
            'timemodified' => $timestamp
        ];

        // lock = '0' indicates that the cronjob is not active.
        // UPDATE #__cronlock SET lock = '...', timemodified = '...' WHERE name = 'syncProducts' AND lock = '0'
        // $result = $db->updateObject('#__cronlock', $lock, 'id');

        // $lock = SELECT * FROM #__cronlock WHERE name = 'syncProducts';

        if ($lock !== false && (string) $lock !== (string) $this->lockValue) {
            // Currently there is an active process - can't start a new one
            return false;

            // You can return false as above or add extra logic as below
            // Check the current lock age - how long it has been running for
            // $diff = $timestamp - $lock['timemodified'];
            // if ($diff >= 25200) {
            //     // The current script has been active for 7 hours.
            //     // You can change 25200 to any number of seconds you want.
            //     // Here you can send a notification email to the site administrator.
            //     // ...
            // }
        }

        return true;
    }

    protected function releaseLock()
    {
        // UPDATE #__cronlock SET lock = '0' WHERE name = 'syncProducts'
    }
}
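If you do have filesystem access, a file lock is a simpler variant of the same idea. Here is a minimal sketch using PHP's flock(); the lock file path is just an example:
// File-based alternative to the database lock above, using PHP's flock().
// The lock file path is just an example.
$lockFile = fopen('/tmp/sync_products.lock', 'c');
if ($lockFile === false || !flock($lockFile, LOCK_EX | LOCK_NB)) {
    exit; // another instance is still running
}

// ... run the long sync here ...

flock($lockFile, LOCK_UN); // released automatically on exit, but explicit is cleaner
fclose($lockFile);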
Your script is running for quite some time (~45 minutes) and wget thinks it is "timing out" since you don't return any data. By default wget has a 900-second timeout value and a retry count of 20, so first you should change your wget command to prevent this:
wget --tries=0 --timeout=0 -q -O /dev/null http://example.eu/xxxxx/cron.php?operation=sync
Now, removing the timeout could lead to other issues, so instead you could send data from your script (and flush it, to force the webserver to actually send it) to make sure wget doesn't think the script "timed out", say every 1000 loops. Think of this as a progress bar...
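A minimal sketch of that idea ($productsToSync is the result set from your own script):
$i = 0;
foreach ($productsToSync as $product) {
    // ... sync one product ...
    if (++$i % 1000 === 0) {
        echo '.';                 // a single byte is enough traffic
        if (ob_get_level() > 0) { // flush PHP's output buffer if one is active
            ob_flush();
        }
        flush();                  // ask the webserver to send it now
    }
}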
Just keep in mind that you will hit an issue when the run time gets close to your cron period, as two crons will then run in parallel. You should optimize your process and/or maybe add a lock mechanism.
I see two possibilities:
- cron calls the script much more often
- the script somehow takes too long
You can try to estimate the time a single iteration of the loop takes. This can be done with time(). Perhaps the result is surprising, perhaps not. You can probably also get the number of results; multiply the two and you have an estimate of how long the whole process should take.
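Something like this (a sketch; microtime(true) gives sub-second precision where time() only has whole seconds):
// Sketch: time one iteration, then extrapolate over the full result set.
$start = microtime(true);
// ... run a single loop iteration here ...
$perIteration = microtime(true) - $start;

$total = count($productsToSync); // the result set from your own script
echo 'Estimated total: ' . round($perIteration * $total) . " seconds\n";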
$productsToSync = $db->loadObjectList();
and
foreach ($productsToSync AS $product) {
It seems you load every result into an array. This won't work for huge databases, because obviously a million rows won't fit in memory. You should fetch just one result at a time. With MySQL there are methods that fetch one row at a time from the result resource; I hope your database layer allows the same.
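For example, with plain mysqli an unbuffered query does exactly that (a sketch; the connection details and table name are made up, adapt it to whatever database layer you use):
// MYSQLI_USE_RESULT streams rows from the server one at a time
// instead of buffering the whole result set in memory.
$mysqli = new mysqli('localhost', 'user', 'pass', 'shop'); // example credentials
$result = $mysqli->query('SELECT * FROM products', MYSQLI_USE_RESULT);
while ($row = $result->fetch_assoc()) {
    // process one product at a time
}
$result->free();
$mysqli->close();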
I also see that you execute another query on each iteration of the loop. This is something I try to avoid. Perhaps you can move it to after the first query has finished and do all of those in one big query? On the other hand, that may work against my first suggestion.
Also, if something goes wrong, be paranoid when debugging. Measure as much as you can; time as much as you can when it's a performance issue. Put the timings in your log file. Usually you will find the bottleneck.
I solved the problem myself. Thanks for all the replies!
My MySQL connection timed out, that was the problem. As soon as I added:
ini_set('mysql.connect_timeout', 14400);
ini_set('default_socket_timeout', 14400);
to my script, the problem stopped. I really hope this helps someone. I'll upvote all the locking answers, because those were very helpful!
Context:
I'm making a PHP WebSocket server (here) running as a daemon, in which there is obviously a main loop listening for socket connections and incoming data, so I can't just create another loop with a sleep(x_number_of_seconds); in it because it would freeze my whole server.
I can't execute an external script with a cron job or fork a new process either (I guess), because I have to be in the scope of my server class to send data to connected client sockets.
Does anyone know a magic trick to achieve this in PHP? :/
Some crazy ideas:
Keep track of the last loop execution time with microtime(true) and compare it with the current time on each loop; if it's about my desired X-second interval, execute the method... which would result in a very drunk and inconsistent interval loop.
Run a JavaScript setInterval() in a browser that communicates with my server through a WebSocket and tells it to execute my method... I said they were crazy ideas!
Additional info about what I'm trying to achieve:
I'm making a little online game (RPG-like) in which I would like to add some NPCs that update their behaviour every X seconds.
Are there other ways of achieving this? Am I missing something? Should I rewrite my server in Node.js?
Thanks a lot for the help!
A perfect alternative doesn't seem to exist, so I'll use my crazy solution #1:
$this->last_tick_time = microtime(true);
$this->tick_interval = 1;
$this->tick_counter = 0;

while (true)
{
    // loop code here...

    $t = microtime(true) - $this->last_tick_time;
    if ($t >= $this->tick_interval)
    {
        $this->on_server_tick(++$this->tick_counter);
        $this->last_tick_time = microtime(true) - ($t - $this->tick_interval);
    }
}
Basically, if the time elapsed since the last server tick is greater than or equal to my desired tick interval, execute the on_server_tick() method. And most importantly: we subtract the time overflow to make the next tick happen sooner if this one happened too late. This way we fill the gaps, and in the end, if the socket_select timeout is set to 1 second, we will never have a gap greater than 1.99999999+ seconds.
I also keep track of the tick counter; this way I can use the modulo operator (%) to execute code on multiple intervals, like this:
protected function on_server_tick($counter)
{
    if ($counter % 5 == 0)
    {
        // 5 second interval
    }
    if ($counter % 10 == 0)
    {
        // 10 second interval
    }
}
which covers all my needs! :D
Don't worry PHP, I won't replace you with Node.js, you're still my friend.
It looks to me like the websocket framework you are using is too primitive to allow your server to do other useful things while waiting for connections from clients. The only call to PHP's socket_select() function is hard-coded to a one-second timeout, and it does nothing when the time runs out. It really ought to allow a callback or an outside loop.
Look at the http://php.net/manual/en/function.socket-select.php manual page. The last parameters specify a timeout: socket_select() waits for incoming data on a socket or until the timeout is up, which sounds like exactly what you want to do, but the library has no provision for it. Then look at how the library uses it in core/classes/SocketServer.php.
I'm assuming you call run() and then it simply never returns to your calling code until it gets a message on a socket, which prevents you from doing anything else.
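If the framework did expose that loop, the timeout branch is where periodic work belongs. A sketch of what that could look like (hypothetical, not the library's actual code; assumes it runs inside the server class with $sockets holding the sockets to watch):
while (true) {
    $read   = $sockets;
    $write  = null;
    $except = null;

    // Wait up to 1 second for activity on any watched socket.
    $changed = socket_select($read, $write, $except, 1);

    if ($changed === false) {
        break; // select error
    }
    if ($changed === 0) {
        // Timeout: no socket activity, so do the periodic work here.
        $this->on_server_tick(++$this->tick_counter);
        continue;
    }
    // ... handle the sockets left in $read ...
}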
In PHP, I want to put a delay of a number of seconds on each iteration of the loop.
for ($i = 0; $i <= 10; $i++) {
    $file_exists = file_exists($location . $filename);
    if ($file_exists) {
        break;
    }
    // sleep for 3 seconds
}
How can I do this?
Use the PHP sleep() function: http://php.net/manual/en/function.sleep.php
It halts execution for the given number of seconds before the next loop iteration. So something like this:
for ($i = 0; $i <= 10; $i++) {
    $file_exists = file_exists($location . $filename);
    if ($file_exists) {
        break;
    }
    sleep(3); // this halts for 3 seconds on every loop
}
I see what you are doing... you're delaying a script to repeatedly check for a file on the filesystem (one that is being uploaded or written by another script, I assume). This is a BAD way to do it.
Your script will run slowly, choking the server if several users are running it.
Your server may time out for some users.
HDD access is a costly resource.
There are better ways to do this.
You could use Ajax, with a timeout to call your PHP script every few seconds. This avoids the slow script loading, and you can also keep checking indefinitely (the current for loop will only run for about 33 seconds and then stop).
You can use a database. In some cases database access is faster than HDD access, especially with views and caching. The script creating or uploading the file can set a flag in a table (e.g. file_exists), and then you can have a script that checks that field in your database (see the sketch after this list).
You can use sleep(3), which sleeps the thread for 3 seconds.
Correction: PHP's sleep() takes its argument in seconds.
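A sketch of the database-flag variant (PDO, with made-up table and column names; $uploadId is a hypothetical identifier for the pending upload):
// Poll a flag in the database instead of hitting the filesystem.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT file_ready FROM uploads WHERE id = ?');

for ($i = 0; $i <= 10; $i++) {
    $stmt->execute([$uploadId]);
    if ($stmt->fetchColumn()) {
        break; // flag is set - the file is ready
    }
    sleep(3);
}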
Here are two ways to make a PHP script sleep for some period of time. When you have your code and want to pause the script for some time, use these functions.
In these examples, the first part of the code runs immediately and the second part runs after the time delay.
Using the sleep() function you can define the sleep time in seconds.
Example:
echo "Message 1";
// The first part of code.
$timeInSeconds = 3;
sleep($timeInSeconds);
// The second part of code.
echo "Message 2";
This way it is possible to make a PHP script sleep for 3 seconds. With this function you can sleep the script for a whole number (integer) of seconds.
Using the usleep() function you can define the sleep time in microseconds. This is convenient for intervals that require more precise timing than one second.
Example:
echo "Message 1";
// The first part of code.
$timeInMicroSeconds = 2487147;
usleep($timeInMicroSeconds);
// The second part of code.
echo "Message 2";
You can use this function if you want to sleep PHP for time values smaller than a second (a float). In this example I have put the script to sleep for 2.487147 seconds.
Have you considered using a PHP daemon script with supervisord? I use it for multiple tasks that are required to run all the time.
The catch is making sure that each time your script runs, you check memory usage. If it's too high, stop the process and let it restart itself (see the sketch below).
I have successfully used this approach for continuously checking database records for tasks to process.
It might be overkill but worth considering.
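A sketch of that memory check inside the daemon's main loop (the 128 MB threshold is just an example value):
$memoryLimitBytes = 128 * 1024 * 1024;

while (true) {
    // ... process pending tasks ...

    if (memory_get_usage(true) > $memoryLimitBytes) {
        exit(0); // let supervisord's autorestart bring the worker back up
    }
    sleep(1);
}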
I want to execute a bunch of code for 5 seconds, and if it has not finished executing within the specified time frame I need to execute another piece of code.
Is that possible?
Example:
There are two functions, A and B.
If A takes more than 30 seconds to execute, control should pass to B.
During function A you could periodically check how long the script has been executing, and if it goes over x seconds, run B:
function checkTime($start) {
    $current = time();
    $secondsToExecute = 5;
    if (($start + $secondsToExecute) <= $current) {
        func_b();
    }
}

function func_a($start) {
    // do some code
    checkTime($start);
    // do some code
    checkTime($start);
    // do some code
}

function func_b() {
    // do something else
    exit();
}

func_a(time());
http://php.net/manual/en/features.connection-handling.php
Set a time limit and a shutdown function which checks whether the status is 2 (timeout) and, if so, does your fallback work.
One thing to note is that a time limit set this way only counts actual PHP processing time. Time spent with PHP waiting for another process, a database, an HTTP connection, etc., does not count, and your time limit will not be considered reached.
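A sketch of that approach, reusing func_a()/func_b() from the answer above:
// Run func_b() from a shutdown function when the script was stopped
// by the time limit (connection status bit 2 = timeout).
set_time_limit(30);

register_shutdown_function(function () {
    if (connection_status() & CONNECTION_TIMEOUT) {
        func_b();
    }
});

func_a(time());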
If you need to count the actual time that passed, even if it was not PHP processing time, you have to go with the answer suggested above: manually insert that time check in places where it makes sense, i.e. inside loops that you know may run too long, maybe even only on every N iterations. Alternatively, a more general approach is to use register_tick_function(), but that might lead to a noticeable performance hit with a low tick count, and you must take care to unregister it or use appropriate flags so you don't end up infinitely starting more and more calls to your timeout-handling code once the timeout has happened.
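A sketch of the register_tick_function() variant (note the required declare(ticks=...) and the flag guarding against repeated calls):
// declare(ticks=N) is required or the registered function never runs;
// a higher N keeps the performance hit down.
declare(ticks=1000);

$start    = time();
$timedOut = false;

register_tick_function(function () use ($start, &$timedOut) {
    if (!$timedOut && (time() - $start) > 30) {
        $timedOut = true; // flag so the handler fires only once
        func_b();
    }
});

func_a($start);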
Other approaches are also possible: you can register a handler for some signal using pcntl_signal() and have it sent to your process when the time limit is reached, either by an outside program ('man timeout' if you are on a Linux box) or by a fork()-ed instance of your own PHP script, etc.
I wanted to break a process (loop) down into parts. For example, if I have 128 emails to send:
function subs_emails() {
    $subscribers = //find subscribers
    if (!empty($subscribers)) {
        foreach ($subscribers as $i => $subscriber) {
            sendEmail($subscriber->id);
            if ($i % 15 == 0) { //<-- send email per 15
                sleep(60); //to pause the process for 60 seconds
            }
        }
        return true;
    } else {
        return false;
    }
}
Will this work? Or is there another, "better approach" solution? I need advice please.
Thanks
The usual approach would be to send only a few emails at once and mark the sent ones in the background (via a database flag sent=1, for example).
Then call the script every few minutes via a cronjob.
This way you don't run into problems with PHP timeouts when sending emails to a lot of subscribers.
sleep() will cause the script to stall for the defined number of seconds (60 in your example). It won't really break up the loop but merely delay it.
A possible solution is to make a note of which subscribers have already been sent an email. Then you can have your script execute at regular intervals via cron and only load a small batch of those who have not yet been sent an email. For example:
Script executes every 10 minutes
Load 15 subscribers who have not been flagged as already notified
Loop through all 15 loaded subscribers and send each an email
Set the flag on all 15 to say they have been sent the email
Script then runs 10 minutes later to process the next 15 subscribers
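A sketch of those steps (PDO, with made-up table and column names; sendEmail() is the question's own function):
// Cron-driven batch: intended to be run every 10 minutes.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

// Load the next 15 subscribers that have not been notified yet.
$subscribers = $pdo->query(
    'SELECT id, email FROM subscribers WHERE notified = 0 LIMIT 15'
)->fetchAll(PDO::FETCH_OBJ);

$mark = $pdo->prepare('UPDATE subscribers SET notified = 1 WHERE id = ?');
foreach ($subscribers as $subscriber) {
    sendEmail($subscriber->id);        // the question's own helper
    $mark->execute([$subscriber->id]); // flag immediately so a crash can't resend
}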