PHP execution takes longer than it should, why? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have a large database with 1800 contact records. A PHP script is used to perform a Word mail merge. A merge of 500 records takes around 40 seconds, but the same task for 1800 records takes around 300 seconds, when it should take no more than 180-200 seconds.
I have tweaked php.ini and the php-fpm config and increased some values, but saw no improvement. Is this normal for PHP when processing a large number of records?

Is it exactly 300 seconds (i.e. 5 minutes) every time? If so, I think you might be getting stuck on one of the records somehow and PHP is hitting its max_execution_time.
Try running your script with the last 1300 of the 1800 records (by adding LIMIT 500, 1300 to your query; the first argument is the offset). Does it take 300s?
Try splitting the record set in half (by adding LIMIT 0, 900 in one attempt, then LIMIT 900, 900 in the next). Take the set that results in the 300s execution time and split that in half -- continue until you find the record that is causing your trouble.
Investigate that record and see why it's causing an infinite loop or hang.
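The bisection above can be automated as a loop that times each LIMIT window and keeps halving whichever half is slow. This is a minimal sketch; the `contacts` table, the connection details, and the `timeWindow` helper are assumptions, not from the question:

```php
<?php
// Hypothetical sketch of the bisection approach described above.
// Assumes a mysqli connection and a `contacts` table.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

// Time how long it takes to process one LIMIT window of records.
function timeWindow(mysqli $db, int $offset, int $count): float {
    $start = microtime(true);
    $result = $db->query("SELECT * FROM contacts LIMIT $offset, $count");
    while ($row = $result->fetch_assoc()) {
        // ... run the mail-merge step for this record ...
    }
    return microtime(true) - $start;
}

// Keep narrowing to whichever half of the window is slower.
$offset = 0;
$count  = 1800;
while ($count > 1) {
    $half   = intdiv($count, 2);
    $tLeft  = timeWindow($db, $offset, $half);
    $tRight = timeWindow($db, $offset + $half, $count - $half);
    if ($tRight > $tLeft) {
        $offset += $half;
        $count  -= $half;
    } else {
        $count = $half;
    }
}
echo "Suspect record is at offset $offset\n";
```

This finds a single pathological record in about log2(1800) ≈ 11 passes instead of manual trial and error.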
Also, do you have E_ALL error reporting on? You may have Notices or Warnings that may shed some light on the issue.
UPDATE:
In your script, where your query string is defined, add an echo of the query with a concatenated <br> after the assignment and before the query call itself. Then wrap that echo between ob_end_flush(); and ob_start();.
For example, if you had code like this:
$query = "SELECT * FROM {contacts};";
$result = mysqli_query($link, $query); // $link is your mysqli connection
change it to:
$query = "SELECT * FROM {contacts};";
ob_end_flush();
# CODE THAT NEEDS IMMEDIATE FLUSHING
echo $query . "<br>";
ob_start();
$result = mysqli_query($link, $query); // $link is your mysqli connection
This will make PHP echo the query string immediately, without waiting for the script to end or for the buffer to fill. Then you can watch the output while the script is still running and determine what's actually happening (either the script is getting hung on a single, specific record, or the execution time is legitimately increasing as you iterate over an increasing number of records).
If you actually share your code, I'd be able to help a lot more.

1800 is a very small number of records and shouldn't take long at all to process. Can you provide a little more detail on the process itself? Is it a webpage or a background process? Are you sending the same data to all of these users?
This type of thing should be handled by a worker / background process, so typically you shouldn't care how long it takes (within reason) provided the job completes. That said, because it is such a small range, you could troubleshoot the process by sending 250 first and using that as the baseline for the average time per 250 records. Increase by 250 each time. If the average time increases each time you increment the batch size, you can be pretty sure you have an issue with the code - in which case post it here and we can try to resolve it for you. If the average increases only within a specific window (e.g. between 750 and 1000), it is likely a data issue.
This is as simple an approach as I can suggest without seeing any code or having any extra detail.
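The batch-ramping test might look like this (a minimal sketch; the `contacts` table, connection details, and send step are assumptions):

```php
<?php
// Hypothetical sketch of the ramping test described above: time batches
// of 250, 500, ... 1800 and compare the per-record average.
$db = new mysqli('localhost', 'user', 'pass', 'mydb');

for ($size = 250; $size <= 1800; $size += 250) {
    $start = microtime(true);
    $result = $db->query("SELECT * FROM contacts LIMIT 0, $size");
    while ($row = $result->fetch_assoc()) {
        // ... send to this record ...
    }
    $elapsed = microtime(true) - $start;
    printf("%4d records: %.2fs total, %.4fs/record\n",
           $size, $elapsed, $elapsed / $size);
}
```

If the per-record figure grows with batch size, suspect the code (e.g. an O(n²) accumulation); if it jumps only in one window, suspect the data in that window.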

Related

Memory not freed after execution of a PHP script

I have a LAMP server on which I run a PHP script that makes a SELECT query on a table containing about 1 million rows.
Here is my script (PHP 8.2 and MariaDB 10.5.18):
$db = new PDO("mysql:host=$dbhost;dbname=$dbname;", $dbuser, $dbpass);
$req = $db->prepare('SELECT * FROM '.$f_dataset);
$req->execute();
$fetch = $req->fetchAll(PDO::FETCH_ASSOC);
$req->closeCursor();
My problem is that each execution of this script seems to consume about 500 MB of RAM on my server, and this memory is not released at the end of execution; since I only have 2 GB of RAM, after 3 executions the server kills the Apache2 process, forcing me to restart Apache each time.
Is there a solution to this? A piece of code that frees the used memory?
I tried to use unset($fetch) and gc_collect_cycles() but nothing works and I haven't found anyone who had the same problem as me.
EDIT
After several skeptical responses asking for evidence and additional information, here is what else I can tell you:
I am currently developing a trading-strategy testing tool whose parameters I set manually via an HTML form. The form is then processed by a PHP script that first performs calculations to reproduce technical indicators (using the Trader library for some of them, reimplementing others) from the submitted parameters.
Next, after reproducing the technical indicators and storing their values in my database, the script simulates buy or sell orders based on the stock prices I am interested in and the indicator values calculated just before.
For this I have, for example, two tables in my database: the first stores the one-minute candles (opening price, closing price, max price, min price, volume, ...), one candle per row; the second stores the value of a technical indicator for the corresponding candle, i.e. for each row of the first table.
The reason why I need to make calculations, and therefore to get my 1 million candles, is that my table contains 1 million candles of 1 minute on which I want to test my strategy. I could do this with 500 candles as well as with 10 million candles.
My problem right now is only with the candle retrieval; there are not even any calculations yet. I shared my script above, which is very short, and there is absolutely nothing else in it except the definitions of my variables $dbname, $dbhost, etc. So look no further: you have absolutely everything here.
When I run this script in my browser and watch the RAM load during execution, I see an Apache process consume up to 697 MB of RAM. So far nothing abnormal: the table I'm retrieving candles from is a little over 100 MB. The real problem is that once the script has finished, the RAM load remains the same. If I run the script a second time, the RAM load is 1400 MB, and this continues until all the RAM is used up and my Apache server crashes.
So my question is simple, do you know a way to clear this RAM after my script is executed?
What you describe is improbable, and you don't say how you made these measurements. If your assertions are valid there are a couple of ways to solve the memory issue; however, this is an XY problem. There is no good reason to read a million rows into a web-page script.
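One of those ways is to stream rows instead of materializing all of them with fetchAll(). A minimal sketch, using the question's variable names; the DSN and credentials are placeholders:

```php
<?php
// Stream rows one at a time instead of loading ~1M rows into PHP memory.
// With an unbuffered query, the result set stays on the MySQL side and
// PHP holds only the current row.
$db = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass, [
    PDO::MYSQL_ATTR_USE_BUFFERED_QUERY => false,
]);

$req = $db->query('SELECT * FROM ' . $f_dataset);
while ($row = $req->fetch(PDO::FETCH_ASSOC)) {
    // ... process one candle at a time ...
}
$req->closeCursor();
```

Peak memory then stays roughly constant regardless of the row count, at the cost of keeping the connection busy until the cursor is drained.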
After several hours of research and discussion, it seems that this problem of unreleased memory has no direct solution: in my case, Apache simply does not release the memory its processes have used unless they are restarted.
I have, however, found a workaround in the Apache configuration: allow at most one request per server process instead of the default of 5.
This way, the process my script is running on gets killed at the end of the run and is replaced by another one that starts automatically.
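With mpm_prefork that workaround looks roughly like the fragment below; the file path and directive name depend on your Apache version and distribution, so treat this as an assumption about the setup:

```
# /etc/apache2/mods-available/mpm_prefork.conf (Debian-style path, assumed)
<IfModule mpm_prefork_module>
    # Recycle each worker after a single request so its memory is
    # returned to the OS when the process exits.
    MaxConnectionsPerChild 1
</IfModule>
```

Spawning a fresh process per request is expensive, so this is only reasonable for a low-traffic internal tool like the one described.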

Running scheduled tasks in php [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I have a table that contains all the info needed to run a campaign. The info includes the time interval at which to check the campaign (10 mins, 15 mins, etc.) and other information used to decide whether the campaign meets the specific requirements to run.
At the moment, what I am planning to do is:
Add my code in one php file
In the code, go through all the rows of the table
Check if it's the time to check the campaign or not (via the interval)
If it's the time to check the campaign, then go through other details of the table and based on the set conditions, send an email or SMS.
I am planning to run a cron job which goes through this php file, after every 10 minutes (As it's the shortest check interval)
I need suggestions: is this a proper solution, or does someone have a better, more efficient one?
That's a decent starting point...
If it's the time to check the campaign...
Keep in mind that sometimes a cron process takes longer than expected, gets stuck or your system crashes in the middle. Ideally your process will keep track of a.) what it's doing, and b.) when it did it. And be able to fix problems like skipped or stuck processing.
It could be that you never want to send a message that late. Then again, you may want to make sure all of the missed messages get sent. Your code should be able to handle this case automatically to some degree. Maybe automatically do anything that should've been done in the last hour but wasn't, and ignore anything older than that. For older stuff you'd have to manually run the script. Make sure your script has command-line arguments that make it easy to force a run for prior time intervals and specific campaign IDs. This will make your life way easier after a disaster.
I suggest that you have some kind of reporting so you can keep track of your processing in real time. Pretty simple if you're writing state info to your database. Add on an end of processing timestamp and you can even see how long your cron jobs are running. If you don't want to use this state info in your cron job you can just write it to a log file instead of a database. And in that case (if needed) you would use a lock file to indicate when a cron job is running and prevent other cron jobs from starting at the same time. Regardless, it's good practice to write a log file so you have a record of what happened. Imagine if your cron job sent an email but crashed while attempting to write the state to the database. You'd at least have a log line to help you investigate later.
I am planning to run a cron job which goes through this php file,
after every 10 minutes (As it's the shortest check interval)
So the speed of your script will vary with the amount of data, latency of external services (assuming your script talks directly to such services). I would start with a much longer cron job start interval - assuming that your client/use case allows for that. If you follow the suggestion above to have your script automatically handle skipped times this isn't a problem. The more stuff you're processing the more time your script will eventually need. So on day 1 it might only need 1 second. But on day 300 it might need 15 minutes? (At that point you could decide that you want to have multiple processes/threads running at the same time with each one focused on a single campaign or range of campaigns. Who knows...) But you'll know because you have reports/alerts/logs on the start/end processing times.
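The "handle skipped runs" idea above can be sketched as a pure function that, given the last successful run, lists the interval start times still needing processing within a one-hour catch-up window. All names here are illustrative, not from the question:

```php
<?php
// Return the interval start times (Unix seconds) missed since $lastRun,
// limited to the past $window seconds; anything older is ignored and
// left for a manual re-run with command-line arguments.
function missedIntervals(int $lastRun, int $now, int $interval, int $window = 3600): array
{
    $missed = [];
    // Skip the interval that already ran, and don't reach back
    // further than the catch-up window.
    $cutoff = max($lastRun + $interval, $now - $window);
    // Align the first candidate to the interval grid.
    $t = (int) (ceil($cutoff / $interval) * $interval);
    for (; $t <= $now; $t += $interval) {
        $missed[] = $t;
    }
    return $missed;
}
```

The cron-started script would call this with the last-run timestamp stored in the database, process each returned interval, and update the state row as it goes, so a crashed or skipped run is picked up automatically on the next invocation.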

Running heavy PHP scripts in background [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I want to run more than 800 PHP scripts in the background simultaneously on Linux. Each script will execute forever, meaning it will not stop once it has started. Each script will send requests to and get responses from a server. How much RAM do I need for that? Will it be possible to run more than 800 scripts? What kind of hardware do I need?
You're probably doing it wrong. Since your scripts are I/O bound instead of CPU bound, an event loop will help you. That way you just need as many workers as CPU cores.
This approach not only lowers your required resources in terms of memory and CPU cycles, but also reduces the number of scripts you have to monitor.
There are various PHP implementations, here are the three most popular ones:
Amp
Icicle
React
Well, I'm sure the hardware you seek exists, but you will need a time machine to access it ... do you have a time machine ??
I'm going to assume you do not have access to, or plans to build a time machine, and say that this is not sensible.
In case humour didn't do it for you: there is no hardware capable of executing that many processes concurrently, and setting out to create an architecture that requires more threads than any commonly available hardware can execute is clearly a bad idea.
If all you are doing is I/O, then you should use non-blocking, asynchronous I/O.
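In plain PHP (without an event-loop library such as the ones listed above), non-blocking I/O over many sockets can be sketched with stream_select(). The endpoint is a placeholder and the handling logic is elided:

```php
<?php
// One process multiplexing many connections, instead of 800 processes
// each blocking on one connection. Endpoint is hypothetical.
$sockets = [];
for ($i = 0; $i < 100; $i++) {
    $s = stream_socket_client('tcp://example.com:80', $errno, $errstr,
                              30, STREAM_CLIENT_ASYNC_CONNECT);
    stream_set_blocking($s, false);
    $sockets[] = $s;
}

while ($sockets) {
    $read = $sockets;           // stream_select modifies this in place
    $write = null;
    $except = null;
    // Block until at least one socket is readable (1-second timeout).
    if (stream_select($read, $write, $except, 1) > 0) {
        foreach ($read as $s) {
            $chunk = fread($s, 8192);
            if ($chunk === '' || $chunk === false) {
                // Empty read after select() signals EOF: drop the socket.
                fclose($s);
                $sockets = array_filter($sockets, fn($x) => $x !== $s);
                continue;
            }
            // ... handle $chunk ...
        }
    }
}
```

The libraries above (Amp, Icicle, React) wrap this same mechanism in a friendlier callback/promise API and add timers, DNS, and protocol handling.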
Figuring out how much RAM you will need is simple: the amount of data held in memory during execution, multiplied by 800.
You can improve memory usage by setting variables to null as soon as you are done with their data; even if you will re-use the variables, I would highly recommend this. That way execution will not turn into a memory leak that fills up RAM and crashes your server.
$myVariable = null; // allows the value's memory to be reclaimed
The second part of your question, "execute forever", is easy too: you simply need to tell PHP to allow the script to run for a long time. Personally, though, I would do the following:
Setup 800 crons of your script all running every 1 hour.
I would assume your script runs an infinite loop, so: note the time into a variable before the loop, and inside the loop check whether 1 hour has passed; if it has, end the loop and the process (a new one will replace it).
Doing the above will ensure the process will be cleaned every hour, also if for some reason a process gets killed by the server due to resource or security checks the process will spring back up within the hour.
Of course you could lower this to 30 mins, 15 mins, 5 mins depending on how heavy each loop is and how often you want to re-establish the processes.
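The hourly self-terminating loop described above might look like this; doWork() is a placeholder for the real unit of work:

```php
<?php
// Has the time budget for this cron-started process expired?
function withinBudget(int $startedAt, int $now, int $budget = 3600): bool {
    return ($now - $startedAt) < $budget;
}

// Main loop (sketch): the process exits after an hour and the next
// cron-started process takes over. Left commented out so this file
// can be included without looping.
// set_time_limit(0);                 // allow long-running CLI execution
// $startedAt = time();
// while (withinBudget($startedAt, time())) {
//     doWork();                      // hypothetical unit of work
//     usleep(100000);                // avoid a busy spin between iterations
// }
```

The budget can be dropped to 30, 15, or 5 minutes by changing the third argument, matching the answer's suggestion.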

Will PHP sleep function slow down my site [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I’m looking at ways to send out 1 email every 1 min. I’ve looked at the example below, where the top answer is to use the PHP sleep() function.
However, I’ve also found suggestions that sleep() might slow down the server.
I’m not looking for exact answers but general approaches would be great.
However, I've also found suggestions that sleep might slow down the
server.
Yes, and hitting the pause button on a movie playing on your computer will lengthen the film's viewing time by however long you leave it paused.
The purpose of sleep is to put a pause in your script. As described in the official PHP documentation:
Delays the program execution for the given number of seconds.
So yes, it slows down your server. But only on content or pages where sleep is active.
So if this is a frontend script with sleep in it, it slows down everyone's ability to view content served by the PHP script that uses sleep. Place it in the middle of a page where HTML is rendering, with a 1-second delay, and your page now takes 1 second longer to render.
If this is a backend process that only you really know about or trigger, it's no big deal. It's a background process anyway, so slowing things down in that realm is expected.
That said, let’s look at your core question which is the first sentence of your post:
I’m looking at ways to send out 1 email every 1 min.
Then what you are looking for is a cron job which is a timed job on a Unix/Linux system. An entry for a cron job for something sending mails out every minute might be something like this:
* * * * * /full/path/to/php /full/path/to/your/php/script.php
But that is superficial: it just triggers script.php every minute. Within script.php you would have to create the core logic that controls what happens on each trigger. If you are using a database, you could add a last_sent field in which you store a timestamp of the last time a mail was sent to a recipient, and then act on that. But again, the logic depends on your core needs.
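The last_sent idea could be sketched like this; the table, column names, and connection details are assumptions:

```php
<?php
// Invoked by cron every minute: pick the recipient whose last_sent is
// oldest (or NULL, i.e. never sent), send one mail, record the time.
$db = new PDO("mysql:host=localhost;dbname=mydb", 'user', 'pass');

$row = $db->query(
    "SELECT id, email FROM recipients
     WHERE last_sent IS NULL OR last_sent < NOW() - INTERVAL 1 MINUTE
     ORDER BY last_sent IS NOT NULL, last_sent
     LIMIT 1"
)->fetch(PDO::FETCH_ASSOC);

if ($row) {
    mail($row['email'], 'Subject', 'Body');   // or your mailer of choice
    $stmt = $db->prepare("UPDATE recipients SET last_sent = NOW() WHERE id = ?");
    $stmt->execute([$row['id']]);
}
```

Because the rate limit lives in the data rather than in a sleep() call, a crashed or delayed run simply resumes where it left off on the next cron tick.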
But at the end of the day, I am not too clear how sleep would factor into any of this. It might be worth taking a step back and better architecting your script to fit your needs, knowing what cron and sleep are, and what they are not.
It is generally done with a separate worker and a queue manager.
That is: you have a queue manager (e.g. RabbitMQ) that a mail-sending worker is bound to.
When you need to send 10 emails, the script serving the HTTP response puts all of them into the corresponding queue at once. This step is immediate.
Then the worker reads the emails one by one and sends them with the required delay. This step takes some time, but we don't care.
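With RabbitMQ, the enqueue side might look like this using the php-amqplib package; the queue name, connection details, and message shape are assumptions:

```php
<?php
require 'vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

// Enqueue the emails during the HTTP request; this returns immediately.
$conn = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $conn->channel();
$channel->queue_declare('emails', false, true, false, false); // durable queue

foreach ($recipients as $address) {
    $msg = new AMQPMessage(json_encode(['to' => $address]));
    $channel->basic_publish($msg, '', 'emails');
}

$channel->close();
$conn->close();
```

A separate long-running worker process then consumes from 'emails' with basic_consume() and sleeps a minute between sends, keeping the delay entirely out of the web request.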

PHP elapsed time changes after certain time [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have calculated the elapsed time of two processes in my program. When I execute the program, the elapsed time is 0.203 sec, but if I leave the page for a while the elapsed time changes and becomes 0.173. What is the reason for this? My PHP program is:
include ('db.php');
$data=array();
session_start();
$start_time= microtime(true);
if (isset($_SESSION['img']))
{
$image=$_SESSION['img'];
$addr="C:/Users/adithi.a/Desktop/FashionSearch/trial/db_features/distrib/db_features.exe $image";
exec($addr,$data);
/*for($i=18;$i<=34;$i++)
{
if($i!=30)
{
echo $data[$i]."<br>";
}
}*/
$start_time1=microtime(true);
$result=mysql_query("select tbl_features.img_id,img_path,((pow(($data[18]-features_1),2))+(pow(($data[19]-features_2),2))+(pow(($data[20]-features_3),2))+(pow(($data[21]-features_4),2))+(pow(($data[22]-features_5),2))+(pow(($data[23]-features_6),2))+(pow(($data[24]-features_7),2))+(pow(($data[25]-features_8),2))+(pow(($data[26]-features_9),2))+(pow(($data[27]-features_10),2))+(pow(($data[28]-features_11),2))+(pow(($data[29]-features_12),2))+(pow(($data[31]-features_13),2))+(pow(($data[32]-features_14),2))+(pow(($data[33]-features_15),2))+(pow(($data[34]-features_16),2))) as distance from tbl_features join tbl_image where tbl_features.img_id=tbl_image.img_id AND tbl_features.img_id>=92303 AND tbl_features.img_id<124232 ORDER BY distance ASC LIMIT 6") or die(mysql_error());
while($num=mysql_fetch_assoc($result))
{
echo "<a href='Dressinformation.php?image=$num[img_id]'><div class='imgdiv'><img src='$num[img_path]'></div></a>";
//echo $num["img_id"]." ".$num["img_path"]." ".$num["distance"]."<br>";
}
$stop_time1= microtime(true);
$time1=$stop_time1-$start_time1;
print "Euclidean distance time is $time1 seconds";
$stop_time= microtime(true);
$time=$stop_time-$start_time;
print "elapse time was $time seconds.";
}
else
{
echo "Please upload image";
}
There are tens of factors involved. Just listing some:
If the HD is busy, it will affect how long your program takes to load. On Windows it usually is, since it's constantly doing ACL queries for reasons beyond me. Even if most of what is going on is cached in some form, PHP alone doesn't cache scripts and will read and parse them every time.
Your MySQL server is probably running in dev mode, which decreases its responsiveness and memory usage to prioritize the user's processes. It also may or may not have the rows you are trying to SELECT in cache, and when we are talking about milliseconds that can make a lot of difference. Running the same query several times may cause MySQL to make it available in cache, which can explain why it becomes faster after a while.
Since Windows XP there is the Prefetcher, a technology that keeps a record of what your program needs to load so it will load faster the next time you run it. If you ever noticed how much slower a program is the first time you run it on your machine, you now see why. From the second run on, it will speed up thanks to the Prefetcher. But as far as I know it makes no further improvements on each subsequent run, so it's probably of little effect here.
And since Windows Vista, every Windows version also comes with SuperFetch, a technology that keeps commonly used programs pre-loaded in the "non-used" part of RAM; if yours was lucky enough to be elected, that may explain the boost. That's also why, from Windows Vista on, a machine performs so much better with larger amounts of RAM, even if you don't use it all.