PHP elapsed time changes after certain time [closed] - php

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
I have calculated the elapsed time of two processes in my program. When I execute the program the elapsed time is 0.203 sec, but if I leave the page for a while the elapsed time changes and becomes 0.173. What is the reason for this? My PHP program is:
include('db.php');
$data = array();
session_start();
$start_time = microtime(true);
if (isset($_SESSION['img']))
{
    $image = $_SESSION['img'];
    $addr = "C:/Users/adithi.a/Desktop/FashionSearch/trial/db_features/distrib/db_features.exe $image";
    exec($addr, $data);
    /*
    for ($i = 18; $i <= 34; $i++)
    {
        if ($i != 30)
        {
            echo $data[$i]."<br>";
        }
    }
    */
    $start_time1 = microtime(true);
    $result = mysql_query("select tbl_features.img_id,img_path,((pow(($data[18]-features_1),2))+(pow(($data[19]-features_2),2))+(pow(($data[20]-features_3),2))+(pow(($data[21]-features_4),2))+(pow(($data[22]-features_5),2))+(pow(($data[23]-features_6),2))+(pow(($data[24]-features_7),2))+(pow(($data[25]-features_8),2))+(pow(($data[26]-features_9),2))+(pow(($data[27]-features_10),2))+(pow(($data[28]-features_11),2))+(pow(($data[29]-features_12),2))+(pow(($data[31]-features_13),2))+(pow(($data[32]-features_14),2))+(pow(($data[33]-features_15),2))+(pow(($data[34]-features_16),2))) as distance from tbl_features join tbl_image where tbl_features.img_id=tbl_image.img_id AND tbl_features.img_id>=92303 AND tbl_features.img_id<124232 ORDER BY distance ASC LIMIT 6") or die(mysql_error());
    while ($num = mysql_fetch_assoc($result))
    {
        echo "<a href='Dressinformation.php?image=$num[img_id]'><div class='imgdiv'><img src='$num[img_path]'></div></a>";
        //echo $num["img_id"]." ".$num["img_path"]." ".$num["distance"]."<br>";
    }
    $stop_time1 = microtime(true);
    $time1 = $stop_time1 - $start_time1;
    print "Euclidean distance time is $time1 seconds";
    $stop_time = microtime(true);
    $time = $stop_time - $start_time;
    print "elapse time was $time seconds.";
}
else
{
    echo "Please upload image";
}

There are dozens of factors involved. Just listing some:
If the disk is busy, it will affect how long your program takes to load. On Windows it usually is, since Windows is constantly doing ACL checks for reasons beyond me. Even if most of what is going on is cached in some form, PHP by itself (without an opcode cache) doesn't cache scripts and will read and parse them every time.
Your MySQL server is probably running in a development configuration, which decreases its responsiveness and memory usage to prioritize the user's processes. It also may or may not have the rows you are trying to SELECT in its cache, and of course when we are talking about milliseconds that can make a lot of difference. Running the same query several times may cause MySQL to make it available in the cache, which can explain why it becomes faster after a while.
Since Windows XP there has been the Prefetcher, a technology that keeps a record of what your program needs to load so that it loads faster the next time you run it. If you ever noticed how much slower a program is the first time you run it on your machine, now you know why. From the second run on, it will speed up thanks to the Prefetcher. But as far as I know it makes no further improvements on each subsequent run, so it's probably of little effect here.
And since Windows Vista, every Windows version also comes with SuperFetch, a technology that keeps commonly used programs pre-loaded in the otherwise unused part of RAM, and if yours was lucky enough to be elected, that may explain why this boost happened. That's also why, from Windows Vista on, a machine performs so much better with larger amounts of RAM, even if you don't use it all.
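The warm-up effect described above is easy to observe directly. Here is a minimal sketch (not from the question; the busy loop is just a stand-in for the exec() and query work) that times the same operation several times, so cold and warm runs can be compared:

```php
<?php
// Time the same callable several times; a single measurement is mostly noise.
function timeIt(callable $fn, int $runs = 5): array {
    $times = [];
    for ($i = 0; $i < $runs; $i++) {
        $start = microtime(true);
        $fn();
        $times[] = microtime(true) - $start;
    }
    return $times;
}

// Stand-in workload for the exec() + query being measured in the question.
$times = timeIt(function () {
    for ($i = 0; $i < 10000; $i++) { $x = $i * $i; }
});

$avg = array_sum($times) / count($times);
```

Comparing the first element of $times with the later ones shows how much of any single measurement is warm-up and noise rather than the work itself.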


PHP execution takes longer than it should, why? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 7 years ago.
I have a large database with 1800 contact records. A PHP script is being used to facilitate a "Word Mailmerge". When I do a mail merge with 500 records, it takes around 40 seconds, but doing the same for all 1800 records takes around 300 seconds, when it should take no more than 180-200 seconds.
I have tweaked the php.ini and php-fpm config and increased some values, but saw no improvement. Is this normal for PHP when processing a large number of records?
Is it exactly 300 seconds (i.e 5 minutes) every time? If so, I think you might be getting stuck on one of the records somehow and PHP is hitting its max_execution_time.
Try running your script with the latter 1300 of the 1800 (by adding LIMIT 501, 1300 to your query). Does it take 300s?
Try splitting the recordset in half (by adding LIMIT 0, 900 in one attempt then LIMIT 901,900 in the next). Take the set that results in the 300s execution time and split that in half -- continue until you find the record that is causing your trouble.
Investigate that record and see why it's causing an infinite loop or hang.
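The halving procedure above could be sketched like this; it assumes a mysqli connection in $link, and the table/column names and the per-record work are placeholders, not the asker's actual schema:

```php
<?php
// Time how long it takes to process a LIMIT window of records
// (a sketch: "contacts" and the per-record work are hypothetical).
function timeRange(mysqli $link, int $offset, int $count): float {
    $start = microtime(true);
    $result = mysqli_query($link, "SELECT * FROM contacts LIMIT $offset, $count");
    while ($row = mysqli_fetch_assoc($result)) {
        // ... the mail-merge work for one record ...
    }
    return microtime(true) - $start;
}

// Bisect: compare timeRange($link, 0, 900) with timeRange($link, 900, 900),
// keep the slow half, split it again, and repeat until one record is isolated.
```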
Also, do you have E_ALL error reporting on? You may have Notices or Warnings that may shed some light on the issue.
UPDATE:
In your script, where you have your query string defined, add an echo with a concatenated <br> after the assignment and before the query call itself. Then wrap that echo in ob_end_flush(); and ob_start();.
For example, if you had code like this:
$query = "SELECT * FROM {contacts};";
$result = mysqli_query($link, $query); // $link is your mysqli connection
change it to:
$query = "SELECT * FROM {contacts};";
ob_end_flush();
# CODE THAT NEEDS IMMEDIATE FLUSHING
echo $query . "<br>";
ob_start();
$result = mysqli_query($link, $query); // $link is your mysqli connection
This will make PHP echo the query string immediately, without waiting for the script execution to end or for the buffer to fill. Then you can see the output while the script is still running and determine immediately what's actually happening (either the script is getting hung on a single, specific record, OR the execution time is legitimately increasing as you iterate over an increasing number of records).
If you actually share your code, I'd be able to help a lot more.
1800 is a very small number of records and shouldn't take any amount of time at all to process. Can you provide a little more detail on the process itself? Is it a webpage / background process? Are you sending the same data to all of these users?
This type of thing should be handled by a worker / background process, so typically you shouldn't care how long it takes (within reason), providing the job completes. That said, because it is such a small range, you could troubleshoot the process by sending 250 first and setting the base time for what you expect an average batch of 250 records to take. Increase by 250 each time. If the average time increases each time you increment the batch size, then you can be pretty sure you have an issue with the code - in which case post it here and we can try to resolve it for you. If the average increases only within a specific window (e.g. between 750 and 1000), then it is likely a data issue.
This is as simple an approach as I can suggest without seeing any code or having any extra detail.
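A rough sketch of that ramp-up test; the per-record callable and the batch contents are hypothetical stand-ins for the actual mail-merge work:

```php
<?php
// Process one batch and return the average time per record
// (sketch: $processOne stands in for the real mail-merge step).
function averagePerRecord(array $records, callable $processOne): float {
    $start = microtime(true);
    foreach ($records as $record) {
        $processOne($record);
    }
    return (microtime(true) - $start) / max(1, count($records));
}

// Compare batches of 250, 500, 750, ...: a steadily rising average
// suggests a code issue; a spike in one window suggests a data issue.
```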

Running heavy PHP scripts in background [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 7 years ago.
I want to run more than 800 PHP scripts in the background simultaneously on Linux. Each script will execute forever, meaning it will not stop once it has started. Each script will send requests to, and get responses from, a server. How much RAM do I need for that? Will it be possible to run more than 800 scripts? What kind of hardware do I need?
You're probably doing it wrong. Since your scripts are I/O bound instead of CPU bound, an event loop will help you. That way you just need as many workers as CPU cores.
This approach not only lowers your required resources in terms of memory and CPU cycles, but also reduces the number of scripts you have to monitor.
There are various PHP implementations, here are the three most popular ones:
Amp
Icicle
React
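As a rough illustration of the event-loop idea with React (a sketch only: it assumes react/event-loop ^1.x installed via Composer, and the job body is a placeholder, not the asker's actual scripts):

```php
<?php
// Instead of 800 separate processes, register 800 periodic jobs on ONE loop.
// Assumes: composer require react/event-loop (^1.x, where Loop:: exists).
require __DIR__ . '/vendor/autoload.php';

use React\EventLoop\Loop;

for ($i = 0; $i < 800; $i++) {
    Loop::addPeriodicTimer(1.0, function () use ($i) {
        // Hypothetical job body for worker $i: send a request, handle
        // the response. Use non-blocking clients here so one slow job
        // does not stall the other 799.
    });
}

Loop::run();
```

One process now multiplexes all the jobs; memory is roughly one PHP runtime plus the per-job state, instead of 800 full interpreters.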
Well, I'm sure the hardware you seek exists, but you will need a time machine to access it ... do you have a time machine ??
I'm going to assume you do not have access to, or plans to build a time machine, and say that this is not sensible.
In case humour didn't do it for you: there is no hardware capable of executing that many processes concurrently, and setting out to create an architecture that requires more threads than any commonly available hardware can execute is clearly a bad idea.
If all you are doing is I/O, then you should use non-blocking, asynchronous I/O.
Figuring out how much RAM you will need is simple: the amount of data held in memory during execution x 800.
You can improve memory usage by setting variables to null as soon as you are done with the data; even if you will re-use the variables later, I would highly recommend this. That way the execution will not turn into a memory leak that fills up RAM and crashes your server.
$myVariable = null; // frees the value for garbage collection
The second part of your question, "execute forever", is easy too: you simply need to tell PHP to allow the script to run for a long time. Personally, though, I would do the following:
Set up 800 cron entries for your script, all running every hour.
I assume your script is in an infinite loop: note the time into a variable before the loop, and inside the loop check whether 1 hour has passed; if it has, end the loop and the process (a new one will replace it).
Doing the above will ensure the process will be cleaned every hour, also if for some reason a process gets killed by the server due to resource or security checks the process will spring back up within the hour.
Of course you could lower this to 30 mins, 15 mins, 5 mins depending on how heavy each loop is and how often you want to re-establish the processes.
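The time-bounded loop described above might look like this (a sketch; the function name, the job callable, and the one-hour limit are illustrative):

```php
<?php
// Run $job repeatedly until $maxRuntime seconds have elapsed, then exit
// so the cron entry can start a fresh process.
function runWorker(int $maxRuntime, callable $job): int {
    $started = time();
    $iterations = 0;
    while (true) {
        $job();          // one unit of work (request/response)
        $iterations++;
        if (time() - $started >= $maxRuntime) {
            break;       // clean exit; cron respawns us within the hour
        }
    }
    return $iterations;
}

// In production: set_time_limit(0); runWorker(3600, 'doOneRequest');
// Lower 3600 to 1800/900/300 to recycle processes more often.
```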

Will PHP sleep function slow down my site [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 8 years ago.
I'm looking at ways to send out 1 email every minute. I've looked at the example below, where the top answer is to use PHP's sleep() function.
However, I’ve also found suggestions that sleep() might slow down the server.
I’m not looking for exact answers but general approaches would be great.
However, I've also found suggestions that sleep might slow down the server.
Yes, and hitting the pause button on a movie playing on your computer will slow down the duration of the film based on the amount of time you pause the movie.
The purpose of sleep is to put a pause in your script. As described in the official PHP documentation:
Delays the program execution for the given number of seconds.
So yes, it slows down your server. But only on content or pages where sleep is active.
So if this is a frontend script with sleep in it, it slows down the ability for anyone to view content via the PHP script that uses sleep. Place it in the middle of a page where HTML is rendering, with a 1 second delay, and your page now takes 1 second longer to render.
If this is a backend process only you really know about or trigger, no big deal. It’s a background process anyway so it will just expectedly slow things down in that realm.
That said, let’s look at your core question which is the first sentence of your post:
I’m looking at ways to send out 1 email every 1 min.
Then what you are looking for is a cron job which is a timed job on a Unix/Linux system. An entry for a cron job for something sending mails out every minute might be something like this:
* * * * * /full/path/to/php /full/path/to/your/php/script.php
But that is superficial. It basically just triggers script.php every minute. Then, within your script.php, you would have to create some core logic that controls what happens each time it's triggered. If you are using a database, then maybe you could create a last_sent field where you set a timestamp of the last time a mail was sent to a recipient, and then act on that. But again, the logic is based on your core needs.
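For illustration only, a script.php along those lines might look like this; the mail_queue table, its columns, and the connection details are assumptions, not something from the question:

```php
<?php
// Sketch: send at most one queued email per cron invocation, recording
// last_sent in the database. Table/column names are hypothetical.
$db = new mysqli('localhost', 'user', 'pass', 'app');

$row = $db->query(
    "SELECT id, recipient FROM mail_queue
     WHERE last_sent IS NULL
     ORDER BY id LIMIT 1"
)->fetch_assoc();

if ($row) {
    mail($row['recipient'], 'Subject', 'Body');

    // Mark the row so the next minute's run picks the next recipient.
    $stmt = $db->prepare("UPDATE mail_queue SET last_sent = NOW() WHERE id = ?");
    $stmt->bind_param('i', $row['id']);
    $stmt->execute();
}
```

Because cron fires the script once a minute, this naturally yields one email per minute with no sleep() at all.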
But at the end of the day, I am not too clear how sleep would factor into any of this. It might be worth taking a step back and better architecting your script to fit your needs, knowing what cron is, what sleep is, and what they are as well as what they are not.
It is generally done with a separate worker and a queue manager.
That's it: you have a queue manager (e.g. RabbitMQ) that an email-sending worker is bound to.
Then, when you need to send 10 emails, you put all of them into the corresponding queue at once in the script that serves the HTTP response. This step is immediate.
Then the worker reads the emails one by one and sends them with the required delay. This step takes some time, but we don't care.
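The enqueueing side might be sketched with php-amqplib (an assumption: composer require php-amqplib/php-amqplib; the queue name "emails" and the $emails array are illustrative):

```php
<?php
// Sketch: push emails onto a RabbitMQ queue from the HTTP request handler.
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$conn = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $conn->channel();
// durable queue so queued mail survives a broker restart
$channel->queue_declare('emails', false, true, false, false);

foreach ($emails as $email) {   // $emails built earlier in the request
    $channel->basic_publish(new AMQPMessage(json_encode($email)), '', 'emails');
}

$channel->close();
$conn->close();
```

The worker is a separate long-running script that consumes from 'emails' and calls sleep(60) (or similar pacing) between sends, where the delay hurts nobody.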

Scalable job queue system for large scale task scheduling [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 3 years ago.
The scenario:
TL;DR - I need a queue system for triggering jobs based on a future timestamp and NOT on the order it is inserted
I have a MySQL database of entries that detail particular events to be performed (mostly a series of arithmetic calculations and a database insert/update) in a precise sequence based on timestamps. The time an entry is inserted and the time its event will be "performed" have no correlation and are determined by outside factors. The table also contains a second column of milliseconds, which increases the timing precision.
This table is part of a job "queue" which will contain entries set to execute from anywhere between a few seconds to a few days in the future, and can potentially have up to thousands of entries added every second. The queue needs to be parsed constantly (every second?) - perhaps by doing a select of all timestamps that have expired during this second and sorting by the milliseconds, and then executing each event detailed by the entries.
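For concreteness, the per-second poll I have in mind would look roughly like this (mysqli assumed; job_queue, run_at, and run_ms are placeholder names, not my actual schema):

```php
<?php
// Sketch of one polling pass: fetch every job whose timestamp has expired,
// ordered by the millisecond column, and execute each in sequence.
$due = mysqli_query($link,
    "SELECT id, payload FROM job_queue
     WHERE run_at <= NOW()
     ORDER BY run_at ASC, run_ms ASC");

while ($job = mysqli_fetch_assoc($due)) {
    // perform the calculations and insert/update for this job,
    // then delete or mark the row so it is not picked up again
}
```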
The problem
Currently the backend is completely written in PHP on an apache server with MySQL (ie standard LAMP architecture). Right now, the only way I can think of to achieve what I've specified is to write a custom PHP job queue script that will do the parsing and execution, looped every second using this method. There are no other job systems that I'm aware of which can queue jobs according to a specified timestamp/millisecond rather than the entry time.
This method, however, sounds rather infeasible CPU-wise even on paper - I have to perform a huge MySQL query every second and execute some sort of function for each row retrieved, with the possibility of it running over a second of execution time, which would start introducing delays into the parsing and throw off the looping script.
I am of course attempting to create a solution that will scale should there be heavy traffic on the system, and this solution fails miserably at that, as it will fall further behind as the number of entries grows.
The questions
I'd prefer to stick to the standard LAMP architecture, but is there any other technology I can integrate nicely into the stack that is better equipped to deal with what I'm attempting to do here?
Is there another method entirely to accurately trigger events at a specified future date, without the messy fiddling about with constant queue checking?
If neither of the above options are suitable, is there a better way to loop the PHP script in the background? In the worst case scenario I can accept the long execution times and split the task up between multiple 'workers'.
Update
RabbitMQ was a good suggestion, but unfortunately it doesn't execute the task as soon as it 'expires' - the task has to go through a queue first and wait behind any tasks in front of it that have yet to expire. The expiry time ranges from a few seconds to a few days, and the queue would need to be re-sorted each time a new event is added so that the expiry times stay in order. As far as I'm aware that isn't possible in RabbitMQ, and it doesn't sound very efficient either. Is there an alternative or a programmatic fix?
Sometimes, making a square peg fit into a round hole takes too much effort. While using MySQL to create queues can be effective, it gets much trickier to scale. I would suggest that this might be an opportunity for RabbitMQ.
Basically, you would set up a message queue that you can put the events into. You would then have a "fanout" architecture with your workers processing each queue. Each worker would listen to the queue and check whether the particular event needs to be processed. I imagine that a combination of the "Work Queues" and "Routing" techniques available in Rabbit would achieve what you are looking for in a scalable and reliable way.
I would envision a system that works something like this:
spawn workers to listen to queues, using routing keys to prune down how many messages they get
each worker checks the messages to see if they are to be performed now
if the message is to be performed, perform it and acknowledge -- otherwise, re-dispatch the message for future processing. There are some simple techniques available for this.
As you need more scale, you add more workers. RabbitMQ is extremely robust and easy to cluster as well, when you eventually max out your queue server. There are also other cloud-based queuing systems, such as Iron.IO and StormMQ.
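A sketch of such a worker using php-amqplib (assumptions: composer require php-amqplib/php-amqplib at ^3.x, where $msg->ack()/nack() exist; the queue name and the "run_at" message field are illustrative):

```php
<?php
// Sketch of the worker loop from the steps above: consume, check whether
// the job is due, and either perform+ack or requeue for a later pass.
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;

$conn = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $conn->channel();
$channel->queue_declare('jobs', false, true, false, false);

$channel->basic_consume('jobs', '', false, false, false, false,
    function ($msg) {
        $job = json_decode($msg->body, true);
        if ($job['run_at'] <= time()) {
            // due: perform the job, then acknowledge it
            $msg->ack();
        } else {
            // not due yet: reject and requeue for a future pass
            $msg->nack(true);
        }
    });

while ($channel->is_consuming()) {
    $channel->wait();
}
```

Note the caveat from the update above still applies: plain requeueing burns CPU re-checking undue jobs, so in practice a delayed-message plugin or a time-bucketed routing key per window works better.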

Php takes too long to be executed, need thread [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 8 years ago.
I've been learning to program in PHP and made an application which does several independent things. The problem is that it takes about 20-30 seconds to finish the task, because the code is executed sequentially.
I was reading and found out that there are no threads in PHP. Is there any way to get around this?
Edit: added information:
Basically, my application fetches information about news, weather, etc. (with file_get_contents($url)), but it performs the functions sequentially; in other words, it first fetches the news, then the weather information, and so on, instead of running it all at the same time.
Use some kind of job-queuing software like Gearman or RabbitMQ, then put those operations in the consumer.
Use curl_multi instead; it's much faster: http://php.net/manual/en/function.curl-multi-init.php
It will noticeably reduce the loading/processing time if you are reading numerous pages.
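A minimal sketch of the curl_multi pattern (the URLs are placeholders for the news/weather endpoints mentioned in the question):

```php
<?php
// Fetch several URLs in parallel instead of sequential file_get_contents().
$urls = ['https://example.com/news', 'https://example.com/weather'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body as string
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until every handle has finished.
do {
    curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running > 0);

$results = [];
foreach ($handles as $url => $ch) {
    $results[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```

Total wall time is now roughly the slowest single request rather than the sum of all of them.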
You could also try to hack in some threading behaviour by launching different requests to your webserver at the same time. For instance, your index.php would serve a simple page which contains a number of AJAX calls to, say, fetchNews.php and fetchWeather.php. These requests would then be run asynchronously, in parallel, by the browser, and you'd circumvent PHP's lack of threading by just launching different webserver requests.
You mention that you're doing a bunch of file_get_contents($url)-calls. These are pretty slow. It would be a huge timesaver if instead of pulling these files in every time you load the page, you cache them to local storage and read them from there: that would be almost instant. Of course, you'd need to keep in mind how fresh you need your information.
For instance you could run a cron job that fetches these files every minute or so. Then you could have your website render this fetched information: the information would only be max 1 minute + the time it takes to run that script out of date.
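The read side of that cron-plus-cache scheme could be sketched like this (the cache path, max age, and URL are illustrative; the cron job does the periodic writing):

```php
<?php
// Serve the locally cached copy when fresh; fall back to a live fetch
// (and refresh the cache) only when the cron-written file is stale/missing.
$cacheFile = '/var/cache/myapp/news.json';
$maxAge = 60; // seconds; matches the cron interval

if (is_file($cacheFile) && time() - filemtime($cacheFile) < $maxAge) {
    $news = file_get_contents($cacheFile);   // near-instant local read
} else {
    $news = file_get_contents('https://example.com/news'); // slow fallback
    file_put_contents($cacheFile, $news);
}
```

Page loads then cost a local file read instead of a remote HTTP round trip, and the data is at most one cron interval out of date.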
