What is a practical use for PHP's sleep()?

I just had a look at the docs on sleep().
Where would you use this function?
Is it there to give the CPU a break in an expensive function?
Any common pitfalls?

One place where it finds use is to create a delay.
Let's say you've built a crawler that uses curl/file_get_contents to fetch remote pages. You don't want to bombard the remote server with too many requests in a short time, so you introduce a delay between consecutive requests.
sleep() takes its argument in seconds; its friend usleep() takes microseconds and is more suitable in some cases.
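For illustration, a minimal sketch of that kind of polite crawler (the URL list and the one-second pause are just placeholders):
$urls = array('http://example.com/page1', 'http://example.com/page2'); // hypothetical list
foreach ($urls as $url) {
    $html = file_get_contents($url);
    // ... parse and store $html ...
    sleep(1); // wait a second before hitting the server again
}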

Another example: You're running some sort of batch process that makes heavy use of a resource. Maybe you're walking the database of 9,000,000 book titles and updating about 10% of them. That process has to run in the middle of the day, but there are so many updates to be done that running your batch program drags the database server down to a crawl for other users.
So you modify the batch process to submit, say, 1000 updates, then sleep for 5 seconds to give the database server a chance to finish processing any requests from other users that have backed up.
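A rough sketch of that pattern (the result set and the update function are made up):
$processed = 0;
foreach ($titlesToUpdate as $title) { // hypothetical result set of titles needing an update
    update_book_title($title);        // hypothetical update function
    if (++$processed % 1000 === 0) {
        sleep(5); // give the database a chance to serve other users' queries
    }
}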

Here's a snippet of how I use sleep in one of my projects:
foreach ($addresses as $address)
{
    $url = "http://maps.google.com/maps/geo?q=" . urlencode($address) . "&output=json...etc...";
    $result = file_get_contents($url);
    $geo = json_decode($result, TRUE);
    // Do stuff with $geo
    sleep(1);
}
In this case sleep() helps me avoid being blocked by Google Maps for sending too many requests to the server.

Old question I know, but another reason for using u/sleep can be when you are writing security/cryptography code, such as an authentication script. A couple of examples:
You may wish to reduce the effectiveness of a potential brute force attack by making your login script purposefully slow, especially after a few failed attempts.
You might also wish to add an artificial delay during encryption to mitigate timing attacks. I know the chances are slim that you're going to be writing such in-depth encryption code in a language like PHP, but it's still valid, I reckon.
EDIT
Using u/sleep against timing attacks is not a good solution. You can still get the important data in a timing attack, you just need more samples to filter out the noise that u/sleep adds.
You can find more information about this topic in: Could a random sleep prevent timing attacks?
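For the brute-force case mentioned above, a minimal sketch might look like this ($failedAttempts and check_credentials() are made-up placeholders, and bear in mind the sleeping request still occupies a connection):
if ($failedAttempts > 3) {
    sleep(min($failedAttempts, 10)); // slow down repeated failures, capped at 10 seconds
}
if (!check_credentials($username, $password)) {
    $failedAttempts++; // persist this counter in the session or database
    exit('Login failed.');
}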

Another way to use it: suppose you want to execute a cronjob more often than every minute. I use the following code for this:
sleep(30);
include 'cronjob.php';
I call this file, as well as cronjob.php itself, every minute, so the job effectively runs every 30 seconds.

This is a bit of an odd case...file transfer throttling.
In a file transfer service we ran a long time ago, the files were served from 10Mbps uplink servers. To prevent the network from bogging down, the download script tracked how many users were downloading at once, and then calculated how many bytes it could send per second per user. It would send part of this amount, then sleep a moment (1/4 second, I think) then send more...etc.
In this way, the servers ran continuously at about 9.5Mbps, without having uplink saturation issues...and always dynamically adjusting speeds of the downloads.
I wouldn't do it this way, or in PHP, now...but it worked great at the time.
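In outline it looked something like this (the numbers and helper are illustrative, not the original code):
$bytesPerSecond = compute_per_user_budget(); // hypothetical: uplink capacity / active downloads
$chunkSize = (int) ($bytesPerSecond / 4);    // send a quarter of the budget at a time
$fp = fopen($filePath, 'rb');
while (!feof($fp)) {
    echo fread($fp, $chunkSize);
    flush();
    usleep(250000); // pause a quarter of a second between chunks
}
fclose($fp);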

You can use sleep to pause script execution... for example to delay an AJAX response on the server side, or to implement an observer. You can also use it to simulate delays.
I also use it to delay sendmail() & co.
Some people use sleep() to prevent DoS attacks and login brute-forcing; I don't agree, because you also need checks to stop the user from running the script multiple times in parallel.
Also check out usleep.

I had to use it recently when I was utilising Google's Geolocation API. Every address in a loop needed a call to Google's server, and each call needed a bit of time to return a response. I used usleep(500000) to give everything involved enough time.

I wouldn't typically use it for serving web pages, but it's useful for command line scripts.
$ready = false;
do {
    $ready = some_monitor_function();
    sleep(2);
} while (!$ready);

Super old post, but I thought I would comment as well.
I recently had to wait for a VERY long-running process that created some files. So I made a function that wraps a cURL request: if the file I'm looking for doesn't exist yet, I sleep the PHP script and check again in a bit:
function remoteFileExists() {
    $curl = curl_init('domain.com/file.ext');
    // don't fetch the actual page, you only want to check the connection is ok
    curl_setopt($curl, CURLOPT_NOBODY, true);
    // do request
    $result = curl_exec($curl);
    // assume nothing unless we get a usable response
    $message = '';
    // if request did not fail
    if ($result !== false) {
        // if request was ok, check response code
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        if ($statusCode == 404) {
            // file isn't there yet: close the handle, wait, then check again
            curl_close($curl);
            sleep(7);
            return remoteFileExists();
        }
        $message = 'exists';
    }
    curl_close($curl);
    return $message;
}
echo remoteFileExists();

One application: if a script sends mail to 100+ customers, the whole run may take only a second or two, and providers like Hotmail and Yahoo may treat that burst as spam. To avoid this, add a short delay after every mail.
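Something along these lines (the two-second pause is a guess; tune it for your provider):
foreach ($customers as $customer) { // hypothetical list of recipients
    mail($customer['email'], $subject, $body);
    sleep(2); // space the messages out instead of sending them in one burst
}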

Among others: you are testing a web application that makes asynchronous requests (AJAX calls, lazy image loading, ...).
You are testing it locally, so responses are immediate: there is only one user (you) and no network latency.
Adding sleep lets you see/test how the web app behaves when load and the network delay requests.

A quick pseudo-code example of a case where you don't want to get millions of alert emails for a single event, but you do want your monitoring script to keep running:
if (CheckSystemCPU() > 95) {
    SendMeAnEmail();
    sleep(1800); // don't alert again for at least 30 minutes
}

Related

How does Long Polling or Comet Work with PHP?

I am making a notification system for my website. I want logged-in users to be notified immediately when a new notification is created. As many people say, there are only a few ways of doing so.
One is writing some JavaScript code that asks the server "Are there any new notifications?" at a given interval. That's called "polling" (if I'm right).
Another is "long polling" or "Comet". As Wikipedia says, long polling is similar to polling, but instead of the client asking every time, the server holds the request open and sends new notifications to the client as soon as they are available.
So how can I use long polling with PHP? (I don't need full source code, just a way of doing it.)
What's its architecture/design, really?
The basic idea of long polling is that you send a request which the server does NOT respond to or terminate until some desired condition is met. I.e. the server doesn't "finish" serving the request by sending the response right away. You can achieve this by keeping the execution in a loop on the server side.
Imagine that in each iteration of the loop you do a database query, or whatever is necessary to find out whether the condition you need is now true. Only when it IS do you break the loop and send the response to the client. When the client receives the response, it immediately re-sends the "long-polling" request so it won't miss the next "notification".
A simplified example of the server-side PHP code for this could be:
// Set the loop to run 28 times, sleeping 2 seconds between each iteration.
for ($i = 1; $i < 29; $i++) {
    // Find out if the condition is satisfied.
    // If YES, break the loop and send the response.
    sleep(2);
}
// If nothing happened (the condition wasn't satisfied) during the 28 iterations,
// respond with a special response indicating no results. This helps avoid
// hitting 'max_execution_time'. Still, the client should re-send the
// long-polling request even in this case.
You can use (or study) some existing implementations, like Ratchet. There are a few others.
Essentially, you need to avoid having Apache or the web server handle the request. Just like you would with a node.js server, you can start PHP from the command line and use the server socket functions to create a server, with socket_select to handle communications.
It could technically work through the web server by keeping a loop active. However, the memory overhead of keeping a PHP process alive per HTTP connection is typically too high. Creating your own server allows you to share memory between connections.
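A very rough sketch of that command-line approach, assuming the sockets extension, a plain TCP protocol (no HTTP/WebSocket framing) and an arbitrary port:
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '0.0.0.0', 8090); // port is arbitrary
socket_listen($server);
$clients = array();
while (true) {
    $read = $clients;
    $read[] = $server;
    $write = $except = null;
    if (socket_select($read, $write, $except, 5) < 1) {
        continue; // nothing happened within 5 seconds; loop again
    }
    if (in_array($server, $read, true)) {
        $clients[] = socket_accept($server); // a new client connected
    }
    foreach ($clients as $key => $client) {
        if (!in_array($client, $read, true)) {
            continue;
        }
        $data = socket_read($client, 2048);
        if ($data === false || $data === '') {
            socket_close($client); // client disconnected
            unset($clients[$key]);
            continue;
        }
        // When a notification is ready for this client, push it:
        socket_write($client, "your notification here\n");
    }
}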
I used long polling for a chat application recently. After doing some research and playing with it for a while, here are some things I would recommend.
1) Don't long poll for more than about 20 seconds. Some browsers will time out. I normally set my long poll to run about 20 seconds and send back an empty response at that point. Then you can use JavaScript to restart the long poll.
2) Every once in a while a browser will hang up. To add a second level of error checking, I have a JavaScript timer run for 30 seconds, and if no response has come in by then I abandon the AJAX call and start it up again.
3) If you are using PHP sessions, make sure you call session_write_close() before the long poll starts, so the session lock doesn't block the user's other requests (see the sketch after this list).
4) If you are using AJAX with jQuery you may need to use abort().
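A rough illustration of points 1 and 3 together (fetch_new_messages() is a made-up placeholder for however you check for new chat messages):
session_start();
$userId = $_SESSION['user_id']; // read what you need from the session first
session_write_close();          // release the session lock so other requests aren't blocked
$start = time();
while (time() - $start < 20) {  // cap the long poll at roughly 20 seconds
    $messages = fetch_new_messages($userId);
    if ($messages) {
        echo json_encode($messages);
        exit;
    }
    sleep(1);
}
echo json_encode(array()); // nothing new; the JavaScript restarts the poll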
You can find your answer here, with more detail here. And you should remember to use $.ajaxSetup({ cache: false }); when working with jQuery.

Calling A PHP Function Every Second

Is there a way to execute a function at a regular interval?
I have a database table and I need to know when an entry is added or removed. The logic I'm trying to use is: Ajax makes a call to the server, but instead of responding immediately, the server keeps checking for 30 seconds whether the database has been updated; only if it has does it respond right away, otherwise it responds after 30 seconds. This way I am trying to minimize the load on the server compared to firing an Ajax request every second.
How do I do this? Does using a while loop make sense? Something like this, maybe:
while (SomeCondition)
{
    if (CheckIfDatabaseChanged())
    {
        echo "System Updated";
        break;
    }
}
If this is a sensible solution, how can I make sure that the loop runs for only 30 seconds and then breaks? Or is there a better solution?
What you are thinking of is called long polling, and it does not scale well in PHP, especially when you use blocking IO.
See https://stackoverflow.com/a/6488569/11926 for some more information.
But your code could look something like this:
set_time_limit(31);
$i = 0;
while ($i < 30) {
    // poll the database for changes (which is a bad idea, see below)
    $i = $i + 1;
    sleep(1); // sleep 1 second
}
I bet you cannot run many of these concurrently. My advice would be to use something like Redis pub/sub to be notified of DB changes, plus some kind of long-polling/WebSocket solution instead.
If possible you should spawn a single background process that subscribes to database changes and then publishes them to, for example, Pusher, because having multiple long-running processes is really bad for performance.
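A bare-bones sketch of the subscribe side, assuming the phpredis extension and a made-up channel name of 'db-changes':
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
// Blocks here; the callback runs every time something is published on the channel.
$redis->subscribe(array('db-changes'), function ($redis, $channel, $message) {
    // Forward $message to your long-polling/WebSocket layer (e.g. Pusher).
});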
You could host both of them yourself or use hosted services like for example:
Redis:
http://redistogo.com
Long-polling / Websocket:
http://pusher.com
They both have small free plans which could get you started and when you get too big for these plans you could think about hosting these solutions for yourself.
P.S.: I have also found a non-blocking solution in PHP called React. This solution might scale better in PHP.
Use this:
set_time_limit(0)
http://se.php.net/manual/ru/function.set-time-limit.php

PHP - enforce user wait before using server resources

I have a PHP function that I want to make available publicly on the web - but it uses a lot of server resources each time it is called.
What I'd like to happen is that a user who calls this function is forced to wait for some time, before the function is called (or, at the least, before they can call it a second time).
I'd greatly prefer this 'wait' to be enforced on the server-side, so that it can't be overridden by dubious clients.
I plan to insist that users log into an online account.
Is there an efficient way I can make the user wait, without using server resources?
Would 'sleep()' be an appropriate way to do this?
Are there any known problems with using sleep()?
Is there a better solution to this?
Excuse my ignorance, and thanks!
sleep would be fine if you were using PHP as a command-line tool, for example. For a website, though, sleep will hold the connection open. Your web server only has a finite number of concurrent connections, so this could be used to DoS your site.
A better - but more involved - way would be to use a job queue. Add the task to a queue which is processed by a scheduled script and update the web page using AJAX or a meta-refresh.
sleep() is a bad idea in almost all possible situations. In your case, it's bad because it keeps the connection to the client open, and most web servers have a limit on open connections.
sleep() will not help you at all. The user could just load the page twice at the same time, and the command would be executed twice right after each other.
Instead, you could save a timestamp in your database for when your function was last invoked. Then, before invoking it, you should check the database to see if a suitable amount of time has passed. If it has, invoke the function and update the timestamp in the database.
If you're planning on enforcing a user login, than the problem just got a whole lot simpler.
Have a record in the database listing users and the last time they used your resource-consuming service, and measure the time difference between then and now. If the difference is too small, deny access and display an error message.
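A minimal sketch of that check, using PDO and a made-up user_limits table with a five-minute cooldown:
$stmt = $pdo->prepare('SELECT last_used FROM user_limits WHERE user_id = ?');
$stmt->execute(array($userId));
$lastUsed = $stmt->fetchColumn();
if ($lastUsed !== false && time() - strtotime($lastUsed) < 300) {
    exit('Please wait a few minutes before running this again.');
}
$pdo->prepare('REPLACE INTO user_limits (user_id, last_used) VALUES (?, NOW())')
    ->execute(array($userId));
run_expensive_function(); // hypothetical placeholder for the real work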
This is best handled at the server level. No reason to even invoke PHP for repeat requests.
Like many sites, I use Nginx, and you can use its rate limiting to block repeat requests over a certain number. So, say, three requests per IP, per hour.

break up recursive function in php

What is the best way to break up a recursive function that is using a ton of resources?
For example:
function do_a_lot(){
    //a lot of code and processing is done here
    //it takes a lot of execution time
    if($true){
        //if true we have to do all of that processing again
        do_a_lot();
    }
}
Is there any way to make the server only have to take the brunt of the first execution and then break the recursion up into separate processes? Or am I dreaming?
Honestly, if your function is using up that much of your system's resources, I'd most likely refactor the code. It's not truly multithreading, but you could perhaps look at using popen to fork your process.
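For instance, something like this (Unix-only, and do_a_lot_worker.php is a made-up script holding the heavy code):
// Launch the heavy work as a separate PHP process and return immediately.
pclose(popen('php do_a_lot_worker.php > /dev/null 2>&1 &', 'r'));
The calling request returns right away while the worker keeps running in the background.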
One of the rules of PHP is "share nothing". That means every PHP process is independent and shares nothing with the others. So if you want to spread your execution across several PHP processes you'll have to store the data somewhere. It can be memcached storage, or a database, or the session, as you like.
Then you'll need to 'fork' your PHP process. There are solutions available to get this done on the server side. IMHO these are all hacks: dangerous and not in the PHP/web mindset, with the exception of 'work queue' tools.
I think the nicest way is to break your task up with AJAX. This gives you a clean user interface and avoids long response timeouts in the web process. I.e. show a 'working zone' to your user, then ask via AJAX for the first step of the job, get the response (storing any state on the server side), then ask for the next step, store the new response, and so on. You can even add a 'stop that stuff' button on the client side.
You can also search Google for 'php work queue'.
If it's a long-running task, divide and conquer with Gearman.

Faster alternative to file_get_contents()

Currently I'm using file_get_contents() to submit GET data to an array of sites, but upon execution of the page I get this error:
Fatal error: Maximum execution time of 30 seconds exceeded
All I really want the script to do is start loading each web page and then leave. Each page may take up to 5 minutes to load fully, and I don't need it to load fully.
Here is what I currently have:
foreach($sites as $s) //Create one line to read from a wide array
{
    file_get_contents($s['url']); // Send to the shells
}
EDIT: To clear any confusion, this script is being used to start scripts on other servers that return no data.
EDIT: I'm now attempting to use cURL to do the trick, by setting a timeout of one second to make it send the data and then stop. Here is my code:
$ch = curl_init($s['url']); //load the urls
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1); //Only send the data, don't wait.
curl_exec($ch); //Execute
curl_close($ch); //Close it off.
Perhaps I've set the option wrong. I'm looking through some manuals as we speak. Just giving you an update. Thank you all of you that are helping me thus far.
EDIT: Ah, found the problem. I was using CURLOPT_CONNECTTIMEOUT instead of CURLOPT_TIMEOUT. Whoops.
However, now the scripts aren't triggering. They each use ignore_user_abort(TRUE); so I can't understand the problem.
Hah, scratch that. Works now. Thanks a lot, everyone.
There are many ways to solve this.
You could use cURL with its curl_multi_* functions to execute the requests asynchronously. Or use cURL the common way but with 1 as the timeout limit, so it will make the request and then time out, but the request will still be executed on the remote end.
If you don't have cURL installed, you could continue using file_get_contents but fork processes (not so cool, but it works) using something like ZendX_Console_Process_Unix, so you avoid waiting between requests.
As Franco mentioned and I'm not sure was picked up on, you specifically want to use the curl_multi functions, not the regular curl ones. This packs multiple curl objects into a curl_multi object and executes them simultaneously, returning (or not, in your case) the responses as they arrive.
Example at http://php.net/curl_multi_init
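In outline, applied to your loop, it would look something like this (a sketch, not a drop-in replacement):
$mh = curl_multi_init();
$handles = array();
foreach ($sites as $s) {
    $ch = curl_init($s['url']);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // keep responses out of the output buffer
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
// Run all requests concurrently.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);
foreach ($handles as $ch) {
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);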
Re your update that you only need to trigger the operation:
You could try using file_get_contents with a timeout. This would lead to the remote script being called, but the connection being terminated after n seconds (e.g. 1).
If the remote script is configured so it continues to run even if the connection is aborted (in PHP that would be ignore_user_abort), it should work.
Try it out. If it doesn't work, you won't get around increasing your time_limit or using an external executable. But from what you're saying - you just need to make the request - this should work. You could even try to set the timeout to 0 but I wouldn't trust that.
From here:
<?php
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", 0, $ctx);
?>
To be fair, Chris's answer already includes this possibility: curl also has a timeout switch.
It is not file_get_contents() that consumes all that time, but the network connection itself.
Consider not submitting GET data to an array of sites; instead, create an RSS feed and let them fetch the RSS data.
I don't fully understand the meaning behind your script.
But here is what you can do:
In order to avoid the fatal error quickly you can just add set_time_limit(120) at the beginning of the file. This will allow the script to run for 2 minutes. Of course you can use any number you want, and 0 for no limit.
If you just need to call the URL and you don't "care" about the result, you should use cURL in asynchronous mode (the curl_multi_* functions). That way a call to a URL will not wait until it finishes, and you can fire them all off very quickly.
BR.
If the remote pages take up to 5 minutes to load, your file_get_contents will sit and wait for that 5 minutes. Is there any way you could modify the remote scripts to fork into a background process and do the heavy processing there? That way your initial hit will return almost immediately, and not have to wait for the startup period.
Another possibility is to investigate if a HEAD request would do the trick. HEAD does not return any data, just headers, so it may be enough to trigger the remote jobs and not wait for the full output.
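With cURL that could look roughly like this (a sketch; whether a HEAD request actually triggers your remote jobs depends on how those scripts are written):
foreach ($sites as $s) {
    $ch = curl_init($s['url']);
    curl_setopt($ch, CURLOPT_NOBODY, true); // send a HEAD request instead of GET
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_exec($ch);
    curl_close($ch);
}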
