Suppose I have a website with 'high' traffic, and I want to use PHP's sleep(4) function to avoid flooding. Is that a good idea, or should I delay in a different way? sleep() keeps a connection open; could that be a problem?
I do:
index.php -> stuff.php -> index.php
stuff.php does something and then calls sleep(4), so the user waits 4 seconds on a blank screen and is then sent back to index.php. Thanks.
Update: My enemies are both hackers who want a DoS and stressed people who click the search button too fast, let's say... That's why I would use a server-side delay.
It is not a good approach, because even during sleep() Apache/PHP still occupies an OS process for that connection. So, on a website with high traffic, you will get lots of simultaneously running Apache processes that will eat all your server's RAM.
Instead, you can modify one of your pages and add some JavaScript to it, so that it waits a few seconds and then navigates to the next page client-side. That should solve your problem.
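One hedged sketch of that approach: stuff.php does its work, then emits a meta refresh so the browser, not the server, does the waiting, and the PHP process ends immediately. The file names and the 4-second delay come from the question; the helper function itself is made up for illustration.

```php
<?php
// Hypothetical stuff.php: do the work, then let the CLIENT wait by
// emitting a meta-refresh instead of calling sleep() server-side.
// The PHP process finishes at once, so no connection is held open.

function delayed_redirect_page(string $url, int $seconds): string {
    return '<!DOCTYPE html><html><head>'
         . '<meta http-equiv="refresh" content="' . $seconds . ';url=' . htmlspecialchars($url) . '">'
         . '</head><body>Please wait...</body></html>';
}

// ... run the real work of stuff.php here ...
echo delayed_redirect_page('index.php', 4); // browser waits 4s, then loads index.php
```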
You can't really avoid keeping the connection open, otherwise there's no waiting that could happen. You'd have to do it either client side or server side. However, if you run PHP via nginx and php-fpm, you should be able to get much better performance out of it than, say, Apache 2 and mod_php with the prefork MPM.
However, sleep() itself is fairly efficient, so you shouldn't have to worry about it eating CPU or anything. See here for more information on how it's implemented in the lower layers.
In general, the best way to "wait efficiently" is to be using as much of an asynchronous stack as possible.
Related
This is supposed to be part of a protection against hacking attempts. The idea is to keep the hacker/attacker waiting for a long time after an attempt is detected.
The detection of such attempts is not part of my question.
I only need to know if/how it is possible to create a PHP-script that will simply keep loading, as if the website/server is exceptionally slow, preferably without creating a high server-load by the connection being kept open.
I thought there must be some way to simply stop the PHP script on the server without notifying the user/client that there won't be a response from the server. Or maybe something similar?
The easiest way to do this would be to use sleep(), but there’s a huge downside:
While sleep() doesn’t use any CPU time, the connection has to be kept open for a long time (which means you could reach the connection limit on your server) and a PHP process would be running uselessly (which consumes quite a bit of memory and might also make you hit a process limit).
So someone you have identified as an attacker would actually gain more leverage to run a denial-of-service attack against your website.
I don’t think there’s a resource-friendly way of doing this with PHP alone.
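For illustration only, a minimal sketch of the sleep()-based tarpit this answer advises against. is_attacker() is a hypothetical stub, since the question explicitly leaves detection out of scope.

```php
<?php
// Minimal sketch of the sleep()-based tarpit described above -- shown
// only to illustrate the resource problem, not as a recommendation.
// is_attacker() is a hypothetical stub; real detection is out of scope.

function is_attacker(): bool {
    return false; // placeholder for the detection logic
}

function tarpit(int $seconds): void {
    set_time_limit($seconds + 10); // allow the script to run this long
    sleep($seconds);               // holds the connection AND a PHP process open
}

if (is_attacker()) {
    tarpit(60); // the attacker waits a minute for an empty response
    exit;
}
```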
I am looking for the PHP equivalent of VB's DoEvents.
I have written a realtime analysis package in VB and used doevents to release to the operating system.
DoEvents allows my program to stay resident and run continuously without filling up memory, while still responding to user input.
I have rewritten the package in PHP and I am looking for that same doevents feature.
If it doesn't exist I could reschedule myself and exit.
But I currently don't know how to do that and I think that would add a lot more overhead.
Thank you, gerardg
usleep() is what you are looking for. It delays program execution for the given number of microseconds.
http://php.net/manual/en/function.usleep.php
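A hedged sketch of how usleep() can play the DoEvents-like role of yielding between units of work; the callback and the timings are made up for illustration.

```php
<?php
// Sketch: a long-running loop that yields to the OS between iterations
// with usleep(), so it doesn't spin at 100% CPU. The $step callback is
// a hypothetical stand-in for one unit of analysis work.

function run_with_yield(callable $step, int $iterations, int $pauseMicros = 100000): void {
    for ($i = 0; $i < $iterations; $i++) {
        $step($i);            // one unit of work
        usleep($pauseMicros); // yield; default 100,000 us = 100 ms
    }
}

run_with_yield(function ($i) { /* analyse one chunk */ }, 5, 10000);
```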
It's been almost 10 years since I last wrote anything in VB, and as I recall, the DoEvents() function allowed the application to yield the processor during intensive processing (usually to allow other system events to fire, the most common being WM_PAINT, so that your UI won't appear hung).
I don't think PHP has such functionality; your script will run as a single process and end (either when it's done or when it hits the default 30-second timeout).
If you are thinking in terms of threads (as most Windows programmers tend to do) and needing to spawn more than 1 instance of your script, perhaps you should take look at PHP's Process Control functions as a start.
I'm not entirely sure which aspects of doevents you're looking to emulate, so here's pretty much everything that could be useful for you.
You can use ob_implicit_flush(true) at the top of your script to enable implicit output buffer flushing. That means that whenever your script calls echo or print or whatever you use to display stuff, PHP will automatically send it all to the user's browser. You could also just use ob_flush() after each call to display something, which acts more like Application.DoEvents() in VB with regards to keeping your UI active, but must be called each time something is output.
Naturally if your script uses the output buffer already, you could build a copy of the buffer before flushing, with ob_get_contents().
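A minimal sketch of the flushing approach just described; the three "steps" and the one-second pauses are illustrative stand-ins for real work.

```php
<?php
// Sketch: push partial output to the browser as work progresses, so the
// page doesn't look hung during a long task.

ob_implicit_flush(true);     // flush PHP's output buffer on every write
while (ob_get_level() > 0) { // get rid of any pre-existing buffers
    ob_end_flush();
}

for ($step = 1; $step <= 3; $step++) {
    echo "Finished step $step of 3<br>\n";
    flush();                 // push the output through the web server too
    sleep(1);                // stand-in for a slow piece of work
}
```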
If you need to allow the script to run for more time than usual, you can set a longer timeout with set_time_limit($time). If you need more memory, and you have access to edit your .htaccess file, place the following code and edit the value:
php_value memory_limit 64M
That sets the memory limit to 64 megabytes.
For running multiple scripts at once, you can use pcntl_exec to start another one running.
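A hedged sketch of the pcntl_exec() route: since pcntl_exec() replaces the *current* process image, forking first keeps the original script alive. This requires the pcntl extension (CLI only), and the worker script path is a made-up placeholder.

```php
<?php
// Sketch: start a second PHP script via the pcntl functions mentioned
// above. Requires the pcntl extension; the worker.php path is hypothetical.

function spawn_script(string $phpBinary, string $script): int {
    $pid = pcntl_fork();
    if ($pid === -1) {
        throw new RuntimeException('fork failed');
    }
    if ($pid === 0) {
        // Child: replace this process with the other script.
        pcntl_exec($phpBinary, [$script]);
        exit(1); // only reached if exec itself failed
    }
    return $pid; // parent: child's PID
}

// Usage (guarded, since pcntl is not compiled into every PHP build):
if (function_exists('pcntl_fork')) {
    $pid = spawn_script(PHP_BINARY, '/path/to/worker.php'); // hypothetical path
    pcntl_waitpid($pid, $status); // optionally wait for it to finish
}
```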
If I am missing something important about DoEvents(), let me know and I will try to help you make it work.
PHP is designed for on-demand request processing; however, it can be forced to act as a background task with a little hackery.
As PHP runs your script as a single thread, you do not have to worry about letting the CPU do other things; the OS scheduler already takes care of that. If it did not, a web server would only be able to serve one page at a time and all other requests would have to sit in a queue. You will need to write some sort of loop that never exits until some detectable condition occurs (like a "now please exit" message you set in the DB, or similar).
As pointed out by others, you will need to call set_time_limit($something), perhaps with usleep() stopping the code from running "too fast" if each loop iteration eats a lot of CPU. However, if you are also using a database connection, most of your script's time is actually spent waiting for the database (by far the biggest overhead for a script).
I have seen PHP worker threads created by running the script under screen and detaching it as a background task. Other approaches also work, as long as you do not have a session that will time out or exit (say, when the web browser is closed). A cron job that checks every x minutes or hours whether the script is still running, and restarts it if not, gives you automatic recovery from forced exits and/or system restarts.
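A sketch of that worker-loop pattern. A stop *file* stands in here for the "please exit" message in the DB that the answer mentions; the names are made up.

```php
<?php
// Sketch: a CLI worker that loops until a stop condition appears,
// sleeping between iterations so it doesn't burn CPU. A stop file
// is a hypothetical stand-in for an "exit" flag in the database.

set_time_limit(0); // a CLI worker must not hit the 30-second limit

function run_worker(string $stopFile, callable $job, int $pauseSeconds = 5): int {
    $iterations = 0;
    while (!file_exists($stopFile)) {
        $job();               // one unit of work (often a DB query)
        $iterations++;
        sleep($pauseSeconds); // mostly idle; DB waits dominate anyway
    }
    return $iterations;
}
```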
TL;DR: doevents is "baked in" to PHP and you don't have to worry about it.
As all my requests go through an index script, I tried to time the response time of all my requests.
It's simply the difference between the start time (start of the script) and the end time (end of the script).
I cache my data in memcached, and all users are served from memcached.
I mostly get response times of less than a second, but at times there are weird spikes of more than a second; the worst case can go up to 200+ seconds.
I was wondering: if mobile users have a slow connection, does that show up in my response time?
I am serving primarily mobile users.
Thanks!
No, it's the runtime of your script. It does not include the latency to the user; that's something the underlying web server worries about. Something in your script just takes very long. I recommend you profile your script to find out what that is; Xdebug is a good way to do so.
If you're measuring in PHP (which it sounds like you are), that's the time it takes for the page to be generated on the server side, not the time it takes to be downloaded.
Drop timers in throughout the page, and try and break it down to a section that is causing the huge delay of 200+ seconds.
You could even add a small script that will email you details of how long each section took to load if it doesn't happen often enough to see it yourself.
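A sketch of the timer-dropping approach: record a timestamp at each section boundary and report how long each section took. The class, the section names, and the email address are all made up for illustration.

```php
<?php
// Sketch: time each section of the page to narrow down the 200+ second
// spike. Section names below are hypothetical.

class SectionTimer {
    private $marks = [];
    private $last;

    public function __construct() { $this->last = microtime(true); }

    public function mark(string $section): void {
        $now = microtime(true);
        $this->marks[$section] = $now - $this->last; // seconds since last mark
        $this->last = $now;
    }

    public function report(): array { return $this->marks; }
}

$t = new SectionTimer();
// ... load data from memcached ...
$t->mark('cache');
// ... render the page ...
$t->mark('render');
// e.g. mail('you@example.com', 'slow page', print_r($t->report(), true));
```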
It could be that the script cannot finish because a client downloads the results very slowly. If you don't use a front-end server like nginx, the first thing to do is to try one.
Someone already mentioned Xdebug, but normally you would not want to run Xdebug in production. I would suggest using XHProf to profile pages on development/staging/production. You can turn XHProf on conditionally, which makes it really easy to run in production.
I have a "generate website" command that parses through all the tables to republish an entire website as fixed HTML pages. That's a heavy process, at least on my local machine (the CPU usage spikes). On the production server it does not seem to be a problem so far, but I would like to keep it future-proof. Therefore I'm considering using the PHP sleep() function between each step of the heavy script, so that the server has time to "catch its breath" between the heavy steps.
Is that a good idea or will it be useless?
If you're running PHP 5 in CGI mode (rather than as mod_php), you could consider using proc_nice() instead.
This could allow the "generate website" command to use as much CPU as it wants while no-one else is trying to use the site.
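A minimal sketch of the proc_nice() suggestion. The value 19 is the lowest scheduling priority on Linux; the call is guarded because the function is not available in every SAPI or build.

```php
<?php
// Sketch: lower this process's scheduling priority before the heavy
// "generate website" run, so normal page requests win the CPU. Under
// mod_php this would re-nice the whole Apache worker, which is why the
// answer above scopes the advice to CGI.

if (function_exists('proc_nice')) {
    proc_nice(19); // 19 = lowest priority; only root can raise it again
}

// ... run the heavy regeneration here ...
```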
I would simply not do this on the production server. These are the steps I have followed before:
Rent a low-cost PHP server, or get a proper dev server set up that replicates production
Copy all dynamic files to DEV; they don't even need to be on production
Run the HTMLizer script, with no sleep; just burn through it
Validate the output and then RSYNC it to the live server, backing up the live directory as you do it so you can safely fall back
Anyway, since caching/memcaching came up to speed I haven't had to do this at all: use Zend Framework and Zend_Cache and you basically have a dynamic equivalent of what you need, automatically.
I think it is a good idea. sleep() just asks the OS to suspend the process until the period has elapsed, rather than busy-waiting, so the CPU overhead of a sleep operation is essentially zero.
It depends on how many times you'll be calling it and for how long. You'll need to balance your need to a quick output vs. low CPU usage.
In short: yes, it'll help.
Based on the task, I don't think it will help you.
Sleep would only really be useful if you were continuously looping and waiting for user input or trigger signal.
In this case, to get the job done ASAP you may as well omit the sleep command, thus reducing the task time and freeing up the CPU time faster for other processes.
Some of the posters above may be able to better assist you with code optimisation.
I'm running Apache on Linux within VMWare.
One of the PHP pages I'm requesting does a sleep(), and I find that if I attempt to request a second page whilst the first page is sleep()'ing, the second page hangs, waiting for the sleep() from the first page to finish.
Has anyone else seen this behaviour?
I know that PHP isn't multi-threaded, but this seems like gross mishandling of the CPU.
Edit: I should've mentioned that the CPU usage doesn't spike. What I mean by CPU "hogging" is that no other PHP page seems able to use the CPU whilst the page is sleep()'ing.
It could be that the called page opens a session and then doesn't close it; PHP locks the session file for the duration of the request, so a second request using the same session blocks until the lock is released. In that case, see this answer for a solution.
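A sketch of the usual fix for that session-locking behaviour, assuming the page really does use sessions: release the lock with session_write_close() before the slow part.

```php
<?php
// Sketch: PHP locks the session file for the whole request by default,
// so a second request from the same browser blocks until the first one
// finishes -- unless the lock is released early, as below.

session_start();
// ... read/write whatever session data this page needs ...
session_write_close(); // write the session data and release the lock

sleep(2); // the slow part; other requests from this session are no longer blocked
```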
What this probably means is that your Apache is only using 1 child process.
Therefore:
The one child process is handling a request (in this case sleeping, but it could be doing real work; Apache can't tell the difference), so when a new request comes in, it has to wait until the first process is done.
The solution would be to increase the number of child processes Apache is allowed to spawn (the MaxClients directive, if you're using the prefork MPM), or simply to remove the sleep() from the PHP script.
Without knowing exactly what's going on in your script it's hard to say, but you can probably get rid of the sleep().
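For reference, the prefork settings live in the Apache configuration and might look something like this; the values are illustrative defaults, not tuned for any particular server.

```apache
# Sketch: prefork MPM settings (httpd.conf) controlling how many child
# processes Apache may run at once. Values here are illustrative only.
<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150   # called MaxRequestWorkers in Apache 2.4
    MaxRequestsPerChild 1000
</IfModule>
```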
Are you actually seeing the CPU go to 100%, or just that no other pages are being served? How many Apache instances are you running? Do they all stop when you run sleep() in one of them?
PHP's sleep() function simply asks the OS to suspend the process for n seconds. It doesn't release any memory, but it should not increase CPU load significantly.