I'm running Apache on Linux within VMware.
One of the PHP pages I'm requesting does a sleep(), and I find that if I attempt to request a second page whilst the first page is sleep()'ing, the second page hangs, waiting for the sleep() from the first page to finish.
Has anyone else seen this behaviour?
I know that PHP isn't multi-threaded, but this seems like gross mishandling of the CPU.
Edit: I should've mentioned that the CPU usage doesn't spike. What I mean by CPU "hogging" is that no other PHP page seems able to use the CPU whilst the page is sleep()'ing.
It could be that the called page opens a session and then doesn't commit it; in that case, see this answer for a solution.
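If an uncommitted session is indeed the cause, releasing the session lock before the long-running part avoids the blocking. A minimal sketch (the session key and the sleep duration are illustrative):

<?php
session_start();

// ... read or write whatever session data you need ...
$_SESSION['last_run'] = time();

// Commit the session and release its file lock, so other
// requests sharing this session are no longer blocked.
session_write_close();

// The long-running part can now happen without holding the lock.
sleep(10);

echo "Done sleeping";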
What this probably means is that your Apache is only using 1 child process.
Therefore:
The 1 child process is handling a request (in this case sleeping, but it could be doing real work; Apache can't tell the difference), so when a new request comes in, it has to wait until the first process is done.
The solution would be to increase the number of child processes Apache is allowed to spawn (the MaxClients directive if you're using the prefork MPM), or simply to remove the sleep() from the PHP script.
Without exactly knowing what's going on in your script it's hard to say, but you can probably get rid of the sleep().
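For reference, if you do go the prefork route, the relevant section of httpd.conf looks something like this (the values are illustrative, not tuned recommendations; note that Apache 2.4 renamed MaxClients to MaxRequestWorkers):

<IfModule mpm_prefork_module>
    StartServers          5
    MinSpareServers       5
    MaxSpareServers      10
    MaxClients          150
    MaxRequestsPerChild   0
</IfModule>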
Are you actually seeing the CPU go to 100%, or just that no other pages are being served? How many Apache instances are you running? Are they all stopping when you run sleep() in one of the threads?
PHP's sleep() function suspends the process for n seconds rather than spinning in a busy loop. It doesn't release any memory, but it should not increase CPU load significantly.
Related
Suppose a website with 'high' traffic. I want to use PHP's sleep(4) to avoid flooding. Is that a good idea, or should I use a different kind of delay? sleep() keeps a connection open; could this be a problem?
I do:
index.php -> stuff.php -> index.php
Stuff.php does something and then calls sleep(4), so the user waits 4 seconds with a blank screen and then goes back to index. Thanks.
Update: My enemies are both hackers who want a DoS and stressed people who click fast on the search button, let's say... That's why I would use a server-side delay.
It is not a good approach because, even while doing sleep(), Apache/PHP still occupies an OS process for that connection. So on a website with high traffic you will get lots of simultaneously running Apache processes that will eat all your server's RAM.
Instead, you can modify one of your pages and put some JavaScript in it, so that it waits a few seconds and then navigates to the next page from the browser, as sketched below. That should solve your problem.
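A rough sketch of that idea (stuff.php and the 4-second delay come from the question; the markup is illustrative):

<?php
// stuff.php: do the work, then make the *client* wait,
// instead of tying up a PHP process with sleep().

// ... do something ...
?>
<p>Please wait...</p>
<script>
    // Navigate back to index.php after 4 seconds, in the browser.
    setTimeout(function () {
        window.location.href = "index.php";
    }, 4000);
</script>

If you'd rather avoid JavaScript entirely, emitting header('Refresh: 4; URL=index.php'); from PHP before any output achieves a similar client-side wait in most browsers.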
You can't really avoid keeping the connection open, otherwise there's no waiting that could happen. You'd have to either do it client side or server side. However, if you run PHP via nginx and php-fpm, you should be able to get much better performance out of it than, say, Apache 2 and mod_php with the Worker MPM.
However, sleep() itself is fairly efficient, so you shouldn't have to worry about it eating CPU or anything. See here for more information on how it's implemented in the lower layers.
In general, the best way to "wait efficiently" is to be using as much of an asynchronous stack as possible.
I need help with a problem:
I have a PHP script that runs multiple times with multiple files, and it is using the processor a lot and consequently bringing the server down. I would like to limit the processor usage for this user as much as possible, so that it stops crashing the server and runs until it ends. Even if it runs very slowly, the important thing is that it finishes without the server going down.
Anyone have any idea?
I already tried limiting it via /etc/security/limits.conf:
#user hard core 10000
But I had no result.
Any idea?
On Linux systems you can use proc_nice to reduce the priority of the script.
Also, you can add a sleep(1) call somewhere in your code (for example in a big/infinite loop).
Sleeping for 1 second is like a year for a processor :)
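Putting both suggestions together, a minimal sketch (here $files and process_file() stand in for whatever your script actually iterates over):

<?php
// Run this worker at the lowest scheduling priority so it yields
// the CPU to everything else on the box (Linux/Unix only).
if (function_exists('proc_nice')) {
    proc_nice(19); // 19 = "nicest", i.e. lowest priority
}

foreach ($files as $file) { // $files / process_file() are placeholders
    process_file($file);
    sleep(1); // hand the CPU back for a moment on every iteration
}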
According to the documentation:
max_execution_time only affects the execution time of the script itself.
Any time spent on activity that happens outside the execution of the script
such as system calls using system(), stream operations, database queries, etc.
is not included when determining the maximum time that the script has been running.
This is not true on Windows where the measured time is real.
This is confirmed by testing:
Will not time out:
<?php
set_time_limit(5);
// The 10-second wait happens inside MySQL, not in PHP itself,
// so on Linux it does not count toward the 5-second limit.
$sql = mysqli_connect('localhost', 'root', 'root', 'mysql');
$query = "SELECT SLEEP(10) FROM mysql.user;";
$sql->query($query) or die($query . '<br />' . $sql->error);
echo "You got the page";
Will time out:
<?php
set_time_limit(5);
// Busy-waiting consumes CPU time inside PHP itself,
// so the 5-second limit is enforced.
while (true) {
    // do nothing
}
echo "You got the page";
Our problem is that we really would like PHP to time out, regardless of what it is doing, after a given amount of time, as we don't want to keep resources busy once we know we've failed to deliver a page in an acceptable amount of time (like 10 seconds). We know we can play with settings such as MySQL's wait_timeout for the SQL queries, but the page timeout will depend on the number of queries that are executed.
Some people have tried to come up with workarounds, but none seem implementable.
Q: Is there an easy way to get a real PHP max_execution_time on linux, or are we better timing out elsewhere, such as Apache level?
This is quite tricky advice, but it will definitely do what you want, if you are willing to modify and recompile PHP.
Take a look at the PHP source code at https://github.com/php/php-src/blob/master/Zend/zend_execute_API.c (the file is Zend/zend_execute_API.c), at the function zend_set_timeout. This is the function that implements the time limit. Here's how it works on different platforms:
on Windows, create a new thread, start a timer on it, and when it finishes, set a global variable called timed_out to 1; the PHP execution core checks this variable for every instruction and then exits (very simplified)
on Cygwin, use itimer with ITIMER_REAL, which measures real time, including any sleep, wait, whatever, then raise a signal that will interrupt and stop any processing
on other Unix systems, use itimer with ITIMER_PROF, which only measures CPU time spent by the current process (both in user mode and kernel mode). This means waiting for other processes (like MySQL) doesn't count toward it.
Now what you want to do is change the itimer on your Linux build from ITIMER_PROF to ITIMER_REAL, which of course you need to do manually: edit, recompile, install, etc. The other difference between these two is that they also use different signals when the timer runs out. So my suggestion is to change the ifdef:
# ifdef __CYGWIN__
into
# if 1
so that you set both ITIMER_REAL and the signal that PHP waits for to SIGALRM.
Anyway, this whole idea is untested (I use it on one very specific system where ITIMER_PROF is broken, and it seems to work), unsupported, etc. Use it at your own risk. It may work with PHP itself, but it could break other modules, in PHP and in Apache, if they, for whatever reason, use the SIGALRM signal or another timer.
This is an old and answered question. But for the sake of helping others, I wanted to point out the request_terminate_timeout php-fpm option. If you're using PHP-FPM, it is most likely what you need.
If set, this option allows you to tell PHP-FPM to kill a request after N seconds, regardless of what PHP does.
See http://php.net/manual/en/install.fpm.configuration.php#request-terminate-timeout for details.
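In the pool configuration it looks like this (the 10-second value and the file path are illustrative):

; In the pool config (path varies, e.g. /etc/php-fpm.d/www.conf):
; kill any request still running after 10 seconds, whatever it is doing.
request_terminate_timeout = 10s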
From httpd.conf:
Timeout: The number of seconds before receives and sends time out
Timeout 300
I have a PHP script running on my server via a cronjob. The job runs every minute. In the PHP script I have a loop that executes, then waits one second and loops again, essentially creating a script that runs once every second.
Now I'm wondering: if I make the cronjob run only once per hour and have the script loop for an entire hour, or possibly an entire day, would this have any impact on the server's CPU and/or memory, and if so, would it be positive or negative?
I spot a design flaw.
You can always have a PHP script permanently running in a loop performing whatever functionality you require, without dependency upon a webserver or clients.
You are obviously checking something with this script; any insight into what? There may be better solutions for you. For example, if it is a database, consider SQL triggers.
In my opinion it would have a negative impact, since the script keeps using resources.
cron runs on a time-based schedule that is already in place on the server, but a cronjob can only run once a minute at most.
Another thing: if the script times out, fails, or crashes for whatever reason, you end up not running the script for up to an hour. That would have a positive impact on server load, but it's not what you're looking for, I guess? :)
Maybe run it every 2 or even 5 minutes to spare server load?
Or maybe change the script so it does not wait but just executes once, and call it from the cron job. That should have a positive impact on server load.
I think you should change script logic if it is possible.
If the tasks your script executes are not periodic but are triggered by events, then you can use a message queue (like Gearman).
Otherwise your solution is OK. Memory leaks can occur, but in newer PHP versions (5.3.x) the garbage collector is pretty good. Some extensions can lead to memory leaks, or your application design can lead to heavy memory usage (like the Doctrine ORM loaded-objects cache).
But you can control the script's memory usage with tools like monit, restarting the script when memory usage reaches some limit, or starting it again when it unexpectedly shuts down.
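A rough sketch of that kind of self-monitoring loop (the 100 MB cap and do_work() are placeholders):

<?php
// Long-running worker that watches its own memory footprint and exits
// cleanly so a supervisor (cron, monit, ...) can start it again.
$memoryCap = 100 * 1024 * 1024; // 100 MB, an illustrative limit

while (true) {
    do_work(); // placeholder for the actual task

    if (memory_get_usage(true) > $memoryCap) {
        exit(0); // bail out before a leak becomes a problem
    }

    sleep(1);
}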
I am looking for the PHP equivalent for VB doevents.
I have written a realtime analysis package in VB and used doevents to release to the operating system.
Doevents allows me to stay in memory and run continuously without filling up memory and allows me to respond to user input.
I have rewritten the package in PHP and I am looking for that same doevents feature.
If it doesn't exist I could reschedule myself and exit.
But I currently don't know how to do that and I think that would add a lot more overhead.
Thank you, gerardg
usleep is what you are looking for. It delays program execution for the given number of microseconds.
http://php.net/manual/en/function.usleep.php
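As a minimal sketch of how it would sit in a processing loop ($running and do_chunk_of_work() are placeholders for your own condition and work):

<?php
while ($running) {      // $running: your own loop condition
    do_chunk_of_work(); // placeholder for one slice of processing
    usleep(10000);      // pause 10 ms so the OS can schedule other work
}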
It's been almost 10 years since I last wrote anything in VB, but as I recall, the DoEvents() function allowed the application to yield to the processor during intensive processing (usually to allow other system events to fire, the most common being WM_PAINT, so that your UI won't appear hung).
I don't think PHP has such functionality - your script will run as a single process and end (either when it's done or when it hits the default 30 second timeout).
If you are thinking in terms of threads (as most Windows programmers tend to do) and need to spawn more than one instance of your script, perhaps you should take a look at PHP's Process Control functions as a start.
I'm not entirely sure which aspects of doevents you're looking to emulate, so here's pretty much everything that could be useful for you.
You can use ob_implicit_flush(true) at the top of your script to enable implicit output buffer flushing. That means that whenever your script calls echo or print or whatever you use to display stuff, PHP will automatically send it all to the user's browser. You could also just use ob_flush() after each call to display something, which acts more like Application.DoEvents() in VB with regards to keeping your UI active, but must be called each time something is output.
Naturally if your script uses the output buffer already, you could build a copy of the buffer before flushing, with ob_get_contents().
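A small sketch of the flushing approach (do_step() is a placeholder for one unit of work):

<?php
// Stream progress to the browser as the work happens, roughly the
// PHP analogue of keeping a VB UI responsive during a long task.
while (ob_get_level() > 0) {
    ob_end_flush(); // drop any existing output buffers first
}
ob_implicit_flush(true); // flush automatically after every echo

for ($i = 1; $i <= 10; $i++) {
    do_step($i);                   // do_step(): placeholder for real work
    echo "Finished step $i<br>\n"; // reaches the browser immediately
}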
If you need to allow the script to run for more time than usual, you can set a longer timeout with set_time_limit($time). If you need more memory, and you have access to edit your .htaccess file, place the following code and edit the value:
php_value memory_limit 64M
That sets the memory limit to 64 megabytes.
For running multiple scripts at once, you can use pcntl_exec to start another one running.
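One caveat worth noting: pcntl_exec() replaces the current process, so to keep the original script running you'd typically fork first. A hedged sketch (worker.php is a hypothetical script name; this requires the CLI and the pcntl extension):

<?php
// Fork, then have the child become the other script.
$pid = pcntl_fork();

if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid === 0) {
    // Child: replace ourselves with "php worker.php".
    pcntl_exec(PHP_BINARY, ['worker.php']);
    exit(1); // reached only if pcntl_exec() fails
}

// Parent continues here and can wait for the child if it wants to.
pcntl_waitpid($pid, $status);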
If I am missing something important about DoEvents(), let me know and I will try to help you make it work.
PHP is designed for on-demand request processing. However, it can be forced to become a background task with a little hackery.
As PHP runs as a single thread, you do not have to worry about letting the CPU do other things, as that is already taken care of. If this were not the case, then a web server would only be able to serve up one page at a time and all other requests would have to sit in a queue. You will need to write some sort of loop that never exits until some detectable condition happens (like a "now please exit" message you set in the DB or something).
As pointed out by others, you will need to set_time_limit($something), with perhaps usleep() stopping the code from running "too fast" if it eats a lot of CPU on each loop. However, if you are also using a database connection, most of your script's time is actually spent waiting for the database (by far the biggest overhead for a script).
I have seen PHP worker threads created by using screen and detaching it as a background task. Other approaches also work, so long as you do not have a session that will time out or exit (say, when the web browser is closed). A cron that checks every x minutes or hours whether the script is running gives you automatic recovery from forced exits and/or system restarts.
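Tying those pieces together, a rough sketch of such a worker loop (should_stop() and do_work() are placeholders for your own condition and task):

<?php
// Long-lived CLI worker along the lines described above.
set_time_limit(0); // disable the script time limit entirely

while (true) {
    if (should_stop()) { // should_stop(): placeholder, e.g. a DB flag
        break;           // clean exit when told to stop
    }

    do_work();      // placeholder for the real processing
    usleep(100000); // 100 ms pause so the loop doesn't spin at 100% CPU
}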
TL;DR: doevents is "baked in" to PHP and you don't have to worry about it.