What is the best way to limit a PHP script's CPU usage from within that script?
I am not looking to renice the whole PHP system process, but rather to keep a PHP script running longer while adjusting the CPU usage of that script alone.
Basically, the script would need to "renice" itself dynamically, affecting only itself, or it would need to slow down the computations/activities it is doing.
I tried proc_nice(), but could not get PHP to restore the CPU priority for other scripts after my script finished; the change in my script affected subsequent scripts/requests. That is, after my script raised the nice level, the elevated nice value stuck to the PHP process in the system.
You could usleep() at regular intervals in your code. This will delay script execution for a given number of microseconds, leaving more time for other processes.
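For instance, a minimal throttling sketch (the work function and the 100 ms delay are illustrative, not from the question):

```php
<?php
// Yield the CPU between work units so other processes get scheduled.
foreach ($workItems as $item) {   // $workItems: hypothetical task list
    processItem($item);           // hypothetical unit of work
    usleep(100000);               // pause 100 ms between iterations
}
```

Raising the sleep interval lowers the script's average CPU share at the cost of a longer total runtime.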
Related
I'm probably overthinking this, but I wanted others' input. I run a particular PHP script around 1 million times per day in the background to process incoming SNMP traps. I revise and test the script from the command line, and I have a lot of echo() calls throughout it to check the data as it's being processed during testing. I am wondering if I should wrap these echo() calls in a quick check that verifies the script is actually being run from the command line before echoing to the screen, in an effort to save CPU cycles when the script runs in the background, or whether the cost of the check is negligible and it's worth just ignoring the few extra clock cycles I might be wasting by calling echo() when the process runs in the background. Thanks in advance!
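If it helps: one cheap test (my suggestion, not something from the question) is php_sapi_name(), which returns "cli" when the script runs from the command line; the comparison costs far less than any per-echo computation:

```php
<?php
// Evaluate the SAPI once, then gate all debug output on it.
$isCli = (php_sapi_name() === 'cli');

// ... trap processing ...
if ($isCli) {
    echo 'trap data: ', print_r($trapData, true), PHP_EOL; // $trapData is illustrative
}
```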
I have about 8 cron tasks running every minute, and every one of them takes time as it downloads data from other websites via curl (a single script makes multiple curl requests). Is there any way to lower the CPU or memory usage? Does unsetting variables help?
Yes, unsetting variables will lower the memory usage.
If you want to lower the CPU usage, you have to give it fewer tasks per second. You can stagger your scripts to start at different time intervals (see the sketch below). Since each script makes multiple requests, that would be the best way.
The bottleneck here should be the I/O, not the CPU; basically, if the CPU is not running at 100%, you don't have to worry about it.
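One simple way to stagger the scripts from within PHP (the offsets and file paths are illustrative):

```php
<?php
// Each cron entry passes a different start offset, e.g.
//   * * * * * php /path/fetch_site1.php 0
//   * * * * * php /path/fetch_site2.php 15
$offset = isset($argv[1]) ? (int) $argv[1] : 0;
sleep($offset);   // delay within the minute so the curl bursts don't overlap
// ... curl requests start here ...
```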
I have a PHP script running on my server via a cron job. The job runs every minute. In the PHP script I have a loop that executes, waits one second, then loops again, essentially creating a script that runs once every second.
Now I'm wondering: if I make the cron job run only once per hour and have the script loop for an entire hour, or possibly an entire day, would this have any impact on the server's CPU and/or memory, and if so, would it be positive or negative?
I spot a design flaw.
You can always have a PHP script permanently running in a loop performing whatever functionality you require, without dependency upon a webserver or clients.
You are obviously checking something with this script; any insight into what? There may be better solutions for you. For example, if it is a database, consider SQL triggers.
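A minimal shape for such a standalone worker (the task function is hypothetical; start it once, e.g. under a process supervisor, rather than from cron):

```php
<?php
// Permanently running once-per-second worker, independent of any web server.
while (true) {
    checkForWork();  // hypothetical: whatever the per-second task is
    sleep(1);        // wait one second between iterations
}
```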
In my opinion it would have a negative impact, since the script keeps using resources.
cron is driven by a time-based scheduler that is already running on the server,
but a cron job can only run once a minute at most.
Another thing: if the script times out, fails, or crashes for whatever reason, you end up not running it for up to an hour. That would have a positive impact on server load, but it's not what you're looking for, I guess? :)
Maybe run it every 2 or even 5 minutes to spare server load?
Or maybe change the script so it does not wait but just executes once, and call it from the cron job. That should have a positive impact on server load.
I think you should change the script's logic, if possible.
If the tasks your script executes are not periodic but are triggered by events, then you can use a message queue (like Gearman).
Otherwise your solution is OK. Memory leaks can occur, but in newer PHP versions (5.3.x) the garbage collector is pretty good. Some extensions can lead to memory leaks, and your application design can lead to hungry memory usage (like the Doctrine ORM loaded-objects cache).
But you can control the script's memory usage with tools like monit: restart your script when memory usage reaches some limit, or start it again when it unexpectedly shuts down.
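One pattern that plays well with a supervisor like monit (the ceiling and the task functions are illustrative): have the script exit cleanly once its memory footprint grows, and let the supervisor start it again:

```php
<?php
// Exit when memory usage crosses a ceiling; the supervisor restarts us.
$ceiling = 128 * 1024 * 1024;            // 128 MB, an arbitrary example

while ($task = getNextTask()) {          // hypothetical task source
    handleTask($task);                   // hypothetical worker
    if (memory_get_usage(true) > $ceiling) {
        exit(0);                         // clean exit; restart is external
    }
}
```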
I have a few doubts about the maximum execution time set in php.ini.
Assuming max_execution_time is 3 minutes, consider the following cases:
I have a process which will end in 2 minutes,
but it's in a loop that should run 5 times, so it becomes 10 minutes.
Will the script run properly without showing error for timeout? Why?
The PHP function just prints the data and takes only 2 minutes,
but the query execution takes 5 minutes.
Will the script run without error? Why?
My single PHP process itself takes 5 minutes,
but I am calling the script from the command line.
Will it work properly? Why?
How are the allowed memory and execution time related?
If the execution time for a script is very high,
but it returns a small amount of data,
will it affect memory or not? Why?
I want to learn what is happening internally; that is why I am asking these questions.
I don't want to just increase the time limit and the memory limit.
The rules on max_execution_time are relatively simple.
Execution time starts to count when the file is interpreted. Time needed beforehand to prepare the request, process uploaded files, let the web server do its thing, etc. does not count towards the execution time.
The execution time covers the time the script itself spends executing, regardless of whether that happens in loops. On Linux, time spent waiting on external activity such as database queries is not counted (on Windows it is, because real time is measured there). So in the first case, the script will terminate with a timeout error, since the looped work is done by PHP itself; in the second case, the 5-minute query will only trip the limit on Windows.
External system calls using exec() and such do not count towards the execution time either, except on Windows. (Source) That means you could run an external program that takes longer than max_execution_time.
When called from the command line, max_execution_time defaults to 0. (Source) So in the third case, your script should run without errors.
Execution time and memory usage have nothing to do with each other. A script can run for hours without reaching the memory limit. If it does reach it, then it is often due to a loop where variables are not unset and previously reserved memory is not freed properly.
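You can confirm these rules on your own setup; a quick sketch:

```php
<?php
// Under php-cli this prints 0 (no limit); under a web SAPI it prints
// whatever php.ini defines, e.g. 30.
echo ini_get('max_execution_time'), PHP_EOL;

// Inside a long-running web request you can restart the timeout counter:
set_time_limit(300); // the counter starts again with a 300-second budget
```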
I have a few time-consuming and (potentially) memory-intensive functions in my LAMP web application. Most of these functions will be executed every minute via cron (in some cases, the cron job will execute multiple instances of these functions).
Since memory is finite, I don't want to run into issues where I am trying to execute a function the environment can no longer handle. What is a good approach at dealing with potential memory problems?
I'm guessing that I need to determine how much memory is available to me, how much memory each function requires before executing it, determine what other functions are being executed by the cron AND their memory usage, etc.
Also, I don't want to run into the issue where a certain function somehow gets execution priority over other functions. If any priority is given, I'd like to have control over that somehow.
You could look into caching technologies like APC, which lets you write data directly into RAM so that you can access it quickly; that is useful if you don't want to run expensive tasks like MySQL queries repeatedly.
An example of caching I could think of: you could cache emails rather than retrieving them again and again from the email server. Basically, RAM caching is a very useful technique if you have things in your script that you want to preserve for the next execution, but if your script does unique things every time it runs, it is useless. Also, as for control, you could call memory_get_usage() on each script execution and write that value into the APC cache, so that every cron job could retrieve it and check whether enough memory is free for it to complete.
As for average usage, you could store an array with, let's say, the last 100 executions of a function; when you call that function again, it could apc_fetch that array from RAM, calculate the average memory usage for that function, compare it to how much RAM is in use right now, and then decide whether to start. Furthermore, it could add that estimate to the current-memory-usage value to prevent other scripts from being started; at the end of the function, it subtracts that amount again.
tl;dr:
look into the apc_fetch, apc_store and memory_get_usage functions
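A rough sketch of the averaging idea above (the cache key, the 100-run window, and the helper functions are illustrative; on current PHP the functions are named apcu_fetch()/apcu_store()):

```php
<?php
// Track peak memory of recent runs in APC and decide whether to start.
$history = apc_fetch('job:memory_history');
if ($history === false) {
    $history = array();
}

$average = $history ? array_sum($history) / count($history) : 0;
if ($average > 0 && $average > estimateFreeMemory()) { // hypothetical helper
    exit(0); // not enough headroom; wait for the next cron run
}

runExpensiveJob(); // hypothetical workload

$history[] = memory_get_peak_usage(true);
apc_store('job:memory_history', array_slice($history, -100));
```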
Part of your problem may be the fact that you are running the cron every minute. Why not set a flag so only one instance of that cron runs its full logic at a time? I.e. create a flat file that's deleted at the end of the cron to act as a 'lock'. This will make sure one cron process fully completes before any others go forward. However, I urge you to refer to my comment on your post so that I and others can give you more solid advice.
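A minimal sketch of that flat-file lock (the path is illustrative; the shutdown function removes the lock even on a fatal error):

```php
<?php
// Ensure only one instance of the cron runs at a time.
$lockFile = '/tmp/my_cron.lock'; // illustrative path
if (file_exists($lockFile)) {
    exit(0); // the previous run has not finished yet
}
touch($lockFile);
register_shutdown_function(function () use ($lockFile) {
    unlink($lockFile); // release the lock on normal exit or fatal error
});

// ... full cron logic here ...
```

Note that a shutdown function does not run if the process is killed outright (e.g. SIGKILL); flock() on an open file handle is the more robust variant in that case.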
Try optimizing your algorithms. Like...
Once you're finished with a variable, destroy it (unset() it) if you no longer need it.
Close MySQL connections after you've finished with them.
Use recursion.
Also, as Jauzsika said, change your memory limit in php.ini, although don't make it too high. If you need more than 256 MB of RAM, then I would suggest switching to a different language instead of PHP.
In your position, I'd consider writing a daemon instead of relying on cron. The daemon could monitor a queue and be aware of the number of child processes it has running. Managing multiple processes definitely isn't PHP's biggest strength, but you can do it. PEAR even includes a System_Daemon package.
Your daemon could use memory_get_usage and call out to free, uptime, and friends to throttle the number of workers to match system conditions.
I don't have any direct experience with this, and I wouldn't be too surprised if a daemon written in PHP gradually leaked memory. But if the leak is acceptably slow, cron could cycle the daemon every so often...
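A rough shape for such a daemon (this is not the System_Daemon API; the queue and job functions are hypothetical):

```php
<?php
// Poll a queue forever, backing off when memory pressure rises.
$ceiling = 100 * 1024 * 1024;        // arbitrary 100 MB example

while (true) {
    if (memory_get_usage(true) > $ceiling) {
        sleep(5);                    // back off instead of taking more work
        continue;
    }
    $job = fetchNextJob();           // hypothetical queue accessor
    if ($job === null) {
        sleep(1);                    // idle wait
        continue;
    }
    runJob($job);                    // hypothetical worker
}
```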
You can find out how much memory is currently in use by your script using memory_get_usage.
But you cannot determine how much memory your next function will need before executing it; you can only see afterwards, using memory_get_usage. You can, however, store the amount of memory your function used on previous runs in a database and work with the average.
Regarding execution priority, I don't think it is possible to control from PHP. Apache (or whatever web server you are using) spawns multiple processes, and the operating system schedules which one runs in which order.