How to limit CPU usage (per user or per process) - PHP

I need help with a problem:
I have a PHP script that runs multiple times over multiple files. It uses a lot of CPU and consequently brings the server down. I would like to limit this user's CPU usage as much as possible, so that the script stops crashing the server and runs until it finishes. Even if it runs very slowly, the important thing is that it completes without taking the server down.
Does anyone have any ideas?
I already tried limiting it via /etc/security/limits.conf:
#user hard core 10000
But that had no effect.
Any ideas?

On a Linux system you can use proc_nice() to reduce the priority of the script.
You can also add a sleep(1) call somewhere in your code (for example, inside a big/infinite loop).
Sleeping for one second is like a year for a processor :)
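
A minimal sketch combining both suggestions, assuming the script runs from the CLI on Linux; process_file() and $files stand in for your own code:

<?php
// Raise this process's nice value by 19 (the minimum priority, assuming
// it starts at 0) so the kernel schedules it behind everything else.
if (function_exists('proc_nice')) {
    proc_nice(19);
}

foreach ($files as $file) {   // $files is a placeholder for your file list
    process_file($file);      // process_file() is a placeholder for your work
    sleep(1);                 // yield the CPU between files
}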

Related

Keeping a long running (forever) php CLI process running on Windows

I have written a script that runs on a Windows machine, doing various tasks in a loop. It works great, except that it exits, for no apparent reason, every two hours.
I have used:
set_time_limit(0);
to ensure the script should keep running forever. The loop is actually caused by a class method calling itself - maybe there's a program counter limit or something?
I have written a bat file which automatically restarts the process if it dies, but I'd really rather it didn't die in the first place.
Does anyone have any suggestions?
I'm not 100% certain, but I am fairly sure that I did indeed hit some kind of recursion limit in PHP. Using a while() loop, the script has been running for at least 4 hours so far without a problem.
I'm not sure what the recursion limit is (or why an error wasn't displayed) but this has fixed the issue.
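
For reference, the shape of that fix: replace the self-calling method with a flat loop, so the call stack never grows. do_tasks() below is a placeholder for whatever the method actually does.

<?php
set_time_limit(0);   // no execution time limit

// Instead of a method calling itself (which adds a stack frame on every
// iteration and eventually exhausts the stack), loop forever at a
// single stack depth.
while (true) {
    do_tasks();      // placeholder for the real work
    sleep(5);        // pause between passes
}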

Developing always running PHP script

(Our server is Linux based)
I'm an experienced PHP developer, but this is the first time I'll be developing a bot that runs continuously and fetches data.
I'll explain my application with a simple (sample) scenario. I have about 2000 website URLs, and my application will visit these URLs and record the contents of each page. The application will run 24 hours a day, 7 days a week, and it will start over as soon as it has finished all 2000 websites.
But I need some suggestions for my server. As you can see, my application will run indefinitely until I shut the server down. I can build this infinite loop like this:
while (true)
{
    // APPLICATION CODE HERE
}
But I think this will be evil for the server :) Is it possible to do something like this on the server side?
I also thought about cron jobs, but they don't work for my scenario, because my script must start working again as soon as it finishes. I need "start again when you finish your work", not "start every 30 minutes", because I don't know whether fetching all 2000 websites will take more or less than 30 minutes.
I hope I explained it well.
I'm also worried about memory usage. As you know, the garbage collector cleans up memory after every PHP script stops. But as I said, my app won't stop for days (maybe weeks), so the garbage collector won't be triggered. I'm manually unsetting (with unset()) all used variables at the end of each pass. Is that enough?
I need some suggestions from server administrators :)
PS: I'm developing it as a console application, not a web application. I can execute it from the command line.
Batch processing: store all the sites in a CSV or something, mark them after completion, then work on all the unmarked ones, then on all the marked ones, and so on. Only do, say, 1 or 5 at a time, and initiate the batch script every minute from cron.
Don't even try to work on all of them at once; if anything errors out, you won't know what happened.
You could even store the jobs in a database, along with processing stats, which allows for fine-tuning and better reporting (a sketch follows below).
You will probably hit time limits trying to run infinite PHP scripts, even from the command line, and your server admin will hate you. You will probably also run into memory limits if you don't release resources properly, which is far too easy to do with PHP.
Read: http://www.ibm.com/developerworks/opensource/library/os-php-batch/
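
A hedged sketch of the database variant: a table of sites with a "processed" flag, and a script that cron runs every minute to claim a small batch. The table, column, and connection details here are made up.

<?php
// Claim up to 5 unprocessed sites and mark each one done as we go.
$pdo = new PDO('mysql:host=localhost;dbname=crawler', 'user', 'pass');

$rows = $pdo->query(
    'SELECT id, url FROM sites WHERE processed = 0 LIMIT 5'
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $row) {
    $content = file_get_contents($row['url']); // fetch the page
    // ... store $content, record stats, etc. ...
    $pdo->prepare('UPDATE sites SET processed = 1 WHERE id = ?')
        ->execute([$row['id']]);
}
// Once every row is marked, reset processed = 0 to start the next pass.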
Your script could just run through the list once and quit. That way, whatever resources PHP is holding can be freed.
Then have a shell script that calls the PHP script in an infinite loop.
As PHP is not designed for long-running tasks, I am not sure its garbage collection is up to the job. Quitting after every run forces it to release everything.
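
A sketch of that split: the PHP script makes exactly one pass and exits, and a one-line shell wrapper such as while true; do php crawl.php; done restarts it immediately. get_urls() and store() below are placeholders for your own list and storage code.

<?php
// crawl.php - one full pass over the URL list, then exit, so PHP
// releases all of its memory. The outer shell loop restarts it.
foreach (get_urls() as $url) {
    $html = file_get_contents($url);
    if ($html !== false) {
        store($url, $html);
    }
}
// No loop around this: exiting is the point.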

Possible to optimize PHP script to limit impact on server memory and CPU?

I have a PHP script running on my server via a cron job. The job runs every minute. In the PHP script I have a loop that executes, then waits one second, then loops again, essentially creating a script that runs once every second.
Now I'm wondering: if I make the cron job run only once per hour and have the script keep looping for an entire hour, or possibly an entire day, would this have any impact on the server's CPU and/or memory, and if so, would it be positive or negative?
I spot a design flaw.
You can always have a PHP script permanently running in a loop, performing whatever functionality you require, without depending on a webserver or clients.
You are obviously checking something with this script; any insight into what? There may be better solutions for you. For example, if it is a database, consider SQL triggers.
In my opinion it would have a negative impact, since the script keeps using resources.
cron runs on a time-based schedule that the server already maintains, but a cron job can only run once a minute at most.
Another thing: if the script times out, fails, or crashes for whatever reason, you end up not running it for up to an hour. That would have a positive impact on server load, but it's not what you're looking for, I guess? :)
Maybe run it every 2 or even 5 minutes to spare server load?
Or change the script so that it does not wait but just executes once, and call it from the cron job. That should have a positive impact on server load.
I think you should change the script's logic if possible.
If the tasks your script executes are not periodic but are triggered by events, then you can use a message queue (like Gearman; a worker sketch follows below).
Otherwise your solution is OK. Memory leaks can occur, but in newer PHP versions (5.3.x) the garbage collector is pretty good. Some extensions can lead to memory leaks, or your application design can lead to hungry memory usage (like Doctrine ORM's loaded-object cache).
But you can control the script's memory usage with tools like monit: restart the script when memory usage reaches some limit, or start it again when it unexpectedly shuts down.
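
If you go the Gearman route, a worker is just a small blocking loop. A sketch, assuming the pecl gearman extension and a job server on localhost; the "fetch_url" job name is made up:

<?php
// A Gearman worker blocks until the job server hands it work,
// so it consumes no CPU while idle.
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);

$worker->addFunction('fetch_url', function (GearmanJob $job) {
    $url = $job->workload();
    return file_get_contents($url); // fetch and return the page body
});

while ($worker->work());  // process jobs forever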

Cron job for php script that requires VERY long execution time

I have a PHP script, run as a cron job, that executes a set of simple tasks, looping over each user in the database, and takes about 30 minutes to complete. This process starts over every hour and needs to be as fast and efficient as possible. The problem I'm having is that, as with any server script, execution time varies, and I need to figure out the best cron time settings.
If I run cron every minute, I need to stop the last loop of the script 20 seconds before the end of the minute to make sure the current loop finishes in time. Over the course of the hour this adds up to a lot of wasted time.
I'm wondering if it's a bad idea to simply remove the PHP execution time limit, run the script once an hour, and let it run to completion. Is this a bad idea?
Instead of setting max_execution_time you could also use set_time_limit() to reset the counter on every loop. This ensures your script never runs out of time unless something serious hangs within the current loop (taking longer than max_execution_time).
Basically, this should make your script run as long as it needs to, while giving it a 30-second timeout between two set_time_limit() calls.
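
In code, that pattern looks roughly like this; process_user() is a placeholder for the actual per-user tasks:

<?php
foreach ($users as $user) {
    // Restart the execution-time counter for every user, so the
    // script only dies if a single iteration exceeds 30 seconds.
    set_time_limit(30);
    process_user($user);
}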
Assuming you'd like the work done ASAP, don't use cron. Cron is good for things that need to happen at specific times. It's often abused to simulate a background process that would ideally process work as soon as work appears. You should probably write a daemon that runs continuously. (Note: you could also look at a message/work-queue type system, there are nice libraries out there to do this too)
You can write a daemon from scratch using the pcntl functions (since you don't care about multiple worker processes, it's super-easy to get a process running in the background), or cheat and just make a script that runs forever and run it via screen, or leverage some solid library code like PEAR's System_Daemon or nanoserv.
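
The from-scratch version is only a few lines with pcntl; a minimal sketch (Linux, CLI only, requires the pcntl and posix extensions):

<?php
// Fork, then let the parent exit so the child keeps running in the
// background, detached from the shell that started it.
$pid = pcntl_fork();
if ($pid === -1) {
    die("fork failed\n");
} elseif ($pid > 0) {
    exit(0);        // parent exits; the shell prompt returns
}
posix_setsid();     // child becomes session leader: no controlling terminal

// ... from here, run a forever-loop like the one shown below ...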
Once the daemonization stuff is taken care of, all you really care about is having a loop that runs forever. You'll want to take care that your script doesn't leak memory, or consume too many resources.
Generally, you can do something like:
<?php
// some setup code
while (true) {
    $todo = figureOutIfIHaveWorkToDo();
    foreach ($todo as $something) {
        // do stuff with $something
        // remember to clean up resources so you don't leak memory!
        usleep(/* some integer */);
    }
    usleep(/* some other integer */);
}
And it'll work pretty well.
Setting the time limit to 0 and letting it do its thing is fairly typical of PHP-based cron jobs (in my experience), but this is also the point where you should ask yourself a few important questions, such as "Should I rewrite this job in a compiled language?" and "Am I using all of my tools (database, etc.) to their maximum efficiency?"
That said, maybe better than completely removing the time limit would be to set it to the upper limit you actually want. If that means 48 minutes, then set_time_limit(48 * 60);
I really think you shouldn't set the timeout to 0; that is just asking for trouble. At most, set it to 59*60 seconds. Setting it to 0 can cause problems: if a script hangs, it will hang almost forever, until the server host stops the execution. It is considered bad practice to do so.
I have used the php command-line interface for similar long running tasks in the past. You probably do not want to remove the execution time limit for any request.
Sounds like a great idea if there's little chance that it will take more than an hour. Note, however, that the wrong bug can be a really good way of making it take longer than expected..
To avoid all sorts of nasty problems, you should have a guard file containing the process ID of the script. On startup, check that the file doesn't exist, or, if it does, that the process ID in the file is no longer running (via a kill(pid, 0) call). If these conditions are met, create a new file with the script's PID and delete the file when you're done.
This is the same trick that many daemons use to ensure they aren't already running. If the daemon was killed suddenly, the file will still exist, but the PID recorded in it is unlikely to belong to a running process.
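
A sketch of that guard file; the /tmp path is arbitrary, and the kill-0 check needs the POSIX extension:

<?php
$pidFile = '/tmp/myjob.pid';  // arbitrary location

if (file_exists($pidFile)) {
    $oldPid = (int) file_get_contents($pidFile);
    // Signal 0 sends nothing; it only tests whether a process
    // with that PID currently exists.
    if ($oldPid > 0 && posix_kill($oldPid, 0)) {
        exit("Previous run (PID $oldPid) is still going.\n");
    }
}
file_put_contents($pidFile, getmypid());

// ... do the actual work here ...

unlink($pidFile);  // clean up so the next run starts fresh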
Depending on what your script does, removing the time limit can lead to problems. If, for example, you are polling an external server that is unresponsive while the job is running, and your cron job takes 2 hours instead of 30 minutes to complete, you may get a stack of PHP processes being fired up even though the previous ones haven't completed yet. This can cause system instability and crashes.
You probably have two options:
Make sure that no other instance of your script is running beforehand, otherwise exit() on start.
Consider changing your cronjob into a daemon.
Does it have to run hourly like clockwork?
If not, split the job (you mentioned it was more than one simple task) and do each task every hour?
Or split it per user: do A-M one hour, then N-Z the next?

sleep() silently hogs CPU

I'm running Apache on Linux within VMWare.
One of the PHP pages I'm requesting does a sleep(), and I find that if I attempt to request a second page whilst the first page is sleep()'ing, the second page hangs, waiting for the sleep() from the first page to finish.
Has anyone else seen this behaviour?
I know that PHP isn't multi-threaded, but this seems like gross mishandling of the CPU.
Edit: I should've mentioned that the CPU usage doesn't spike. What I mean by CPU "hogging" is that no other PHP page seems able to use the CPU whilst the page is sleep()'ing.
It could be that the called page opens a session and then doesn't commit it; in that case, see this answer for a solution.
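
The usual fix for that case is to release the session lock before the long wait, roughly like this:

<?php
session_start();
// ... read whatever you need from $_SESSION ...

// Write and close the session now. Otherwise the session file stays
// locked, and every other request from the same browser session
// blocks until this script finishes.
session_write_close();

sleep(30);  // the slow part no longer holds the session lock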
What this probably means is that your Apache is only using 1 child process.
Therefore:
The 1 child process is handling a request (in this case sleeping, but it could be doing real work; Apache can't tell the difference), so when a new request comes in, it has to wait until the first process is done.
The solution would be to increase the number of child processes Apache is allowed to spawn (the MaxClients directive if you're using the prefork MPM), or simply remove the sleep() from the PHP script.
Without exactly knowing what's going on in your script it's hard to say, but you can probably get rid of the sleep().
Are you actually seeing the CPU go to 100%, or just that no other pages are being served? How many Apache instances are you running? Do they all stop when you run sleep() in one of the threads?
PHP's sleep() function suspends the process for n seconds by deferring to the underlying system call; it doesn't busy-wait. It doesn't release any memory, but it should not increase CPU load at all.
