Why is it so bad to run a PHP script continuously? - php

I have a map. On this map I want to show live data collected from several tables, some of which have an astounding number of rows. Needless to say, fetching this information takes a long time. Pinging is also involved, and depending on servers being offline or far away, collecting this data can take anywhere from 1 to 10 minutes.
I want the map to be snappy and responsive, so I've decided to add a new table to my database containing only the data the map needs. That means I need a background process to update the information in my new table continuously. Cron jobs are of course a possibility, but I want the refreshing of data to start as soon as the previous run has completed. And what if the number of offline IP addresses suddenly spikes and the loop takes longer to run than the interval of the cron job?
My own solution is to create an infinite loop in PHP that runs from the command line. This loop would refresh the data for the map into MySQL, as well as record other useful data such as loop time and failed ping attempts, then restart after a short pause (a few seconds).
However - I'm being repeatedly told by people that a PHP script running forever is BAD. After a while it will hog gigabytes of RAM (and other terrible things).
Partly I'm writing this question to confirm whether this is in fact the case, but some tips and tricks on how to write a clean loop that doesn't leak memory (if that is possible) wouldn't go amiss. Opinions on the matter would also be appreciated.
The reply I feel sheds the most light on the issue I will mark as correct.

The loop should be in one script which activates/calls the actual worker script as a different process, much like cron does.
That way, even if memory leaks and uncollected memory accumulates, it will (or at least should) be freed after each cycle.
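For illustration, a minimal sketch of that idea, where the launcher stays tiny and the real work lives in a separate script (the file name refresh_map_data.php is hypothetical):
// launcher.php - runs forever, but each cycle's memory dies with the child process
while (true) {
    passthru('php ' . escapeshellarg(__DIR__ . '/refresh_map_data.php'));
    sleep(5); // short pause before the next refresh cycle
}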

However - I'm being repeatedly told by people that a PHP script running forever is BAD. After a while it will hog gigabytes of RAM (and other terrible things).
This used to be very true. Previous versions of PHP had horrible garbage collection, so long-running scripts could easily consume far more memory than they were actually using. PHP 5.3 introduced a new garbage collector that can understand and clean up circular references, the number one cause of "memory leaks." It's enabled by default. The garbage-collection chapter of the PHP manual has more info and pretty graphs.
As long as your code takes steps to allow variables to go out of scope at proper times and otherwise unset variables that will no longer be used, your script should not consume unnecessary amounts of memory just because it's PHP.
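For illustration, a minimal sketch of a loop that takes those steps (the two helper functions are hypothetical placeholders for the real work):
while (true) {
    $rows = fetch_map_rows();      // hypothetical: collect the live data
    update_summary_table($rows);   // hypothetical: write it to the map table
    unset($rows);                  // drop the reference to the large result set
    gc_collect_cycles();           // PHP >= 5.3: collect circular references now
    sleep(5);
}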

I don't think it's bad; as with anything you want to run continuously, you just have to be more careful.
There are libraries out there to help you with the task. Have a look at System_Daemon, which released RC 1 just over a month ago and allows you to "Set options like max RAM usage".

Rather than running an infinite loop I'd be tempted to go with the cron option you mention in conjunction with a database table entry or flat-file that you'd use to store a "currently active" status bit to ensure that you didn't have overlapping processes attempting to run at the same time.
Whilst I realise that this would mean a minor delay before you perform the next iteration, this is probably a better idea anyway as:
It'll let the RDBMS perform any pending low-priority updates, etc. that may well have been on hold due to the amount of activity you've been carrying out.
Even if you neatly unset all the temporary variables you've been using, it's still possible that PHP will "leak" memory, although recent improvements (5.2 introduced a new memory management system and garbage collection was overhauled in 5.3) should hopefully mean that this is less of an issue.
In general, it'll also be easier to deal with other issues (if the DB connection temporarily goes down due to a config change and restart for example) if you use the cron approach, although in an ideal world you'd cater for such eventualities in your code anyway. (That said, the last time I checked, this was far from an ideal world.)
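As an example, a minimal sketch of the "currently active" flag using a lock file (the path and the refresh function are illustrative):
// Run from cron; exits immediately if the previous run hasn't finished yet.
$lock = fopen('/tmp/map_refresh.lock', 'c');
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit(0); // previous invocation still active - skip this interval
}

refresh_map_data(); // hypothetical: the actual data collection and DB update

flock($lock, LOCK_UN);
fclose($lock);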

First, I fail to see why you need a daemon script in order to provide the functionality you describe.
Cron jobs are of course a possibility, but I want the refreshing of data to happen as soon as the previous interval has completed
Then neither a cron job nor a daemon is the way to solve the problem (unless the daemon becomes the data sink for the scripts). I'd spawn a dissociated process when the data is available, using a locking strategy to avoid concurrency.
Long-running PHP scripts are not intrinsically bad, but the reference-counting garbage collector does not deal with all possible scenarios for cleaning up memory; more recent implementations have a more advanced collector (a circular reference checker) which should clean up a lot more.
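A sketch of what "spawn a dissociated process" can look like from PHP (the script path is illustrative); nohup plus the trailing & detach the worker so the caller does not wait for it, and the worker itself should take a lock so overlapping runs exit early:
exec('nohup php /path/to/collector.php > /dev/null 2>&1 &');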

Related

Prevent PHP script using up all resources while it runs?

I have a daily cron job which takes about 5 minutes to run (it does some data gathering and then various database updates). It works fine, but the problem is that, during those 5 minutes, the site is completely unresponsive to any requests, HTTP or otherwise.
It would appear that the cron job script takes up all the resources while it runs. I couldn't find anything in the PHP docs to help me out here - how can I make the script know to only use up, say, 50% of available resources? I'd much rather have it run for 10 minutes and have the site available to users during that time, than have it run for 5 minutes and have user complaints about downtime every single day.
I'm sure I could come up with a way to configure the server itself to make this happen, but I would much prefer if there was a built-in approach in PHP to resolving this issue. Is there?
Alternatively, as plan B, we could redirect all user requests to a static downtime page while the script is running (as opposed to what's happening now, which is the page loading indefinitely or eventually timing out).
A normal script can't hog 100% of resources; resources get split over the processes. It could slow everything down intensely, but not lock all resources in (without doing some funky stuff). You could get a hint by running top -s on your command line and seeing which process takes up a lot.
That leads to the conclusion that something locks all further processes. As Arkascha comments, there is a fair chance that your database gets locked. This answer explains which table type you should use; if you do not have it set to InnoDB, you probably want that, at least for the tables that get locked.
It could also be disk I/O if you write huge files; try to split the work into smaller reads/writes, or try to move some of the info (e.g. if the files contain lists) into your database (assuming it has room to spare).
It could also be CPU. To fix that, you need to make your code more efficient. Recheck your code, see if you do heavy operations, and try to make those smaller. Normally you want this as fast as possible; now you want it as lightweight as possible, which changes the way you write code.
If it still locks up, it's time to debug. Turn off a large part of your code and check if the locking still happens. Keep turning code back on until you notice locking, then fix that. Try to figure out what is costing you so much. Only a few scripts require intense resources; it is now time to optimize. One option might be splitting it into two (or more) steps: run one cron that prepares/sanitizes the data, and one that processes it. These don't have to run synchronously; there might be a few minutes between them.
If that is not an option, benchmark your code and improve it as much as you can. If you have a heavy query, it might improve by selecting only IDs in the heavy query and using a second query just to fetch the data. If you can, use your database to filter, sort and manage data; don't do that in PHP.
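For illustration, a rough sketch of that "IDs first, data second" idea, assuming a PDO connection in $pdo and a hypothetical items table:
$ids = $pdo->query(
    "SELECT id FROM items WHERE /* heavy filtering and joins here */ 1"
)->fetchAll(PDO::FETCH_COLUMN);

foreach (array_chunk($ids, 500) as $chunk) {
    $placeholders = implode(',', array_fill(0, count($chunk), '?'));
    $stmt = $pdo->prepare("SELECT * FROM items WHERE id IN ($placeholders)");
    $stmt->execute($chunk);
    foreach ($stmt as $row) {
        // process $row
    }
    usleep(100000); // brief pause between chunks to give other queries room
}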
Something I have also implemented once is a sleep after every N actions.
If your script really is that extreme, another solution could be moving it to a time when few or no visitors are on your site. Even if you remove the bottleneck, nobody likes a slow website.
And there is always the option of increasing your hardware.
You don't mention which resource is your bottleneck: CPU, memory or disk I/O.
However, if it is CPU or memory, you can do something like this in your script:
http://php.net/manual/en/function.sys-getloadavg.php
http://php.net/manual/en/function.memory-get-usage.php
$yourlimit = 100000000; // ~100 MB
$load = sys_getloadavg();
if ($load[0] > 0.80 || memory_get_usage() > $yourlimit) {
    sleep(5);
}
Another thing to try would be to set your process priority in your script.
This requires SU though, which should be fine for a cronjob?
http://php.net/manual/en/function.proc-nice.php
proc_nice(50);
I did a quick test of both and they work like a charm. Thanks for asking; I have a cron job like that as well and will implement it. It looks like proc_nice alone will do fine.
My test code:
proc_nice(50);

$yourlimit = 100000000;
$x = 0; // initialize the counter so the loop doesn't rely on an undefined variable
while (1) {
    $x = $x + 1;
    $load = sys_getloadavg();
    if ($load[0] > 0.80 || memory_get_usage() > $yourlimit) {
        sleep(5);
    }
    echo $x . "\n";
}
It really depends on your environment.
If you are on a Unix-based system, there are built-in tools to limit the CPU usage/priority of a given process.
You can limit the server or PHP alone, which is probably not what you are looking for.
What you can do first is separate your task into its own process.
There is popen() for that, but I found it much easier to make the process a standalone script. Let's name it hugetask for the example.
#!/usr/bin/php
<?php
// Huge task here
Then to call from the command line (or cron):
nice -n 15 ./hugetask
This will limit the scheduling priority. It means the task's priority will be lowered relative to others; the system will do the job.
You can as well call it from your php directly:
exec("nice -n 15 ./hugetask &");
Usage: nice [OPTION] [COMMAND [ARG]...] Run COMMAND with an adjusted
niceness, which affects process scheduling. With no COMMAND, print the
current niceness. Niceness values range from
-20 (most favorable to the process) to 19 (least favorable to the process).
To create a cpu limit, see the tool cpulimit which has more options.
That said, usually I just put some usleep() calls in my scripts to slow them down and avoid creating a flood of data. This is OK if you are using loops in your script. If you slow your task down to run over, say, 30 minutes, there won't be many issues.
See also proc_nice http://php.net/manual/en/function.proc-nice.php
proc_nice() changes the priority of the current process by the amount
specified in increment. A positive increment will lower the priority
of the current process, whereas a negative increment will raise the
priority.
And sys_getloadavg() can also help. It returns an array of the system load over the last 1, 5, and 15 minutes.
It can be used as a test condition before launching the huge task.
Or to log the average to find the best time of day to launch the huge task. The results can be surprising!
print_r(sys_getloadavg());
http://php.net/manual/en/function.sys-getloadavg.php
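For instance, a tiny snippet you could run from cron to log the load averages and find a quiet window (the log path is illustrative):
file_put_contents(
    '/tmp/loadavg.log',
    date('Y-m-d H:i:s') . ' ' . implode(' ', sys_getloadavg()) . "\n",
    FILE_APPEND
);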
You could try to delay execution using sleep. Just cause your script to pause between several updates of your database.
sleep(60); // stop execution for 60 seconds
Although this depends a lot on the kind of processing your script does, it may or may not help in your case. It's worth a try, so you could:
Split your queries
Do the updates in steps with a sleep in between
References
Using sleep for cron process
I could not describe it better than the quote in the above answer:
Maybe you're walking the database of 9,000,000 book titles and updating about 10% of them. That process has to run in the middle of the day, but there are so many updates to be done that running your batch program drags the database server down to a crawl for other users.
So modify the batch process to submit, say, 1000 updates, then sleep for 5 seconds to give the database server a chance to finish processing any requests from other users that have backed up.
Sleep and server resources
sleep resources depend on OS
adding sleep to alleviate server resources
To minimize your memory usage, you should probably process heavy and lengthy operations in batches. If you query the database using an ORM like Doctrine, you can easily use its existing batch-processing functions:
http://docs.doctrine-project.org/projects/doctrine-orm/en/latest/reference/batch-processing.html
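Roughly, the pattern documented there is to iterate the query and flush/clear the EntityManager every N entities; a sketch (the entity name and the per-entity work are illustrative):
$batchSize = 100;
$i = 0;
$q = $em->createQuery('SELECT u FROM MyProject\Entity\User u');
foreach ($q->iterate() as $row) {
    $user = $row[0]; // iterate() wraps each hydrated entity in an array
    // ... do the heavy per-entity work here ...
    if ((++$i % $batchSize) === 0) {
        $em->flush(); // push pending changes to the database
        $em->clear(); // detach managed entities so memory stays flat
    }
}
$em->flush(); // flush the final partial batch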
It's hard to tell what exactly the issue may be without having a look at your code (cron script). But to confirm that the issue is caused by the cron job you can run the script manually and check website responsiveness. If you notice the site being down when running the cron job then we would have to have a look at your script in order to come up with a solution.
Many loops in your cron script might consume a lot of CPU resources.
To prevent that and reduce CPU usage simply put some delays in your script, for example:
while ($long_time_condition) {
    // Do something here
    usleep(100000);
}
Basically, you are giving the processor some time to do something else.
Also you can use the proc_nice() function to change the process priority. For example proc_nice(20);//very low priority. Look at this question.
If you want to find the bottlenecks in your code you can try to use Xdebug profiler.
Just set it up in your dev environment, start the cron manually and then profile any page. You can also profile your cron script itself: php -d xdebug.profiler_enable=On script.php (look at this question).
If you suspect that the database is your bottleneck, then import a reasonably large dataset (or the entire database) into your local database and repeat the steps, logging and inspecting all the queries.
Alternatively, if it is possible, set up Xdebug on a staging server that is as close as possible to production and profile the page during cron execution.

Building an event scheduling/custom cronjob system in PHP and MySQL, is this a sane approach?

I have an application where I intend users to be able to add events at any time, that is, chunks of code that should only run at a specific time in the future determined by user input. Similar to cron jobs, except at any point there may be thousands of these events that need to be processed, each at its own specific due time. As far as I understand, crontab would not be able to handle them since it is not meant to have a massive number of cron jobs, and additionally, I need precision to the second, not the minute. I am aware it is possible to programmatically add cron jobs to crontab, but again, it would not be enough for what I'm trying to accomplish.
Also, I need these to be real time; faking them by simply checking whether there are due items whenever a page is visited is not a solution, since they should fire even if no pages are visited by their due time. I've been doing some research looking for a sane solution. I read a bit about queue systems such as Gearman and RabbitMQ, but a FIFO system would not work for me either (the order in which the events are added is irrelevant, since it's perfectly possible one adds an event to fire in 1 hour, and right after, another that is supposed to trigger in 10 seconds).
So far the best solution I have found is to build a daemon, that is, a script that will run continuously checking for new events to fire. I'm aware PHP is the devil, leaks memory and whatnot, but I'm still hoping it is possible to have a PHP daemon running stably for weeks with occasional restarts, as long as I spawn new independent processes to do the "heavy lifting", the actual processing of the events when they fire.
So anyway, the obvious questions:
1) Does this sound sane? Is there a better way that I may be missing?
2) Assuming I do implement the daemon idea, the code naturally needs to retrieve which events are due. Here's pseudocode of how it could look:
while 1 {
    read event list and get only events that are due
    if there are due events
        for each event that is due
            spawn a new php process and run it
            delete the event entry so that it is not run twice
    sleep(50ms)
}
If I were to store this list on a MySQL DB, and it certainly seems the best way, since I need to be able to query the list using something on the lines of "SELECT * FROM eventlist where duetime >= time();", is it crazy to have the daemon doing a SELECT every 50 or 100 milliseconds? Or I'm just being over paranoid, and the server should be able to handle it just fine? The amount of data retrieved in each iteration should be relatively small, perhaps a few hundred rows, I don't think it will amount for more than a few KBs of memory. Also the daemon and the MySQL server would run on the same machine.
3) If I do use everything described above, including the table on a MySQL DB, what are some things I could do to optimize it? I thought about storing the table in memory, but I don't like the idea of losing its contents whenever the server crashes or is restarted. The closest thing I can think of would be to have a standard InnoDB table where writes and updates are done, and another 1:1 mirror MEMORY table where reads are performed. Using triggers it should be doable to have the memory table mirror everything, but on the other hand it does sound like a pain to maintain (fubar situations can easily happen if for some reason the tables get desynchronized).

How to handle multiple PHP processes on multiple servers to only start when there is work, and sleep when there's nothing to do?

I have a challenge I don't seem to get a good grip on.
I am working on an application that generates reports (big analysis from database but that's not relevant here). I have 3 identical scripts that I call "process scripts".
A user can select multiple variables to generate a report. If done, I need one of the three scripts to pick up the task and start generating the report. I use multiple servers so all three of them can work simultaneously. When there is too much work, a queue will start so the first "process script" to be ready can pick up the next and so on.
I don't want to have these scripts go to the database all the time, so I have a small file "thereiswork.txt". I want the three scripts to read the file and if there is something to do go do it. If not, do nothing.
At first, I just randomly let a "process script" be chosen, and they all have their own queue. However, I now see that in some cases one process script has a queue of hours while the other two are doing nothing, just because they had the "luck" of not getting very big reports to generate. So I need a fairer solution to balance the work equally.
How can I do this? Have a queue multiple scripts can work on?
PS
I use set_time_limit(0); for these scripts and they all currently are in a while() loop, and sleep(5) all the time...
No, no, no.
PHP does not have the kind of sophisticated lock management facilities to support concurrent raw file access. Few languages do. That's not to say it's impossible to implement them (most easily with mutexes).
I don't want to have these scripts go to the database all the time
DBMSs provide great support for concurrent access. And while there is an overhead in performing an operation on the DB, it's very small in comparison to the amount of work which each request will generate. It's also a very convenient substrate for managing the queue of jobs.
they all have their own queue
Why? Using a shared queue on a first-come, first-served basis will ensure the best use of resources.
At first, I just randomly let a "process script" to be chosen
This is only going to distribute work evenly with a very large number of jobs and a good random number generator. One approach is to shard the data (e.g. instance 1 picks up jobs where mod(job_number, number_of_instances)=0, instance 2 picks up jobs where mod(job_number, number_of_instances)=1, ...), but even then it doesn't make the best use of available resources.
they all currently are in a while() loop, and sleep(5) all the time
No - this is wrong too.
It's inefficient to have the instances constantly polling an empty queue, so you implement a back-off plan, e.g.:
$maxsleeptime = 100;
$sleeptime = 0;
while (true) {
    $next_job = get_available_job_from_db_queue();
    if (!$next_job) {
        // empty queue: back off exponentially, up to $maxsleeptime seconds
        $sleeptime = min(max(1, $sleeptime * 2), $maxsleeptime);
        sleep($sleeptime);
    } else {
        $sleeptime = 0;
        process_job($next_job);
        mark_job_finished($next_job);
    }
}
No job is destined for a particular processor until that processor picks it up from the queue. By logging sleep time (or the start and end of processing) it's also a lot easier to see when you need to add more processor scripts; and if you handle the concurrency on the database, then you don't need to worry about configuring each script to know about the number of other scripts running - you can add and retire instances as required.
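As an illustration, the get_available_job_from_db_queue() placeholder above could claim a job atomically in MySQL so two workers never grab the same row. Shown here with explicit parameters for clarity; the jobs table and its columns are made up:
function get_available_job_from_db_queue(PDO $pdo, $workerId) {
    // Atomically mark one pending job as ours; MySQL allows ORDER BY/LIMIT
    // in a single-table UPDATE.
    $claim = $pdo->prepare(
        "UPDATE jobs SET status = 'running', claimed_by = :w
         WHERE status = 'pending' ORDER BY id LIMIT 1"
    );
    $claim->execute(array(':w' => $workerId));
    if ($claim->rowCount() === 0) {
        return null; // queue is empty
    }
    $stmt = $pdo->prepare(
        "SELECT * FROM jobs WHERE claimed_by = :w AND status = 'running' LIMIT 1"
    );
    $stmt->execute(array(':w' => $workerId));
    return $stmt->fetch(PDO::FETCH_ASSOC);
}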
For this task, I use the Gearman job server. Your PHP code sends out jobs and you have a background script running to pick them up. It comes down to a solution similar to symcbean's, but the dispatching does not require arbitrary sleeps. It waits for events instead and essentially wakes up exactly when needed.
It comes with an excellent PHP extension and is very well documented. Most examples are in PHP too, although it works transparently with other languages as well.
http://gearman.org/
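A minimal sketch of that setup using the pecl/gearman extension; the function name "generate_report" and the payload format are illustrative:
// worker.php - long-running background script on each processing server
$worker = new GearmanWorker();
$worker->addServer('127.0.0.1', 4730);
$worker->addFunction('generate_report', function (GearmanJob $job) {
    $params = json_decode($job->workload(), true);
    // ... run the heavy report generation here ...
});
while ($worker->work());   // blocks until a job arrives - no polling or sleeping

// client side - called from the web app when a user requests a report
$client = new GearmanClient();
$client->addServer('127.0.0.1', 4730);
$client->doBackground('generate_report', json_encode(array('report_id' => 123)));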

Should I be using message queuing for this?

I have a PHP application that currently has 5k users and will keep increasing for the foreseeable future. Once a week I run a script that:
fetches all the users from the database
loops through the users, and performs some upkeep for each one (this includes adding new DB records)
The last time this script ran, it only processed 1400 users before dying due to a 30-second maximum execution time error. One solution I thought of was to have the main script still fetch all the users, but instead of performing the upkeep process itself, it would make an asynchronous cURL call (one for each user) to a new script that will perform the upkeep for that particular user.
My concern here is that 5k+ cURL calls could bring down the server. Is this something that could be remedied by using a messaging queue instead of cURL calls? I have no experience using one, but from what I've read it seems like this might help. If so, which message queuing system would you recommend?
Some background info:
this is a Symfony project, using Doctrine as my ORM and MySQL as my DB
the server is a Windows machine, and I'm using Windows' task scheduler and wget to run this script automatically once per week.
Any advice and help is greatly appreciated.
If it's possible, I would make a scheduled task (cron job) that would run more often and use LIMIT 100 (or some other number) to process a limited number of users at a time.
A few ideas:
Increase the Script Execution time-limit - set_time_limit()
Don't go overboard, but more than 30 seconds would be a start.
Track Upkeep against Users
Maybe add a field for each user, last_check and have that field set to the date/time of the last successful "Upkeep" action performed against that user.
Process Smaller Batches
Better to run smaller batches more often. Think of it as being the PHP equivalent of "all of your eggs in more than one basket". With the last_check field above, it would be easy to identify those with the longest period since the last update, and also set a threshold for how often to process them.
Run More Often
Set a cronjob and process, say 100 records every 2 minutes or something like that.
Log and Review your Performance
Have logfiles and record stats. How many records were processed, how long was it since they were last processed, how long did the script take. These metrics will allow you to tweak the batch sizes, cronjob settings, time-limits, etc. to ensure that the maximum checks are performed in a stable fashion.
Setting all this up may sound like a lot of work compared to a single process, but it will allow you to handle increased user volumes, and would form a strong foundation for any further maintenance tasks you might be looking at down the track.
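For illustration, a rough sketch of the batched cron these points describe, assuming a users table with a last_check DATETIME column, a PDO connection in $pdo, and a hypothetical perform_upkeep() helper:
set_time_limit(300); // a generous but finite limit for the batch

$stmt = $pdo->query("SELECT id FROM users ORDER BY last_check ASC LIMIT 100");
$update = $pdo->prepare("UPDATE users SET last_check = NOW() WHERE id = ?");

foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $userId) {
    perform_upkeep($userId);          // the actual per-user maintenance work
    $update->execute(array($userId)); // record the successful run
}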
Why not still use the cURL idea, but instead of processing only one user per call, send a bunch of users to each call by splitting them into groups of 1,000 or so?
Have you considered changing your logic to commit changes as you process each user? It sounds like you may be running a single transaction to process all users, which may not be necessary.
How about just increasing the execution time limit of PHP?
Also, looking into whether you can improve your upkeep procedure to make it faster can help too. Depending on what exactly you are doing, you could also look into spreading it out a bit: do a few every once in a while rather than everyone at once. But that depends on what exactly you're doing, of course.

Execute php script every 40 milliseconds?

Is there some way to execute a PHP script every 40 milliseconds?
I don't know if a cron job is the right way, because running 25 times per second requires a lot of CPU.
Well, if PHP isn't the correct language, what language should I use?
I am making an online game, and I need something to process what is happening in the game: to move the characters, to calculate projectile paths, etc.
If you try to invoke a PHP script every 40 milliseconds, that will involve:
Create a process
Load PHP
Load and compile the script
Run the compiled script
Remove the process and all of the memory
You're much better off putting your work into the body of a loop, and then using time_sleep_until at the end of the loop to finish out the rest of your 40 milliseconds. Then you run your PHP program once.
Keep in mind, this needs to be a standalone PHP program; running it out of a web page will cause the web server to timeout on that page, and then end your script prematurely.
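A minimal sketch of that single long-running loop; update_game_state() is a hypothetical stand-in for the actual game tick:
$next = microtime(true);
while (true) {
    update_game_state();          // move characters, advance projectiles, etc.
    $next += 0.040;               // schedule the next tick 40 ms later
    if ($next > microtime(true)) {
        time_sleep_until($next);  // sleep out the remainder of this tick
    }                             // otherwise the tick overran - start the next one immediately
}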
Every 40 milliseconds would be impressive. It's not really suited for cron, which runs on 1-minute boundaries.
Perhaps if you explained why you need that level of performance, we could make some better suggestions.
Another thing you have to understand is that it takes time to create processes under UNIX - this may be better suited to a long running task started once and just doing the desired activity every 40ms.
Update: For an online game with that sort of performance, I think you seriously need to consider having a fat client running on the desktop.
By that I mean a language compiled to machine language (not interpreted) and where the bulk of the code runs on the client, using the network only for transmitting information that needs to be shared.
I don't doubt that the interpreted languages are suitable for less performance intensive games but I don't think, from personal experience, you'll be able to get away with them for this purpose.
PHP is a slow, interpreted language. Just opening a file takes almost that amount of time. Executing a PHP script every 40 milliseconds would lead to a huge queue, and a crash very quickly. This definitely sounds like a task you don't want to use PHP for, but rather a daemon or other fast, compiled binary. What are you looking to do?
As far as I know a cron job can only be executed every minute; that's the smallest interval possible. I'm left wondering why you need such a short execution interval.
If you really want it to be PHP, I guess you should keep the process running through a shell, as some kind of daemon, instead of opening/closing it all the time.
I do not know how to do it but I guess you can at least get some inspiration from this post:
http://kevin.vanzonneveld.net/techblog/article/create_daemons_in_php/
As everyone else is saying, starting a new process every 40 ms doesn't sound like a good idea. It would be interesting to know what you're trying to do. What do you want to happen if one execution for some reason takes more than 40 ms? If you're not careful you might get lots of processes running simultaneously, stepping on each other's toes.
Which language to use will depend a lot on what you're trying to do, but you should choose a language with thread support so you don't have to fork a new process all the time. Java or Python might be suitable.
I'm not so sure every 40 ms is realistic if the back-end job has to deal with things like database queries. You'd probably do better working out a way to be adaptive to system conditions and trying hard to run N times per second, rather than every 40 ms like clockwork. Again, this depends on the complexity of what you need to accomplish behind the curtain.
PHP is probably not the best language to write this with. This is for several reasons:
Depending on the version of PHP, garbage collection may be broken. If you daemonize, you run a risk of leaking memory N times a second.
Other reasons detailed in this answer.
Try using C or Python and keep track of how long each iteration takes. This lets you make a 'best effort' to run N times a second, or every 40 ms, whichever is longer. This avoids your process perpetually running behind, where every time it finishes it's already late to start again.
Again, I'm not sure how long these tasks should take under a 'worst case' system load, so my answer may or may not apply in full. Regardless, I advise you not to write a standalone daemon in PHP.
PHP is the wrong language for this job. If you want to do something updating that fast in a browser, you need to use JavaScript. PHP is only for the backend, which means everything PHP does has to be sent from your server to the browser and then rendered.
