PHP Script Crashes and Max Execution Time

I currently have a website that has twice been suspended by my hosting provider for "overusing system resources". In each case, there were 300 - 400 crashed copies of one of my PHP scripts left running on the server.
The scripts themselves pull an image from a web camera at home and copy it to the server. They make use of file locks to ensure only one can write at a time. The scripts are called every 3 seconds by any client viewing the page.
Initially I was confused, as I had understood that a PHP script either completes (returning the result), or crashes (returning the internal server error page). I am, however, informed that "defunct scripts" are a very common occurrence.
Would anyone be able to educate me? I have Googled this to death but I cannot see how a script can end up in a crashed state. Would it not time out when it reaches the max execution time?
My hosting provider is using PHP set up as CGI on a Linux platform. I believe that I have actually identified the problem with my script in that I did not realise that flock was a blocking function (and I am not using the LOCK_NB mask). I am assuming that somehow hundreds of copies of my script end up blocked waiting for a resource to become available and this leads to a crash? Does this sound plausible? I am reluctant to re-enable the site for fear of it being suspended again.
Any insights greatly appreciated.

The approach I would probably recommend is to use tempnam() first and write the contents into the temporary file (which may take a while). Once that's done, you do the file locking, etc. - see the sketch below.
Not sure if this happens when a PUT request is being done; typically PHP will handle file uploads first before handing over the execution to your script.
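A minimal sketch of that idea, combined with the non-blocking flock() (LOCK_NB) the asker mentions, so a script that cannot get the lock gives up instead of piling up behind it. The destination path and the $imageData variable are placeholders, not anything from the original scripts:

<?php
// Write the new webcam frame to a temporary file first; this can take as
// long as it needs because no lock is held yet.
$tmp = tempnam(sys_get_temp_dir(), 'cam');
file_put_contents($tmp, $imageData);          // $imageData: uploaded image bytes (assumed)

// Only now touch the shared destination, and don't block if another copy has it.
$dest = '/path/to/webcam.jpg';                // placeholder destination
$fp   = fopen($dest, 'c');

if ($fp !== false && flock($fp, LOCK_EX | LOCK_NB)) {
    copy($tmp, $dest);                        // publish the finished frame
    flock($fp, LOCK_UN);
}
if ($fp !== false) {
    fclose($fp);
}
unlink($tmp);                                 // the temp file is no longer needed

With LOCK_NB, a burst of simultaneous requests leaves at most one writer and a few scripts that exit straight away, rather than hundreds blocked inside flock().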

A script can crash against either of these two limits while working with resources:
max_execution_time
memory_limit
That is assuming you have no other errors in the script - check for notice-level errors too.
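You can see what the host actually allows with a few lines; the directive names are standard php.ini settings, and whether ini_set() is honoured depends on the host:

<?php
// Inspect the two limits mentioned above on the live server.
echo 'max_execution_time: ' . ini_get('max_execution_time') . " seconds\n";
echo 'memory_limit:       ' . ini_get('memory_limit') . "\n";

// Raising them at runtime may or may not be permitted on shared hosting.
@ini_set('memory_limit', '256M');
@set_time_limit(300);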

Related

PHP script keeps "restarting" creating new instances of itself

I developed a site using Zend Framework 2. It is basically a price comparison site that integrates with many of the top affiliate networks out there. I wrote a script that checks prices from each affiliate network, and then updates my local DB with that price. Depending on which affiliate network I am contacting, I may be making an API call (Amazon or CJ.com), or I may be looking at an XML product feed (Pepperjam or LinkShare). The XML product feed would be hosted locally.
At present, there are around 3,500 SKUs that I am checking with this script. The vast majority of them (95%+) are targeting an XML product feed. I would estimate that this script should take somewhere in the neighborhood of 10 minutes to complete. Some of the XML files I am looking at are around 8 MB in size.
I have tested this script thoroughly in my local environment and taken great lengths to make sure that there is no memory leak or something of that nature which would cause performance issues. As an example, I made sure to use data streams where possible to avoid putting the XML file in memory over and over, etc. Suffice to say, the script runs locally without issue.
This script is intended to be run as a cron job, however I do have a way to trigger it via the secure admin interface ad-hoc. Locally, this is how I initiate the script to run, and everything goes rather smoothly.
When I deploy my code to the shared hosting account, I am having all sorts of problems. In order to troubleshoot, I attached logging to various stages of this script to track when it starts, how it progresses, and when each step completes, etc. All of this is being logged to a MySQL database.
Problem #1: If I run the script ad-hoc via an HTTP request, I find that it will run for a couple of minutes, and then the script starts again (so there are now apparently two instances running). Wait another couple of minutes, and a third one will start, etc. Here is an example from when I triggered the script to run at 10:09pm via an HTTP request.
[Screenshot of process manager]
Needless to say, I DO NOT run it via an HTTP request because it only serves to get me in trouble with my web hosting provider :)
Problem #2: When the script runs on the server, triggered via a cron job, it fails to complete. I have taken the production copy of the database and the XML files locally, and it runs fine there, so it should not be a problem with bad data exposing bad code. My observation is that the script runs for nearly the same amount of time each attempt before it aborts, is terminated, or whatever. The last record updated is generally timestamped around 4 minutes and 30 seconds or so (if memory serves) after the script is triggered. The SKU list is constantly changing, so the record it ends on differs, but the time of the last update is nearly the same each time. Nothing is being logged in the error logs. I monitored server resources via the SSH top command and there is nothing out of the ordinary: CPU usage is in check and memory usage does not go up.
I have a shared hosting account through Bluehost. My thoughts were that perhaps it was a script max execution time issue. I extended the max execution time in the script itself and via php.ini. Made no difference.
So I guess what I am looking for is some fresh ideas on where to go next. What questions should I be asking my hosting company so they can help me get to the bottom of this? They are only somewhat helpful, to say the least. Could it be some limitation on my hosting account? Am I triggering some sort of automatic monitor that is killing the script? What types of Apache settings could be problematic for a script of this nature? PHP.ini settings? Absolutely any input you can provide would be helpful.
And why, when triggered via HTTP, would it keep spinning up new instances? I guess I could live without running it manually and only run it via a cron job, but that isn't working either. So... interested in hearing the community's thoughts on this. Thanks!
I haven't seen your script, nor have I worked with your host, so everything below is just a guess - and a suggestion.
Given your description, I would say you're right that your script might have been killed by a timeout when run from cron. I'm not sure why it keeps spawning new instances when you execute it manually via an HTTP request, but that may also be related to a timeout (e.g. if they have logic that restarts a script if it has not produced any output within a certain time, or something like that).
You can follow up with your hosting provider about running long-running (or memory-consuming) scripts in their environment; they might already have an FAQ or document written that covers this topic.
Let me suggest an option for you in case your provider is unable to help.
From what you said, I expect your script runs an SQL query to get a list of SKUs, and then slowly iterates over this list, performing some job on every item (and eventually dies for whatever reason, as we learned).
How about creating a temporary table (or file - any kind of persistent storage on the server) that saves the last processed record ID, or NULL if the script completed successfully? That way you can make the script start from the last processed record (if the last processed record had id = 1000, add ... WHERE id > 1000 to the main query that fetches SKUs), and you won't really care whether the script completed on its first attempt or not (if not, it will pick up processing from the very point where it was killed on its second try).
Alternatively, to extend this approach, you can limit one invocation to a certain number of records to process (e.g. 100 or 1000), again saving the last processed record ID in the database or somewhere else.
The main idea is: if the script fails to process all SKUs at once, just make it restartable so that it does not lose its progress.
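A rough sketch of that restartable pattern, assuming a one-row progress table and a hypothetical updatePriceForSku() helper; every name here (tables, columns, credentials) is made up for illustration:

<?php
// Hypothetical schema: price_check_progress(last_id INT NULL) with one row,
// and skus(id INT, sku VARCHAR). Credentials are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=prices', 'user', 'pass');

// Where did the previous run stop? NULL (or no row) means it completed.
$lastId = $pdo->query('SELECT last_id FROM price_check_progress LIMIT 1')->fetchColumn();
$lastId = ($lastId === false || $lastId === null) ? 0 : (int) $lastId;

$stmt   = $pdo->prepare('SELECT id, sku FROM skus WHERE id > ? ORDER BY id');
$update = $pdo->prepare('UPDATE price_check_progress SET last_id = ?');
$stmt->execute(array($lastId));

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    updatePriceForSku($row['sku']);        // hypothetical: API call or XML feed lookup

    // Persist progress after every record so a killed run loses nothing.
    $update->execute(array($row['id']));
}

// Mark the run as complete so the next invocation starts from the top.
$pdo->prepare('UPDATE price_check_progress SET last_id = NULL')->execute();

If the cron job (or the host's watchdog) kills the script at the four-and-a-half-minute mark, the next run simply carries on from the last recorded id.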

Long-lasting script prevents handling of new requests

I have a PHP script on my Apache web server which starts another PHP script that runs for several hours. Right after the long-running script is started, no other PHP script requests are handled. The browser just hangs indefinitely.
The background script crawls other sites and gathers data from them, so it takes quite a long time.
At the same time, static pages are served without problems, and any PHP script started locally on the server from bash executes without problems.
CPU and RAM usage are low. In fact it's a test server and my requests are the only ones being handled.
I tried to decrease the number of Apache processes in order to be able to trace all of them and see where the requests were hanging, but when I decreased the number of processes to 2 the problem went away.
I found no errors in either syslog or apache/error.log.
What else can I check?
Though I didn't find the reason for Apache hanging, I solved the task in a different way.
I set up a schedule to run a script every 5 minutes. The web-facing script just creates a file with the necessary parameters. The scheduled script checks for that file; if it exists, it reads its contents and deletes the file to prevent it from being started again on the next scheduled run.
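A sketch of the scheduled side of that hand-off; the flag file path and the runCrawler() function are assumptions, and the web-facing script only writes the file:

<?php
// Run from cron every 5 minutes. The web script simply did something like:
//   file_put_contents('/tmp/crawl-request.json', json_encode($params));
$flag = '/tmp/crawl-request.json';

if (is_file($flag)) {
    $params = json_decode(file_get_contents($flag), true);
    unlink($flag);            // remove it first so the next scheduled run doesn't start it again
    runCrawler($params);      // hypothetical long-running crawl
}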

Does Apache / PHP block a single file from being processed by the same client concurrently?

I have everything setup and running with KalturaCE & Drupak and the server (Ubuntu 8.02+Apache2+PHP5+MySql) is working fine.
The issue here I am having is unclassified.
When I play two videos together from my site, the second video, which I started later, doesn't begin until the first completes its buffering. I did some HTTP watching and found that both entries request the file with a URL of the following form:
/kalturace/p/1/sp/100/flvclipper/entry_id/xxxxxx/flavor/1/version/100000
The first video I played receives a 302 redirect response to a URL of the following form:
/kalturace/p/1/sp/100/flvclipper/entry_id/xxxxxxx/flavor/1/version/100000/a.flv?novar=0
and starts buffering and playing, while the second video, which I started later, just waits for a response until the first video ends its buffering; only then does the second video receive its 302 redirect and start buffering.
My question is: why can't both videos buffer concurrently? Obviously that is what I need.
Your help is much appreciated and very welcome.
PHP file-based sessions will lock the session file while a request is active. If you intend to use parallel requests like this, you'll have to make sure each script closes the session as soon as possible (ie: after writing out any changes) with session_write_close() to keep the lock time to a minimum.
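A minimal illustration of that advice; what you read from the session is up to you, the point is only that the lock is released before the slow clipping/streaming work starts:

<?php
session_start();
// Read whatever this request needs from the session while the lock is held.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

session_write_close();        // releases the session file lock immediately

// ...long-running work (e.g. serving the flv clip) happens here; a second
// request from the same browser is no longer forced to wait for this one...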
I found the solution to my problem. It was indeed what Marc suggested in the first place, but there was a problem with a MySQL timeout as well.
So putting session_write_close() in both files at the right places solved my problem.
For a complete overview please visit the suggested thread at http://www.kaltura.org/videos-not-playing-simultaneously-0
I posted several suggestions at http://drupal.org/node/1002144 for the same question, this is one of them:
In Apache space, a possible cause might be MaxClients (or, the capacity of your Apache server to respond to multiple requests). If that apache setting or server capacity is low, then the server may be dropping your connections into its backlog until the first connection is completed. You can test this using regular large files which take some time to download on the server - that will establish whether it's KalturaCE or Apache which is causing the issue. To see this you'd either need a site which is already sitting at max for MaxClients (ie, not an unused dev site), or a very low MaxClients setting (like, 1!).
Another suggestion posted at http://drupal.org/node/1002144 for the same question -
It's possible that this behaviour might be due to a mysql query lock. If KalturaCE's SQL was locking a table until the first request is completed (I have no reason to believe this is the case, just floating possible causes), then a second request might hang like this.
I'm not familiar enough with CE to suggest that this is the case, but you could easily debug using mtop on the server during the request to see if this is what's actually happening.

why does apache/server die when i make a typo and cause a never ending loop?

this is more of a fundamental question about how apache/threading works.
in this hypothetical (read: sometimes i suck and write terrible code), i write some code that enters the infinite-recursion phase of its life. then what's expected happens: the server stalls.
even if i close the tab, open up a new one, and hit the site again (locally, of course), it does nothing. even if i hit a different domain i'm hosting through a vhost declaration, nothing. i normally have to wait a number of seconds before apache can begin handling traffic again. most of the time i just get tired and restart the server manually.
can someone explain this process to me? i have the php runtime setting 'ignore_user_abort' set to true to allow ajax calls to keep running even if the user closes their browser, but would setting this to false affect it?
any help would be appreciated. didn't know what to search for.
thanks.
ignore_user_abort() allows your script (and Apache) to ignore a user disconnecting (closing the browser/tab, moving away from the page, hitting ESC, etc.) and continue processing. This is useful in some cases - for instance in a shopping cart once the user hits "yes, place the order". You really don't want an order to die halfway through the process, e.g. the order's in the database but the charge hasn't been sent to the payment facility yet. Or vice versa.
However, while this script is busily running away in "the background", it will lock up resources on the server, especially the session file - PHP locks the session file to make sure that multiple parallel requests won't stomp all over it, so while your infinite loop is running in the background, you won't be able to use any other session-enabled part of the site. And if the loop is intensive enough, it could tie up the CPU enough that Apache is unable to handle any other requests on other hosted sites, where the session lock might not apply.
If it is an infinite loop, you'll have to wait until PHP's own maximum allowed run time (set_time_limit() and the max_execution_time ini setting) kicks in and kills the script. There are also some server-side limiters, like Apache's RLimitCPU and TimeOut, that can handle situations like this.
Note that except on Windows, PHP doesn't count "external" time in the set_time_limit. So if your runaway process is doing database stuff, calling external programs via system() and the like, the time spent running those external calls is NOT accounted for in the parent's time limit.
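A small demonstration of that last point, assuming a Linux box where system() is available - not something to run in production, just a way to see the accounting:

<?php
set_time_limit(5);             // allow 5 seconds of script execution time

system('sleep 30');            // external call: these 30 seconds are NOT counted

while (true) {                 // pure PHP work: this IS counted and will end with
    $x = sqrt(mt_rand());      // "Fatal error: Maximum execution time exceeded"
}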
If you write code that causes an (effectively) neverending loop, then apache will execute that, and be unable to respond to any additional new requests for a page, because it's trying to determine the page content (for the served page which caused the neverending loop) by executing the (non-terminating) php code.
Solution: don't write code that doesn't terminate (in a reasonable amount of time). Understand loop invariants.

How to Increase the time till a read timeout error occurs?

I've written a PHP script that takes a long time to execute [image processing for thousands of pictures]. It's a matter of hours - maybe 5.
After 15 minutes of processing, I get the error:
ERROR
The requested URL could not be retrieved
The following error was encountered while trying to retrieve the URL: The URL which I clicked
Read Timeout
The system returned: [No Error]
A Timeout occurred while waiting to read data from the network. The network or server may be down or congested. Please retry your request.
Your cache administrator is webmaster.
What I need is to enable that script to run for much longer.
Now, here are all the technical info:
I'm writing in PHP and using the Zend Framework. I'm using Firefox. The long script is run after clicking a link. Obviously, since the script is not finished, I still see the web page the link was on, and the browser shows "waiting for ...".
After 15 minutes the error occurs.
I tried making changes to Firefox through about:config, but without any success. I don't know - maybe the changes are needed somewhere else.
So, any ideas?
Thanks ahead.
set_time_limit(0) will only affect the server-side running of the script. The error you're receiving is purely browser-side. You have to send SOMETHING to keep the browser from deciding the connection is dead - even a single character of output (followed by a flush() to make sure it actually gets sent out over the wire) will do. Maybe once for every image that's processed, or on a fixed time interval (if the last character was sent more than 5 minutes ago, output another one).
If you don't want any intermediate output, you could do ignore_user_abort(TRUE), which will allow the script to keep running even if the connection gets shut down from the client side.
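A sketch of that keep-alive approach, assuming the work is a loop over the images; $images and processImage() are placeholders:

<?php
set_time_limit(0);             // no server-side limit
ignore_user_abort(true);       // keep going even if the browser gives up anyway

foreach ($images as $image) {
    processImage($image);      // hypothetical per-image processing

    echo '.';                  // a single character is enough to keep the line alive
    if (ob_get_level() > 0) {
        ob_flush();            // empty PHP's own output buffer first, if one is active
    }
    flush();                   // push the byte out over the wire
}

For a job that genuinely runs for hours, the cron/batch approaches described below are the more robust fix.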
If the process runs for hours then you should probably look into batch processing. So you just store a request for image processing (in a file, database or whatever works for you) instead of starting the image processing. This request is then picked up by a scheduled (cron) process running on the server, which will do the actual processing (this can be a PHP script, which calls set_time_limit(0)). And when processing is finished you could signal the user (by mail or any other way that works for you) that the processing is finished.
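A rough sketch of that batch hand-off, with a made-up processing_queue table and processImage() helper; the web request only inserts a 'pending' row, and this worker runs from cron:

<?php
set_time_limit(0);   // the cron-side worker is allowed to run as long as it needs

// Placeholder credentials and schema: processing_queue(id, image_path, email, status).
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$jobs = $pdo->query("SELECT id, image_path, email FROM processing_queue WHERE status = 'pending'");

foreach ($jobs as $job) {
    processImage($job['image_path']);   // hypothetical long-running work

    $pdo->prepare("UPDATE processing_queue SET status = 'done' WHERE id = ?")
        ->execute(array($job['id']));

    // Signal the user that their images are ready.
    mail($job['email'], 'Processing finished', 'Your images have been processed.');
}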
Use set_time_limit().
Documentation is here:
http://nl.php.net/manual/en/function.set-time-limit.php
If you can split your work in batches, after processing X images display the page with some javascript (or META redirects) on it to open the link http://server/controller/action/nextbatch/next_batch_id.
Rinse and repeat.
Batching the entire process also has the added benefit that if something goes wrong, you don't have to start the entire thing anew.
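A sketch of that batch-and-redirect idea; for brevity it uses a plain query parameter rather than the pretty URL above, and the batch size, route, getImagesForBatch() and processImage() are all assumptions:

<?php
$batchId   = isset($_GET['batch']) ? (int) $_GET['batch'] : 0;
$batchSize = 50;

$images = getImagesForBatch($batchId, $batchSize);   // hypothetical fetch of this batch

foreach ($images as $image) {
    processImage($image);                            // hypothetical per-image processing
}

if (count($images) === $batchSize) {
    // More work left: bounce the browser on to the next batch.
    $next = $batchId + 1;
    echo '<meta http-equiv="refresh" content="1;url=/controller/action/nextbatch/' . $next . '">';
    echo "Batch $batchId done, continuing...";
} else {
    echo 'All batches processed.';
}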
If you're running on a server of your own and can get out of safe_mode, then you could also fork background processes to do the actual heavy lifting, independent of your browser view of things. If you're in a multicore or multiprocessor environment, you can even schedule more than one running process at any time.
We've done something like that for large computation scripts; synchronization of the processes happened over a shared database - but luckily enough, the processes were so independent that the only thing we needed to watch for was their completion or termination.
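A very rough sketch of that forking setup; it needs the pcntl extension (usually CLI-only and rarely available on shared hosts), and $allImages (assumed non-empty) and processImage() are placeholders:

<?php
$workers = 4;
$chunks  = array_chunk($allImages, (int) ceil(count($allImages) / $workers));

foreach ($chunks as $chunk) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    }
    if ($pid === 0) {
        // Child process: work through its own share, then exit.
        foreach ($chunk as $image) {
            processImage($image);     // hypothetical per-image processing
        }
        exit(0);
    }
    // Parent: continue the loop and fork the next worker.
}

// Parent waits for every child; as noted above, completion or termination is
// all we really check for.
while (pcntl_waitpid(-1, $status) > 0) {
    // nothing to do per child
}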
