I have some code that I know will hit the maximum execution timeout at some point.
I tried using try-catch to handle this error, but I have just been informed that timeouts can't be caught this way.
Is there a way I can catch the error, or measure processing time and handle the error just before it reaches the timeout?
As rightly mentioned by iTom, first you need to figure out why your code is taking 30 seconds.
If it is expected and you are doing some sort of database update, then Drupal has a provision to handle this: you need to use the Drupal Batch API.
Check here for more details: https://api.drupal.org/api/drupal/includes!form.inc/group/batch/7
Functions allowing forms processing to be spread out over several page requests, thus ensuring that the processing does not get interrupted because of a PHP timeout, while allowing the user to receive feedback on the progress of the ongoing operations.
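If it helps, here's a minimal sketch of how the Batch API is typically used in Drupal 7. The my_module_* callbacks and the $record_ids variable are placeholders, not something from your code:

// Split the work into operations; Drupal runs them across requests.
$batch = array(
  'title' => t('Updating database records'),
  'operations' => array(),
  'finished' => 'my_module_batch_finished', // hypothetical completion callback
);
foreach (array_chunk($record_ids, 50) as $chunk) {
  $batch['operations'][] = array('my_module_process_chunk', array($chunk));
}
batch_set($batch);
// Inside a form submit handler batch_set() is enough; elsewhere,
// start the batch explicitly and redirect when it finishes:
batch_process('admin/reports');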
Maximum execution time is a PHP error, not a PHP exception, so your error-handling code will be unable to catch an exception that doesn't actually exist. The execution time limit is really a last resort for the PHP server to kill a script that has basically gone out of control.
You really need to look into why your database code is taking ~30 seconds to execute and resolve the code/database issue. Another (not recommended) option would be to increase the maximum execution time in PHP to something suitable for your code.
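While the timeout itself can't be caught, you can at least detect it after the fact; a shutdown function still runs when PHP halts on a fatal error. A small sketch:

register_shutdown_function(function () {
    $error = error_get_last();
    if ($error !== null
        && $error['type'] === E_ERROR
        && strpos($error['message'], 'Maximum execution time') === 0) {
        // The script was killed by max_execution_time; log it.
        error_log('Killed by the execution time limit: ' . $error['message']);
    }
});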
You can echo time() before and after the function to measure the processing time.
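For sub-second precision, microtime(true) works the same way; for example (do_heavy_work() is a stand-in for your own code):

$start = microtime(true);            // float seconds, sub-second precision
do_heavy_work();                     // hypothetical long-running call
$elapsed = microtime(true) - $start;
echo "Took {$elapsed} seconds\n";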
I've got a script that does heavy database manipulation.
It worked fine until I reached a higher number of databases to run the manipulations on; now I collide with the max_execution_time timeout.
Is it possible to handle the databases one by one by redirecting to the same script, via a header('Location: ...'), with a parameter naming the next DB?
Or would this not affect max_execution_time, and time out anyway?
Try this: add this line at the top of your PHP script.
ini_set('max_execution_time', 0);
But remember one thing: this is not a good trick to rely on. You should review your script to minimize the execution time.
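As for the redirect idea in the question: each redirect starts a brand-new request with its own max_execution_time, so handling one database per request does work. A minimal sketch, where get_database_list() and process_database() are hypothetical helpers:

$databases = get_database_list();
$index = isset($_GET['db']) ? (int) $_GET['db'] : 0;

if ($index < count($databases)) {
    process_database($databases[$index]);
    // A fresh request gets a fresh time limit, so the limit now
    // applies per database instead of to the whole run.
    header('Location: ' . $_SERVER['PHP_SELF'] . '?db=' . ($index + 1));
    exit;
}

echo 'All databases processed.';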
Hi, I have had an issue where two visitors hit a PHP function within a second of each other. This function sends them a one-time-use code from a pool of codes, and it sent both people the same code.
What methods can I use in my script to check whether someone else is already being processed, and either delay or wait for the other person to finish?
I know this seems like a really general question; it's hard to explain what I mean! Hopefully someone can help!
What methods can I use in my script to check whether someone else is already being processed, and either delay or wait for the other person to finish?
That would be what we call a "mutex", short for mutually exclusive.
Notice that without knowing how your PHP is run on your server, it's hard to know whether PHP's built-in mutex routines will work. PHP is a bad language when it comes to multithreading.
If your pool of codes lives in the database, you could use transactions and lock the relevant rows for reading while one request is obtaining a code. Wherever the data live, you will have to introduce some way of locking or queuing requests to deal with concurrent access.
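For example, with MySQL/InnoDB you could claim a code atomically using SELECT ... FOR UPDATE. A sketch, assuming a codes table with id, code and used columns:

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // FOR UPDATE locks the row until commit, so a concurrent request
    // blocks here instead of reading the same unused code.
    $row = $pdo->query(
        'SELECT id, code FROM codes WHERE used = 0 ORDER BY id LIMIT 1 FOR UPDATE'
    )->fetch(PDO::FETCH_ASSOC);

    if ($row) {
        $pdo->prepare('UPDATE codes SET used = 1 WHERE id = ?')
            ->execute(array($row['id']));
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}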
I have an MVC framework with a controller in it. The controller downloads images from a server, and I need to refresh my database with those images every 5 minutes. So I planned to create a PHP script which downloads the files and persists them to my database; to run it every 5 minutes, I will set up a cron job.
Now the questions are:
What is the best practice for handling errors inside the PHP script? Cron will keep executing every 5 minutes without knowing that the last queried image was lost and never saved.
How do I notify myself that something unusual happened and that I need to maintain DB consistency myself (which I don't mind doing for a few instances)?
What is the best practice for handling errors inside the PHP script? Cron will keep executing every 5 minutes without knowing that the last queried image was lost and never saved.
Use asserts, as described here: http://php.net/manual/en/function.assert.php
How do I notify myself that something unusual happened and that I need to maintain DB consistency myself (which I don't mind doing for a few instances)?
Use mail() inside the assert callback.
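Roughly like this; note that assert_options() is deprecated in recent PHP versions, and the address and save_image_to_db() helper below are placeholders:

// Email yourself whenever an assertion fails.
assert_options(ASSERT_ACTIVE, 1);
assert_options(ASSERT_CALLBACK, function ($file, $line, $code) {
    $message = "Assertion failed at $file:$line: $code";
    error_log($message);
    mail('you@example.com', 'Cron image sync failure', $message);
});

$saved = save_image_to_db($image);   // hypothetical persistence helper
assert($saved !== false);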
Use try-catch along with database transactions (if possible). You can dump errors with error_log() and either configure that to generate email or add email to your error handler.
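A minimal sketch of that pattern; the DSN details and the fetch_image() helper are assumptions:

$pdo = new PDO($dsn, $user, $pass,
    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

try {
    $image = fetch_image($url);      // hypothetical download helper
    $pdo->beginTransaction();
    $pdo->prepare('INSERT INTO images (url, data) VALUES (?, ?)')
        ->execute(array($url, $image));
    $pdo->commit();
} catch (Exception $e) {
    if ($pdo->inTransaction()) {
        $pdo->rollBack();
    }
    // error_log() output can be routed to a file, or turned into
    // email by your error handler, as described above.
    error_log('Image sync failed: ' . $e->getMessage());
}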
In addition to the other comments: for cron scripts that could run into problems, or that take longer than the desired execution interval so that multiple instances could end up running at once, I have often found it useful to keep a small text file recording the last execution time, success status, and so on, which you can inspect to decide whether the script should run as scheduled. It can be as simple as writing a file at script start and deleting it on successful completion, then checking for that file on the next run to decide whether to proceed, as sketched below.
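Something along these lines; the lock-file path is an assumption, and finally requires PHP 5.5+:

$lockFile = '/tmp/image_sync.lock';

if (file_exists($lockFile)) {
    error_log('Previous run still active since '
        . file_get_contents($lockFile) . '; skipping this run.');
    exit;
}

file_put_contents($lockFile, date('c'));  // record when this run started
try {
    // ... download and persist the images here ...
} finally {
    unlink($lockFile);  // release the lock whether we succeeded or failed
}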
I have a PHP function that I want to make publicly available on the web, but it uses a lot of server resources each time it is called.
What I'd like to happen is that a user who calls this function is forced to wait for some time, before the function is called (or, at the least, before they can call it a second time).
I'd greatly prefer this 'wait' to be enforced on the server-side, so that it can't be overridden by dubious clients.
I plan to insist that users log into an online account.
Is there an efficient way I can make the user wait, without using server resources?
Would 'sleep()' be an appropriate way to do this?
Are there any suggested problems with using sleep()?
Is there a better solution to this?
Excuse my ignorance, and thanks!
sleep() would be fine if you were using PHP as a command-line tool, for example. For a website, though, your sleep will hold the connection open. Your webserver only has a finite number of concurrent connections, so this could be used to DoS your site.
A better, but more involved, way would be to use a job queue: add the task to a queue which is processed by a scheduled script, and update the web page using AJAX or a meta refresh.
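Very roughly, where the job_queue table is an assumption and $pdo is an existing PDO connection:

// The web request only enqueues the work; a cron-driven worker
// script processes pending rows and marks them done.
$pdo->prepare('INSERT INTO job_queue (user_id, payload, status) VALUES (?, ?, ?)')
    ->execute(array($userId, json_encode($params), 'pending'));

// The page can then poll an endpoint via AJAX to see when the
// worker has marked this job as finished.
echo json_encode(array('status' => 'queued'));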
sleep() is a bad idea in almost all possible situations. In your case, it's bad because it keeps the connection to the client open, and most webservers have a limit on open connections.
sleep() will not help you at all. The user could just load the page twice at the same time, and the command would be executed twice right after each other.
Instead, you could save a timestamp in your database for when your function was last invoked. Then, before invoking it, you should check the database to see if a suitable amount of time has passed. If it has, invoke the function and update the timestamp in the database.
If you're planning on enforcing a user login, then the problem just got a whole lot simpler.
Have a record in the database listing users and the last time they used your resource-consuming service, and measure the time difference between then and now. If the time difference is too small, deny access and display an error message.
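A sketch of that check; the table and column names are assumptions, and $pdo is an existing PDO connection:

$cooldown = 60;  // minimum seconds between calls, per user

$stmt = $pdo->prepare('SELECT last_used FROM user_limits WHERE user_id = ?');
$stmt->execute(array($userId));
$lastUsed = (int) $stmt->fetchColumn();

if (time() - $lastUsed < $cooldown) {
    http_response_code(429);  // "Too Many Requests"
    exit('Please wait before calling this again.');
}

// REPLACE INTO is MySQL-specific; use an UPSERT on other databases.
$pdo->prepare('REPLACE INTO user_limits (user_id, last_used) VALUES (?, ?)')
    ->execute(array($userId, time()));

expensive_function();  // the hypothetical resource-heavy call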
This is best handled at the server level. No reason to even invoke PHP for repeat requests.
Like many sites, I use Nginx, and you can use its rate limiting to block repeat requests over a certain number: say, three requests per IP per hour.
Basically I need to get around max execution time.
I need to scrape pages for info at varying intervals, which means calling the bot at those intervals to load a link from the database and scrape the page the link points to.
The problem is loading the bot. If I load it with JavaScript (like an Ajax call), the browser will throw up an error saying that the page is taking too long to respond, yadda yadda yadda, plus I will have to keep the page open.
If I do it from within PHP, I could probably extend the execution time to however long is needed, but then if it does throw an error I have no way to kill the process, and nothing is displayed in the browser until the PHP execution is completed, right?
I was wondering if anyone had any tricks to get around this? The scraper should execute by itself at various intervals without me needing to watch it the whole time.
Cheers :)
Use set_time_limit(), like so:
set_time_limit(0);
// Do Time Consuming Operations Here
"nothing is displayed in the browser until the PHP execute is completed"
You can use flush() to work around this:
flush()
(PHP 4, PHP 5)
Flushes the output buffers of PHP and whatever backend PHP is using (CGI, a web server, etc). This effectively tries to push all the output so far to the user's browser.
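Putting the two together, the scraping loop might look something like this; $links and scrape_page() are placeholders, and if output buffering is enabled you need ob_flush() before flush():

set_time_limit(0);                 // lift the execution time limit

foreach ($links as $link) {        // $links: URLs loaded from the database
    scrape_page($link);            // hypothetical scraping routine
    echo 'Done: ' . $link . "<br>\n";
    if (ob_get_level() > 0) {
        ob_flush();                // flush PHP's output buffer first, if any
    }
    flush();                       // then push the output to the browser
}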
Take a look at how Sphider (a PHP search engine) does this.
Basically, you just process some part of the sites you need, do your thing, and go on to the next request if a continue=true parameter is set.
Run it via cron and split the spider's work into chunks, so each run only does a few chunks at once; call it from cron with different parameters to process only a few chunks at a time.
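A bare-bones sketch of that chunked approach, where load_links_from_db() and scrape_page() are placeholders:

// cron calls this script with a chunk number, e.g.:
//   */5 * * * * php spider.php 0
$chunk = isset($argv[1]) ? (int) $argv[1] : 0;
$perChunk = 10;

// Fetch only this run's slice of links (offset, limit).
$links = load_links_from_db($chunk * $perChunk, $perChunk);
foreach ($links as $link) {
    scrape_page($link);
}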