In a Magento system, I have a global max_execution_time set to 60 seconds for PHP requests. However, this limit is not enough for some calls to the SOAP API. What I would like to do is increase the execution time to say 10 minutes for API requests, without affecting the normal frontend pages.
How can I increase max_execution_time for Magento SOAP requests only?
For Apache
This can be configured in your vhost with a <LocationMatch "/api/"> block. Place the max execution time setting inside that block (along with any other rules, such as a memory limit) and it will only be applied to requests that hit /api/.*
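As a sketch, assuming PHP runs as an Apache module (mod_php), so that `php_value` is available; the paths and values are illustrative:

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/magento

    # Raise limits only for requests hitting the SOAP/XML-RPC API
    <LocationMatch "/api/">
        php_value max_execution_time 600
        php_value memory_limit 512M
    </LocationMatch>
</VirtualHost>
```

Note that `php_value` has no effect when PHP runs via FastCGI/PHP-FPM; in that setup the limit has to be raised on the PHP-FPM side instead.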
For Nginx
In Nginx, you should be able to accomplish the same thing with:
location ~ ^/api {
fastcgi_read_timeout 600;
}
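Keep in mind that `fastcgi_read_timeout` only stops Nginx from closing the connection early; PHP-FPM will still enforce its own `max_execution_time`. A sketch that raises both for the API location only (the socket path is an assumption, and PHP-FPM must be configured to accept `PHP_VALUE` overrides):

```nginx
location ~ ^/api {
    include fastcgi_params;
    fastcgi_pass unix:/var/run/php-fpm.sock;   # assumed socket path
    fastcgi_read_timeout 600;
    # Override php.ini for this location only
    fastcgi_param PHP_VALUE "max_execution_time=600";
}
```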
Related
How do I increase the Apache timeout directive in .htaccess? I have a long $_POST form that takes a user probably 10 minutes to fill in all the data. The problem is that if it takes too long, the page times out or something and it goes to a "webpage not found" error page. Would increasing the Apache timeout directive in .htaccess be the answer I'm looking for? I guess it's set to 300 seconds by default, but I don't know how to increase that, or whether that's even what I should do... Either way, how do I increase the default time? Thank you.
If you have long-running server-side code, I don't think it results in a 404 as you said ("it goes to a webpage is not found error page").
The browser should report a request timeout error instead.
You may do 2 things:
Based on your CGI/server-side engine, increase the timeout there.
PHP: http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time - the default is 30 seconds
In php.ini:
max_execution_time = 60
Increase the Apache timeout - the default is 300 seconds (in version 2.4 it is 60).
In your httpd.conf (in server config or vhost config)
TimeOut 600
Note that the first setting allows your PHP script to run longer; it will not interfere with the network timeout.
The second setting modifies the maximum amount of time the server will wait for certain events before failing a request.
Sorry, I'm not sure whether you are using PHP for server-side processing, but if you provide more info I can be more accurate.
Just in case this helps anyone else:
If you're going to be adding the TimeOut directive, and your website uses multiple vhosts (eg. one for port 80, one for port 443), then don't forget to add the directive to all of them!
This solution is for Litespeed Server (Apache as well)
Add the following code in .htaccess
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
Litespeed reference
So I have a script which loops doing multiple cURL calls. After about 7-9 minutes it randomly stops execution. I have set the .user.ini file to adjust these settings:
max_execution_time = 30000
max_input_time = 200
I believe I have fastCGI but can't for the life of me figure out why this keeps dying on me. I have a submit form on the front end and I just get a 500 when it dies with nothing in the error log. Anything else I could be missing? Some PHP setting somewhere limiting the number of cURLs or execution time?
EDIT: So this issue was definitely FastCGI limiting my time with the "FcgidBusyTimeout" parameter. My hosting company upped it for me as a test and everything worked great. The issue now is that because I'm on shared hosting they won't raise FastCGI timeouts for individual users. I'm going to try to loop my script onto itself (kind of like a function loop where it calls itself again) and see if the new processes get me past the timeout issue.
FastCGI has its own timeout.
<IfModule mod_fcgid.c>
IPCConnectTimeout 20
IPCCommTimeout 120
FcgidBusyTimeout 200
</IfModule>
So if your PHP timeout is high enough, it's possible that your FastCGI process was killed after that time.
If you have heavy scripts, it's better to run them via the CLI; then only the PHP timeout applies.
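For example, the script could be run from the command line with the PHP limit disabled for just that invocation (the script name is illustrative):

```shell
php -d max_execution_time=0 heavy-script.php
```

The `-d` flag temporarily overrides a php.ini directive for that run, so the global configuration stays untouched.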
Before I used nginx and php-fpm, I used Apache, so when I wanted only one of my cron jobs to run without an execution time limit, I used these lines in my PHP code:
set_time_limit(0);
ini_set('max_execution_time', 0);
but after I migrated from Apache to nginx, this code no longer works. I know how to change nginx.conf to increase the maximum execution time.
But I want to handle this in PHP code. Is there a way?
I want to specify only one file that can run PHP code without time limitation.
Try This:
Increase PHP script execution time with Nginx
You can follow the steps given below to increase the timeout value (the PHP default is 30 seconds):
Changes in php.ini
If you want to change the max execution time limit for PHP scripts from 30 seconds (the default) to 300 seconds:
vim /etc/php5/fpm/php.ini
Set…
max_execution_time = 300
In Apache, for applications running PHP as a module, the change above would have sufficed. But in our case we need to make this change in 2 more places.
Changes in PHP-FPM
This is only needed if you have already un-commented the request_terminate_timeout parameter. It is commented out by default, and it takes the value of max_execution_time found in php.ini.
Edit…
vim /etc/php5/fpm/pool.d/www.conf
Set…
request_terminate_timeout = 300
Changes in Nginx Config
To increase the time limit for example.com, edit its vhost file:
vim /etc/nginx/sites-available/example.com
location ~ \.php$ {
include /etc/nginx/fastcgi_params;
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_read_timeout 300;
}
If you want to increase time-limit for all-sites on your server, you can edit main nginx.conf file:
vim /etc/nginx/nginx.conf
Add following in http{..} section
http {
#...
fastcgi_read_timeout 300;
#...
}
Reload PHP-FPM & Nginx
Don’t forget to do this so that changes you have made will come into effect:
service php5-fpm reload
service nginx reload
or try this
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
fastcgi has its own set of timeouts and checks to prevent it from stalling on a locked-up process. They would kick in if, for instance, you set PHP's execution time limit to 0 (unlimited) and then accidentally created an infinite loop. Or if you were running some other application besides PHP which didn't have any of its own timeout protections and it failed.
I think that with php-fpm and nginx you can't set this time from PHP alone.
What you could do is a redirect with parameters indicating where to continue, but you must track how long your script has been running to avoid the timeout.
If your process runs in a browser window, do the redirect with JavaScript (though the browser could limit the number of redirects)... or do it with Ajax.
Hope that helps.
You can add request_terminate_timeout = 300 to your server's php-fpm pool configuration if you have tried all of the other solutions here.
ini_set('max_execution_time', 0);
do this if "Safe Mode" is off
set_time_limit(0);
Place this at the top of your PHP script and let your script loose!
Note: if your PHP setup is running in safe mode, you can only change it from the php.ini file.
I have tried to use nginx (http://nginx.org/) to limit the number of requests per minute. For example, my settings have been:
server{
limit_req_zone $binary_remote_addr zone=pw:5m rate=20r/m;
}
location{
limit_req zone=pw nodelay;
}
What I have found with Nginx is that even if I try 1 request per minute, I am allowed back in many times within that minute. Of course fast refreshing of a page will give me the limit page message which is a "503 Service Temporarily Unavailable" return code.
I want to know what kind of settings can be applied to limit requests to exactly 20 per minute. I am not looking only for flood protection, because Nginx already provides that: if a page is constantly refreshed, for example, it limits the user and lets them back in after some time, with some delay (unless you apply a nodelay setting).
Is there an alternative to Nginx other than HAProxy (because it's quite slow)? Also, in my setup Nginx is acting as a reverse proxy to the real site.
Right, there are 2 things:
the limit_conn directive in combination with a limit_conn_zone lets you limit the number of (simultaneous) connections from an IP (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn)
the limit_req directive in combination with a limit_req_zone lets you limit the number of requests from a given IP per time unit (see http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req)
note:
you need to put the limit_conn_zone/limit_req_zone in the http block, not the server block
you then refer to the zone name you set up in the http block from within the server/location block, with the limit_conn/limit_req settings as appropriate
Since you stated you're looking to limit requests, you need the limit_req directives. Specifically, to get a max of 5 requests per minute, try adding the following:
http {
limit_req_zone $binary_remote_addr zone=example:10m rate=5r/m;
}
server {
limit_req zone=example burst=0 nodelay;
}
note: obviously add those to your existing http/server blocks
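Adapting that to the 20 requests per minute from the question might look like this (the zone name and size are illustrative). Note that nginx enforces per-minute rates as an even interval, so 20r/m means one request is admitted every 3 seconds rather than a burst of 20; with burst=0 anything faster is rejected immediately with a 503:

```nginx
http {
    # one 10 MB zone tracks clients by IP, admitting one request every 3 s
    limit_req_zone $binary_remote_addr zone=perip:10m rate=20r/m;

    server {
        location / {
            # reject excess requests immediately instead of queueing them
            limit_req zone=perip burst=0 nodelay;
        }
    }
}
```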
How do I handle timeouts with PHP in php5-fpm + nginx configurations?
I tried to make a simple script with just
sleep(60);
php.ini
max_execution_time = 30
fast_cgi
fastcgi_connect_timeout 60;
fastcgi_send_timeout 50;
fastcgi_read_timeout 50;
The script stops at 50s with a timeout from the backend. What do I have to do to:
enable the max_execution_time in php.ini
enable ini_set to change the execution time to 0 directly in the script
Why does fast_cgi get to control timeouts over everything instead of php itself?
It was basically the fact that on Linux the timeout counts only the actual "PHP work", not time spent in stream functions and, moreover, not sleep(); that's why I never reached the limit and the FastCGI timeout always kicked in. On Windows, by contrast, the actual elapsed ("human") time counts.
from the PHP doc:
The set_time_limit() function and the configuration directive
max_execution_time only affect the execution time of the script
itself. Any time spent on activity that happens outside the execution
of the script such as system calls using system(), stream operations,
database queries, etc. is not included when determining the maximum
time that the script has been running. This is not true on Windows
where the measured time is real.
Try using set_time_limit in your PHP code.
When using php-cgi (php-fpm), php.ini's max_execution_time will not take effect; the fpm configuration item request_terminate_timeout controls the maximum script execution time instead.
In php-fpm.conf, set this item like below:
request_terminate_timeout = 60s