How can I limit connections to my web application per minute? - php

I have tried to use nginx (http://nginx.org/) to limit the number of requests per minute. For example, my settings have been:
server{
limit_req_zone $binary_remote_addr zone=pw:5m rate=20r/m;
}
location{
limit_req zone=pw nodelay;
}
What I have found with Nginx is that even if I try 1 request per minute, I am allowed back in many times within that minute. Of course, fast refreshing of a page will give me the limit page message, which is a "503 Service Temporarily Unavailable" return code.
I want to know what kind of settings can be applied to limit requests to exactly 20 requests a minute. I am not looking only for flood protection, because Nginx already provides this: if a page is constantly refreshed, for example, it limits the user and lets them back in after some time with some delay (unless you apply a nodelay setting).
If there is an alternative to Nginx other than HAProxy (because it's quite slow), I'd consider that as well. Also, the setup I have on Nginx is acting as a reverse proxy to the real site.

Right, there are two things:
the limit_conn directive, in combination with a limit_conn_zone, lets you limit the number of (simultaneous) connections from an IP (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn); a minimal example follows these notes
the limit_req directive, in combination with a limit_req_zone, lets you limit the number of requests from a given IP per unit of time (see http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req)
note:
you need to put the limit_conn_zone/limit_req_zone in the http block, not the server block
you then refer to the zone you set up in the http block from within your server/location block, using the limit_conn/limit_req directives (as appropriate)
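For the connection case, a minimal sketch (the zone name and the limit of 10 simultaneous connections are just example values, not taken from the question):
http {
    # one shared-memory zone (10 MB), keyed by client IP
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        location / {
            # allow at most 10 simultaneous connections per client IP
            limit_conn peraddr 10;
        }
    }
}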
Since you stated below that you're looking to limit requests, you need the limit_req directives. Specifically, to get a max of 5 requests per minute, try adding the following:
http {
limit_req_zone $binary_remote_addr zone=example:10m rate=5r/m;
}
server {
limit_req zone=example burst=0 nodelay;
}
note: obviously add those to your existing http/server blocks
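To map this back onto the question's stated target of 20 requests per minute: with rate=20r/m nginx spaces requests out to roughly one every 3 seconds, so if the client should be allowed to spend its 20 requests in quick succession you also need a burst parameter. A sketch, reusing the zone name and size from the question (the exact numbers are up to you):
http {
    limit_req_zone $binary_remote_addr zone=pw:5m rate=20r/m;
}
server {
    # allow a burst of up to 20 requests to be served immediately,
    # then reject further requests (503 by default) until the
    # one-request-per-3-seconds allowance catches up
    limit_req zone=pw burst=20 nodelay;
}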

Related

Webserver stops loading during long API call (never shows return) [duplicate]

How do I increase the Apache timeout directive in .htaccess? I have a LONG $_POST['script'] that takes a user probably 10 minutes to fill in all the data. The problem is that if it takes too long then the page times out or something and it goes to a "webpage not found" error page. Would increasing the Apache timeout directive in .htaccess be the answer I'm looking for? I guess it's set to 300 seconds by default, but I don't know how to increase that or if that's even what I should do... Either way, how do I increase the default time? Thank you.
If you have long-running server-side code, I don't think it would end up on a 404 page as you said ("it goes to a webpage is not found error page").
The browser should report a request timeout error instead.
You may do two things:
Based on your CGI/server-side engine, increase the timeout there
PHP : http://www.php.net/manual/en/info.configuration.php#ini.max-execution-time - default is 30 seconds
In php.ini:
max_execution_time = 60
Increase the Apache timeout; the default is 300 (in version 2.4 it is 60).
In your httpd.conf (in server config or vhost config)
TimeOut 600
Note that the first setting allows your PHP script to run longer; it will not interfere with the network timeout.
The second setting modifies the maximum amount of time the server will wait for certain events before failing a request.
Sorry, I'm not sure whether you are using PHP for server-side processing, but if you provide more info I can be more precise.
Just in case this helps anyone else:
If you're going to be adding the TimeOut directive, and your website uses multiple vhosts (eg. one for port 80, one for port 443), then don't forget to add the directive to all of them!
This solution is for Litespeed Server (Apache as well)
Add the following code in .htaccess
RewriteRule .* - [E=noabort:1]
RewriteRule .* - [E=noconntimeout:1]
Litespeed reference

Nginx to cache only for bots

I have a decent website (nginx -> apache -> mod_php/mysql) that I want to tune a bit, and I find the biggest problem is that search bots tend to overload it by sending many requests at once.
There is a cache in the site's core (that is, in PHP), so the site's author reported there should be no problem, but in fact the bottleneck is that Apache's reply takes too long because there are too many requests for the page.
What I can imagine is having some nginx-based cache that caches pages only for bots. The TTL can be fairly high (there is nothing so dynamic on the page that it can't wait another 5-10 minutes to be refreshed). Let's define 'bot' as any client that has 'Bot' in its UA string ('BingBot' as an example).
So I tried to do something like this:
map $http_user_agent $isCache {
default 0;
~*(google|bing|msnbot) 1;
}
proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;
server {
...
location / {
proxy_cache my_cache;
proxy_cache_bypass $isCache;
proxy_cache_min_uses 3;
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_lock on;
proxy_pass http://my_upstream;
}
# location for images goes here
}
Am I on the right track with my approach? It looks like it won't work.
Are there any other approaches to limit the load from bots? Preferably without sending 5xx codes to them (as search engines can lower rankings for sites that return too many 5xx responses).
Thank you!
If your content pages may differ per user (say a user is logged in and the page contains "welcome John Doe"), then with your approach that version of the page may end up cached, since each request updates the cached copy (i.e. a logged-in person will update the cached version, including their session cookies, which is bad).
It is best to do something similar to the following:
map $http_user_agent $isNotBot {
~*bot "";
default "IAmNotARobot";
}
server {
...
location / {
...
# Bypass the cache for humans
proxy_cache_bypass $isNotBot;
# Don't cache copies of requests from humans
proxy_no_cache $isNotBot;
...
}
...
}
This way, only requests from a bot are cached for future bot requests, and only bots are served cached pages.
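Putting the map together with the cache settings from the question, the combined configuration might look roughly like this (proxy_cache_valid is an addition here, not part of the original answer; pick a TTL matching the 5-10 minutes mentioned in the question):
map $http_user_agent $isNotBot {
    ~*bot    "";
    default  "IAmNotARobot";
}

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_cache            my_cache;
        proxy_cache_valid      200 10m;   # assumed TTL, adjust to taste
        proxy_cache_bypass     $isNotBot; # humans always go to the backend
        proxy_no_cache         $isNotBot; # and their responses are never stored
        proxy_cache_use_stale  error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock       on;
        proxy_pass             http://my_upstream;
    }
}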

Magento: Increase max_execution_time for SOAP calls only?

In a Magento system, I have a global max_execution_time set to 60 seconds for PHP requests. However, this limit is not enough for some calls to the SOAP API. What I would like to do is increase the execution time to say 10 minutes for API requests, without affecting the normal frontend pages.
How can I increase max_execution_time for Magento SOAP requests only?
For Apache
This can be configured in your vhost with a <LocationMatch "/api/"> node. Place the max execution time setting inside that node (along with any other rules, such as max memory) and it will only be applied to requests that hit /api/.*
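For example, assuming mod_php is in use (under PHP-FPM you would raise the FastCGI timeouts and the pool's PHP limits instead), something along these lines in the vhost; the values are placeholders:
<LocationMatch "/api/">
    # only API/SOAP requests get the longer limits
    php_admin_value max_execution_time 600
    php_admin_value memory_limit 512M
</LocationMatch>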
For Nginx
In Nginx, you should be able to accomplish the same thing with:
location ~ ^/api {
fastcgi_read_timeout 600;
}

Limit Page to one Simultaneous Request

Using PHP or Apache (I'd also consider moving to Nginx if necessary), how can I make a page run only once at a time? If three requests come in at the same time, one would complete entirely, then the next, and then the next. At no time should the page be accessed by more than one request at a time.
Think of it like transactions! How can I achieve this? It should be one page per server, not per user or IP.
What do you want to do with the "busy" state on the server: return an error right away, or keep requests waiting until the previous one finishes?
If you just want the server to refuse content to the client, you can do it on both nginx and apache:
limit_req module for nginx
using mod_security on apache
The "tricky" part in your request is not limiting by an IP as people usually want, but limiting globally per URI. I know it should be possible with mod_security, but I haven't done that myself; I do have this configuration working for nginx:
http {
#create a zone by $package id (my internal variable, similar to a vhost)
limit_req_zone $packageid zone=heavy-stuff:10m rate=1r/s;
}
then later:
server {
set $packageid some-id;
location = /some-heavy-stuff {
limit_req zone=heavy-stuff burst=1 nodelay;
}
}
What it does for me is create N limit zones, one for each of my servers. The zone is then used to count requests and allow only one per second.
Hope it helps.
If the same user issues the requests from the same session, use session_start(): it will block the other requests until the first request finishes.
Example:
http://codingexplained.com/coding/php/solving-concurrent-request-blocking-in-php
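A minimal sketch of that session_start() approach (PHP's default file-based session handler takes an exclusive lock on the session file, so a second request from the same session waits at session_start() until the first one releases it):
<?php
session_start();          // acquires the lock for this session

// ... long-running work; concurrent requests from the same session
// are blocked at session_start() until the lock is released
sleep(30);

session_write_close();    // release the lock as early as possible
echo 'Done';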
If you want to block requests from different browsers/clients, keep the entries in a database and process them one by one.

NginX issues HTTP 499 error after 60 seconds despite config. (PHP and AWS)

At the end of last week I noticed a problem on one of my medium AWS instances where Nginx always returns an HTTP 499 response if a request takes more than 60 seconds. The page being requested is a PHP script.
I've spent several days trying to find answers and have tried everything that I can find on the internet including several entries here on Stack Overflow, nothing works.
I've tried modifying the PHP settings, PHP-FPM settings, and Nginx settings. You can see a question I raised on the NginX forums on Friday (http://forum.nginx.org/read.php?9,237692), though that has received no response, so I am hoping that I might be able to find an answer here before I am forced to move back to Apache, which I know just works.
This is not the same problem as the HTTP 500 errors reported in other entries.
I've been able to replicate the problem with a fresh micro AWS instance of NginX using PHP 5.4.11.
To help anyone who wishes to see the problem in action I'm going to take you through the set-up I ran for the latest Micro test server.
You'll need to launch a new AWS Micro instance (so it's free) using the AMI ami-c1aaabb5
This PasteBin entry has the complete set-up to run to mirror my test environment. You'll just need to change example.com within the NginX config at the end
http://pastebin.com/WQX4AqEU
Once that's set up, you just need to create the sample PHP file that I am testing with:
<?php
sleep(70);
die( 'Hello World' );
?>
Save that into the webroot and then test. If you run the script from the command line using php or php-cgi, it will work. If you access the script via a webpage and tail the access log /var/log/nginx/example.access.log, you will notice that you receive the HTTP 1.1 499 response after 60 seconds.
Now that you can see the timeout, I'll go through some of the config changes I've made to both PHP and NginX to try to get around this. For PHP I'll create several config files so that they can be easily disabled
Update the PHP FPM Config to include external config files
sudo echo '
include=/usr/local/php/php-fpm.d/*.conf
' >> /usr/local/php/etc/php-fpm.conf
Create a new PHP-FPM config to override the request timeout
sudo echo '[www]
request_terminate_timeout = 120s
request_slowlog_timeout = 60s
slowlog = /var/log/php-fpm-slow.log ' > /usr/local/php/php-fpm.d/timeouts.conf
Change some of the global settings to ensure the emergency restart interval is 2 minutes
# Create a global tweaks
sudo echo '[global]
error_log = /var/log/php-fpm.log
emergency_restart_threshold = 10
emergency_restart_interval = 2m
process_control_timeout = 10s
' > /usr/local/php/php-fpm.d/global-tweaks.conf
Next, we will change some of the PHP.INI settings, again using separate files
# Log PHP Errors
sudo echo '[PHP]
log_errors = on
error_log = /var/log/php.log
' > /usr/local/php/conf.d/errors.ini
sudo echo '[PHP]
post_max_size=32M
upload_max_filesize=32M
max_execution_time = 360
default_socket_timeout = 360
mysql.connect_timeout = 360
max_input_time = 360
' > /usr/local/php/conf.d/filesize.ini
As you can see, this is increasing the socket timeout to 6 minutes and will help log errors.
Finally, I'll edit some of the NginX settings to increase the timeouts on that side.
First I edit the file /etc/nginx/nginx.conf and add this to the http directive
fastcgi_read_timeout 300;
Next, I edit the file /etc/nginx/sites-enabled/example which we created earlier (See the pastebin entry) and add the following settings into the server directive
client_max_body_size 200;
client_header_timeout 360;
client_body_timeout 360;
fastcgi_read_timeout 360;
keepalive_timeout 360;
proxy_ignore_client_abort on;
send_timeout 360;
lingering_timeout 360;
Finally I add the following into the location ~ .php$ section of the server directive
fastcgi_read_timeout 360;
fastcgi_send_timeout 360;
fastcgi_connect_timeout 1200;
Before retrying the script, I restart both nginx and php-fpm to ensure that the new settings have been picked up. I then try accessing the page and still receive the HTTP/1.1 499 entry within the NginX example.error.log.
So, where am I going wrong? This just works on apache when I set PHP's max execution time to 2 minutes.
I can see that the PHP settings have been picked up by running phpinfo() from a web-accessible page. I just don't get it; I actually think that too much has been increased, as it should only need PHP's max_execution_time and default_socket_timeout changed, as well as NginX's fastcgi_read_timeout within just the server->location directive.
Update 1
Having performed some further tests to show that the problem is not that the client is dying, I have modified the test file to be:
<?php
file_put_contents('/www/log.log', 'My first data');
sleep(70);
file_put_contents('/www/log.log','The sleep has passed');
die('Hello World after sleep');
?>
If I run the script from a web page then I can see the content of the file being set to the first string. 60 seconds later the error appears in the NginX log. 10 seconds later the contents of the file change to the second string, proving that PHP is completing the process.
Update 2
Setting fastcgi_ignore_client_abort on; does change the response from an HTTP 499 to an HTTP 200, though still nothing is returned to the end client.
Update 3
Having installed Apache and PHP (5.3.10) directly onto the box (using apt) and then increased the execution time, the problem does appear to happen on Apache as well. The symptoms are the same as with NginX: an HTTP 200 response, but the actual client connection times out beforehand.
I've also started to notice, in the NginX logs, that if I test using Firefox, it makes a double request (as if the PHP script executes twice when it runs longer than 60 seconds), though that does appear to be the client re-requesting when the script fails.
The cause of the problem is the Elastic Load Balancers on AWS. By default, they time out after 60 seconds of inactivity, which is what was causing the problem.
So it wasn't NginX, PHP-FPM or PHP but the load balancer.
To fix this, simply go into the ELB "Description" tab, scroll to the bottom, and click the "(Edit)" link beside the value that says "Idle Timeout: 60 seconds"
Actually, I faced the same issue on one server, and I figured out that after the nginx configuration changes I hadn't restarted the nginx server, so with every hit of the nginx URL I was getting a 499 HTTP response. After an nginx restart it started working properly with HTTP 200 responses.
I thought I would leave my two cents. First, the problem is not related to PHP (though it still could be, PHP always surprises me :P), that's for sure. It is mainly caused by a server proxying to itself, more specifically a hostname/alias naming issue; in your case it could be that the load balancer is requesting nginx and nginx is calling back the load balancer, and it keeps going that way.
I have experienced a similar issue with nginx as the load balancer and apache as the webserver/proxy
In my case - nginx was sending a request to an AWS ALB and getting a timeout with a 499 status code.
The solution was to add this line:
proxy_next_upstream off;
The default value for this in current versions of nginx is proxy_next_upstream error timeout; which means that on a timeout it tries the next 'server', which in the case of an ALB is the next IP in the list of resolved IPs.
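For context, the directive sits next to the proxy_pass in the proxied location; the upstream name here is just a placeholder:
location / {
    proxy_pass http://my_alb_upstream;   # hypothetical upstream pointing at the ALB
    # don't retry against the "next" resolved ALB IP when the first attempt times out
    proxy_next_upstream off;
}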
You need to find out where the problem lives. I don't know the exact answer, but let's try to find it.
We have three elements here: nginx, php-fpm, and php. As you said, the same PHP settings under Apache are OK. But is it really the same setup? Did you try Apache instead of nginx on the same OS/host/etc.?
If we see that PHP is not the suspect, then we have two suspects: nginx and php-fpm.
To exclude nginx: try to set up the same "system" with Ruby. See https://github.com/garex/puppet-module-nginx to get an idea of the simplest Ruby setup. Or use Google (that may be even better).
My main suspect here is php-fpm.
Try to play with these settings:
php-fpm's request_terminate_timeout
nginx's fastcgi_ignore_client_abort
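For reference, the first lives in the PHP-FPM pool configuration and the second in the nginx PHP location; the values below are only placeholders:
; PHP-FPM pool configuration (e.g. www.conf)
[www]
request_terminate_timeout = 120s

# nginx, inside the location ~ \.php$ block
fastcgi_ignore_client_abort on;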
