Limit Page to One Simultaneous Request - php

Using PHP or Apache (I'd also consider moving to Nginx if necessary), how can I make a page only be run once at a time? If three requests come in at the same time, one would complete entirely, then the next, and then the next. At no time should the page be accessed by more than one request at a time.
Think of it like transactions! How can I achieve this? The limit should be per page per server, not per user or IP.

What do you want to do with the "busy" state on the server? Return an error right away, or keep requests waiting until the previous one finishes?
If you just want the server to refuse content to the client, you can do it in both nginx and Apache:
limit_req module for nginx
using mod_security on Apache
The "tricky" part of your request is that you don't want to limit by IP as people usually do, but globally per URI. I know it should be possible with mod_security, but I didn't do that myself; I do have this configuration working for nginx:
http {
    # create a zone keyed by $packageid (my internal variable, similar to a vhost)
    limit_req_zone $packageid zone=heavy-stuff:10m rate=1r/s;
}
then later:
server {
    set $packageid some-id;

    location = /some-heavy-stuff {
        limit_req zone=heavy-stuff burst=1 nodelay;
    }
}
What this does for me is create N limit zones, one for each of my servers. The zone is then used to count requests and allow only one per second.
Hope it helps

If the same user sends the requests from the same page, then use session_start(); this will block the other requests until the first request finishes.
Example:
http://codingexplained.com/coding/php/solving-concurrent-request-blocking-in-php
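As a rough illustration of that session-locking behaviour, here is a minimal sketch (the file name and the sleep() delay are made up for demonstration):
    <?php
    // slow-page.php - minimal sketch of session-based serialization.
    // With PHP's default file session handler, session_start() takes an
    // exclusive lock on the session file, so a second request from the
    // same session blocks here until the first request releases the lock.
    session_start();

    sleep(5); // simulate slow work while the lock is held

    echo "Done at " . date('H:i:s');

    // Release the lock explicitly once session data is no longer needed,
    // so later requests from the same user can proceed sooner.
    session_write_close();
Note that this only serializes requests that share the same session cookie; it does not give the global one-request-per-page behaviour asked for in the question.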
If you want to serialize requests coming from different browsers/clients, keep the entries in a database and process them one by one.

Related

Nginx to cache only for bots

I have a decent website (nginx -> apache -> mod_php/mysql) that I want to tune a bit, and I find the biggest problem is that search bots tend to overload it by sending many requests at once.
There is a cache in the site's core (that is, in PHP), so the site's author reported there should be no problem, but in fact the bottleneck is that Apache's reply takes too long because there are too many requests for the page.
What I can imagine is having some nginx-based cache that caches pages only for bots. The TTL can be fairly high (there is nothing so dynamic on the page that it can't wait another 5-10 minutes to be refreshed). Let's define 'bot' as any client that has 'Bot' in its UA string ('BingBot' as an example).
So I tried to do something like this:
map $http_user_agent $isCache {
    default                0;
    ~*(google|bing|msnbot) 1;
}

proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=10g inactive=60m use_temp_path=off;

server {
    ...
    location / {
        proxy_cache my_cache;
        proxy_cache_bypass $isCache;
        proxy_cache_min_uses 3;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_pass http://my_upstream;
    }

    # location for images goes here
}
Is my approach right? It looks like it won't work.
Are there any other approaches to limit the load from bots? Preferably without sending 5xx codes to them (as search engines can lower the ranking of sites that return too many 5xx responses).
Thank you!
If your content pages can differ per user (say a user is logged in and the page contains "welcome John Doe"), then that personalized version of the page may end up cached, because each request updates the cached copy (i.e. a logged-in person will update the cached version, including their session cookies, which is bad).
It is best to do something similar to the following:
map $http_user_agent $isNotBot {
    ~*bot   "";
    default "IAmNotARobot";
}

server {
    ...
    location / {
        ...
        # Bypass the cache for humans
        proxy_cache_bypass $isNotBot;
        # Don't cache copies of requests from humans
        proxy_no_cache $isNotBot;
        ...
    }
    ...
}
This way, only requests by a bot are cached for future bot requests, and only bots are served cached pages.

Is there an easy way to automatically do a php-fpm restart after a 502 gateway timeout on server?

Do you have any useful links, tips, or scripts for setting up a heartbeat tool for a larger site that uses WordPress and nginx? If too many people visit the site at the same time, the server shuts down. I need something that automatically restarts the site immediately after that happens.
Regards
Your question is how to restart PHP on a 502. My first answer is an attempt at preventing the 502 from happening in the first place.
It's possible that PHP is consuming too much memory. My guess is that your number of PHP FCGI children is set too high. In your init script you should have an entry like PHP_FCGI_CHILDREN=20 or similar that controls the number of PHP processes that will start. I would try reducing that number. If you can identify the average memory per PHP process (using top, perhaps), then you can establish the maximum number of PHP processes that should run. For example, if you have a 2,000MB server and your PHP processes consume at most 100MB each, then you'll want to limit them to 20.
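As a rough way to estimate that ceiling, here is a small CLI sketch (the php-cgi process name and the 2,000MB budget are assumptions; adjust them for your setup):
    <?php
    // estimate-children.php - rough sketch: suggest a PHP_FCGI_CHILDREN value
    // from the average resident memory of the PHP FastCGI processes running now.
    // Assumes a Linux box with `ps` available and processes named "php-cgi".
    $memoryBudgetMb = 2000; // total memory you are willing to give to PHP

    // RSS (in kilobytes) of each php-cgi process, one value per line.
    $output = shell_exec('ps -C php-cgi -o rss=');
    $rssKb  = array_filter(array_map('trim', explode("\n", (string) $output)));

    if (count($rssKb) === 0) {
        exit("No php-cgi processes found.\n");
    }

    $avgMb = array_sum($rssKb) / count($rssKb) / 1024;
    printf("Average per-process memory: %.1f MB\n", $avgMb);
    printf("Suggested PHP_FCGI_CHILDREN: %d\n", (int) floor($memoryBudgetMb / $avgMb));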
You can create another location and start the name with the @ symbol. The @ symbol is used for named ("internal") locations. I like to use the http://openresty.org distribution of nginx. It includes the ngx_lua http://wiki.nginx.org/HttpLuaModule module. Lua is a scripting language that can (among other things) execute shell commands. For example:
location / {
    error_page 502 = @php502error;
}

location @php502error {
    content_by_lua 'os.execute("/bin/restart-my-php-processes.sh")';
}
os.execute is blocking, so you'll want to keep that in mind... I've heard of people setting up a thttpd instance to run scripts; in that case you'd proxy_pass in your @php502error location.
Although kaicurry is correct that you should be editing your PHP configuration to tackle the source of the problem, to actually answer your question:
Edit your php-fpm.service file, e.g. nano /lib/systemd/system/php-fpm.service.
Add the following two lines at the bottom of the [Service] section:
Restart=on-failure
RestartSec=5s
Reload systemd: systemctl daemon-reload.
PHP-FPM will now automatically restart whenever it fails. You should occasionally check the logs to make sure it's not failing often.
You may also wish to do the same for nginx: /lib/systemd/system/nginx.service

How to enable GeoIP on Magento with Varnish page cache

I currently have 3 stores online with 3 different domains, running Magento with Apache and Varnish (using the Phoenix page cache extension) on CentOS.
One store is for the UK, another for Ireland, and another for the USA.
The trouble is, if (for example) a US user hits the UK store, I would like the user to be notified on the page to go to the correct store (I do not want them automatically redirected).
I was able to use php-pecl-geoip with the MaxMind database to get this to work, but as the number of users on my website increased I had to start using Varnish.
How could I implement this functionality with Varnish, so that I know what country the user is from and can display a message pointing them to the relevant website?
Gunah, I think you missed the point here.
When you put Varnish in front of Apache, the client IP that PHP sees will always be the IP of Varnish (127.0.0.1 if it stays on the same server).
molleman, in this case you need to look at the X-Forwarded-For header set by Varnish to get the real client IP. You can see how Varnish sets it in the default.vcl:
if (req.http.x-forwarded-for) {
    set req.http.X-Forwarded-For =
        req.http.X-Forwarded-For + ", " + client.ip;
} else {
    set req.http.X-Forwarded-For = client.ip;
}
If your web server is behind a load balancer, then you need more work. Please refer here for a solution: Varnish removes Public IP from X-Forwarded-for
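On the PHP side, a minimal sketch of resolving the real client IP behind Varnish could look like this (the helper name is made up, and it assumes exactly one trusted proxy):
    <?php
    // get_real_client_ip() - hypothetical helper, sketch only.
    // Assumes exactly one trusted proxy (Varnish) in front of Apache, so the
    // last entry in X-Forwarded-For is the address Varnish saw (client.ip).
    function get_real_client_ip()
    {
        if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
            $ips = array_map('trim', explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']));
            return end($ips);
        }
        // No proxy header: fall back to the direct connection address.
        return isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
    }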
You can create your own controller with a JSON action result in Magento.
Then you can check this with JavaScript and output the result.
Do not forget to add your controller to the whitelist in Varnish.
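As a rough sketch of what such an endpoint could return (plain PHP rather than a full Magento controller, assuming the php-pecl-geoip extension mentioned in the question):
    <?php
    // country.php - minimal sketch of a JSON endpoint for the store notice.
    // Assumes the php-pecl-geoip extension is installed; behind Varnish the
    // real client IP is taken from the X-Forwarded-For header (see above).
    header('Content-Type: application/json');

    $xff = isset($_SERVER['HTTP_X_FORWARDED_FOR']) ? $_SERVER['HTTP_X_FORWARDED_FOR'] : '';
    $ips = array_filter(array_map('trim', explode(',', $xff)));
    $ip  = $ips ? end($ips) : (isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '');

    $country = $ip !== '' ? geoip_country_code_by_name($ip) : false;

    echo json_encode(array('country' => $country ?: null)); // e.g. "US", "GB", "IE"
The JavaScript on the cached page can then call this URL (excluded from caching) and show the "wrong store" notice when the returned country does not match the store.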

How do you monitor a file on a web server and log every access, ideally by IP address, in a database (MySQL)?

For security reasons, there is a certain file on my web server I want to be able to monitor access to. Every time it is accessed, I want to have an entry added to a MySQL log table. This way, I can actively respond to security breaches from within the web application.
The Apache HTTP Server provides logging capabilities.
The server access log records all requests processed by the server. The location and content of the access log are controlled by the CustomLog directive. The LogFormat directive can be used to simplify the selection of the contents of the logs. This section describes how to configure the server to record information in the access log.
It can be used to write the log to a file. If you need to store the entries in a MySQL table, run a cron job to import the file into the database.
Further information on logs is here:
http://httpd.apache.org/docs/1.3/logs.html#accesslog
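As a rough illustration of that cron-driven import (a sketch only; the log path, DSN, table layout, and the common log format are assumptions about your setup):
    <?php
    // import-access-log.php - sketch: scan an Apache access log in the common
    // log format and insert one row per request for a watched file into MySQL.
    // Log path, credentials, and table layout are assumptions for illustration.
    $logFile = '/var/log/httpd/access_log';
    $watched = '/importantfile.jpg';

    $pdo  = new PDO('mysql:host=localhost;dbname=security', 'user', 'secret');
    $stmt = $pdo->prepare('INSERT INTO file_access_log (ip, requested_at, request_line) VALUES (?, ?, ?)');

    foreach (new SplFileObject($logFile) as $line) {
        // Common log format: IP - user [date] "METHOD /path HTTP/1.x" status size
        if (preg_match('/^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)"/', (string) $line, $m)
            && strpos($m[3], $watched) !== false) {
            $stmt->execute(array($m[1], $m[2], $m[3]));
        }
    }
A real job would also remember how far into the log it has already read (or work on rotated logs) so the same requests are not inserted twice.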
It's been removed from PHP 7, but for anyone else who finds this post, there are a number of options within the FAM (now PECL) extension. This function http://php.net/manual/en/function.fam-monitor-file.php seems to describe what is needed here.
Additionally, you can access a lot of detail about the file's status with http://php.net/manual/en/function.stat.php. Put this within a cron- or sleep-driven script and you can then see when it has changed.
The file may be accessed from three points:
Direct filesystem access
Call to the url like www.example.com/importantfile.jpg (apache serves the file)
Call to some php script on your server www.example.com/readfile.php?name=important.jpg which reads the file.
If you are concerned only about case 2, then check Rishi Dua's solution.
But if you want more than that, you should write a script with a fileatime() call and add it to cron to run every minute, for example.
The pseudocode for it:
<?php
// get the last access time you previously stored (in a DB, a text file, or wherever)
$previous_access_time = get_previous_access_time();
$current_access_time  = fileatime('path/to/very_important_file.jpg');

if ($previous_access_time != $current_access_time) {
    log_access_to_db();
    save_new_access_time(); // store the new last access time
}
This solution however has some problems.
The first is that you can only get the access time, not the user ID or IP of whoever accessed the file.
The second is that, as the manual says, some Unix systems do not update the access time, in which case the solution would fail.
If you are seriously concerned about security, then I think you have to look at an audit utility like this

How can I limit connections to my web application per minute?

I have tried to use nginx (http://nginx.org/) to limit the number of requests per minute. For example, my settings have been:
server {
    limit_req_zone $binary_remote_addr zone=pw:5m rate=20r/m;
}
location {
    limit_req zone=pw nodelay;
}
What I have found with Nginx is that even if I set 1 request per minute, I am allowed back in many times within that minute. Of course, fast refreshing of a page will give me the limit page message, which is a "503 Service Temporarily Unavailable" return code.
I want to know what kind of settings can be applied to limit requests to exactly 20 per minute. I am not looking for flood protection only, because Nginx already provides that: if a page is constantly refreshed, for example, it limits the user and lets them back in after some time with some delay (unless you apply a nodelay setting).
Is there an alternative to Nginx, other than HAProxy (because it's quite slow)? Also, the Nginx setup I have is acting as a reverse proxy to the real site.
Right, there are two things:
the limit_conn directive in combination with a limit_conn_zone lets you limit the number of (simultaneous) connections from an IP (see http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn)
the limit_req directive in combination with a limit_req_zone lets you limit the number of requests from a given IP per time unit (see http://nginx.org/en/docs/http/ngx_http_limit_req_module.html#limit_req)
note:
you need to put the limit_conn_zone/limit_req_zone directives in the http block, not the server block
you then refer to the zone name you set up in the http block from within the server/location block, using the limit_conn/limit_req settings (as appropriate)
Since you stated below that you're looking to limit requests, you need the limit_req directives. Specifically, to get a maximum of 5 requests per minute, try adding the following:
http {
    limit_req_zone $binary_remote_addr zone=example:10m rate=5r/m;
}

server {
    limit_req zone=example burst=0 nodelay;
}
note: obviously add those to your existing http/server blocks
