Use of .htaccess to mitigate denial of service attacks - php

I have an application that requires logon.
It is only possible to access the site via a single logon page.
I am concerned about DDoS and (thanks to friends here) have been able to write a script that recognises potential DDoS attacks and locks the offending IP to prevent site access (this also serves as a security measure against repeated username/password guesses).
Is there any value in blocking the offending IPs with .htaccess? I can simply modify the file so my server refuses to serve the offending IP for a period of time, but will it do any good? Will the incoming requests still bog down the system even though .htaccess prevents them from being served, or will it reduce the load enough to let genuine requests in?
It is worth noting that most of my requests will come from a limited range of genuine IPs, so the implementation I intend is along the lines of:
If a DDoS attack is suspected, allow access for a set time period only from IPs that have previously had a good logon. Permanently block all suspect IPs with no good logon history, unless a manual request to unblock is made.
Your sage advice would be greatly appreciated. If you think this is a waste of time, please let me know!
Implementation is pretty much pure PHP.

Load caused by a DDOS attack will be lower if blocked by .htaccess as the unwanted connections will be refused early and not allowed to call your PHP scripts.
Take, for example, a request for the login script: your Apache server will call the PHP script, which will (I'm assuming) do a user lookup in a database of some kind. This is load.
Request <---> Apache <---> PHP <---> MySQL (maybe)
If you block an IP (say 1.2.3.4), your .htaccess will have an extra line like this:
Deny from 1.2.3.4
And the request will go a little like this:
Request <---> Apache <-x-> [Blocked]
No PHP script or database calls will happen, which is less load than the previous example.
This also has the added bonus of preventing brute-force attacks on the login form. You'll have to decide when to add IPs to a blocklist, for example when they give incorrect credentials 20 times in a minute or continuously over half an hour.
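As an illustration (not the original poster's script), here is a minimal sketch of that blocklist idea: it counts failed logons per IP using APCu and appends a Deny line to .htaccess once a threshold is crossed. The .htaccess path, the threshold and the use of APCu are all assumptions.
<?php
// Sketch: append a "Deny from <ip>" line to .htaccess after too many failed logons.
// HTACCESS_PATH, MAX_FAILURES and the APCu dependency are assumptions for illustration.
const HTACCESS_PATH = __DIR__ . '/.htaccess';
const MAX_FAILURES  = 20; // failed logons per minute before blocking

function registerFailedLogin(string $ip): void
{
    $key = 'failed_login_' . $ip;
    if (apcu_exists($key)) {
        $failures = apcu_inc($key);          // count within the current window
    } else {
        apcu_store($key, 1, 60);             // start a rolling 60-second window
        $failures = 1;
    }
    if ($failures >= MAX_FAILURES) {
        // Append a Deny line; remove it later manually or via a cron job.
        file_put_contents(HTACCESS_PATH, PHP_EOL . 'Deny from ' . $ip . PHP_EOL,
            FILE_APPEND | LOCK_EX);
    }
}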
Firewall
It would be better to block the requests using a firewall, though, rather than with .htaccess. That way the request never reaches Apache; dropping a packet based on an IP address rule is a cheap action for the server.
The line below is a shell command that (when run as root) will add an iptables rule to drop all packets originating from that IP address:
/sbin/iptables -I INPUT -s 1.2.3.4 -j DROP
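A note on wiring this up from PHP: the web server's PHP process normally cannot (and should not) run iptables as root, so a common pattern, echoed by another answer further down, is to append offending IPs to a file that a privileged cron job or daemon turns into rules like the one above. A rough sketch, with an assumed file path:
<?php
// Sketch: queue an IP for firewall blocking instead of calling iptables directly from PHP.
// A root cron job/daemon would read this file and run: /sbin/iptables -I INPUT -s <ip> -j DROP
// The file path is an assumption.
const BAN_LIST = '/var/lib/myapp/banned_ips.txt';

function queueFirewallBan(string $ip): void
{
    if (filter_var($ip, FILTER_VALIDATE_IP) === false) {
        return; // never write unvalidated input into a file consumed by root
    }
    file_put_contents(BAN_LIST, $ip . PHP_EOL, FILE_APPEND | LOCK_EX);
}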

Is there a setting to limit too many requests?

I am programming in PHP and I am wondering whether there is a built-in way to handle too many requests, or whether this needs to be coded manually. For example, if someone has opened 30 pages in 60 seconds, that is too many requests (they may well be a bot), so they get sent a 429 Too Many Requests HTTP status code.
If it is supposed to be done manually, what is the best practice for setting up something like this?
You could try using Apache's mod_ratelimit.
Here is a sample provided by Apache; it limits clients in that location to roughly 400 KiB/s of bandwidth (note that mod_ratelimit throttles bandwidth rather than counting requests).
<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>
More specifically, you can try a module like mod_evasive to block clients that make too many requests. You can also use a product like CloudFlare to mitigate DDoS attacks.
If you really want to use PHP for this, you can log the number of requests from a given IP, and if the count from that IP is greater than a certain value, you can block the IP from accessing your page.
To do this, you can store the IP addresses in a database along with a date column indicating when they accessed your page, and calculate aggregates of their access in a particular period using SQL.
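A minimal sketch of that database approach, assuming a table request_log(ip, requested_at) and PDO; the table name, limits and credentials are placeholders:
<?php
// Sketch: count recent requests per IP in MySQL and answer 429 when over a limit.
// Assumed table: CREATE TABLE request_log (ip VARCHAR(45), requested_at DATETIME, KEY (ip, requested_at));
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$ip     = $_SERVER['REMOTE_ADDR'];
$limit  = 30; // max requests ...
$window = 60; // ... per 60 seconds (both values are assumptions)

// Record this request.
$pdo->prepare('INSERT INTO request_log (ip, requested_at) VALUES (?, NOW())')->execute([$ip]);

// Count this IP's requests inside the window.
$since = date('Y-m-d H:i:s', time() - $window);
$stmt  = $pdo->prepare('SELECT COUNT(*) FROM request_log WHERE ip = ? AND requested_at > ?');
$stmt->execute([$ip, $since]);

if ((int) $stmt->fetchColumn() > $limit) {
    http_response_code(429); // Too Many Requests
    exit;
}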
Just for anyone who might not be using Apache, here is the nginx documentation:
https://docs.nginx.com/nginx/admin-guide/security-controls/controlling-access-proxied-http/

Identify a scraper in realtime - PHP - tail the access file in Memory

Background
Legitimate spiders are great. They're part of the web, and I'm happy for them to access my site.
Non-authorised spiders which scrape my site are bad, and I want rid of them.
I have a PHP application that monitors my website access files. Every time a user with a suspect UserAgent hits the site, the system checks the access log for entries from the same IP address and makes a judgement about its behaviour. If it's not a human and I have not authorised it, then it gets logged and I may (or may not) take action such as blocking.
The way it works is that this check of the access file happens every time a page loads. I only check suspect UserAgents, so the number of checks is kept to a minimum.
Question
What I want to do is check every single visit that hits the site (i.e. check the last 50 lines of the access file to see if any relate to that visit's IP). But that means every child process my web server handles will want to open the one single access log file. This sounds like a resource and I/O blocking nightmare.
Is there a way I can 'tail' the access.log file into some sort of central memory that all the web processes can access at the same time (or at least very quickly)? Perhaps by loading it into Memcache or similar. But how would I do that in real time, so that the last 500 lines of the access.log file are held in memory continuously (keeping only 500 lines and expunging as it goes, not an ever-increasing number)?
So in simple terms: is there a PHP, Linux or 'other' way of buffering an ever-growing file (e.g. an nginx log file) into memory so that other processes can access the information concurrently (or at least more quickly than all of them reading the file off the hard drive)?
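As an aside on the 'tail into memory' part of the question, one way to approximate it is a small long-running PHP worker that follows the log and keeps the last N lines in Memcached, which every web process can then read cheaply. This is only a sketch of the idea; the log path, key name and line count are assumptions, and log rotation is not handled:
<?php
// Sketch: long-running worker that tails the access log into Memcached so web
// requests can read the recent lines without touching the file on disk.
// Log path, cache key and line count are assumptions; log rotation is not handled.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$logPath   = '/var/log/nginx/access.log';
$keepLines = 500;
$buffer    = [];

$fh = fopen($logPath, 'r');
fseek($fh, 0, SEEK_END); // start at the end of the file, like `tail -f`

while (true) {
    $line = fgets($fh);
    if ($line === false) {
        usleep(200000); // nothing new yet; wait 200 ms
        continue;
    }
    $buffer[] = rtrim($line);
    if (count($buffer) > $keepLines) {
        array_shift($buffer); // keep only the most recent lines
    }
    $memcached->set('recent_access_log', $buffer);
}

// A web request could then do:
//   $recent = (new Memcached())->get('recent_access_log') ?: [];
// and scan $recent for lines containing $_SERVER['REMOTE_ADDR'].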
It is important to know that a well-written scraper will always be able to mimic a browser's behaviour, unless you do some very weird stuff that would also affect the user experience of legitimate visitors.
However, there are a few measures to deal even with sophisticated scrapers:
0. Forget about …
… referrer and UA strings. Those are easy to forge, and some legitimate users don't have a common one. You will get lots of false positives/negatives and not gain much.
1. Throttle
Web servers like Apache or nginx have core or add-on features to throttle the request rate for certain requests. For example, you could allow the download of one *.html page every two seconds, while not limiting assets like JS/CSS. (Keep in mind that you should also announce the delay to legitimate bots via robots.txt.)
2. Fail2ban
Fail2ban does something similar to what you want to do: it scans log files for malicious requests and blocks them. It works great against malware bots, and it should be possible to configure it to deal with scrapers (at least the less clever ones).
--
These are the ones that specifically answer your question, but there are a couple more, which you could consider:
3. Modify contents
This is actually a real fun one: From time to time, we make minor (automated) modifications of the HTML pages and of the JSON feeds, which force the scrapers to adapt their parsers. The fun part is when we see outdated data on their websites for a couple of days until they catch up. Then we modify it again.
4. Restrict: Captchas and Logins
Apart from the throttling on the web server level, we count the requests per IP address per hour. If it's more than a certain number (which should be enough for a legitimate user), each request to the API requires solving a captcha.
Other APIs require authentication, so they won't even get into those areas.
5. Abuse notifications
If you get regular visits from a certain IP address or subnet, you can do a WHOIS lookup for the network provider from which they are running their bots. Such providers usually have abuse contacts, and those contacts are usually very eager to hear about policy violations, because the last thing they want is to end up on blacklists (which we will submit them to if they don't cooperate).
Also, if you see advertising on the scraper's website, you should notify the advertising networks of the fact that they're being used in the context of stolen material.
6. IP bans
Quite obviously, you can block a single IP address. We go further and block entire data centers such as those of AWS, Azure, etc.; lists of IP ranges for all of those services are available on the web.
Of course, if there are partner services legitimately accessing your site from a data center, you must whitelist them.
By the way, we don't do this in the web server but at the firewall level (iptables).
7. Legal measures
If you think that the scraper might be afraid of legal action from your side, you should not hesitate to contact them and make clear that they are infringing your copyright and terms of use and may become subject to legal action.
8. Conclusion
In the end, fighting scrapers is a bit of a fight against windmills, and it may take a lot of effort. You will not be able to prevent all of it, so concentrate on the scrapers that actually harm you, e.g. by wasting your resources or making money that should be yours.
Good luck!

Avoiding automatically created traffic by jerks

I was thinking about web-security and then this thought popped into my head.
Say that there's this jerk who hates me and knows how to program. I am managing a nice website/blog with a considerable amount of traffic. Then that jerk creates a program that automatically requests my website over and over again.
So if I am hosting my website on a shared hosting provider then obviously my website will stop responding.
This type of attack may not be common, but if someone attempts something like that on my website I must do something about it. I don't think popular CMSes like WordPress or Drupal do anything about this type of attack.
My assumption is:
If a user makes more than x requests (let's say 50) in 1 minute, block that user (stop responding).
My questions are:
Is my assumption OK? If not, what should I do about it?
Do websites like Google, Facebook, YouTube, etc. do something about this type of attack?
What you are facing is a DoS (Denial of Service) attack, where one system keeps sending requests to your web server until it becomes unresponsive.
You mentioned a single jerk; what if the same jerk had many friends? Then you get a DDoS (Distributed DoS) attack, and that, unfortunately, can't really be prevented.
A quick fix from the Apache docs for DoS (but not DDoS):
All network servers can be subject to denial of service attacks that
attempt to prevent responses to clients by tying up the resources of
the server. It is not possible to prevent such attacks entirely, but
you can do certain things to mitigate the problems that they create.
Often the most effective anti-DoS tool will be a firewall or other
operating-system configurations. For example, most firewalls can be
configured to restrict the number of simultaneous connections from any
individual IP address or network, thus preventing a range of simple
attacks. Of course this is no help against Distributed Denial of
Service attacks (DDoS).
Source
The issue is partly one of rejecting bad traffic, and partly one of improving the performance of your own code.
Being hit with excess traffic by malicious intent is called a Denial of Service attack. The idea is to hit the site with traffic to the point that the server can't cope with the load, stops responding, and thus no-one can get through and the site goes off-line.
But you can also be hit with too much traffic simply because your site becomes popular. This can easily happen overnight and without warning, for example if someone posts a link to your site on another popular site. This traffic might actually be genuine and wanted (hundreds of extra sales! yay!), but can have the same effect on your server if you're not prepared for it.
As others have said, it is important to configure your web server to cope with high traffic volumes; I'll let the other answers speak for themselves on this, and it is an important point, but there are things you can do in your own code to improve things too.
One of the main reasons that a server fails to cope with increased load is because of the processing time taken by the request.
Your web server will only have the ability to handle a certain number of requests at once, but the key word here is "simultaneous", and the key to reducing the number of simultaneous requests is to reduce the time it takes for your program to run.
Imagine your server can handle ten simultaneous requests, and your page takes one second to load.
If you get up to ten requests per second, everything will work seamlessly, because the server can cope with it. But if you go just slightly over that, then the eleventh request will either fail or have to wait until the other ten have finished. It will then run, but will eat into the next second's ten requests. By the time ten seconds have gone by, you're a whole second down on your response time, and it keeps getting worse as long as the requests keep pouring in at the same level. It doesn't take long for the server to get overwhelmed, even when it's only just a fraction over its capacity.
Now imagine the same page could be optimised to take less time, let's say half a second. Your same server can now cope with 20 requests per second, simply because the PHP code is quicker. But also, it will be easier for it to recover from excess traffic levels. And because the PHP code takes less time to run, there is less chance of any two given requests being simultaneous anyway.
In short, the server's capacity to cope with high traffic volumes increases enormously as you reduce the time taken to process a request.
So this is the key to a site surviving a surge of high traffic: Make it run faster.
Caching: CMSes like Drupal and WordPress have caching built in. Make sure it's enabled. For even better performance, consider a server-level cache system like Varnish. For a CMS-type system where you don't change the page content much, this is the single biggest thing you can do to improve your performance (a minimal PHP page-cache sketch follows this list).
Optimise your code: while you can't be expected to fix performance issues in third-party software like Drupal, you can analyse the performance of your own code, if you have any. Custom Drupal modules, maybe? Use a profiler tool to find your bottlenecks. Very often, this kind of analysis can reveal that a single bottleneck is responsible for 90% of the page load time. Don't bother with optimising the small stuff, but if you can find and fix one or two big bottlenecks like this, it can have a dramatic effect.
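As referenced in the caching point above, here is a minimal sketch of a naive file-based full-page cache in plain PHP; the cache directory and TTL are assumptions, and in practice the CMS cache or Varnish would usually be the better choice:
<?php
// Sketch: naive full-page cache for a mostly-static page.
// Cache directory and TTL are assumptions; a CMS or Varnish cache is usually preferable.
$cacheDir  = sys_get_temp_dir() . '/page_cache';
$ttl       = 300; // seconds
$cacheFile = $cacheDir . '/' . md5($_SERVER['REQUEST_URI']) . '.html';

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $ttl) {
    readfile($cacheFile); // serve the cached copy without running any queries
    exit;
}

ob_start(); // capture the page as it is generated

// ... expensive page generation happens here (queries, templates, etc.) ...

$html = ob_get_contents();
ob_end_flush(); // still send the freshly generated page to this visitor

if (!is_dir($cacheDir)) {
    mkdir($cacheDir, 0700, true);
}
file_put_contents($cacheFile, $html, LOCK_EX);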
Hope that helps.
These types of attacks are called (D)DoS (Distributed Denial of Service) attacks and are usually prevented by the web server hosting your PHP application. Since Apache is used the most, I found an article you might find interesting: http://www.linuxforu.com/2011/04/securing-apache-part-8-dos-ddos-attacks/.
The article states that Apache has multiple modules available specifically created to prevent (D)DoS attacks. These still need to be installed and configured to match your needs.
I do believe that Facebook, Google, etc. have their own similar implementations to prevent DoS attacks. I know for a fact that the Google search engine uses a CAPTCHA if a lot of search requests are coming from the same network.
The reason it is not wise to prevent DoS within a PHP script is that the PHP interpreter still needs to be started whenever a request is made, which causes a lot of overhead. By using the web server for this you will have less overhead.
EDIT:
As stated in another answer, it is also possible to prevent common DoS attacks by configuring the server's firewall. Checking for attacks with firewall rules happens before the web server is hit, so there is even less overhead. Furthermore, you can detect attacks on other ports as well (such as port scans). I believe a combination of the two works best, as they complement each other.
In my opinion, the best way to prevent DoS is to filter at the lowest level: at the entry point of the server. By setting up some network firewall rules with iptables, you can drop packets from senders that are hitting your server too hard.
It will be more efficient than going through PHP and Apache, since those need relatively a lot of processing to do the checking, and your website may still be brought down even if you detect your attacker(s).
You can check on this topic for more information: https://serverfault.com/questions/410604/iptables-rules-to-counter-the-most-common-dos-attacks

Security vulnerabilities with file_get_contents() using variable location

Part of my site's application process is that a user must prove ownership of a website. I quickly threw together some code but until now didn't realize that there could be some vulnerabilities with it.
Something like this:
$generatedCode = "9s8dfOJDFOIesdsa";
$url = "http://anyDomainGivenByUser.com/verification.txt";
if (file_get_contents($url) == $generatedCode) {
    // verification complete!
}
Is there any threat to having a user-provided url for file_get_contents()?
Edit: The code above is just an example. The generatedCode is obviously a bit more elaborate but still just a string.
Yes, this could possibly be a Server Side Request Forgery vulnerability - if $url is dynamic, you should validate that it is an external internet address and the scheme specifies the HTTP or HTTPS protocol. Ideally you'd use the HTTPS protocol only and then validate the certificate to guard against any DNS hijacking possibilities.
If $url is user controllable, they could substitute internal IP addresses and probe the network behind the firewall using your application as a proxy. For example, if they set the host in $url to 192.168.123.1, your script would request http://192.168.123.1/verification.txt and they might be able to ascertain that another machine is in the hosted environment due to differences in response times between valid and invalid internal addresses. This is known as a Timing Attack. This could be a server that you might not necessarily want exposed publicly. Of course, this is unlikely to attack your network in isolation, but it is a form of Information Leakage and might help an attacker enumerate your network ready for another attack.
You would need to validate the URL (or the resolved DNS) each time it is requested; otherwise an attacker could point it at an external address to pass the validation and then immediately re-point it to an internal address in order to begin probing.
file_get_contents in itself appears safe, as it retrieves the URL and places it into a string. As long as you're not processing the string in any script engine or using it as an execution parameter, you should be safe. file_get_contents can also be used to retrieve a local file, but if you validate that it is a valid internet-facing HTTP URL as described above, this measure should prevent reading of local files should you decide to show the user what verification.txt contained in case of a mismatch. In addition, if you were to display the contents of verification.txt anywhere on your site, you should make sure the output is properly encoded to prevent XSS.
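A rough sketch of the validation described above (scheme check, DNS resolution, rejection of private/reserved ranges); the helper name is made up, and real code would also need to handle redirects and re-validate on each use:
<?php
// Sketch: validate a user-supplied URL before fetching it, to reduce SSRF risk.
// The helper name is hypothetical; redirects and IPv6 literals need extra care.
function isSafePublicUrl(string $url): bool
{
    $parts = parse_url($url);
    if ($parts === false || !isset($parts['scheme'], $parts['host'])) {
        return false;
    }
    if (!in_array(strtolower($parts['scheme']), ['http', 'https'], true)) {
        return false; // only allow HTTP(S); no file://, ftp://, php://, etc.
    }
    // Resolve the host and reject private/reserved addresses (192.168.x.x, 127.0.0.1, ...).
    $ip = gethostbyname($parts['host']);
    return filter_var($ip, FILTER_VALIDATE_IP,
        FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE) !== false;
}

$generatedCode = '9s8dfOJDFOIesdsa';
$url = 'http://anyDomainGivenByUser.com/verification.txt';
if (isSafePublicUrl($url) && trim((string) file_get_contents($url)) === $generatedCode) {
    // verification complete!
}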

How to enable DDoS protection?

DDoS (Distributed Denial of Service) attacks are generally blocked at the server level, right?
Is there a way to block it on a PHP level, or at least reduce it?
If not, what is the fastest and most common way to stop DDoS attacks?
DDOS is a family of attacks which overwhelm key systems in the datacenter including:
The hosting center's network connection to the internet
The hosting center's internal network and routers
Your firewall and load balancers
Your web servers, application servers and database.
Before you start on building your DDOS defence, consider what the worst-case value-at-risk is. For a non-critical, free-to-use service for a small community, the total value at risk might be peanuts. For a paid-for, public-facing, mission-critical system for an established multi-billion dollar business, the value might be the worth of the company. In this latter case, you shouldn't be using StackExchange :) Anyway, to defend against DDOS, you need a defence in-depth approach:
Work with your hosting center to understand the services they offer, including IP and port filtering at their network connections to the internet and any firewall services they provide. This is critical: many sites are pulled from the internet by the hosting company as it deals with the data-center-wide disruption caused by a DDOS against one customer. Also, during a DDOS attack you will be working very closely with the hosting center's staff, so know their emergency numbers and be on good terms with them :) They should be able to block off whole international regions, completely block specific services or network protocols, apply other broad-spectrum defensive measures, or alternatively allow only whitelisted IPs (depending on your business model)
While we are on the subject of hosting: use a Content Delivery Network to serve (mainly static) content close to your end users and hide your real servers from the DDOS architects. A full CDN is too big for a DDOS to take out all nodes in all countries; if the DDOS is focused on one country, at least other users are still OK.
Keep all your systems and software packages updated with the latest security patches - and I mean all of them:
Managed switches - yup these sometimes need updating
Routers
Firewalls
Load balancers
Operating systems
Web servers
Languages and their libraries
Ensure that you have a good firewall or security appliance set up and regularly reviewed by a qualified security expert. Strong rules on the firewall are a good defence against many simple attacks. It's also useful to be able to manage bandwidth available for each open service.
Have good network monitoring tools in place - this can help you understand:
That you're under attack rather than simply being under heavy load
Where the attack is coming from (which may include countries you don't normally do business with) and
What the attack actually is (ports, services, protocols, IPs and packet contents)
The attack might simply be heavy use of legitimate web site services (eg hitting 'legal' URIs running queries or inserting/updating/deleting data) - thousands or millions of requests coming from tens to millions of different IP addresses will bring a site to its knees. Alternatively, some services might be so expensive to run that only a few requests cause a DOS - think a really expensive report. So you need good application level monitoring of what is going on:
Which services have been invoked and what arguments/data are sent (i.e. logging in your application)
Which users are doing the invoking and from which IPs (i.e. logging in your application)
What queries and inserts/updates/deletes the DB is performing
Load average, CPU utilization, disk i/o, network traffic on all computers (and VMs) in your system
Making sure that all this information is easily retrievable and that you can correlate logs from different computers and services (i.e. ensure all computers are time synchronized using ntp).
Sensible constraints and limits in your application. For example, you might:
Use a QoS feature in the load balancer to send all anonymous sessions to separate application servers in your cluster, while logged-on users use another set. This prevents an application-level anonymous DDOS taking out valuable customers
Use a strong CAPTCHA to protect anonymous services
Session timeouts
Have a session-limit or rate-limit on certain types of request like reports. Ensure that you can turn off anonymous access if necessary
Ensure that a user has a limit to the number of concurrent sessions (to prevent a hacked account logging on a million times)
Have different database application users for different services (eg transactional use vs. reporting use) and use database resource management to prevent one type of web request from overwhelming all others
If possible make these constraints dynamic, or at least configurable. This way, while you are under attack, you can set aggressive temporary limits in place ('throttling' the attack), such as only one session per user, and no anonymous access. This is certainly not great for your customers, but a lot better than having no service at all.
Last, but not least, write a DOS Response Plan document and get this internally reviewed by all relevant parties: Business, Management, the SW dev team, the IT team and a security expert. The process of writing the document will cause you and your team to think through the issues and help you to be prepared if the worst should happen at 3am on your day off. The document should cover (among other things):
What is at risk, and the cost to the business
Measures taken to protect the assets
How an attack is detected
The planned response and escalation procedure
Processes to keep the system and this document up-to-date
So, preamble aside, here are some specific answers:
DDOS are generally blocked on a server level, right?
Not really - most of the worst DDOS attacks are low-level (at the IP packet level) and are handled by routing rules, firewalls, and security devices developed to handle DDOS attacks.
Is there a way to block it on a PHP level, or at least reduce it?
Some DDOS attacks are aimed at the application itself, sending valid URIs and HTTP requests. When the rate of requests goes up, your server(s) begin to struggle and you will have an SLA outage. In this case, there are things you can do at the PHP level:
Application level monitoring: Ensure each service/page logs requests in a way that you can see what is going on (so you can take actions to mitigate the attack). Some ideas:
Have a log format that you can easily load into a log tool (or Excel or similar) and parse with command-line tools (grep, sed, awk). Remember that a DDOS will generate millions of lines of log. You will likely need to slice'n'dice your logs (especially with respect to URI, time, IP and user) to work out what is going on, and you will need to generate data such as the following (a minimal PHP logging sketch appears after this list):
What URIs are being accessed
What URIs are failing at a high rate (a likely indicator of the specific URIs the attackers are attacking)
Which users are accessing the service
How many IPs are each user accessing the service from
What URIs are anonymous users accessing
What arguments are being used for a given service
Audit a specific user's actions
Log the IP address of each request. DON'T reverse DNS this - ironically the cost of doing this makes a DDOS easier for the attackers
Log the whole URI and HTTP method, eg "GET http://example.com/path/to/service?arg1=ddos"
Log the User ID if present
Log important HTTP arguments
Sensible rate limits: You might implement limits on how many requests a given IP or User can make in a given time period. Could a legitimate customer make more than 10 requests per second? Can anonymous users access expensive reports at all?
CAPTCHA for anonymous access: Implement a CAPTCHA for all anonymous requests to verify that the user is a person, not a DDOS bot.
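To make the log-format point above concrete (as referenced in that item), here is a minimal sketch of a tab-separated application log line that is easy to slice with grep/awk during an attack; the log path and the way the user ID is obtained are assumptions:
<?php
// Sketch: one tab-separated line per request so the log is easy to slice by
// time, IP, user, method and URI. Log path and user-ID source are assumptions.
function logRequest(?string $userId): void
{
    $fields = [
        date('c'),                        // ISO-8601 timestamp
        $_SERVER['REMOTE_ADDR'] ?? '-',   // client IP (no reverse DNS lookups!)
        $userId ?: 'anonymous',
        $_SERVER['REQUEST_METHOD'] ?? '-',
        $_SERVER['REQUEST_URI'] ?? '-',   // full URI including the query string
    ];
    file_put_contents('/var/log/myapp/app_access.log',
        implode("\t", $fields) . PHP_EOL, FILE_APPEND | LOCK_EX);
}

// Example: top requesting IPs from such a log with awk:
//   awk -F'\t' '{print $2}' app_access.log | sort | uniq -c | sort -rn | head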
What's the fastest and most common way to stop DDOS attacks?
The fastest is probably to give in to the blackmail, although this might not be desirable.
Otherwise, the first thing to do is contact your hosting and/or CDN provider and work with them (if they haven't contacted you already asking what the hell is going on...). When a DDOS occurs, it will likely collaterally affect other customers of the hosting provider, and the provider may be under considerable pressure to shut down your site simply to protect their resources. Be prepared to share your logs (any and all information) with the provider; these logs, combined with their network monitors, may together provide enough information to block/mitigate the attack.
If you are expecting a DDOS, it's a very good idea to qualify your hosting provider on the level of protection they can provide. They should have DDOS experience and tools to mitigate it - understand their tools, processes and escalation procedures. Also ask about what support the hosting provider has from their upstream providers. These services might mean more up-front or monthly cost, but treat this as an insurance policy.
While under attack, you will need to grab your logs and mine them - try and work out the pattern of the attack. You should consider switching off anonymous access and throttling the services under attack (i.e. decrease the application's rate limit for the service).
If you are lucky and have a small, fixed customer base, you might be able to determine your valid customers' IP addresses. If this is the case, you might switch to a white-list approach for a short while. Make sure all your customers know this is going on so they can call if they need to access from a new IP :)
Doug McClean has some great advice at: https://stackoverflow.com/a/1029613/1395668
Regarding the PHP part of the question:
Although I wouldn't rely on PHP for this, it could be implemented, but it would need to consider at least all of these possibilities:
Attacker may change IP for each request
Attacker may pass parameter(s) in the URI that the target site does not care about
Attacker may restart the session before expiry
...
Simple pseudo-code:
<?php
// Assuming the session is already started
$uri  = md5($_SERVER['REQUEST_URI']);
$exp  = 3; // minimum seconds allowed between requests to the same URI
$hash = $uri . '|' . time();

if (isset($_SESSION['ddos'])) {
    list($_uri, $_time) = explode('|', $_SESSION['ddos']);
    // Same URI requested again within $exp seconds: refuse it
    if ($_uri == $uri && time() - (int) $_time < $exp) {
        header('HTTP/1.1 503 Service Unavailable');
        // die('Easy!');
        die;
    }
}

// Save the last request
$_SESSION['ddos'] = $hash;
?>
The PHP level is too late in the request chain.
Putting your Apache server behind an open-source appliance may be a good option for you.
http://tengine.taobao.org/ has some documentation and source code for modules aimed at DDoS prevention. It is an extension of nginx, so you can easily set it up as a reverse proxy in front of your Apache instance.
See: http://blog.zhuzhaoyuan.com/2012/01/a-mechanism-to-help-write-web-application-firewalls-for-nginx/ for how to fight hash-collision DoS attacks.
Also, I totally forgot: http://www.cloudflare.com is one of the top free web application firewalls. They have free and paid plans and will save your ass from DDoS; we use it for a lot of our high-traffic sites just for its caching capabilities. It is awesome!
You cannot do this at the PHP level. DDoS is a kind of attack that sends too many requests to your web server; your web server will be rejecting requests before it ever calls your PHP script.
If you are using Apache, here are some tips from Apache:
http://httpd.apache.org/docs/trunk/misc/security_tips.html
DDoS is best handled by very expensive, purpose-built network appliances. Hosts are generally not good at doing DDoS protection because they are subject to relatively low performance, state exhaustion, limited bandwidth, etc. Use of iptables, apache mods, and similar services can help in some situations if you have no access to DDoS mitigation hardware or a DDoS mitigation service, but it is far from ideal and still leaves you at risk of attack.
Do NOT use PHP-based protection; it's horrible and will hardly have an impact at all! Configure your web server to rate-limit requests, for example in nginx using the limit_req module (http://nginx.org/en/docs/http/ngx_http_limit_req_module.html).
Additionally, I would recommend using CloudFlare to combat layer-4 attacks (however, not layer-7-based attacks, unless you're willing to pay).
There are modules you can use in Apache for DoS/DDoS.
A good start is here:
http://www.debianadmin.com/how-to-protect-apache-against-dosddos-or-brute-force-attacks.html
If you're on LEMP, you can check here.
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
These are good inexpensive starting points.
How about something like this on the PHP side:
// If the user does not change IP, ban the IP when more than 10 requests are detected within 1 second
$limitps = 10;
if (!isset($_SESSION['first_request'])) {
    $_SESSION['requests'] = 0;
    $_SESSION['banip'] = 0;
    $_SESSION['first_request'] = $_SERVER['REQUEST_TIME']; // already a Unix timestamp
}
$_SESSION['requests']++;
if ($_SESSION['requests'] >= $limitps && $_SERVER['REQUEST_TIME'] - $_SESSION['first_request'] <= 1) {
    // write the IP to a banned_ips.log file and configure your server to retrieve the banned IPs from there - now you will be handling this IP outside of PHP
    $_SESSION['banip'] = 1;
} elseif ($_SERVER['REQUEST_TIME'] - $_SESSION['first_request'] > 2) {
    $_SESSION['requests'] = 0;
    $_SESSION['first_request'] = $_SERVER['REQUEST_TIME'];
}
if (!empty($_SESSION['banip'])) {
    header('HTTP/1.1 503 Service Unavailable');
    die;
}
DDoS attacks are generally blocked at the server level, so please enable DDoS protection at your server level. Please check the notes below for DDoS protection.
Apache HTTP Server configuration settings that can help prevent DDOS problems:
The RequestReadTimeout directive allows you to limit the time a client may take to send the request.
Allow 10 seconds to receive the request including the headers and 30 seconds for receiving the request body:
RequestReadTimeout header=10 body=30
Allow at least 10 seconds to receive the request body. If the client sends data, increase the timeout by 1 second for every 1000 bytes received, with no upper limit for the timeout (except for the limit given indirectly by LimitRequestBody):
RequestReadTimeout body=10,MinRate=1000
RequestReadTimeout header=10-30,MinRate=500
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
The KeepAliveTimeout directive may be also lowered on sites that are subject to DoS attacks. Some sites even turn off the keepalives completely via KeepAlive, which has of course other drawbacks on performance.
The values of various timeout-related directives provided by other modules should be checked.
The directives LimitRequestBody, LimitRequestFields, LimitRequestFieldSize, LimitRequestLine, and LimitXMLRequestBody should be carefully configured to limit resource consumption triggered by client input.
Tune the MaxRequestWorkers directive to allow the server to handle the maximum number of simultaneous connections without running out of resources.
Anti DDOS steps:
The very first important thing is to identify the DDoS attack. The earlier you identify it, the better for your server.
Get more bandwidth available for your server. Always keep more bandwidth than your server normally requires. This won't prevent a DDoS attack, but the attack will take longer to saturate you, which gives you some extra time to act.
If you run your own web server, you can defend at the network perimeter: rate-limit your router, add filters to drop packets from known attack sources, and time out half-open connections more aggressively. Also set lower SYN, ICMP and UDP flood drop thresholds.
If you don't know much about these things, then contact your hosting provider quickly. They can do their best to prevent the DDoS attack.
There are also special DDoS mitigation services provided by Cloudflare and many other companies, which can help you prevent DDoS attacks. Many companies offer cheap DDoS and DoS protection.
