Currently we are running NGINX as a reverse proxy in front of an MVC framework using Twig, PHP, ElastiCache, MySQL, and Node.js (socket.io) for instant notifications and messaging.
Our site has decent load speeds, but we constantly have to reload PHP because people keep DDoSing the site. We do not know how to mitigate this, but we have created rate-limiting rules in CloudFlare for 60 requests per 10 seconds. The only luck we have had was putting the site in heavy attack mode, but that frequently makes users wait 5 seconds while browsing. While we do not know who is committing the attacks, we would like to prevent the majority of them, because the site is being taken down almost every other day.
What can we do to prevent the site from serving users 502 pages after a DDoS attack?
What steps can we take to locate and block the source of the attacks as early as possible?
We don't have a large budget to pay a company like Imperva to handle this, but we would like to continue developing our platform without our users constantly loading a 502 or waiting 5 seconds (from CloudFlare) for many of the pages they load.
I assume your CloudFlare account is on the basic plan, which does not provide Layer 3/4/7 DDoS mitigation by default. Still, you can protect your site from common DDoS attacks by applying relevant WAF rules in CloudFlare while a DDoS is going on, but for that you will have to observe the web server logs and the CloudFlare panel to see the pattern of the attack.
The first step should be to tighten the rate limit you currently have (60 requests per 10 seconds).
Secondly, I would suggest looking for the pattern of the ongoing DDoS attacks, which will help you mitigate them by applying corresponding rules in CloudFlare (every DDoS has a different pattern and requires different mitigation steps).
As a general rule: enable a JavaScript or CAPTCHA challenge through CloudFlare on certain pages/endpoints of your website, or when a certain rate limit is exceeded. This is helpful because DDoS attacks are conducted by bots; when you apply a JavaScript or CAPTCHA challenge, only actual human users can pass it and the bots get filtered out.
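For illustration only (the endpoint, threshold, and exact syntax are assumptions to adapt in your own dashboard, and the available actions vary by CloudFlare plan and dashboard version), a custom firewall rule that challenges suspicious hits to a sensitive endpoint might look like:

# CloudFlare custom firewall rule (Security → WAF), illustrative sketch:
# Expression: (http.request.uri.path contains "/login") or (cf.threat_score gt 10)
# Action:     JS Challenge (or Managed Challenge on newer dashboards)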
Also, I would suggest setting up DDoS alerts through CloudFlare, which will help you take timely action (as suggested above) to block those attacks before your users get affected and your hosting server(s) get choked.
This is probably an odd question, but it is something that I have been wondering about lately.
I have an application that requests a page (a PHP script that works like an API and outputs a simple string) from my web server every second. That seems like quite a lot of spam, and I was wondering if any issues could arise from that.
Like, I should probably pay attention to the web server logging, to make sure it doesn't spam the disk until it's full. RAM/CPU isn't a problem at this point. APC is enabled. The scripts are optimized. What else should I look into, if anything?
This is probably the same situation I would encounter with a lot of visitors coming to my site, but I have never had that experience yet.
Thanks!
Every second? That's 86,400 times a day per client. That's a lot for PHP! But it should be okay unless you have multiple clients or some kind of I/O-heavy or database system behind it.
Otherwise, php5[-fpm] with APC on nginx sounds suited for this use, if you must use PHP.
If this component of your application aggregates data without a database, by mining other data sources over the internet, you may want to check with the data providers that realtime polling is permissible and to ensure your addresses are whitelisted explicitly.
Firewalls aren't to be forgotten: use a permit-by-exception security policy, i.e. iptables -t filter -P INPUT DROP, fine-tuned at the packet level using the iptables -t raw table as well. One of the greatest threats to mission-critical webserver performance is the ability of an adversary to identify a node as critical by analyzing traffic frequency and volume. Closing all non-critical ports at the lowest level is an easy defense.
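A minimal sketch of such a permit-by-exception policy (the ports and the admin subnet here are assumptions; adapt before deploying, ideally from console access so you don't lock yourself out):

# Default-deny everything inbound, then allow only what this node needs
iptables -t filter -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT   # example admin subnet
# The raw table is evaluated before connection tracking, so the very
# cheapest drops (e.g. obviously unwanted ports) can go there
iptables -t raw -A PREROUTING -p tcp --dport 23 -j DROP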
Another option is automated failover, strung together with node monitoring of this server and rapid deployment of a drop-in replacement appliance on a cloud VPS provider such as DigitalOcean or Amazon Web Services. This is an alternative to permanently running redundant servers (or instances), and fun to set up.
Applications which require realtime request processing with failover are often seen in the financial industry in high-value risk environments, as well as in the security and transportation industries in safety-critical risk environments. If either of these scenarios applies to you, you may wish to consider rebuilding this component of your application from the ground up using a specially-purposed language set including Ada, Erlang, and Haskell. This would allow you to optimize resource utilization at a lower level and therefore obtain optimum performance. Depending on your risk environment, this may or may not be worthwhile for you.
I was thinking about web-security and then this thought popped into my head.
Say that there's this jerk who hates me and knows how to program. I am managing a nice website/blog with a considerable amount of traffic. Then that jerk creates a program that automatically requests my website over and over again.
So if I am hosting my website on a shared hosting provider then obviously my website will stop responding.
This type of attack may not be common, but if someone attempts something like that on my website, I must do something about it. I don't think that popular CMSs like WordPress or Drupal do anything about this type of attack.
My assumption is:
If a user requests more than x times (let's say 50) in 1 minute, block that user (stop responding).
My questions are:
Is my assumption OK? If not, what should I do about it?
Do websites like Google, Facebook, YouTube, etc. do something about this type of attack?
What you are facing is a DoS (Denial of Service) attack, where one system keeps sending packets to your web server and makes it unresponsive.
You mentioned a single jerk; what if the same jerk had many friends? Then you have a DDoS (Distributed DoS) attack, which, realistically, cannot be fully prevented.
A quick fix from the Apache docs for DoS, but not for DDoS:
All network servers can be subject to denial of service attacks that attempt to prevent responses to clients by tying up the resources of the server. It is not possible to prevent such attacks entirely, but you can do certain things to mitigate the problems that they create. Often the most effective anti-DoS tool will be a firewall or other operating-system configurations. For example, most firewalls can be configured to restrict the number of simultaneous connections from any individual IP address or network, thus preventing a range of simple attacks. Of course this is no help against Distributed Denial of Service attacks (DDoS).
Source
The issue is partly one of rejecting bad traffic, and partly one of improving the performance of your own code.
Being hit with excess traffic by malicious intent is called a Denial of Service attack. The idea is to hit the site with so much traffic that the server can't cope with the load and stops responding, so that no one can get through and the site goes offline.
But you can also be hit with too much traffic simply because your site becomes popular. This can easily happen overnight and without warning, for example if someone posts a link to your site on another popular site. This traffic might actually be genuine and wanted (hundreds of extra sales! yay!), but it can have the same effect on your server if you're not prepared for it.
As others have said, it is important to configure your web server to cope with high traffic volumes; I'll let the other answers speak for themselves on this, and it is an important point, but there are things you can do in your own code to improve things too.
One of the main reasons that a server fails to cope with increased load is because of the processing time taken by the request.
Your web server will only have the ability to handle a certain number of requests at once, but the key word here is "simultaneous", and the key to reducing the number of simultaneous requests is to reduce the time it takes for your program to run.
Imagine your server can handle ten simultaneous requests, and your page takes one second to load.
If you get up to ten requests per second, everything will work seamlessly, because the server can cope with it. But if you go just slightly over that, then the eleventh request will either fail or have to wait until the other ten have finished. It will then run, but will eat into the next second's ten requests. By the time ten seconds have gone by, you're a whole second down on your response time, and it keeps getting worse as long as the requests keep pouring in at the same rate. It doesn't take long for the server to get overwhelmed, even when it's only a fraction over its capacity.
Now imagine the same page could be optimised to take less time, let's say half a second. Your same server can now cope with 20 requests per second, simply because the PHP code is quicker. But it will also be easier for it to recover from excess traffic levels. And because the PHP code takes less time to run, there is less chance of any two given requests being simultaneous anyway.
In short, the server's capacity to cope with high traffic volumes increases enormously as you reduce the time taken to process a request.
So this is the key to a site surviving a surge of high traffic: Make it run faster.
Caching: CMSs like Drupal and WordPress have caching built in. Make sure it's enabled. For even better performance, consider a server-level cache system like Varnish. For a CMS-type system where you don't change the page content much, this is the single biggest thing you can do to improve your performance.
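If you can't use a CMS cache or Varnish, even a tiny hand-rolled page cache can take nearly all the load off PHP and the database. A minimal sketch (render_page() is a hypothetical stand-in for your real page-building code):

<?php
$cacheFile = sys_get_temp_dir() . '/page_' . md5($_SERVER['REQUEST_URI']) . '.html';
$ttl = 60; // serve cached copies for 60 seconds

if (is_file($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
    readfile($cacheFile); // cache hit: no CMS/database work at all
    exit;
}

ob_start();          // cache miss: capture the expensive page render
echo render_page();  // hypothetical stand-in for your CMS/page code
$html = ob_get_clean();
file_put_contents($cacheFile, $html, LOCK_EX);
echo $html;
?>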
Optimise your code: while you can't be expected to fix performance issues in third-party software like Drupal, you can analyse the performance of your own code, if you have any. Custom Drupal modules, maybe? Use a profiler tool to find your bottlenecks. Very often, this kind of analysis can reveal that a single bottleneck is responsible for 90% of the page load time. Don't bother with optimising the small stuff, but if you can find and fix one or two big bottlenecks like this, it can have a dramatic effect.
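Even without a full profiler, a crude microtime() measurement around a suspected hot spot will tell you where the time goes (build_report() here is a hypothetical suspect, not a real API):

<?php
$start = microtime(true);
$data = build_report(); // hypothetical expensive call under suspicion
error_log(sprintf('build_report() took %.3f seconds', microtime(true) - $start));
?>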
Hope that helps.
These types of attacks are called (D)DoS (Distributed Denial of Service) attacks and are usually prevented by the web server hosting your PHP application. Since Apache is used the most, I found an article you might find interesting: http://www.linuxforu.com/2011/04/securing-apache-part-8-dos-ddos-attacks/.
The article states that Apache has multiple mods available specifically created to prevent (D)DoS attacks. These still need to be installed and configured to match your needs.
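For instance, mod_evasive is one of the better-known Apache DoS modules; a hedged configuration sketch looks like this (every threshold below is illustrative, not a recommendation):

<IfModule mod_evasive20.c>
    DOSHashTableSize   3097
    # Hits on the same page per interval before blocking
    DOSPageCount       5
    # Total hits per client per interval before blocking
    DOSSiteCount       50
    # Intervals, in seconds
    DOSPageInterval    1
    DOSSiteInterval    1
    # Blocked clients receive 403s for this many seconds
    DOSBlockingPeriod  60
</IfModule>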
I do believe that Facebook, Google, etc. have their own similar implementations to prevent DoS attacks. I know for a fact that the Google search engine presents a CAPTCHA if a lot of search requests are coming from the same network.
The reason it is not wise to prevent DoS within a PHP script is that the PHP processor still needs to be started whenever a request is made, which causes a lot of overhead. By using the web server for this you will have less overhead.
EDIT:
As stated in another answer, it is also possible to prevent common DoS attacks by configuring the server's firewall. Checking for attacks with firewall rules happens before the web server is hit, so there is even less overhead there. Furthermore, you can detect attacks on other ports as well (such as port scans). I believe a combination of the two works best, as they complement each other.
In my opinion, the best way to prevent DoS is to set the firewall at the lowest level: at the entry to the server. By setting up some network firewall rules with iptables, you can drop packets from senders that are hitting your server too hard.
It'll be more efficient than passing through PHP and Apache, since they need to use a lot (relatively) of processing to do the checking, and that load alone may take your website down even if you do detect your attacker(s).
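A hedged sketch of that idea using iptables' recent module (the port, window, and hit count are assumptions to tune):

# Remember each source IP that opens a new HTTP connection...
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP
# ...and drop it if it opened more than 20 new connections in the last second
iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 1 --hitcount 20 --name HTTP -j DROP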
You can check on this topic for more information: https://serverfault.com/questions/410604/iptables-rules-to-counter-the-most-common-dos-attacks
DDoS (Distributed Denial of Service Attacks) are generally blocked on a server level right?
Is there a way to block it on a PHP level, or at least reduce it?
If not, what is the fastest and most common way to stop DDoS attacks?
DDOS is a family of attacks which overwhelm key systems in the datacenter including:
The hosting center's network connection to the internet
The hosting center's internal network and routers
Your firewall and load balancers
Your web servers, application servers and database.
Before you start on building your DDOS defence, consider what the worst-case value-at-risk is. For a non-critical, free-to-use service for a small community, the total value at risk might be peanuts. For a paid-for, public-facing, mission-critical system for an established multi-billion dollar business, the value might be the worth of the company. In this latter case, you shouldn't be using StackExchange :) Anyway, to defend against DDOS, you need a defence in-depth approach:
Work with your hosting center to understand the services they offer, including IP and port filtering at their network connections to the internet and any firewall services. This is critical: many sites are pulled from the internet by the hosting company as it deals with the datacenter-wide disruption that a DDOS against one customer causes. Also, during a DDOS attack you will be working very closely with the hosting center's staff, so know their emergency numbers and be on good terms with them :) They should be able to block whole international regions, completely block specific services or network protocols, apply other broad-spectrum defensive measures, or alternatively allow only whitelisted IPs (depending on your business model).
While on the subject of the hosting center: use a Content Delivery Network to distribute (mainly static) services close to your end users and hide your real servers from the DDOS architects. The full CDN is too big for a DDOS to take out all nodes in all countries; if the DDOS is focused on one country, at least other users are still OK.
Keep all your systems and software packages updated with the latest security patches - and I mean all of them:
Managed switches - yup these sometimes need updating
Routers
Firewalls
Load balancers
Operating systems
Web servers
Languages and their libraries
Ensure that you have a good firewall or security appliance set up and regularly reviewed by a qualified security expert. Strong rules on the firewall are a good defence against many simple attacks. It's also useful to be able to manage bandwidth available for each open service.
Have good network monitoring tools in place - this can help you understand:
That you're under attack rather than simply being under heavy load
Where the attack is coming from (which may include countries you don't normally do business with) and
What the attack actually is (ports, services, protocols, IPs and packet contents)
The attack might simply be heavy use of legitimate web site services (eg hitting 'legal' URIs running queries or inserting/updating/deleting data) - thousands or millions of requests coming from tens to millions of different IP addresses will bring a site to its knees. Alternatively, some services might be so expensive to run that only a few requests cause a DOS - think a really expensive report. So you need good application level monitoring of what is going on:
Which services have been invoked and what arguments/data are sent (i.e. logging in your application)
Which users are doing the invoking and from which IPs (i.e. logging in your application)
What queries and inserts/updates/deletes the DB is performing
Load average, CPU utilization, disk i/o, network traffic on all computers (and VMs) in your system
Making sure that all this information is easily retrievable and that you can correlate logs from different computers and services (i.e. ensure all computers are time synchronized using ntp).
Sensible constraints and limits in your application. For example, you might:
Use a QoS feature in the load balancer to send all anonymous sessions to separate application servers in your cluster, while logged-on users use another set. This prevents an application-level anonymous DDOS from taking out valuable customers
Use a strong CAPTCHA to protect anonymous services
Session timeouts
Have a session-limit or rate-limit on certain types of request like reports. Ensure that you can turn off anonymous access if necessary
Ensure that a user has a limit to the number of concurrent sessions (to prevent a hacked account logging on a million times)
Have different database application users for different services (eg transactional use vs. reporting use) and use database resource management to prevent one type of web request from overwhelming all others
If possible make these constraints dynamic, or at least configurable. This way, while you are under attack, you can set aggressive temporary limits in place ('throttling' the attack), such as only one session per user, and no anonymous access. This is certainly not great for your customers, but a lot better than having no service at all.
Last, but not least, write a DOS Response Plan document and get this internally reviewed by all relevant parties: Business, Management, the SW dev team, the IT team and a security expert. The process of writing the document will cause you and your team to think through the issues and help you to be prepared if the worst should happen at 3am on your day off. The document should cover (among other things):
What is at risk, and the cost to the business
Measures taken to protect the assets
How an attack is detected
The planned response and escalation procedure
Processes to keep the system and this document up-to-date
So, preamble aside, here are some specific answers:
DDOS are generally blocked on a server level, right?
Not really - most of the worst DDOS attacks are low-level (at the IP packet level) and are handled by routing rules, firewalls, and security devices developed to handle DDOS attacks.
Is there a way to block it on a PHP level, or at least reduce it?
Some DDOS attacks are aimed at the application itself, sending valid URIs and HTTP requests. When the rate of requests goes up, your server(s) begin to struggle and you will have an SLA outage. In this case, there are things you can do at the PHP level:
Application level monitoring: Ensure each service/page logs requests in a way that you can see what is going on (so you can take actions to mitigate the attack). Some ideas:
Have a log format that you can easily load into a log tool (or Excel or similar) and parse with command-line tools (grep, sed, awk). Remember that a DDOS will generate millions of lines of log. You will likely need to slice'n'dice your logs (especially with respect to URI, time, IP and user) to work out what is going on - a log-slicing sketch follows this list - and you will need to generate data such as:
What URIs are being accessed
What URIs are failing at a high rate (a likely indicator of the specific URIs the attackers are attacking)
Which users are accessing the service
How many IPs are each user accessing the service from
What URIs are anonymous users accessing
What arguments are being used for a given service
Audit a specific user's actions
Log the IP address of each request. DON'T reverse-DNS it - ironically, the cost of doing so makes a DDOS easier for the attackers
Log the whole URI and HTTP method, eg "GET http://example.com/path/to/service?arg1=ddos"
Log the User ID if present
Log important HTTP arguments
Sensible rate limits: You might implement limits on how many requests a given IP or User can make in a given time period. Could a legitimate customer make more than 10 requests per second? Can anonymous users access expensive reports at all?
CAPTCHA for anonymous access: Implement a CAPTCHA for all anonymous requests to verify that the user is a person, not a DDOS bot.
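As a hedged example of the slice'n'dice work mentioned above (assuming the common combined access-log format, where field 1 is the client IP and field 7 is the request URI; adjust the field numbers to your own log format):

# Top 20 requesting IPs
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -20
# Top 20 most-requested URIs
awk '{print $7}' access.log | sort | uniq -c | sort -rn | head -20
# Per-minute request counts for one suspect IP (203.0.113.5 is a placeholder)
grep '^203\.0\.113\.5 ' access.log | cut -d'[' -f2 | cut -d: -f2,3 | sort | uniq -c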
What's the fastest and most common way to stop DDOS attacks?
The fastest is probably to give in to the blackmail, although this might not be desirable.
Otherwise, the first thing to do is contact your hosting and/or CDN provider and work with them (if they haven't contacted you already, asking what the hell is going on...). When a DDOS occurs, it will likely collaterally affect other customers of the hosting provider, and the provider may be under considerable pressure to shut down your site simply to protect their resources. Be prepared to share your logs (any and all information) with the provider; these logs, combined with their network monitors, may together provide enough information to block/mitigate the attack.
If you are expecting a DDOS, it's a very good idea to qualify your hosting provider on the level of protection they can provide. They should have DDOS experience and tools to mitigate it - understand their tools, processes and escalation procedures. Also ask about what support the hosting provider has from their upstream providers. These services might mean more up-front or monthly cost, but treat this as an insurance policy.
While under attack, you will need to grab your logs and mine them - try and work out the pattern of the attack. You should consider switching off anonymous access and throttling the services under attack (i.e. decrease the application's rate limit for the service).
If you're lucky and have a small, fixed customer base, you might be able to determine your valid customers' IP addresses. If this is the case, you might switch to a whitelist approach for a short while. Make sure all your customers know this is going on so they can call if they need to access from a new IP :)
Doug McClean has some great advice at: https://stackoverflow.com/a/1029613/1395668
Regarding the PHP part of the question:
Although I wouldn't rely on PHP for this, it could be implemented, but you need to consider all of these possibilities and more:
The attacker may change IP for each request
The attacker may pass parameter(s) in the URI that the target site simply ignores
The attacker may restart the session before expiry
...
Simple pseudo-code:
<?php
// Assuming the session is already started
$uri = md5($_SERVER['REQUEST_URI']);
$exp = 3; // minimum seconds allowed between two hits on the same URI

if (isset($_SESSION['ddos'])) {
    list($_uri, $_time) = explode('|', $_SESSION['ddos']);
    // Same URI requested again within the window: refuse the request
    if ($_uri == $uri && time() - (int) $_time < $exp) {
        header('HTTP/1.1 503 Service Unavailable');
        // die('Easy!');
        die;
    }
}

// Save last request
$_SESSION['ddos'] = $uri . '|' . time();
?>
The PHP level is too late in the request chain.
Putting your apache server behind an open source appliance may be a good option for you.
http://tengine.taobao.org/ has some documentation and source code for modules aimed at DDoS prevention. It is an expansion of nginx, so you can easily set it up as a reverse proxy for your Apache instance.
See http://blog.zhuzhaoyuan.com/2012/01/a-mechanism-to-help-write-web-application-firewalls-for-nginx/ for how to fight hash-collision DoS attacks.
I totally forgot too: http://www.cloudflare.com is one of the top free web application firewalls. They have free and paid plans, and it will save your ass from DDoS; we use it for a lot of our high-traffic sites, just for its caching capabilities. It is awesome!
You cannot do this at the PHP level. DDoS is a kind of attack that sends too many requests to your web server, and your web server will reject requests before it ever calls your PHP script.
If you are using Apache, here are some tips from Apache:
http://httpd.apache.org/docs/trunk/misc/security_tips.html
DDoS is best handled by very expensive, purpose-built network appliances. Hosts are generally not good at doing DDoS protection because they are subject to relatively low performance, state exhaustion, limited bandwidth, etc. Use of iptables, apache mods, and similar services can help in some situations if you have no access to DDoS mitigation hardware or a DDoS mitigation service, but it is far from ideal and still leaves you at risk of attack.
Do NOT use PHP-based protection, it's horrible and will hardly have an impact at all! Configure your web server to rate-limit requests instead, for example in nginx using the limit_req module (http://nginx.org/en/docs/http/ngx_http_limit_req_module.html).
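A hedged sketch of such a configuration (the zone name, rate, and burst are assumptions to tune against your real traffic):

# In the http block: average limit of 10 requests/second per client IP,
# tracked in a 10 MB shared-memory zone
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

# In the server/location block: absorb bursts of up to 20 requests and
# reject the excess immediately instead of queueing it
location / {
    limit_req zone=perip burst=20 nodelay;
}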
Beyond that, I would recommend using CloudFlare to combat layer-4 attacks - though not layer-7 attacks unless you're willing to pay.
There are plugins you can use in Apache against DoS/DDoS attacks.
A good start is here:
http://www.debianadmin.com/how-to-protect-apache-against-dosddos-or-brute-force-attacks.html
If you're on LEMP, you can check here.
http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html
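As a hedged example, the companion limit_conn module caps concurrent connections per client (the zone name and limit are illustrative):

# http block: track concurrent connections per client IP
limit_conn_zone $binary_remote_addr zone=addr:10m;

# server/location block: at most 10 simultaneous connections per IP
limit_conn addr 10;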
These are good inexpensive starting points.
How about something like this on the PHP side:
//if the user does not change IP, ban the IP when more than $limitps requests
//are detected within 1 second
$limitps = 10;
if (!isset($_SESSION['first_request'])){
    $_SESSION['requests'] = 0;
    $_SESSION['first_request'] = $_SERVER['REQUEST_TIME']; // already a Unix timestamp
}
$_SESSION['requests']++;
if ($_SESSION['requests'] >= $limitps && $_SERVER['REQUEST_TIME'] - $_SESSION['first_request'] <= 1){
    //write the IP to a banned_ips.log file and configure your server to retrieve
    //the banned IPs from there - now you will be handling this IP outside of PHP
    $_SESSION['banip'] = 1;
}elseif($_SERVER['REQUEST_TIME'] - $_SESSION['first_request'] > 2){
    //window expired: start a fresh count
    $_SESSION['requests'] = 0;
    $_SESSION['first_request'] = $_SERVER['REQUEST_TIME'];
}
if (!empty($_SESSION['banip'])) {
    header('HTTP/1.1 503 Service Unavailable');
    die;
}
DDoS attacks are generally blocked at the server level, so please enable DDoS protection at your server level. Please check the notes below on DDoS protection.
Apache HTTP Server configuration settings that can help prevent DDOS problems:
The RequestReadTimeout directive allows you to limit the time a client may take to send its request.
Allow 10 seconds to receive the request including the headers and 30 seconds for receiving the request body:
RequestReadTimeout header=10 body=30
Allow at least 10 seconds to receive the request body. If the client sends data, increase the timeout by 1 second for every 1000 bytes received, with no upper limit for the timeout (except for the limit given indirectly by LimitRequestBody):
RequestReadTimeout body=10,MinRate=1000
RequestReadTimeout header=10-30,MinRate=500
RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
The KeepAliveTimeout directive may also be lowered on sites that are subject to DoS attacks. Some sites even turn off keepalives completely via KeepAlive, which of course has other drawbacks on performance.
The values of various timeout-related directives provided by other modules should be checked.
The directives LimitRequestBody, LimitRequestFields, LimitRequestFieldSize, LimitRequestLine, and LimitXMLRequestBody should be carefully configured to limit resource consumption triggered by client input.
Tune the MaxRequestWorkers directive to allow the server to handle the maximum number of simultaneous connections without running out of resources.
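Put together, a hedged example of such a configuration might look like this (every value is illustrative and must be tuned to your site):

RequestReadTimeout header=10-30,MinRate=500 body=20,MinRate=500
KeepAliveTimeout 3
# Cap request bodies at 1 MB
LimitRequestBody 1048576
LimitRequestFields 50
LimitRequestFieldSize 4094
LimitRequestLine 4094
MaxRequestWorkers 256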
Anti-DDoS steps:
The first and most important thing is to identify the DDoS attack itself. The earlier you identify it, the better for your server.
Get more bandwidth for your server than you need. This won't prevent a DDoS attack, but it will take longer to saturate, which buys you extra time to act.
If you run your own web server, you can defend at the network perimeter: rate-limit on your router, add filters to drop packets from identified attack sources, time out half-open connections more aggressively, and set lower SYN, ICMP, and UDP flood drop thresholds.
If you don't know much about these things, contact your hosting provider quickly; they can do their best to prevent the DDoS attack.
There are also dedicated DDoS mitigation services from CloudFlare and many other companies that can help you fend off attacks; many companies offer cheap DDoS and DoS protection.
I just wanted to know what security precautions I should be aware of for PHP hosting.
Thanks
Here are some of the things (a php.ini sketch follows this list):
Disable functions like passthru, shell_exec, etc. (note that eval is a language construct and cannot be disabled via disable_functions)
Prevent remote URL injection: disable allow_url_fopen
Disable register_globals (removed entirely in PHP 5.4)
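A hedged php.ini sketch covering those points:

; Block shell-execution functions from scripts (eval cannot be listed here,
; as it is a language construct rather than a function)
disable_functions = passthru,shell_exec,exec,system,popen,proc_open
; Prevent remote URL injection/inclusion
allow_url_fopen = Off
allow_url_include = Off
; Only meaningful on PHP < 5.4, where the setting still exists
register_globals = Off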
And don't forget:
You are responsible too. Write secure code, read security tutorials out there.
PHP Security Guide
Finally, as suggested by Rook, you should run the PHPSecInfo script to see the security settings of your host:
http://phpsec.org/projects/phpsecinfo/
For webhosts and Development Teams
In development environments, make sure you have appropriate coding standards. If you feel you are hosting insecure code which you did not write, consider installing a Web Application Firewall. Also consider steps to prevent brute-force attacks (for example if you are hosting popular CMS tools); an Intrusion Prevention System like Fail2Ban can help you do this. A lot of these issues are covered in the talk Practical Web Security – Junade Ali; the video of the talk is here.
For PHP you can also consider using Suhosin, which adds a number of security features to the PHP core. Be careful when installing it, and test your sites afterwards to ensure it doesn't break anything.
If you are speaking as a developer (and not as a host), then don't rely on the server - write secure code and you won't be harmed by any PHP configuration directive, ever.
Clients often have access to Perl, PHP, and shell accounts, which makes it easy for one client to DoS (Denial of Service) all the other clients with a badly written program.
An external DoS on the whole hosting service means that if one shared IP is under DoS attack, you suffer the same problem as everyone else on it.
More often than not, clients of shared hosting solutions also share an IP address with other clients. This arrangement often works out fine, but whatever happens to your neighbours sharing that IP will also reach you: if a neighbour is placed on a spam blacklist, everybody else using the IP shares the same fate.
Shared hosting is also very vulnerable to malware attacks.
Harmful data can easily be uploaded to the other sites, putting your site at risk. It can be introduced to the server through vulnerabilities in a legitimate client's website and used to steal data.
DDoS attack software loaded onto the server can let hackers control the entire hosting server and then attack other servers, either on the same network or on other networks.