I was under a DDoS attack, so I started to use Cloudflare.
I have a server that collects data from Android phones, so there are many Java requests hitting the PHP scripts that collect the data.
Yesterday I added Cloudflare because I was getting many repeated requests from many different IPs, and my server even went down because of this. After I added Cloudflare, everything worked well and many of the unusual requests stopped.
The problem is that after 12 hours, my PHP scripts that collect data stopped working as they should. The scripts still ran (including the insert into the database), but on Android I received an error. None of the connections worked.
I have found that if I disable security, my system runs as normal. My security level was Medium, and I had to disable security via Page Rules. Does this mean that I now have no DDoS protection?
Do you know what the problem was and how I could fix it?
Thanks
Currently we are running NGINX as a reverse proxy with an MVC framework using Twig, PHP, ElastiCache, MySQL, and Node.js (socket.io) for instant notifications and messaging.
Our site has decent load speeds, but we constantly have to reload PHP because people keep DDoSing our site. We do not know how to mitigate this, though we have created rate-limiting rules in Cloudflare for 60 requests per 10 seconds. The only luck we have had was putting the site into Cloudflare's "I'm Under Attack" mode, but that frequently makes users wait 5 seconds while browsing the site. While we do not know who is committing the attacks, we would like to prevent the majority of them, because the site is being taken down almost every other day.
What can we do to prevent the site from serving users 502 pages after a DDoS attack?
What steps can we take to locate and block the source of the attacks as early as possible?
We don't have a large amount of money to spend on a company like Imperva to handle this, but we would like to continue developing our platform without our users constantly hitting a 502 or waiting 5 seconds (from Cloudflare) for many of the pages they load.
I assume the account you have with Cloudflare is the basic plan and does not provide Layer 3/4/7 DDoS mitigation by default, but you can still protect your site from common DDoS attacks by applying relevant WAF rules in Cloudflare while an attack is going on. For that you will have to observe the web server logs and the Cloudflare panel to see the pattern of the attack (a small log-counting sketch is at the end of this answer).
The first step should be to lower the rate limit you currently have, which is 60 requests per 10 seconds.
Secondly, I would suggest looking for the pattern of the ongoing DDoS attacks, which will help you mitigate them by applying corresponding rules in Cloudflare (every DDoS has a different pattern and requires different mitigation steps).
As a general rule, enable a JavaScript or CAPTCHA challenge through Cloudflare on certain pages/endpoints of your website, or whenever a certain rate limit is exceeded. This helps because DDoS attacks are carried out by bots: when you apply a JavaScript or CAPTCHA challenge, only actual human users can pass it and the bots get filtered out.
Also, I would suggest setting up DDoS alerts through Cloudflare, which will help you take timely action (as suggested above) to block those attacks before your users are affected and your hosting server(s) get choked.
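As a rough illustration of the log analysis I mean, here is a small PHP sketch that counts requests per client IP in an access log so the noisiest sources stand out. The log path and the assumption that the client IP is the first field (combined log format) are placeholders; adjust them to your setup.

```php
<?php
// Count requests per client IP in an Nginx/Apache access log so the heaviest
// hitters of an ongoing flood stand out. Assumes the client IP is the first
// whitespace-separated field (combined log format); the path is a placeholder.
$logFile = '/var/log/nginx/access.log';

$counts = [];
$handle = fopen($logFile, 'r');
if ($handle === false) {
    exit("Could not open $logFile\n");
}

while (($line = fgets($handle)) !== false) {
    $ip = strtok($line, ' ');                 // first field of the log line
    if ($ip !== false) {
        $counts[$ip] = ($counts[$ip] ?? 0) + 1;
    }
}
fclose($handle);

arsort($counts);                               // busiest IPs first
foreach (array_slice($counts, 0, 20, true) as $ip => $hits) {
    echo str_pad($ip, 20) . $hits . PHP_EOL;
}
```

The IPs that dominate this list are good candidates for a block, a rate-limit rule, or a challenge in the Cloudflare panel.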
I am running a PHP web application on a local network (Apache server) that allows clients to write notes, which are stored in a database.
Whenever 3 or more client machines connect to the server, two of the client machines end up waiting indefinitely for their page to load until the third client performs an action in the application (e.g. AJAX drop-down menus, a confirm button, opening another page, or refreshing the page). The higher the number of clients, the more of them wait on that one machine to perform an action on the server.
At times (though rarely), all clients wait indefinitely and I have to restart the Apache server.
When only 2 clients are connected, everything is smooth (although I think that is just because it reduces the chance of the waiting issue to almost never). I have checked the server settings and all seems well (just some tweaks to the default settings).
I believe this is related to my code, but I cannot track down the issue.
While researching a solution I have come across the following possibilities, but I want to ask whether anyone has experienced this issue and how they solved it:
Multithreading
Output buffering (a minimal sketch of this one follows the list)
Disabling the antivirus
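For reference, the output-buffering idea from the list would look roughly like this in PHP. It is only a minimal sketch: it changes when output is sent to the client, but it does not by itself remove whatever is serialising the requests.

```php
<?php
// Minimal output-buffering sketch (one of the candidate fixes listed above).
ob_start();                      // buffer everything the script echoes

echo "<p>Page content or AJAX response built here...</p>";
// ... the rest of the page generation ...

ob_end_flush();                  // send the whole buffer to the client in one go
```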
We have developed a cURL function in our application. This cURL function mainly maps data over from another site into the form fields of our application.
This function had been working fine and in use for more than 2 months. Yesterday, it broke down: the data from that website can no longer be mapped over. We are trying to find out what the problem is. When troubleshooting, we see that there is a response timeout issue.
To make sure there was nothing wrong with our code and that our server was performing properly, we duplicated this instance to another server and tried the function there. It worked perfectly.
Is anyone out there facing such a problem?
What could possibly be causing this issue?
When we use cURL, will the site owner know that we are pulling their data into our server application? If so, is there a way we can get around this?
Could the owner have blocked our server's IP address? Would that be why the function works well on the other server but not on the original one?
Appreciate your help on this.
Thank you,
Your problem description is far too generic to determine a specific cause. Most likely, however, there is a deliberate block in place.
For example, a firewall rule on the other end, or on your end, would cause all traffic to be dropped, thus causing the timeout. There could also be an ordinary network outage between the two servers, but that's unlikely.
Yes, they will see it regularly in their Apache (or IIS) logs. No, you cannot hide from the server logs; the server logs every successful request. You either get the data or you stay stealthy, not both.
Yes, the web server logs will contain the IP making all the requests. Adding a DROP rule to the firewall is then a trivial task.
I have applied such a firewall rule to bandwidth and/or data leechers many times in the past few years, although I usually prefer the more resilient deny from 1.2.3.4 approach in the Apache vhost/.htaccess. In general, if you use someone else's facilities, it's polite to ask for proper permission first; that lessens the chance you get blocked this way.
I faced a similar problem some time ago.
My server's IP had been blocked by the website owner.
It can be seen in their server logs. Google Analytics, however, won't see it, as cURL doesn't execute JavaScript.
Try to ping the destination server from the one executing the cURL.
Some advice (a cURL sketch follows this list):
Use a browser-like header to mask your request.
If you insist on using this server, you can run the requests through a proxy.
Put some sleep() between the requests.
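Putting that advice together, a rough PHP sketch could look like the following. The URLs and the proxy address are placeholders, and the User-Agent string is just an example of a browser-like header.

```php
<?php
// Rough sketch of the advice above: browser-like User-Agent, optional proxy,
// and a pause between requests. URLs and proxy address are placeholders.
$urls = ['https://example.com/page1', 'https://example.com/page2'];

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT,
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');   // browser-like header
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    // curl_setopt($ch, CURLOPT_PROXY, 'http://127.0.0.1:8080');           // optional proxy (placeholder)

    $body = curl_exec($ch);
    if ($body === false) {
        echo "Request to $url failed: " . curl_error($ch) . PHP_EOL;
    } else {
        echo "Fetched " . strlen($body) . " bytes from $url" . PHP_EOL;
    }
    curl_close($ch);

    sleep(2);   // pause so the remote server is not hammered
}
```

None of this guarantees access if the owner has blocked your IP at the firewall; asking for permission (or an official API) is the more reliable route.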
I was thinking about web security when this thought popped into my head.
Say there's this jerk who hates me and knows how to program, and I am managing a nice website/blog with a considerable amount of traffic. Then that jerk creates a program that automatically requests my website over and over again.
If I am hosting my website with a shared hosting provider, obviously my website will stop responding.
This type of attack may not be common, but if someone attempts something like that against my website, I must do something about it. I don't think popular CMSs like WordPress or Drupal do anything about this type of attack.
My assumption is:
If a user makes more than x requests (let's say 50) in 1 minute, block that user (stop responding); a rough sketch of this idea follows below.
My questions are:
Is my assumption OK? If not, what should I do about it?
Do websites like Google, Facebook, YouTube, etc. do something about this type of attack?
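For illustration, here is the kind of per-IP counter I have in mind, as a minimal PHP sketch. It assumes the APCu extension is available, and 50 requests per 60 seconds are just the example numbers from above; keep in mind that doing this inside PHP still incurs the overhead of starting PHP for every request.

```php
<?php
// Minimal per-IP rate-limit sketch using APCu (assumption: APCu is installed).
$ip     = $_SERVER['REMOTE_ADDR'];
$key    = 'hits_' . $ip;
$limit  = 50;      // max requests...
$window = 60;      // ...per this many seconds

// apcu_add() only creates the entry if it does not exist yet, with a TTL,
// so the counter resets itself once the window expires.
apcu_add($key, 0, $window);
$hits = apcu_inc($key);

if ($hits !== false && $hits > $limit) {
    http_response_code(429);          // "Too Many Requests"
    exit('Rate limit exceeded.');
}

// ... normal page handling continues here ...
```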
What you are facing is a DoS (Denial of Service) attack, where one system keeps sending requests to your web server until it becomes unresponsive.
You mentioned a single jerk; what if that same jerk had many friends? Then you have a DDoS (Distributed DoS) attack, and that cannot really be prevented entirely.
A quick fix from the Apache docs for DoS, but not for DDoS:
All network servers can be subject to denial of service attacks that attempt to prevent responses to clients by tying up the resources of the server. It is not possible to prevent such attacks entirely, but you can do certain things to mitigate the problems that they create. Often the most effective anti-DoS tool will be a firewall or other operating-system configurations. For example, most firewalls can be configured to restrict the number of simultaneous connections from any individual IP address or network, thus preventing a range of simple attacks. Of course this is no help against Distributed Denial of Service attacks (DDoS).
(Source: the Apache HTTP Server documentation.)
The issue is partly one of rejecting bad traffic, and partly one of improving the performance of your own code.
Being hit with excess traffic out of malicious intent is called a Denial of Service attack. The idea is to hit the site with so much traffic that the server can't cope with the load, stops responding, and thus no one can get through and the site goes offline.
But you can also be hit with too much traffic simply because your site becomes popular. This can easily happen overnight and without warning, for example if someone posts a link to your site on another popular site. This traffic might actually be genuine and wanted (hundreds of extra sales! yay!), but it can have the same effect on your server if you're not prepared for it.
As others have said, it is important to configure your web server to cope with high traffic volumes; I'll let the other answers speak for themselves on this, and it is an important point, but there are things you can do in your own code to improve things too.
One of the main reasons a server fails to cope with increased load is the processing time taken by each request.
Your web server can only handle a certain number of requests at once, but the key word here is "simultaneous", and the key to reducing the number of simultaneous requests is to reduce the time it takes for your program to run.
Imagine your server can handle ten simultaneous requests, and your page takes one second to load.
If you get up to ten requests per second, everything will work seamlessly, because the server can cope with it. But if you go just slightly over that, then the eleventh request will either fail or have to wait until the other ten have finished. It will then run, but will eat into the next second's ten requests. By the time ten seconds have gone by, you're a whole second down on your response time, and it keeps getting worse as long as the requests keep pouring in at the same rate. It doesn't take long for the server to get overwhelmed, even when it's only a fraction over its capacity.
Now imagine the same page could be optimised to take less time, let's say half a second. Your same server can now cope with 20 requests per second, simply because the PHP code is quicker. It will also be easier for it to recover from excess traffic levels. And because the PHP code takes less time to run, there is less chance of any two given requests being simultaneous anyway.
In short, the server's capacity to cope with high traffic volumes increases enormously as you reduce the time taken to process a request.
So this is the key to a site surviving a surge of high traffic: Make it run faster.
Caching: CMSs like Drupal and WordPress have caching built in; make sure it's enabled. For even better performance, consider a server-level cache such as Varnish. For a CMS-type site where you don't change the page content much, this is the single biggest thing you can do to improve your performance (a minimal sketch of the idea follows below).
Optimise your code: while you can't be expected to fix performance issues in third-party software like Drupal, you can analyse the performance of your own code, if you have any. Custom Drupal modules, maybe? Use a profiler tool to find your bottlenecks. Very often, this kind of analysis can reveal that a single bottleneck is responsible for 90% of the page load time. Don't bother with optimising the small stuff, but if you can find and fix one or two big bottlenecks like this, it can have a dramatic effect.
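To make the caching point concrete, here is a minimal file-based full-page cache sketch in PHP. The directory and lifetime are arbitrary choices for illustration; a CMS's built-in cache or Varnish will do this far better, this just shows the idea.

```php
<?php
// Minimal full-page cache: serve a stored copy if it is fresh, otherwise
// generate the page, store it, and send it. Paths/lifetime are placeholders.
$cacheDir  = __DIR__ . '/cache';
$cacheFile = $cacheDir . '/' . md5($_SERVER['REQUEST_URI']) . '.html';
$lifetime  = 300; // seconds a cached copy stays valid

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $lifetime) {
    readfile($cacheFile);       // serve the stored copy, skipping all heavy work
    exit;
}

ob_start();                     // capture whatever the page prints

// ... expensive page generation (database queries, templates, ...) ...
echo '<html><body>Generated at ' . date('H:i:s') . '</body></html>';

if (!is_dir($cacheDir)) {
    @mkdir($cacheDir, 0775, true);
}
@file_put_contents($cacheFile, ob_get_contents(), LOCK_EX); // best effort
ob_end_flush();                 // send the fresh page to the client
```

Under load, most requests then cost one stat call and a file read instead of a full PHP and database round trip.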
Hope that helps.
These types of attacks are called (D)DoS (Distributed Denial of Service) attacks and are usually mitigated by the web server hosting your PHP application. Since Apache is used the most, I found an article you might find interesting: http://www.linuxforu.com/2011/04/securing-apache-part-8-dos-ddos-attacks/.
The article states that Apache has several modules available that were created specifically to counter (D)DoS attacks. These still need to be installed and configured to match your needs.
I do believe that Facebook, Google, etc. have their own similar mechanisms to prevent DoS attacks. I know for a fact that Google Search shows a CAPTCHA if a lot of search requests come from the same network.
The reason it is not wise to prevent DoS within a PHP script is that the PHP interpreter still has to be started whenever a request is made, which causes a lot of overhead. By letting the web server handle this you have less overhead.
EDIT:
As stated in another answer, it is also possible to prevent common DoS attacks by configuring the server's firewall. Checking for attacks with firewall rules happens before the web server is even hit, so there is even less overhead there. Furthermore, you can detect attacks on other ports as well (such as port scans). I believe a combination of the two works best, as they complement each other.
In my opinion, the best place to prevent DoS is at the lowest level: the entry point of the server. By setting up some network firewall rules with iptables, you can drop packets from senders that are hitting your server too hard.
It will be more efficient than going through PHP and Apache, since those need a (relatively) large number of processes to do the checking and may still bog down your website even if you do detect your attacker(s).
You can check on this topic for more information: https://serverfault.com/questions/410604/iptables-rules-to-counter-the-most-common-dos-attacks
I was recently asked by a client to load test their server to see if it could handle 10,000 concurrent users. To do this I've been using JMeter, but I'm getting less than favorable results.
Let me just say that this is my first time using JMeter, so I'm not entirely sure what I'm doing, but here's what I've found.
On a test of 1,000 concurrent users, all launched at once and each user hitting 2 pages, the failure rate is 96%. This seems bad... like really, really bad.
Is there something that could possibly be going wrong in JMeter? All I'm doing is sending HTTP GET requests to their server.
I don't know what kind of plan the client is on, but I do know that they are using GoDaddy as their provider, and in my experience GoDaddy's "unlimited" bandwidth is rather limited. Is this the problem, or, and I'm really hoping this is the case, is the Apache server for the website blocking the repeated attempts?
I get an error saying org.apache.http.conn.HttpHostConnectException: Connection to ~~~.com refused.
Is this the server being smart?
Or the server being bogged down?
Thanks in advance for your help, let me know if you need any more information.
Apache can't protect you from DDoS attacks on its own, but you can use some modules to reduce the risk: mod_qos and mod_evasive.
If you are using shared hosting from GoDaddy, it seems all the websites are loaded on one server, and GoDaddy may block your site or treat your load testing as a DDoS attack. For experiments like this you need an isolated VDS or cloud server.
If you want to protect your project, you can:
Use a load balancer
Use a caching tool (a small sketch follows this list)
Use firewall protection
Tune the OS
Use a CDN
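On the caching point, a minimal PHP sketch of what a caching tool buys you: keep the result of an expensive query in APCu for a short time so repeated hits (including a load test) don't all reach the database. The function name, query, and cache key are made up for illustration, and APCu is assumed to be installed.

```php
<?php
// Hypothetical helper: cache an expensive query result in APCu for 60 seconds.
function getPopularPosts(PDO $db): array
{
    $key    = 'popular_posts';
    $cached = apcu_fetch($key, $ok);
    if ($ok) {
        return $cached;                        // served straight from memory
    }

    $stmt = $db->query('SELECT id, title FROM posts ORDER BY views DESC LIMIT 10');
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    apcu_store($key, $rows, 60);               // keep for 60 seconds
    return $rows;
}
```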