Flooding a web script & bypassing security - php

I currently have a simple script that connects to MySQL. Each time a client connects to that script, I add +1 to the total max_connections inside the database for that IP.
In this script, I have a limit, for example:
if($user['max_cons'] < 5)
{
# ... do some things
}
However, if a user floods this web script with many threads at once, he is able to bypass it and open more than 5 connections. I tried it with a Python flooding script and it worked.
I guess it's because the MySQL queries need some time before the update lands in the database.
What can I do to prevent that?
(btw: I don't want to block the user even if he floods)
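Roughly, the pattern I described boils down to this read-then-check sequence (simplified; the helper names here are made up, not my actual code):
$user = load_user_by_ip($pdo, $_SERVER['REMOTE_ADDR']); // SELECT max_cons ... WHERE ip = ?
if ($user['max_cons'] < 5)
{
    // <-- a second request arriving right now still sees the old max_cons value
    increment_connections($pdo, $_SERVER['REMOTE_ADDR']); // UPDATE ... SET max_cons = max_cons + 1
    # ... do some things
}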
Thank you!

MySQL keeps a count of connections for you. Refer to this answer to obtain that number.
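For example, one way to read MySQL's own connection counter from PHP (a generic sketch, not necessarily what the linked answer shows; Threads_connected is a standard MySQL status variable):
$pdo = new PDO('mysql:host=localhost', 'user', 'password'); // placeholder credentials
$row = $pdo->query("SHOW GLOBAL STATUS LIKE 'Threads_connected'")->fetch(PDO::FETCH_ASSOC);
echo $row['Value']; // current number of open connections to the server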

If you are concerned about flooding or other forms of attacks, you need to act in the infrastructure and networking layers of your system as well. Once an attack gets to your code, you don't have much room to maneuver, as the application layer will already have been reached and compromised.
Moreover, if you design your defense this way, you would need to repeat or include this code in every other piece of code you program. Acting on the infrastructure and/or networking layers will give you the chance to add security and protection as a cross-cutting concern or an "aspect" of your system, adding it once and intercepting all requests.
Your code checking max_cons for each user seems more like a quota check to me, a feature of your website if you will. You could use that to prevent a user from accidentally using more connections than you want to allow, but if you want to defend against actual, intentional attacks, you need to do some research on infrastructure and networking security, as it's a very broad subject.
Two more notes:
Maybe your hosting provider already provides some sort of defense against this and you could rely on that? Or are you hosting it yourself?
Maybe take this to superuser.com?

You can use
sleep(1); // sleep for one second
just before checking the number of connections, but after you've increased the number of connections for that IP. Something like:
increaseConnectionsCount($user); // max_cons should be incremented inside this method
sleep(1);
$user = reloadUser();
if ($user['max_cons'] < 5) {
    // ...

How to stop too many requests in web applications?

I am looking for alternative security precautions for the Apache web server. I generally work with PHP and MySQL.
For processes like user login, I keep the IP address, try count, and last try time in the database, so if someone tries more than x times in the last n minutes or seconds, I simply block the IP address.
When there are a lot of different processes like user login, keeping IP addresses in the database does not sound right (it decreases performance and means a lot of work). I know that if you want security you need to sacrifice some performance, but is there a better way to stop users from making too many requests? Maybe an Apache module? Or a lower-level precaution on the server? I am especially trying to avoid unnecessary database work.
I considered using CAPTCHA, but using it for every form kills the user experience. And using it after x requests in n minutes still requires the first technique I mentioned.
A cache system might help, but I can't see how it prevents a brute-force attack or filling up the database with garbage. AFAIK it only helps with reading from the database (please correct me if I am wrong).
Other than #ranty's comment above (which is suitable unless you really have a lot of users at the same time), you could use a memory cache system such as memcached. It has a nice PHP interface and is very easy to use.
Dump every login attempt into memcache (using the IP address as key and the try count as value, cleared after a timespan). It's fast and should not cost too much in performance or development effort.
The code for this would look something like this:
$memcache_obj = memcache_connect('memcache_host', 11211);
$ip = $_SERVER['REMOTE_ADDR'];
$trycount = memcache_get($memcache_obj, $ip);
if ($trycount === false) $trycount = 0;
if ($trycount > 3) die('bad user');
memcache_set($memcache_obj, $ip, $trycount + 1, 0, 30);
You should try CloudFlare; it protects your website from all kinds of bots/hackers. Keep in mind that Stack Overflow is for questions about programming, not for questions about security or hosting issues.

Smart PHP Session Handling / Security

I've decided the best way to handle authentication for my apps is to write my own session handler from the ground up. Just like in Aliens, it's the only way to be sure a thing is done the way you want it to be.
That being said, I've hit a bit of a roadblock when it comes to my fleshing out of the initial design. I was originally going to go with PHP's session handler in a hybrid fashion, but I'm worried about concurrency issues with my database. Here's what I was planning:
The first thing I'm doing is checking IPs (or possibly even sessions) to honeypot unauthorized attempts. I've written up some conditionals that sleep naughtiness. Big problem here is obviously WHERE to store my blacklist for optimal read speed.
A session_id is generated, hashed, and stored in $_SESSION[myid]. A separate piece of the same token is stored in a second key, $_SESSION[mytoken]. The corresponding data is then stored in TABLE X, a location I'm not settled on (which is the root of this question).
Each subsequent request then verifies the [myid] & [mytoken] are what we expect them to be, then reissues new credentials for the next request.
Depending on the status of the session, more obvious ACL functions could then be performed.
So that is a high level overview of my paranoid session handler. Here are the questions I'm really stuck on:
I. What's the optimal way of storing an IP ACL? Should I be writing/reading to hosts.deny? Are there any performance concerns with my methodology?
II. Does my MitM prevention method seem ok, or am I being overly paranoid with comparing multiple indexes? What's the best way to store this information so I don't run into brick walls at 80-100 users?
III. Am I hammering on my servers unnecessarily with constant session regeneration + writebacks? Is there a better way?
I'm writing this for a small application initially, but I'd prefer to keep it a reusable component I could share with the world, so I want to make sure I make it as accessible and safe as possible.
Thanks in advance!
Writing to hosts.deny
While this is an alright idea if you want to completely IP-ban a user from your server, it will only work with a single server. Unless you have some kind of safe propagation across multiple servers (oh man, it sounds horrible already) you're going to be stuck on a single server forever.
You'll have to consider these points about using hosts.deny too:
Security: Opening up access to as important a file as hosts.deny to the web server user
Pain in the A: Managing multiple writes from different processes (denyhosts for example)
Pain in the A: Safely amending the file if you'd like to grant access at a later date to an IP that was previously banned
I'd suggest you simply ban the IP address on the application level in your application. You could even store the banned IP addresses in a central database so it can be shared by multiple subsystems with it still being enforced at the application level.
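If it helps, the application-level check against such a shared table can be as simple as this (the banned_ips table and column names are assumptions for illustration):
function ip_is_banned(PDO $pdo, $ip)
{
    $stmt = $pdo->prepare('SELECT 1 FROM banned_ips WHERE ip_address = ? LIMIT 1');
    $stmt->execute(array($ip));
    return (bool) $stmt->fetchColumn();
}

if (ip_is_banned($pdo, $_SERVER['REMOTE_ADDR'])) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}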
I. The optimal way of storing an IP ACL would be pushing banned IPs to an SQL database, which does not suffer from the concurrency problems of writing to files. An external script can then, on a regular basis or via a trigger, generate iptables rules. You do not need to re-read your database on every access; you write only when you detect misbehavior.
II. Fixation to IP is not a good thing on the public Internet if you offer service to clients behind transparent proxies or on mobile devices, because their IP changes. Let users choose in their preferences whether they want this feature (it depends on your audience and whether they know what an IP means...). My solution is to generate a unique token per (page) request, re-used by that page's AJAX requests (so as not to run into a resource problem with random numbers, session data store, ...). The tokens I generate are stored within the session and remembered for several minutes. This lets a user open several tabs, go back, and submit in an earlier opened tab. I do not bind to IP. (A minimal sketch of this token scheme follows at the end of this answer.)
III. It depends... there is not enough data from you to answer. The above may perfectly suit your needs for a ~500-user base coming to your site for 5 minutes a day, once. Or it may fit even 1000 unique concurrent users in an hour on a chat site/game; it depends on what your application is doing, and how well you cache data that can be cached.
Design well, test, benchmark. Test whether session handling is really your resource problem, and not something else. Good algorithms should not throw you into resource problems. That includes DoS defense, which should not be in-application code: applications may hint to DoS prevention mechanisms what to do, but leave the defense to specialized tools (see answer I.).
Anyway, if you run into resource problems in the future, the best way out is new hardware. It may sound rude or even incompetent to someone, but calculate the price of a new server in 6 months (practically 30% better hardware) versus the price of your work: pay $600 for a new server and have 130% of the horsepower, or pay yourself $100 monthly for improving things by 5% (okay, improve by 40%, but what a week of your work is worth may seriously vary).
If you design from scratch, read https://www.owasp.org/index.php/Session_Management first, then search for session hijacking, session fixation and similar strings on Google.
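As a rough illustration of the per-request token idea from II. (names, lifetime, and storage layout are assumptions, not a drop-in implementation):
session_start();

function issue_page_token($ttlSeconds = 300)
{
    $token = bin2hex(openssl_random_pseudo_bytes(16)); // unique token for this page view
    $_SESSION['page_tokens'][$token] = time() + $ttlSeconds; // remember it for a few minutes
    return $token; // embed it in the page and reuse it for that page's AJAX requests
}

function check_page_token($token)
{
    if (empty($_SESSION['page_tokens'])) {
        return false;
    }
    foreach ($_SESSION['page_tokens'] as $known => $expires) {
        if ($expires < time()) {
            unset($_SESSION['page_tokens'][$known]); // forget expired tokens
        }
    }
    return $token !== null && isset($_SESSION['page_tokens'][$token]);
}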

At what level should security be implemented in a social network web application?

I am developing a social web application in PHP/MySQL, and I would like to hear your advice about the better way to implement security. I am planning something like this: at the presentation level, I restrict the user to seeing only the items/content he is eligible to see, with the rights he is eligible for; and at the database level, whenever data is read, written, or updated, I verify that the person has the rights for that interaction with that part of the data. So for each action there are two layers of security, one at the view level and another at the database level. Would double checking be too much overhead?
Of course, this handles only the internal security issues.
I would say any app that needs even a modicum of security has to do it this way.
We have several applications that work in a similar manner. We test for authorization both at the application and the database levels. Among the benefits is that we can have multiple loosely related apps utilizing the same datastore with the exact same security model enforced.
Another benefit is that it is much harder to compromise the database in the event an application is pwnd.
Along these lines we don't even allow the apps to directly manipulate tables (selects/deletes/etc.). Instead, everything goes through stored procedures. All procs take a userid (non-guessable) and internally validate that the user is allowed to perform the requested function. If not, we fail silently. A side benefit is that we are immune to SQL injection.
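From PHP, calling such a procedure might look roughly like this (the procedure name, parameters, and PDO wiring are illustrative assumptions, not our actual schema):
// the procedure itself decides whether this userid may read the thread,
// and simply returns no rows if the check fails (the "fail silently" part)
$stmt = $pdo->prepare('CALL sp_get_thread_messages(:userid, :thread_id)');
$stmt->execute(array(':userid' => $userId, ':thread_id' => $threadId));
$messages = $stmt->fetchAll(PDO::FETCH_ASSOC); // empty result set on a denied request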
Besides allowing the database to be self-contained and to enforce its own authorization checks, this means that even if attackers were able to acquire a valid super-user account, it becomes time-consuming for them to do large swaths of damage. Given that we actively monitor db and app usage, we have the capability to detect and stop an attack in progress, thereby limiting any exposure.
Remember, it's only paranoia if they aren't out to get you. And they almost always are. ;)
That sounds like a good way to handle the situation, but why do the users need to have so many different levels of eligibility? Can't you just restrict them to the info that is in their account?

Convincing an IT Manager to allow SQL Server instead of Access

An IT Manager is not allowing the use of SQL Server with an ASP.NET website being developed. The current setup being replaced is a php site connecting to a Microsoft Access database. I have a few reasons of my own as to why SQL should be used, but would like as many strong arguments as possible (student vs. IT Man.). Does anyone have any strong arguments on why SQL should be used? Particularly posed towards an IT Manager who has stated "this is the way we have been doing it, and [it] has been working."
Thanks in advance!
UPDATE
In the interest of 'unloading' this question... If you would recommend keeping Access, when and why?
Do a load test on your finished product and prove that Access isn't meant for powering websites.
Write your code so that you can change out the database back end easily. Then when Access fails, migrate your data to a real DB, or MySQL if you have to.
Here are some Microsoft Web server stress tools
For the record, it is possible to send mostly SQL commands to the database and not keep an active connection open, thereby allowing far more than 6 or 7 connections at once, but the fact is that Access just isn't meant to do it. So the "it works fine" point is like saying it is fine to clean your floor with sticky tape: it works, but it isn't the right tool for the job.
UPDATED ANSWER TO UPDATED QUESTION:
Really the key here is the separation of data access in your code. You should be able to have more or less the same database structure in any number of DBMSs. Things can get complicated, but a map of tables should be universal. Then, should Access not work out, decide to use a different database.
Access CAN be used on kinda-high-traffic sites. With the SQL-statement-only routines I was able to build an e-commerce site that did a couple million a year in sales and had 60K visitors a month. It is possible, but maybe not ideal. Those aren't big numbers, but they are the biggest for any site I have been a part of.
Keep Access if the IT Manager is too busy to maintain another server, or unwilling to spend time configuring one. Ultimately guessing does nothing, and testing tells you everything you need to know. Test and make decisions on the results.
Here's a document from Microsoft that might help:
Access vs. Sql Server
Another Article.
My own personal thoughts, Access has no place in an environment that could scale beyond more than two or three concurrent connections. Would you use Excel as the back end?
Your manager has stated the reason he wants to use Access. Are you responsible for designing an alternative? Do you have any reason to think you will benefit from proving your manager wrong? What is your personal upside in this conversation? Are you certain that Access won't be "good enough"? Is the redesigned site going to have heavier or different loads (i.e. more users, or a less efficient design)? I'm not sure you want to be arguing with your manager that you can't implement something that does as well as the old design.
It's going to be a lot easier to let the project fail (if you expect that will be the outcome) and rescue it with SQL Server, than to get your manager to concede that you understand the situation better than he does.
Don't forget that for something as small as most Access Databases, you can use SQL Server Express Edition, which is free, so it won't cost you anything.
I found this nice quote as well:
"It is not recommended to use an Access database in a production web application. For production purposes, consider connecting to a Microsoft SQL Server database using the SqlDataSource or ObjectDataSource controls."
http://quickstarts.asp.net/QuickStartv20/aspnet/doc/ctrlref/data/accessdatasource.aspx
Don't argue, benchmark it. Real data should trump rhetoric (in a rational world, at least! ;-)
Set up test boxen with the two alternatives and run 'em hard. (If you're exposing web services, you can use a tool such as SoapUI for automated stress testing. There are lots of other tools in this space.) Collect stats and analyze them to determine the tradeoffs.
One simple reason to use SQL Server instead of a Microsoft Access Database: The MS Access DB can result in a bottleneck if the DB will be used heavily by a lot of users.
Licensing, for one. I doubt he wants to have hundreds of Office licenses (one for each end user that connects to the site). SQL Server has licensing that allows multiple connections at the same time without specific per-connection licenses.
Not to mention scalability and reliability issues. SQL Server is designed to be used and administered in a 24/7 environment; Access is not.
SQL can scale to squillions of simultaneous connections, Access cannot.
SQL can backup while operating, Access cannot.
SQL is designed as a highly robust data repository, Access is not designed with the same requirements in mind.
Access doesn't deal with multiple users very well (at all?). This means if you have more than one person trying to access or especially update your site it's very likely to die or at best be very slow.
There's much better tooling around SQL Server (linq to sql or entity framework or any number of ORMs).
SQL Server Express is a much better choice than Access for a website backend, and it's free.
Consider the option that maybe he is right. If it is working fine with Access, just leave it like this. If there are scalability problems in the future (the site is used by more than one user simultaneously), then it is his problem, not yours.
Also consider SQLite; it may be better than Access.
Just grab a test suite (or just throw one together):
compare the time taken to create a db with 1,000,000 entries.
search for an entry in the db.
vacuum the db.
delete the db.
do a couple of the operations that you think will be done most often on the db, a couple of times.
And do it in front of him to compare (write a script). My guess is that either your IT manager is joking, or the site that you are working on is non-critical and he doesn't want to allocate resources (including you).
MS Access is for 1 desk, 1 user! I spent a year on a previous project detaching an application (growing to enterprise size in terms of users) from Access because of its strange locking behavior and awful performance. SQL Server Express Edition is a good starting point, as echoed in previous posts.

Delaying execution of PHP script

What is the best way to stop bots, malicious users, etc. from executing php scripts too fast? Is it ok if I use the usleep() or sleep() functions to simply do "nothing" for a while (just before the desired code executes), or is that plain stupid and there are better ways for this?
Example:
function login() {
    // enter login code here
}
function logout() {
    // enter logout code here
}
If I just put, say, usleep(3000000) before the login and logout codes, is that ok, or are there better, wiser ways of achieving what I want to achieve?
edit: Based on the suggestions below, does usleep or sleep then only cause the processor to disengage from the current script being executed for the current user, or does it cause it to disengage from the entire service? i.e. if one user+script invokes a sleep/usleep, will all concurrent users+scripts be delayed too?
The way most web servers work (Apache for example) is to maintain a collection of worker threads. When a PHP script is executed, one thread runs the PHP script.
When your script does sleep(100), the script takes 100 seconds to execute. That means your worker thread is tied up for 100 seconds.
The problem is, you have a very finite number of worker threads - say you have 10 threads, and 10 people log in - now your web server cannot serve any further responses.
The best way to rate-limit logins (or other actions) is to use some kind of fast in-memory storage (memcached is perfect for this), but that requires running a separate process and is pretty complicated (you might do this if you run something like Facebook).
Simpler, you could have a database table that stores user_id or ip_address, first_failed and failure_counter.
Every time you get a failed login, you (in pseudo code) would do:
if (first_failed in last hour) and (failure_counter > threshold):
    return error_403("Too many authentication failures, please wait")
elseif first_failed in last hour:
    increment failure_counter
else:
    reset first_failed to current time
    increment failure_counter
Maybe not the most efficient, and there are better ways, but it should stop brute-forcing pretty well. Using memcached is basically the same, but the database is replaced with memcached (which is quicker).
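In PHP with PDO, that pseudocode could look roughly like this (the login_failures table and its columns are assumptions; adjust them to your schema):
// returns false when the caller should answer with a 403 "Too many authentication failures"
function register_failed_login(PDO $pdo, $ip, $threshold = 5)
{
    $stmt = $pdo->prepare('SELECT first_failed, failure_counter FROM login_failures WHERE ip_address = ?');
    $stmt->execute(array($ip));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    $inLastHour = $row && strtotime($row['first_failed']) > time() - 3600;

    if ($inLastHour && $row['failure_counter'] > $threshold) {
        return false;
    }
    if ($inLastHour) {
        $pdo->prepare('UPDATE login_failures SET failure_counter = failure_counter + 1 WHERE ip_address = ?')
            ->execute(array($ip));
    } else {
        // first failure in the window (or a stale record): restart the window;
        // REPLACE assumes ip_address is a unique key
        $pdo->prepare('REPLACE INTO login_failures (ip_address, first_failed, failure_counter) VALUES (?, NOW(), 1)')
            ->execute(array($ip));
    }
    return true;
}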
"to stop bots, malicious users, etc. from executing php scripts too fast?"
I would first ask what you are really trying to prevent? If it is denial-of-service attacks, then I'd have to say there is nothing you can do if you are limited by what you can add to PHP scripts. The state of the art is so much beyond what we as programmers can protect against. Start looking at sysadmin tools designed for this purpose.
Or are you trying to limit your service so that real people can access it but bots cannot? If so, I'd look at some "captcha" techniques.
Or are you trying to prevent users from polling your site every second looking for new content? If so, I'd investigate providing an RSS feed or some other way of notifying them so they don't eat up your bandwidth.
Or is it something else?
In general, I'd say neither sleep() nor usleep() is a good way.
Your suggested method will force ALL users to wait unnecessarily before logging in.
Most LAMP servers (and most routers/switches, actually) are already configured to prevent Denial of Service attacks. They do this by denying multiple consecutive requests from the same IP address.
You don't want to put a sleep in your PHP. Doing so will greatly reduce the number of concurrent requests your server can handle, since you'll have connections held open waiting.
Most HTTP servers have features you can enable to avoid DoS attacks, but failing that you should just track IP addresses you've seen too many times too recently and send them a 403 Forbidden with a message asking them to wait a second.
If for some reason you can't count on REMOTE_ADDR being user-specific (everyone behind the same firewall, etc.), you could provide a challenge in the login form and make the remote browser do an extended calculation on it (say, factor a number) that you can quickly check on the server side (with a speedy multiplication).
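A very rough sketch of that challenge idea (function names are made up; a real version would pick large primes so factoring is genuinely slow for the client):
session_start();

// when rendering the login form: hand the browser a product it must factor
function issue_factor_challenge()
{
    $a = mt_rand(100003, 999983); // ideally primes, so trial division takes the client a while
    $b = mt_rand(100003, 999983);
    $_SESSION['challenge_product'] = $a * $b;
    return $_SESSION['challenge_product']; // embed this number in the form
}

// when the form comes back: the server-side check is just one multiplication
function verify_factor_challenge($f1, $f2)
{
    $f1 = (int) $f1;
    $f2 = (int) $f2;
    return isset($_SESSION['challenge_product'])
        && $f1 > 1 && $f2 > 1
        && $f1 * $f2 === $_SESSION['challenge_product'];
}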
