I've decided the best way to handle authentication for my apps is to write my own session handler from the ground up. Just like in Aliens, it's the only way to be sure a thing is done the way you want it done.
That said, I've hit a bit of a roadblock while fleshing out the initial design. I was originally going to use PHP's session handler in a hybrid fashion, but I'm worried about concurrency issues with my database. Here's what I was planning:
The first thing I'm doing is checking IPs (or possibly even sessions) to honeypot unauthorized attempts. I've written up some conditionals that sleep() when they detect naughtiness. The big problem here is obviously WHERE to store my blacklist for optimal read speed.
A session_id is generated, hashed, and stored in $_SESSION['myid']. A separate piece of the same token is stored in a second $_SESSION['mytoken']. The corresponding data is then stored in TABLE X, a location I'm not settled on (which is the root of this question).
Each subsequent request then verifies that [myid] and [mytoken] are what we expect them to be, then reissues new credentials for the next request.
Depending on the status of the session, more obvious ACL functions could then be performed.
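Roughly, the verify-and-reissue step above might look like this (nothing here is settled, and it assumes PHP 7+ for random_bytes()):

<?php
// Rough sketch of the per-request check - names and token derivation are placeholders.
session_start();

$expectedId    = $_SESSION['myid'] ?? null;
$expectedToken = $_SESSION['mytoken'] ?? null;

// In this sketch, mytoken is derived from myid plus the user agent.
$derived = $expectedId === null
    ? null
    : hash('sha256', $expectedId . ($_SERVER['HTTP_USER_AGENT'] ?? ''));

if ($expectedToken === null || $derived === null || !hash_equals($expectedToken, $derived)) {
    session_destroy();
    http_response_code(403);
    exit('Invalid session');
}

// Reissue fresh credentials for the next request.
session_regenerate_id(true);
$_SESSION['myid']    = bin2hex(random_bytes(16));
$_SESSION['mytoken'] = hash('sha256', $_SESSION['myid'] . ($_SERVER['HTTP_USER_AGENT'] ?? ''));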
So that is a high level overview of my paranoid session handler. Here are the questions I'm really stuck on:
I. What's the optimal way of storing an IP ACL? Should I be writing/reading to hosts.deny? Are there any performance concerns with my methodology?
II. Does my MitM prevention method seem ok, or am I being overly paranoid with comparing multiple indexes? What's the best way to store this information so I don't run into brick walls at 80-100 users?
III. Am I hammering on my servers unnecessarily with constant session regeneration + writebacks? Is there a better way?
I'm writing this for a small application initially, but I'd prefer to keep it a reusable component I could share with the world, so I want to make it as accessible and safe as possible.
Thanks in advance!
Writing to hosts.deny
While this is an alright idea if you want to completely IP-ban a user from your server, it will only work on a single server. Unless you have some kind of safe propagation across multiple servers (oh man, it sounds horrible already), you're going to be stuck on a single server forever.
You'll have to consider these points about using hosts.deny too:
Security: Opening up access to as important a file as hosts.deny to the web server user
Pain in the A: Managing multiple writes from different processes (denyhosts for example)
Pain in the A: Safely amending the file if you'd like to re-grant access at a later date to an IP that was previously banned
I'd suggest you simply ban the IP address at the application level. You could even store the banned IP addresses in a central database so the list can be shared by multiple subsystems while still being enforced at the application level.
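As a rough illustration (the table and column names are just examples), the application-level check can be as simple as:

<?php
// Sketch: reject banned IPs at the top of every request.
// Assumed schema: banned_ips(ip VARCHAR(45) PRIMARY KEY).
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stmt = $pdo->prepare('SELECT 1 FROM banned_ips WHERE ip = ? LIMIT 1');
$stmt->execute([$_SERVER['REMOTE_ADDR']]);

if ($stmt->fetchColumn()) {
    http_response_code(403);
    exit('Access denied');
}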
I. The optimal way of storing an IP ACL is to push banned IPs to an SQL database, which does not suffer from the concurrency problems of writing to files. An external script can then generate iptables rules on a schedule or via a trigger. You do not need to re-read your database on every access; you only write when you detect misbehavior.
II. Binding a session to an IP is not a good idea on the public Internet if you serve clients behind transparent proxies or on mobile devices - their IP changes. Let users choose in their preferences whether they want this feature (it depends on your audience and whether they even know what an IP is). My solution is to generate a unique token per page request, re-used by that page's AJAX requests (so as not to run into a resource problem with random numbers and session storage). The tokens I generate are stored in the session and remembered for several minutes. This lets the user open several tabs, or go back and submit in an earlier opened tab. I do not bind to IP. (A minimal sketch of this token pool follows at the end of this answer.)
III. It depends - there is not enough information from you to answer. The above may perfectly suit a user base of ~500 people coming to your site for five minutes a day, once; or it may even fit 1,000 unique concurrent users per hour on a chat site or game. It depends on what your application is doing and how well you cache the data that can be cached.
Design well, test, benchmark. Check whether session handling really is your resource problem and not something else. Good algorithms should not land you in resource trouble. The same goes for DoS defense, which should not be in-application code: the application may hint to the DoS-prevention mechanisms what to do, but leave the actual defense to specialized tools (see answer I).
Anyway, if you get into resource problems in the future, the best way out is often new hardware. It may sound lazy or even incompetent to some, but compare the price of a new server in six months (practically 30% better) with the price of your work: pay $600 for a new server and gain an extra 130% of horsepower, or pay yourself $100 monthly for a 5% improvement (okay, maybe 40%, but what your time is worth may seriously vary).
If you design from scratch, read https://www.owasp.org/index.php/Session_Management first, then search for session hijacking, session fixation and similar strings on Google.
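To illustrate point II, here is a minimal sketch of the per-page token pool (the key names and lifetime are arbitrary; PHP 7+ assumed):

<?php
// Sketch: a pool of per-page tokens kept in the session. Tokens stay valid
// for a few minutes so multiple tabs and the back button keep working.
session_start();

const TOKEN_LIFETIME = 300; // seconds - pick what suits your users

// Drop expired tokens.
$_SESSION['tokens'] = array_filter(
    $_SESSION['tokens'] ?? [],
    function ($issuedAt) { return $issuedAt > time() - TOKEN_LIFETIME; }
);

// Issue a token for this page render; embed it in forms and AJAX calls.
$token = bin2hex(random_bytes(16));
$_SESSION['tokens'][$token] = time();

// On submit (form or AJAX), accept any token that is still in the pool.
function tokenIsValid($submitted)
{
    return isset($_SESSION['tokens'][$submitted]);
}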
Related
We are facing an issue with huge amounts of traffic from the same IP addresses, which is consuming a lot of CPU on our server.
We have already installed a Magento firewall extension, but we have to add the IPs to the blacklist manually.
Is there any way to save the CPU being wasted on spam users and hackers who throw traffic at the site that is not relevant to the website?
We have already enabled the Magento cache and a full page cache extension in Magento.
What more can we do to protect our Magento website from this kind of traffic so that CPU is kept free for other processes?
No real solution, but some things that probably have to be considered:
First, I would check what defines a spammer in your case.
How many times does the spammer do a certain action? Does he follow a special interaction pattern?
If a user performs an action y that is known to be done by spammers, you could start to track repetitions of that action. After x times you could block the user.
The difficulty here is to find a difference in the usage pattern between a spammer and a regular user who is probably just doing things fast.
The reason you should look for a spammer pattern is that this way you don't need to save every user's IP address. Sure, you could do that, so that you don't need to find a pattern any more and just check how often the IP address is interacting, but this will fill up the database pretty fast.
By tracking and saving I am talking about storing the user's IP in a database. So you have to find a way to store a minimum of false positives (good users) - zero at best - in the database. Otherwise good users get blacklisted for no reason.
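For illustration, such tracking could look roughly like this (the table name and threshold are invented):

<?php
// Sketch: count a suspicious action per IP and block after a threshold.
// Assumed schema: action_log(ip VARCHAR(45) PRIMARY KEY, hits INT, last_hit DATETIME).
$pdo       = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');
$threshold = 20; // the "x times" from above - tune it to your spammer pattern

$stmt = $pdo->prepare(
    'INSERT INTO action_log (ip, hits, last_hit) VALUES (?, 1, NOW())
     ON DUPLICATE KEY UPDATE hits = hits + 1, last_hit = NOW()'
);
$stmt->execute([$_SERVER['REMOTE_ADDR']]);

$check = $pdo->prepare('SELECT hits FROM action_log WHERE ip = ?');
$check->execute([$_SERVER['REMOTE_ADDR']]);

if ((int) $check->fetchColumn() > $threshold) {
    http_response_code(429);
    exit('Too many requests');
}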
Maybe the approach with the least implementation effort and the minimum risk is to keep things the way they are and blacklist spammers manually. That, on the other hand, is an ongoing effort for you.
I have a LAMP server where I run a website that I want to protect against bulk scraping/downloading. I know there is no perfect solution for this and that an attacker will always find a way, but I would like to have at least some "protection" that makes stealing the data harder than having nothing at all.
The website has circa 5,000 subpages with valuable text data and a couple of pictures on each page. I would like to be able to analyze incoming HTTP requests online, and if there is suspicious activity (e.g. tens of requests in one minute from one IP), automatically blacklist that IP address from further access to the site.
I fully realize that what I am asking for has many flaws, but I am not really looking for a bullet-proof solution, just a way to limit script kiddies from "playing" with easily scraped data.
Thank you for your on-topic answers and possible solution ideas.
Although this is a pretty old post, I think the answer isn't quite complete and I thought it worthwhile to add my two cents. First, I agree with @symcbean: try to avoid using IPs and instead use a session, a cookie, or another method to track individuals; otherwise you risk lumping together groups of users sharing an IP. The most common method for rate limiting, which is essentially what you are describing ("tens of requests in one minute from one IP"), is the leaky bucket algorithm.
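A very rough leaky-bucket sketch, keyed by session as suggested above (it assumes the APCu extension; the capacity and leak rate are arbitrary):

<?php
// Minimal leaky bucket keyed by session id (swap in the IP if that suits you better).
session_start();

$key      = 'bucket_' . session_id();
$capacity = 30;   // burst size before we start refusing
$leakRate = 0.5;  // requests "drained" per second

$bucket = apcu_fetch($key) ?: ['level' => 0, 'time' => microtime(true)];

// Drain the bucket in proportion to the time since the last request.
$now             = microtime(true);
$bucket['level'] = max(0, $bucket['level'] - ($now - $bucket['time']) * $leakRate);
$bucket['time']  = $now;

// Add this request; overflowing means the client is going too fast.
$blocked = ++$bucket['level'] > $capacity;
apcu_store($key, $bucket, 3600);

if ($blocked) {
    http_response_code(429);
    exit('Slow down');
}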
Other ways to combat web scrapers are:
Captchas
Make your code hard to interpret, and change it up frequently. This makes scripts harder to maintain.
Download IP lists of known spammers, proxy servers, TOR exit nodes, etc. This is going to be a lengthy list, but it's a great place to start. You may also want to block all Amazon EC2 IPs.
This list, and rate limiting, will stop simple script kiddies, but anyone with even moderate scripting experience will easily be able to get around you. Combating scrapers on your own is a futile effort, but my opinion is biased because I am a co-founder of Distil Networks, which offers anti-scraping protection as a service.
Sorry - but I'm not aware of any anti-leeching code available off-the-shelf which does a good job.
How do you limit access without placing burdens on legitimate users and without providing a mechanism for DoSing your site? Like spam prevention, the best solution is to use several approaches and maintain a badness score.
You've already mentioned looking at the rate of requests, but bear in mind that users will increasingly be connecting from NAT networks, e.g. IPv6 PoPs. A better approach is to check per session: you don't need to require your users to register and log in (although OpenID makes this a lot simpler), but you could redirect them to a defined starting point whenever they make a request without a current session and log them in with no username/password. Checking the referer (and that the referer really does point to the current content item) is a good idea too, as is tracking 404 rates. Add road blocks (when the score exceeds a threshold, redirect to a captcha or require a login). Checking the user agent can be indicative of attacks, but it should be used as part of the scoring mechanism, not as a yes/no criterion for blocking.
Another approach, rather than interrupting the flow, is to start substituting content when the thresholds are triggered, or when the same external hosts appear repeatedly in your referer headers.
Do not tarpit connections unless you've got a lot of resources server-side!
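A minimal sketch of the scoring idea described above (the signals, weights and threshold are invented for illustration):

<?php
// Sketch: accumulate a per-session "badness" score from several weak signals.
session_start();

$score = $_SESSION['badness'] ?? 0;

// A missing referer, or one pointing off-site, is a weak signal on its own.
$referer = $_SERVER['HTTP_REFERER'] ?? '';
if ($referer === '' || parse_url($referer, PHP_URL_HOST) !== $_SERVER['HTTP_HOST']) {
    $score += 1;
}

// An empty or tool-like user agent adds a bit more.
$ua = $_SERVER['HTTP_USER_AGENT'] ?? '';
if ($ua === '' || stripos($ua, 'wget') !== false || stripos($ua, 'curl') !== false) {
    $score += 3;
}

$_SESSION['badness'] = $score;

// Road block: past the threshold, demand a captcha rather than banning outright.
if ($score > 25) {
    header('Location: /captcha.php');
    exit;
}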
Referrer checking is one very simple technique that works well against automated attacks. You serve content normally if the referrer is your own domain (ie the user has reached the page by clicking a link on your own site), but if the referrer is not set, you can serve alternate content (such as a 404 not found).
Of course you need to set this up to allow search engines to read your content (assuming you want that) and also be aware that if you have any flash content, the referrer is never set, so you can't use this method.
Also it means that any deep links into your site won't work - but maybe you want that anyway?
You could also enable it only for images, which makes it a bit harder for them to be scraped from the site.
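For example, a crude PHP front for images might look like this (the paths are examples; it assumes the fileinfo extension):

<?php
// Sketch: only serve an image when the referer is our own host,
// otherwise pretend the file does not exist.
$referer = $_SERVER['HTTP_REFERER'] ?? '';

if (parse_url($referer, PHP_URL_HOST) !== $_SERVER['HTTP_HOST']) {
    http_response_code(404);
    exit;
}

// basename() keeps the request inside the images directory.
$file = __DIR__ . '/images/' . basename($_GET['img'] ?? '');
if (!is_file($file)) {
    http_response_code(404);
    exit;
}

header('Content-Type: ' . mime_content_type($file));
readfile($file);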
Something that I've employed on some of my websites is to block known User-Agents of downloaders or archivers. You can find a list of them here: http://www.user-agents.org/ (unfortunately, not easy to sort by Type: D). In the host's setup, I enumerate the ones that I don't want with something like this:
SetEnvIf User-Agent ^Wget/[0-9\.]* downloader
Then I can do a Deny from env=downloader in the appropriate place. Of course, changing user-agents isn't difficult, but at least it's a bit of a deterrent if going through my logs is any indication.
If you want to filter by requests per minute or something along those lines, I don't think there's a way to do that in Apache itself. I had a similar problem with ssh and saslauth, so I wrote a script to monitor the log files; if a certain number of failed login attempts were made within a certain amount of time, it appended an iptables rule that blocked that IP from accessing those ports.
If you don't mind using an API, you can try our https://ip-api.io
It aggregates several databases of known IP addresses of proxies, TOR nodes and spammers.
I would advise one of two things:
The first would be: if you have information that other people want, give it to them in a controlled way, say via an API.
The second would be to try and copy Google: if you scrape Google's results a lot (and I mean a few hundred times a second), it will notice and force you to solve a captcha.
I'd say that if a site is visited 10 times a second, it's probably a bot, so give it a captcha to be sure.
If a bot crawls your website slower than 10 times a second, I see no reason to try and stop it.
You could use a counter (DB or Session) and redirect the page if the limit is triggered.
<?php
// Per-session request counter; redirect once the limit is exceeded.
session_start();
$limit = 100; // requests allowed before the redirect kicks in
$_SESSION['hits'] = ($_SESSION['hits'] ?? 0) + 1;
if ($_SESSION['hits'] > $limit) {
    header('Location: /blocked.php');
    exit;
}
I think dynamic blocking of IPs using an IP blocker will work better.
This is a beginner question...
In a website, what type of data should or should not be included in the session? I understand that I should not include any info that needs to remain secure. I'm more interested in programming best practice. For example, it is possible to put into the session some data that would otherwise be passed from page to page as a dependency. Wouldn't that amount to creating a global variable?
Generally speaking, what kind of data has or hasn't its place inside a session table?
Thanks,
JDelage
The minimum amount of information needed to maintain state between requests.
You should treat your session as write-once, read-many storage - but one which is rather volatile; i.e. the state of your underlying application data should remain consistent (or recoverable) if all the sessions suddenly disappeared.
There are some exceptions to this (normally the shopping basket would be stored in the session, but you might want to perform stock adjustments to 'reserve' items prior to checkout). Here items may be added/edited/changed multiple times, so it's not really write-once, but by pre-reserving stock items you maintain the recoverability of the database. An implication of this is that you should reverse the stock adjustments when the session expires without the purchase completing.
If you start trying to store information about the data relating to individual page turns, you're quickly going to get into problems when the user starts clicking on the forward/back buttons or opens a new window.
In general you can put anything you like in a session. It's bad practice to put information in a session that has to be present to make your page run without (technical) errors.
I suggest to minimize the amount of data in your session as much as possible.
Things you can save in the session so that you don't have to make another database query for info that isn't going to change: their username, address, phone number, account balance, security permissions on your site, etc.
(This is perhaps more than you're looking for, but might make for good additional information to add to the good answers already posted.)
Since you mention best practices, you may want to look into some projects/technologies which can be used to take the idea of session state a bit further. One common pitfall with horizontally scaling web applications across multiple servers is maintaining session state between them. (User A logs in to Server A which stores the user's session, but on the next request hits Server B which doesn't know about User A's session, etc.)
One of the things I always end up saying to myself and to colleagues is that session by itself isn't really the best place to store data, even if that data is highly transient in nature. A web server is a request/response system, not a data store. It's highly tuned to the former, but not always so great for the latter.
Thus, there are ways to externalize your application's session data (or any stateful data, which should really be kept to a design minimum in the RESTful stateless nature of the web) from your web server and to another system. Memcached is a very common tool for this. There are also drop-in session replacements (or configurable session options for various frameworks/environments) which store session in a database like SQL or MySQL.
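For example, with the PECL memcached extension installed, pointing PHP's sessions at a Memcached pool is just a configuration change (the host and port are examples):

<?php
// Sketch: store PHP sessions in Memcached instead of local files.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'memcache1.example.com:11211');

session_start();
$_SESSION['user_id'] = 42; // now readable from any web server in the farm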
One idea I've been toying with lately is to store session data (well, any transient data where it's OK to lose it in a catastrophe) in a NoSQL database. CouchDB and MongoDB are my current top choices for this, but there's no shortage of other options. CouchDB has excellent horizontal scaling, MongoDB is ridiculously fast when run entirely in-memory, etc.
One of the major benefits of something like this, at least for me, is that deployments can easily become non-events. The web services on any given server can be re-started and the applications therein re-initialized without losing stateful data. If the data is persisted to the disk (that is, not entirely run in-memory) then the server can even be rebooted without losing it. Servers/services can drop in and out of the farm and users would never know the difference.
Additionally, externalizing this data allows you to analyze the data in potentially useful ways. Query it, run metrics on it, interface with it via other web applications or entirely offline tools, etc. It really opens up the options as a project grows in complexity.
(Again, this isn't really intended to answer your question, but rather to just add information that you may find useful. It's something my colleagues and I have been tinkering with as of late and your question seemed like a good place to mention it.)
Of course, I store all players' IP addresses in MySQL and I can check whether someone with the same IP address exists before a person registers, but then he can just register on my page from school or wherever he wants. So, any suggestions?
The only way that proves particularly effective is to make people pay for accessing your game.
Looking behind the question:
Why do you want to stop the same person registering and playing twice?
What advantage will they have if they do?
If there's no (or only a minimal) advantage then don't waste your time and effort trying to solve a non-problem. Also putting up barriers to something will make some people more determined to break or circumvent them. This could make your problem worse.
If there is an advantage then you need to think of other, more creative, solutions to that problem.
You can't. There is no way to uniquely identify users over the internet. Don't use IP addresses, because there could be many people using the same IP, or people using dynamic IPs.
Even if somehow you made them give you a piece of legal identification, you still wouldn't be absolutely sure that they were not registered on the site twice as two different accounts.
I would check the user's IP every time they log onto the game, then log users who come from the same IP and how much they interact. You may find that you get some users from the same IP (i.e. roommates or spouses who play together and are not actually the same person). You may just have to flag these users and monitor their interactions - for example, is there a chat service in the game? If they don't ever talk to each other, they're more than likely the same person; review them on an individual basis.
If it's in a web browser you could bring in information like the OS or the browser. Even that doesn't make it safe, just safer.
It would only take the hackers a little more time, and you have to allow for the possibility that some people play on systems with the same OS and browser.
The safest thing would be that people on the same IP cannot do things with each other, like trading - or, as in the poker game PKR, they cannot sit at the same table.
Another thing that would be wise to do is to use captchas. They are very user-unfriendly, but they keep a lot of bots out.
If it is a browser-based game, Flash cookies are a relatively resilient way to identify a computer. Or have them pay a minimal amount and identify them by credit card number - that way, it still won't be hard to make multiple accounts (friends' and family members' cards), but it will be hard to make a lot of them. Depending on your target demographic, it might prohibit potential players from registering, though.
The best approach is probably not worrying much about it and setting the game balance in such a way that progress is proportional to time spent playing (and use a strong captcha to keep bots away). That way, using multiple accounts will offer no advantage.
There are far too many ways to circumvent any restrictions to limit to a single player. FAR too many.
Unless the additional player is causing some sort of problem it is not worth the attempt. You will spend most of your time chasing 'ghosts' instead of concentrating on improving the game and making more money.
Neither IP bans nor Flash cookies work as a control mechanism.
Browser fingerprinting does not work either; people can easily use a second browser.
Even UUIDs will not work, as those too can be spoofed.
And if you actually did manage to discover and implement a working method, the user could simply use a second computer or laptop and what then?
People can also sandbox a browser so as to use the same browser twice thus defeating browser identification.
And then there are virtual machines....
We have an extreme amount of control freaks out there wanting to control every aspect of computing. And the losers are the people who do the computing.
Every tracking mechanism I have ever run into I can circumvent easily - be it UUIDs, MAC addresses, IP addresses, fingerprinting, etc. And it is very easy to do, too.
The best suggestion is to simply watch for any ToU violations and address the problem accordingly.
I am using the Zend Framework but my question is broadly about sessions / databases / auth (PHP MySQL).
Currently this is my approach to authentication:
1) User signs in, the details are checked in database.
- Standard stuff really.
2) If the details are correct, only the user's unique ID and a security token (user unique ID + IP + browser info + salt) are stored in the session. The session is written to the filesystem. (A rough sketch of building such a token follows after step 4.)
I've been reading around and many are saying that storing stuff in sessions is not a good idea, and that you should really only write a unique ID which refers back to the user's details, plus a security token to prevent session hijacking. So this is the approach I've taken: I used to write the user's details into the session, but I've moved that out. I wanted to know your opinions on this.
I'm keeping sessions in the filesystem since I don't run on multiple servers, and since I'm only writing a tiny bit of data to sessions, I thought performance would be better keeping sessions in the filesystem to reduce load on the database. Once the session is written on authentication, it really is read-only from then on.
3) The rest of the user's details (like subscription details, permissions, account info, etc.) are cached in the filesystem (this can always be moved to memory if I wanted even more performance).
So rather than keeping the user's details in the session, they are cached in the filesystem. I'm using Zend_Cache and the unique cache ID is something like md5('/cache/auth/2892'), where the number is the unique ID of the user. I guess the benefit of this method is that once the user is logged in, there are essentially no database queries being run to get the user's details. I just wonder if this approach is better than keeping the whole lot in the session...
4) As the user moves throughout the site the only thing that is checked is the ID in the session and the security token.
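For reference, the step-2 token is built roughly like this (the salt is a placeholder; 2892 is the example user ID from step 3):

<?php
// Rough shape of the security token from step 2 and the check from step 4.
session_start();

function makeAuthToken($userId)
{
    $salt = 'replace-with-a-real-secret';
    return hash('sha256', $userId . $_SERVER['REMOTE_ADDR']
        . $_SERVER['HTTP_USER_AGENT'] . $salt);
}

// On login:
$userId = 2892;
$_SESSION['user_id'] = $userId;
$_SESSION['token']   = makeAuthToken($userId);

// On every later request:
if (!hash_equals($_SESSION['token'] ?? '', makeAuthToken($_SESSION['user_id'] ?? 0))) {
    // treat the visitor as not logged in
}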
So, overall: 1) is the filesystem more efficient than a database for this purpose, 2) have I taken enough security precautions, and 3) is separating the user's details from the session into a cached file a pointless task?
Thanks.
You're asking a range of things.
Sessions
Sessions in PHP are fast and efficient. Thousands of small disk-based sessions on a moderately up-to-date server is not going to be a performance bottleneck. Neither is writing your own handlers (very easy; the PHP manual has examples) to put it in a database.
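As a sketch only (assuming PHP 8 and a sessions(id, data, modified) table), a database-backed handler can be as small as this:

<?php
// Sketch of a PDO-backed session handler.
// Assumed schema: sessions(id VARCHAR(128) PRIMARY KEY, data TEXT, modified TIMESTAMP).
class DbSessionHandler implements SessionHandlerInterface
{
    public function __construct(private PDO $pdo) {}

    public function open($path, $name): bool { return true; }
    public function close(): bool { return true; }

    public function read($id): string
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute([$id]);
        return (string) $stmt->fetchColumn();
    }

    public function write($id, $data): bool
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, modified) VALUES (?, ?, NOW())');
        return $stmt->execute([$id, $data]);
    }

    public function destroy($id): bool
    {
        return $this->pdo->prepare('DELETE FROM sessions WHERE id = ?')->execute([$id]);
    }

    public function gc($max_lifetime): int|false
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE modified < FROM_UNIXTIME(?)');
        $stmt->execute([time() - $max_lifetime]);
        return $stmt->rowCount();
    }
}

session_set_save_handler(
    new DbSessionHandler(new PDO('mysql:host=localhost;dbname=app', 'user', 'pass')),
    true
);
session_start();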
About the only best-practice rule for sessions is: give the web browser just one thing, the session ID. Putting only the logged-in user ID in the session and retrieving the details from the DB when you need them is also best practice; it means user information can be changed and the user gets it on the next page load.
It doesn't sound like you will have this problem but beware of just throwing a lot of stuff into a session. A few K of data (say, a few dozen scalars) is fine. Tossing many objects and large arrays of data in there will be noticed. If you do this for a specific page, remember to throw it away in the session once the page is done with it.
You may also want to implement your own login timeout with a session variable. The garbage collection settings in php.ini are intended for managing the storage of session data, not for doing login timeouts.
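Something along these lines works (the 15-minute figure is arbitrary):

<?php
// Sketch of an application-level idle timeout.
session_start();

$timeout = 15 * 60; // seconds of inactivity before forcing a fresh login

if (isset($_SESSION['last_activity']) && time() - $_SESSION['last_activity'] > $timeout) {
    session_unset();
    session_destroy();
    header('Location: /login.php?expired=1');
    exit;
}
$_SESSION['last_activity'] = time();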
Caching
This is a complex topic and you will probably need to start gathering metrics (generally page load times) before implementing anything.
To implement any sort of caching, you do need to consider the lifetime of the data you're caching and how expensive re-generating it will be on a cache miss. Just throwing memcache at the problem is not a solution; you still need to understand your caching parameters and how memcache interprets them. This also applies to any persistent storage solution, including disk-based sessions, but I'm highlighting memcache because it is high-profile and has quite an aggressive expiry mechanism.
An often overlooked example is loading the same data from the database multiple times in a page: a good ORM will avoid that for you without relying on MySQL query caching. Another overlooked example is small queries that run on every page: cache these for just a few seconds on a moderately busy server and the database load will drop considerably.
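For instance, a tiny wrapper using APCu (assuming that extension; the TTL and query are illustrative) takes care of those run-on-every-page queries:

<?php
// Sketch: cache the result of a cheap-but-frequent query for a few seconds.
function cachedQuery(PDO $pdo, $sql, $ttl = 5)
{
    $key  = 'q_' . md5($sql);
    $rows = apcu_fetch($key, $hit);
    if (!$hit) {
        $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
        apcu_store($key, $rows, $ttl);
    }
    return $rows;
}

$pdo        = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$categories = cachedQuery($pdo, 'SELECT id, name FROM categories');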
Finally, caching at multiple levels is often much more effective and scalable than caching at a single level, because the levels can leverage each other's expiries. It also abstracts well: for example, hide it in your ORM and it's theoretically available invisibly and automatically for all your objects.
1) You can easily test which is faster by making a loop script (a throwaway benchmark sketch is at the end of this answer). Anyway, a drawback of using the filesystem is that you need to update the cached file every time you update the DB; copies of data are in general a bad thing. Also, unless you have millions of visitors, I don't think there will be any practical difference in speed between the strategies. And, not to forget, sessions are also stored in the filesystem - one file for each session.
Is a query faster than the filesystem? It depends. Is query caching enabled? In MySQL it is by default, so you might be lucky and only need a memory access; if not, the DB needs to do a filesystem access anyway. Second, how well is your query optimized with indexes? How busy is the server's hard disk?
3) It depends on the speed of fetching it from the DB. In general, caching can work magic for performance, but caching in memory with memcached or something similar would be even better. I would generally avoid keeping copies of the data in files - but of course, if it takes seconds to query the data from the DB, then go for filesystem caching. Also, if you have many users, say 10,000+, you have to build some folder hierarchy, since putting 10,000 cached files in the same folder slows down access time.
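Regarding point 1, a throwaway loop script to compare the two could look like this (the file path, query and iteration count are just examples, and both the file and the row must already exist):

<?php
// Throwaway benchmark: filesystem read vs. database query.
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$runs = 1000;

$start = microtime(true);
for ($i = 0; $i < $runs; $i++) {
    $data = unserialize(file_get_contents('/tmp/cache/auth_2892.cache'));
}
printf("filesystem: %.4f s\n", microtime(true) - $start);

$start = microtime(true);
for ($i = 0; $i < $runs; $i++) {
    $stmt = $pdo->prepare('SELECT * FROM users WHERE id = ?');
    $stmt->execute([2892]);
    $data = $stmt->fetch(PDO::FETCH_ASSOC);
}
printf("database:   %.4f s\n", microtime(true) - $start);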