POST PHP security, how to prevent

On my current website I use jQuery and POST requests between different PHP files to get and update information. Currently I'm not using SSL or home-grown encryption to hide the plain text in the requests; that will come later.
I'm wondering how to prevent client-side POST modification besides sanitizing and validating the inputs before using them. Some of the information passed between the PHP scripts is hard to predict, and therefore hard to validate.
Got any tricks up your sleeves?
I was thinking I could use session-stored data in PHP to validate that it was actually the server that sent the request. But I guess that session data can be "tapped" in many ways?

Choose one:
You can store the data in the session between requests (more server memory).
You can sign the data sent to the client using an HMAC (more server CPU), then verify it on the next request on the server - a sketch of this follows below.
There's no excuse not to use HTTPS these days; there are three free vendors now.
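A minimal sketch of the HMAC option, assuming a server-side secret and a JSON-packed payload; the names here ($secretKey, signPayload, verifyPayload) are illustrative, not anything from the answer:

```php
<?php
// Assumed: a long random secret kept only on the server, never sent to the client.
$secretKey = 'replace-with-a-long-random-secret';

// Sign data before sending it to the client.
function signPayload(array $data, string $secretKey): array
{
    $payload = json_encode($data);
    $mac = hash_hmac('sha256', $payload, $secretKey);
    return ['payload' => $payload, 'mac' => $mac];
}

// Verify data when it comes back on the next request.
function verifyPayload(string $payload, string $mac, string $secretKey): ?array
{
    $expected = hash_hmac('sha256', $payload, $secretKey);
    // hash_equals() is a constant-time comparison, which avoids timing leaks.
    if (!hash_equals($expected, $mac)) {
        return null; // Tampered with or forged.
    }
    return json_decode($payload, true);
}
```

Any value the client changes will fail the MAC check, so the server never has to trust what came back.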

Two important things about HTTP: it is, by nature, stateless, so every request is independent of any previous request; and, more importantly, it is based on trust. Once data hits the server (specifically the PHP script), it is impossible to know where that request originated or whether the data can be trusted. This means the only way to ensure data is clean and secure is to sanitize and validate it.
Because of the inherent trust in HTTP, any client can forge a request with malicious intent. There are ways to make this harder, and depending on what you are trying to protect you can spend more time and resources doing so. The steps differ depending on what you are trying to accomplish. Are you trying to stop a malicious user from stealing other users' information? Are you trying to stop them from accessing data on your server that they should not (SQL injection, directory traversal)? Are you trying to prevent the user from impersonating another user (session hijacking)? Are you trying to prevent the user from injecting malicious JavaScript (XSS)? Depending on your goal and your risk, you would invest time and energy trying to prevent one or all of these.
Lastly, HTTPS only mitigates a man-in-the-middle attack (and perhaps session hijacking), not any of the scenarios mentioned above, so you still need to clean and scrub all data your PHP receives.
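To make "sanitized and validated" concrete, here is a hedged sketch of typical server-side checks on a POST body; the field names, ranges, and JSON error shape are assumptions for the example, not anything from the question:

```php
<?php
// Whitelist-style validation of incoming POST fields (field names are hypothetical).
$errors = [];

$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
if ($email === false || $email === null) {
    $errors[] = 'Invalid email address.';
}

$quantity = filter_input(INPUT_POST, 'quantity', FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1, 'max_range' => 100],
]);
if ($quantity === false || $quantity === null) {
    $errors[] = 'Quantity must be an integer between 1 and 100.';
}

if ($errors) {
    http_response_code(422);
    echo json_encode(['errors' => $errors]);
    exit;
}
// Only after validation should the values reach the database, ideally via prepared statements.
```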

form validation and security in php [duplicate]

I saw here that:
"As you probably already know, relying on client-side validation alone is a very bad idea. Always perform appropriate server-side validation as well."
Could you explain why server-side validation is a must?
Client-side validation - I assume you are talking about web pages here - relies on JavaScript.
JavaScript-powered validation can be turned off in the user's browser, fail due to a scripting error, or be maliciously circumvented without much effort.
Also, the whole process of form submission can be faked.
Therefore, there is never a guarantee that what arrives server-side is clean and safe data.
There is a simple rule in writing server applications: never trust the user data.
You need to always assume that a malicious user accesses your server in a way you didn't intend (e.g., in this case, a manual query via curl instead of the intended web page). For example, if your web page tries to filter out SQL commands, an attacker already has a good hint that passing input containing SQL commands might be a good attack vector.
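To make the "manual query via curl" point concrete, a hedged sketch of how easily a request can be forged from PHP itself, never touching your form or its JavaScript (the URL and field names are placeholders):

```php
<?php
// A forged POST request that never loaded the site's HTML form or ran its JavaScript.
$ch = curl_init('https://example.com/submit.php'); // placeholder URL
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query([
    'username' => "alice'; DROP TABLE users; --",   // client-side validation never ran
    'comment'  => '<script>alert(1)</script>',
]));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
```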
Anyone who knows basic JavaScript can get around client-side validation.
Client-side validation is just there to improve the user experience (no need to reload the page to validate).
The client you're talking to may not be the client you think you're talking to, so it may be ignoring whatever validation you're asking it to do.
In the web context, it's not only possible that a user has JavaScript disabled in their browser; you may not be talking to a browser at all - you could be getting a form submission from a bot that is POSTing to your submission URL without ever having seen the form.
In the broader context, you could be dealing with a hacked client that is sending data the real client never would (e.g., aim-bots for FPS games), or possibly even a completely custom client created by someone who reverse-engineered your wire protocol and knows nothing about any validation you expect it to perform.
Without being specific to JavaScript and web clients, and to address the issue more widely: the server should be responsible for maintaining its own data (in conjunction with the underlying databases).
In a client-server environment, the server should be prepared for the fact that many different client implementations may be talking to it. Consider a trade-entry system. Clients could be GUIs (e.g. trade-entry screens) and, say, data-upload clients (loading multiple trades from .csv files).
Client validation may be performed in many different ways, and not all of them correctly. Consequently the server shouldn't trust the client data, and should perform integrity checks and validation itself.
In case attackers post their own form.
You can turn off or edit JavaScript.
Because the user agent (e.g. browser) might be a fake. It is very easy to create a custom application to create an HTTP request with arbitrary headers and content. It can even say it is a real browser—you have no way of telling the difference.
All you can do is look at the content of the request, and if you don't check it you don't know it is valid.
Server-side validation is a must because client-side validation cannot guarantee that invalid data will never reach the server.
Client-side validation is not enough because its scope of action is very restricted: the validation is performed only in the browser's user interface.
A web server "listens" for and receives an HTTP request containing data from the browser, and then processes it.
A malicious user can send malicious HTTP requests in many ways; a browser is not even required.
Client-side validation, performed using JavaScript in the browser, is an important usability and user-interface enhancement. But it does not prevent malicious data from being sent by a user who knows how to circumvent the browser's default way of building the HTTP request that will be sent to the server. This can be done easily with some browser plugins, with cURL, etc.
In general, it's best for EVERY piece of an app to do its own checking/verification.
Client-side checks are good for maximizing the user experience and speeding up feedback to the client that they need to fix something, and for reducing the number of problems that reach the server-side checks.
Then, at each major point of transition in the server-side code, you should have checks in place too. Verify inputs within the application code, preferably via whitelist input validation, and then have any interactions with the database use parameterized queries to further ensure problems do not occur (see the sketch below).
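As a small illustration of "whitelist input validation ... parameterized queries", a sketch assuming an existing PDO connection in $pdo and a hypothetical status field and tickets table:

```php
<?php
// Whitelist the allowed values rather than trying to blacklist bad ones.
$allowedStatuses = ['open', 'closed', 'pending'];
$status = $_POST['status'] ?? '';

if (!in_array($status, $allowedStatuses, true)) {
    http_response_code(400);
    exit('Invalid status.');
}

// $pdo is an existing PDO connection; the placeholder keeps data out of the SQL string itself.
$stmt = $pdo->prepare('SELECT id, title FROM tickets WHERE status = :status');
$stmt->execute([':status' => $status]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
```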
You should perform server-side validation on any data which, if invalid, could be harmful to anyone other than the entity posting the data. Client-side validation may be suitable in cases where invalid data would have no ill effects for anyone other than the entity posting it. Unless you can be certain that the ill effects from bad data will not spread beyond the entity posting it, you should use server-side validation to protect yourself against vandals or other rogue clients.
Client-side validation is for saving the client from entering wrong data. Server-side validation is for saving the server from processing wrong data. In the process, it also introduces some security into the submission process.
Client-side validation presupposes a safe browser, a client-side language, or HTML5. All of these could be disabled, partially unusable, or simply not implemented. Your website has to be usable by every person, with every browser.
Server-side languages are safer, and - if there aren't bugs - the validation will surely be safer and more correct.
Suppose a person turns off JavaScript in their browser - the validation is then dead. If they then post some malicious content through that form to the server side, it can lead to serious vulnerabilities like SQL injection, XSS, or other problems. So beware if you are only going to implement client-side JavaScript validation.
Thank you

Protect PHP script against CSRF... without PHP session (cross-site)

I have a public form that publishes POST data to a PHP script.
This form is not located on the same domain, and doesn't use PHP either, so the protection cannot be built around a PHP session.
The goal is to allow only this form to post to that PHP script.
"How do I provide more security for checking source of the request" tells how to implement CSRF protection using a PHP session, but I wonder how I could protect mine without it. Is it possible?
POST requests are harder to fake than GET requests, so you have that going for you, which is nice. Just make sure you're not using $_REQUEST in your script.
You cannot use sessions here, but the principles are the same - you have to implement some kind of "handshake" between the form and your PHP script. There are a few different approaches if sessions are not an option.
The simplest thing to do would be to check HTTP referrers. This will not work if the form is served over HTTPS and the script over plain HTTP (browsers drop the referrer on that downgrade), and it can also be defeated via an open-redirect vulnerability.
Another way to go would be CAPTCHAs. I know, not user-friendly or fashionable these days, but that would make request forgery much harder, as the attacker could not make their exploit work behind the scenes without any user input. You should look into reCAPTCHA (Google's "I am not a robot" checkbox): https://www.google.com/recaptcha/intro/index.html
This is a tricky situation, because a form on one host and a script on another is basically CSRF in itself, so you want to allow it, but only for one host. Complete security without any user interaction might be impossible here, so just try to make it as hard as possible for a would-be attacker to mess with your script, or accept the cost on the UX side. Personally I would go with reCAPTCHA.
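If sessions really are off the table, a hedged alternative to CAPTCHA is a short-lived, HMAC-signed token that your PHP endpoint issues and the remote form embeds as a hidden field. This is only a sketch of that handshake; the secret, the function names, and the ten-minute lifetime are assumptions, not anything from the question:

```php
<?php
$secret = 'shared-server-side-secret'; // never exposed to the browser

// Endpoint 1 (e.g. token.php): called when the remote form is rendered.
function issueToken(string $secret): string
{
    $expires = time() + 600; // valid for 10 minutes
    return $expires . ':' . hash_hmac('sha256', (string) $expires, $secret);
}

// Endpoint 2 (the form handler): verify the token that came back in POST.
function tokenIsValid(string $token, string $secret): bool
{
    [$expires, $mac] = array_pad(explode(':', $token, 2), 2, '');
    if ((int) $expires < time()) {
        return false; // expired
    }
    return hash_equals(hash_hmac('sha256', $expires, $secret), $mac);
}
```

The remote form would first fetch a token from the issuing endpoint and post it back in a hidden field, so anyone scripting against your handler at least has to perform the same two-step dance within the token's lifetime.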

Without using SSL, what's the most secure way to make an AJAX request to a PHP page?

So, it's impossible to do AJAX requests securely without using SSL. I get it. You can either view-source the data that's being sent via JavaScript, or you can directly access the PHP page by spoofing headers, yada yada.
But let's say this web app doesn't particularly require true security, and instead it's just a sort of game to keep most reverse-engineers at bay. What sort of hurdles should I employ?
I'm not looking for some ridiculously over-the-top JavaScript implementation of an encryption algorithm. I want simplicity as well as mild security... if that isn't contradictory by nature. So, what would you recommend?
For example, I'm running a contest where, if a user clicks an image successfully (jQuery), it passes their user ID and a timestamp to a PHP page, both salted with random data, MD5-hashed, and then MIME-encoded. The PHP page then validates the user ID and timestamp and returns a winning "code" in the form of another salted MD5 hash. I'm also employing multiple header checks to help ensure the request comes from a valid location. Am I missing anything, or is that about all I can do? It seems like someone could just fire the jQuery click event and ruin the whole thing, but I don't see how I can prevent that.
I'll award the answer to whoever comes up with an ingenious faux-security mechanism! Or... just whoever tells me why I'm stupid this time.
I believe header checks can be easily fooled. They don't hurt, though.
Since your algorithm is exposed on the client side, the user can simply send the appropriate data to your server with an automated script to fool your server into thinking the image was clicked.
In addition, you have to watch out for session hijacking. A user can essentially submit this AJAX request on behalf of someone else, especially if they have the algorithm. Does your application behave differently for certain users? If so, the session hijacking could turn into a privilege-escalation issue.
It is not necessarily true that you need to encrypt the payload with SSL in your case in order to build a secure application. From what you've described, there is no sensitive data being sent over the wire.
Ensure that you have some basic sanity checks on the server side to detect automated or malicious behaviour. For example, if you find that header information is missing, you may want to flag or alert that someone is toying with the request. Another place you may want to do this is in the pattern of requests.
A more secure model is to have the server assign the user a session token that they cannot reverse-engineer. This session token should ideally begin with the timestamp rather than the username, to promote the avalanche effect of the salted hashing algorithm.
Since it seems like your application deals with prizes and potentially money, I would invest some more time in securing this app. Hope these tips have helped.
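As a rough illustration of the server-assigned-token idea, here is a sketch that uses an opaque random value instead of anything derived from user data; the session key, field name, and endpoint structure are assumptions, and random_bytes() requires PHP 7+:

```php
<?php
session_start();

// When the contest page is rendered: issue an opaque token the client cannot reconstruct.
if (!isset($_SESSION['click_token'])) {
    $_SESSION['click_token'] = bin2hex(random_bytes(32)); // PHP 7+
}
// ...embed $_SESSION['click_token'] in the page so the jQuery handler can send it back...

// When the AJAX click handler posts back: check the token before issuing a winning code.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $submitted = $_POST['token'] ?? '';
    if (!hash_equals($_SESSION['click_token'], $submitted)) {
        http_response_code(403);
        exit('Invalid token.');
    }
    // ...validate the user ID and timestamp, then return the winning code...
}
```

It still won't stop someone from firing the click handler themselves, but it does force every request to come through a page the server actually served to that session.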

Penny Auction AJAX Security

I'm currently developing a penny auction website to test my ability to program JavaScript and use AJAX effectively. However, I have come across the problem of security.
Firstly, I have been debating whether authentication should be handled server-side or client-side, but have come to the decision that PHP can handle this much more easily. For instance, when a user sends a bid via AJAX to a PHP file on the server, the script will check whether the user is logged in and then sanitise the data before the bid is entered into the database.
Secondly, is there any way of encrypting or obscuring the data being sent? Due to JavaScript's open nature, it seems to pose a considerable threat.
Thanks.
Clients to web applications are inherently untrusted, since you have no control over what the user's browser is going to do. Therefore, never rely on the client to perform sensitive operations.
To answer your specific questions, definitely perform all the authentication and authorization checks on the server side. SSL/TLS encryption will protect data in transit between the client and server, but the data will unavoidably be unencrypted once it reaches the client, so you can't use encryption to somehow hide or protect data from the client and still expect the client to be able to do anything with it.
Security through obscurity is no security at all, as always. If the information you're keeping in JavaScript is so sensitive that it can't be seen and is a risk because of "JavaScript's open nature", it should not be in JavaScript.
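For what the server-side checks described above might look like in practice, here is a minimal, non-authoritative sketch of a bid endpoint; the session key, field names, table, and the existing PDO connection in $pdo are all assumptions:

```php
<?php
session_start();

// 1. Authentication: the session must already identify a logged-in user.
if (empty($_SESSION['user_id'])) {
    http_response_code(401);
    exit(json_encode(['error' => 'Not logged in']));
}

// 2. Validation: the auction id must be a positive integer.
$auctionId = filter_input(INPUT_POST, 'auction_id', FILTER_VALIDATE_INT, [
    'options' => ['min_range' => 1],
]);
if ($auctionId === false || $auctionId === null) {
    http_response_code(400);
    exit(json_encode(['error' => 'Invalid auction id']));
}

// 3. Persistence: a parameterized insert keeps user data out of the SQL string.
// $pdo is an assumed, already-configured PDO connection.
$stmt = $pdo->prepare('INSERT INTO bids (auction_id, user_id, placed_at) VALUES (:a, :u, NOW())');
$stmt->execute([':a' => $auctionId, ':u' => $_SESSION['user_id']]);
echo json_encode(['ok' => true]);
```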

Using a session token or nonce for Cross-site Request Forgery Protection (CSRF)?

I inherited some code that was recently attacked, where the attacker sent repeated remote form submissions.
I implemented prevention using a session auth token that I create for each user (not the session ID). While I realize this specific attack is not CSRF, I adapted my solution from these (albeit dated) posts:
https://www.owasp.org/index.php/Cross-Site_Request_Forgery_%28CSRF%29
http://tyleregeto.com/a-guide-to-nonce
http://shiflett.org/articles/cross-site-request-forgeries
However, it still feels like there is some vulnerability here. While I understand nothing is 100% secure, I have some questions:
Couldn't a potential attacker simply start a valid session and then include the session ID (via cookie) with each of their requests?
It seems a nonce would be better than a session token. What's the best way to generate and track a nonce?
I came across some points about these solutions being single-window only. Could someone elaborate on this point?
Do these solutions always require a session? Or can these tokens be created without a session? UPDATE: this particular page is just a single-page form (no login), so starting a session just to generate a token seems excessive.
Is there a simpler solution (not CAPTCHA) that I could implement to protect against this particular attack and that would not use sessions?
In the end, I am looking for a better understanding so I can implement a more robust solution.
As far as I understand, you need to do three things: make all of your data-changing actions available only via POST requests, disallow POST requests without a valid referrer (it must be from the same domain), and check an auth token in each POST request (the POST token value must be the same as the token in the cookie).
The first two will make it really hard to do any harmful CSRF request, as those are usually hidden images in emails, on other sites, etc., and making a cross-domain POST request with a valid referrer should be impossible or very hard in modern browsers. The third will make it practically impossible to do any harmful action without stealing the user's cookies or sniffing their traffic. A sketch of the token check follows below.
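A minimal sketch of that third step, in the double-submit-cookie style; the cookie and field names are assumptions, and the array form of setcookie() requires PHP 7.3+:

```php
<?php
// When rendering the form: issue a random token as both a cookie and a hidden field.
$token = bin2hex(random_bytes(32));
setcookie('csrf_token', $token, [
    'path'     => '/',
    'secure'   => true,
    'httponly' => true,
    'samesite' => 'Strict',
]);
// ...also embed $token in the form: <input type="hidden" name="csrf_token" value="...">...

// In the POST handler: the token in the body must match the token in the cookie.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $fromPost   = $_POST['csrf_token'] ?? '';
    $fromCookie = $_COOKIE['csrf_token'] ?? '';
    if ($fromPost === '' || !hash_equals($fromCookie, $fromPost)) {
        http_response_code(403);
        exit('CSRF check failed.');
    }
    // ...safe to process the data-changing action...
}
```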
Now about your questions:
This question really confuses me: if you are using auth tokens correctly, then the attacker must know the user's token from the cookie to send it along with the request, so how does starting the attacker's own valid session do any harm?
Nonces will make all your links ugly - I have never seen anyone use them any more. And I think your site could be DoSed using them, as you must save and search all the nonces in a database - a lot of requests to generate nonces may grow your database really fast (and searching them will be slow).
If you allow only one nonce per user_id to prevent the DoS attack from (2), then if a user opens a page, opens another page, and then submits the first page, that request will be denied, because a new nonce was generated and the old one is already invalid.
How else will you identify a unique user without a session ID, be it a cookie, GET, or POST variable?
UPD: As we are not talking about CSRF any more, you may implement many obscure defences that will prevent spider bots from submitting your form:
Hidden form fields that should not be filled (bots usually fill most of the form fields they see that have plausible names, even if the fields are hidden from a real user) - see the sketch after this list.
JavaScript mouse trackers (you can analyse recorded mouse movements to detect bots).
File-request log analysis (when a page is loaded, JavaScript/CSS/images should be loaded too in most cases, though some (really rare) users have them turned off).
JavaScript form changes (a hidden (or not) field that is added to the form with JavaScript and required server-side: bots usually don't execute JavaScript).
Traffic-analysis tools like Snort to detect bot patterns (strange user agents, too-fast form submission, etc.).
And more - but at the end of the day, some modern bots use total emulation of real user behaviour (using real browser API calls), so if anyone really wants to attack your site, no defence like this will help you. Even CAPTCHA today is not very reliable - besides complex image-recognition algorithms, you can now buy 1,000 CAPTCHAs solved by humans for any site for as little as $1 (you can find services like this mostly in developing countries). So really, there is no 100% defence against bots - each case is different: sometimes you will have to build a complex defence system yourself, sometimes just a little tweak will help.
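To make the first item on that list concrete, here is a hedged honeypot-field sketch; the decoy field name "website" and the exact markup are purely illustrative:

```php
<?php
// The form contains a decoy field a real user never sees or fills in, e.g.:
//   <input type="text" name="website" style="display:none" tabindex="-1" autocomplete="off">

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // A human never sees the field, so any value in it almost certainly came from a bot.
    if (!empty($_POST['website'])) {
        http_response_code(400);
        exit;
    }
    // ...continue with normal validation and processing for real submissions...
}
```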
