I came to the realization that malicious users can modify the HTML of a form in their browser before submitting it.
For example, switching the names of two input fields of the same type.
I am creating a website that's largely dependent upon relationships between entries in the database.
If this happens, it could jeopardize all functions of my site. (I've tested it, and the server does receive the swapped values when the names of two fields are switched in the browser by the user, i.e. me.)
How could we prevent this?
If this kind of hack causes problems on the server side, you cannot fix it in your browser code. You need to fix it on the server.
Anyone can generate any kind of POST or GET to your server, and you need to be prepared for that.
So, make sure that you detect this kind of tampering on the server, and deal with it there, by returning an error message, or by silently ignoring such invalid requests.
If the server is prepared for this, you can let people hack away in their browser as much as they want.
You should be validating your inputs as seen in the PHP Manual for Validation. For example, we have form inputs title and body for a post. When the user submits the post, the title should be between 1 and 40 characters, so we should validate that through PHP. Second, the body should not contain malicious text or code, so we should sanitize that with a function such as addslashes().
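A minimal sketch of that kind of check, assuming the form fields are named title and body as in the example (using htmlspecialchars() for output escaping rather than addslashes(), which is generally the better fit for HTML output):

<?php
$title = isset($_POST['title']) ? trim($_POST['title']) : '';
$body  = isset($_POST['body'])  ? $_POST['body']         : '';

// Validate: the title must be between 1 and 40 characters.
if (strlen($title) < 1 || strlen($title) > 40) {
    exit('Title must be between 1 and 40 characters.');
}

// Escape on output so user-supplied markup is rendered as text, not executed.
echo '<h1>' . htmlspecialchars($title, ENT_QUOTES, 'UTF-8') . '</h1>';
echo '<p>'  . htmlspecialchars($body,  ENT_QUOTES, 'UTF-8') . '</p>';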
The entire topic revolves around PHP security. The only way to prevent hacking attempts and malicious intent is to learn security. Start learning with the link above.
As far as I know, the simplest and often best solution is to check whether each input field has a valid value with PHP before sending the query to the DB. For example, a field with the name "telephone" should be numeric only and no longer than, say, 10 digits. Make the rules strict enough and you shouldn't have any problems. In addition, you should also make sure your DB fields are strict, i.e. only allow the length and data type you want.
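For instance, a minimal sketch of the telephone check described above (the field name "telephone" and the 10-digit limit are taken from the example):

<?php
$telephone = isset($_POST['telephone']) ? trim($_POST['telephone']) : '';

// Accept only 1 to 10 digits; reject everything else before it reaches the query.
if (!preg_match('/^[0-9]{1,10}$/', $telephone)) {
    exit('Invalid telephone number.');
}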
Is it safe to use $_POST for a button action?
For example:
<button name="submit">Table A</button>
PHP (what my code here does is: if I click the button (Table A), Table A will appear; by default it is not visible):
<?php if (isset($_POST['submit'])): ?>
<table>
<!-- code.... -->
</table>
<?php endif; ?>
FOR MY QUESTION: Is it safe from attackers - things like XSS, SQL injection, or similar? I want advice on how to have a safer website (or at least one that is safer from attackers).
It is safe provided you don't echo the value unescaped or use it unescaped within a SQL statement. Also, your code should really be...
if(isset($_POST['submit'])) {
...
}
to avoid errors if it isn't set.
A developer always has to sanitize and validate the input coming in from the request.
Depending on the purpose, you need either stricter or lighter cleaning (for example, for the DB you need to escape the data additionally; for HTML output you need XSS protection, etc.).
You may read up on:
How to sanitize the input (POST/GET variables, referrer, cookies, user agent, and other headers);
Cross-Site Request Forgery (CSRF) prevention;
additionally you may want to use Captcha or other protection from automatic submits, etc.
For your specific case, in the question itself, you are not using the value of $_POST['submit'] in any way except checking its presence, so no additional validation is needed; it's fine like that.
Short answer
NO
POST variables are not 'safe' on their own.
Long Answer:
There are many issues around authenticating POSTed variables in your PHP script. The most well known is Cross-Site Request Forgery (CSRF), whereby the value is sent to your page but you have no idea where it came from or who sent it.
To counter this you need to set up single-use unique keys (tokens) that are sent out with the form and checked when it comes back, typically using PHP sessions or similar mechanisms that cannot be touched by the end user.
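A minimal sketch of that idea, assuming sessions are available (the field name csrf_token is just an illustration):

<?php
session_start();

// When rendering the form: generate a single-use token and remember it server-side.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = sha1(uniqid(mt_rand(), true)); // on PHP 7+ prefer bin2hex(random_bytes(32))
}
echo '<input type="hidden" name="csrf_token" value="' . $_SESSION['csrf_token'] . '">';

// When handling the POST: the submitted token must match the stored one.
if (!isset($_POST['csrf_token']) || $_POST['csrf_token'] !== $_SESSION['csrf_token']) {
    header('HTTP/1.1 403 Forbidden');
    exit('Invalid request.');
}
unset($_SESSION['csrf_token']); // single use: a replayed or forged form is rejected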
The second issue is that you should never, ever trust the contents of a POSTed variable. You need to fully escape the variable and give it the absolute minimum access to your code, so that if it is a compromised value it is cleaned and/or cannot harm your script.
Your Code:
Looking at your question in detail, you are exposed to CSRF (point 1 above): your PHP code has no idea whether the submitted button came from the page the button exists on, and therefore no idea whether this browser should be shown the contents of "Table A". (If everyone can see it, why hide it in the first place?)
There is also the fact that your code could be used by a nefarious party to send many, many POSTed submissions to the script over a short space of time (say, 100,000 in 2 seconds), causing a DoS attack, again because there is no validation of where the POST came from or of its authenticity.
Further points: you need to clarify what you deem "safe"; it is a very relative term.
You can use the button with POST, but make sure you have validated the referrer URL of the action, because someone may try to copy your form and submit it in a loop. This could bring your site down.
So you should check that the referrer URL matches your site's URL.
Also, always use a form token to validate the form (like a captcha).
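A minimal sketch of such a referrer check (bear in mind, as other answers here point out, that the Referer header can be spoofed or missing, so treat this as a weak extra hurdle rather than real protection; www.example.com stands in for your own host):

<?php
$referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
$host    = parse_url($referer, PHP_URL_HOST);

// Only accept POSTs whose Referer points at our own site.
if ($host !== 'www.example.com') {
    header('HTTP/1.1 403 Forbidden');
    exit('Invalid request origin.');
}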
Do I always need to validate user input, even if I'm not actually saving it to a db or file, or using it to include files, etc.?
Say I just wanted to echo out a request variable or I was using it to send email, is there any need to validate or sanitise it? I've always heard people say that all user input should be checked before doing anything with it, but does it actually pose any threat, if I'm not doing any of the above?
I wouldn't recommend it.
My rule is: NEVER TRUST USER INPUT.
Let's say that you're working on a team.
As you wrote, you build a simple form that submits the data to a PHP file and then mails it.
After 3 weeks another teammate wants to use that form.
He assumes that the data in the PHP file is clean. He doesn't know that you didn't filter it.
This is a recipe for trouble.
Do I always need to validate user input, even if I'm not actually saving it to a db or file, or using it to include files, etc.?
Everything you are going to do with user-supplied data depends on the context in which you are going to use it. In your single sentence you are already talking about 3 different contexts (db, file, include), each of which needs a different strategy for that specific context.
Say I just wanted to echo out a request variable or I was using it to send email, is there any need to validate or sanitise it?
There are more things you can do besides validating and sanitizing. And yes, you should handle this case (which is another context, by the way). Basically you should treat all user data as if it were malicious, even if you are "just echoing it". There are numerous things I could do when you are "just echoing".
Considering we are in the context of an HTML page, I could for example (but not limited to) do:
<script>location.href='http://example.com/my-malicious-page'</script>
Which can be for example an exact copy of you website with a login form.
<script>var cookies = document.cookie; // send cookieinfo to my domain</script>
Which can be used to get all your cookies for the current domain (possibly including your session cookie). (Note that this can, and imho should, be mitigated by setting the HttpOnly flag on the cookies.)
<script>document.querySelector('body').insertAdjacentHTML('beforeend', 'my malicious payload containing all kinds of nasty stuff');</script>
Which makes it possible to sideload a virus or something else nasty.
<!--
Fuck up your layout / website. There are several ways to do this.
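All of these payloads are neutralized if you escape the value when you echo it; a minimal sketch, assuming the value arrives in a hypothetical $_POST['comment'] field:

<?php
$comment = isset($_POST['comment']) ? $_POST['comment'] : '';

// htmlspecialchars() converts <, >, & and quotes into entities,
// so the browser shows the payload as text instead of executing it.
echo '<p>' . htmlspecialchars($comment, ENT_QUOTES, 'UTF-8') . '</p>';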
I've always heard people say that all user input should be checked before doing anything with it
This is mostly wrong. You only need to decide how you are going to handle a piece of data once you know what you are going to do with it, because you want to prevent different things in different situations. Some examples are (but not limited to): directory traversal, code injection, SQL injection, XSS, CSRF.
All above attack vectors need different strategies to prevent them.
but does it actually pose any threat, if I'm not doing any of the above
Yes, totally, as explained above. All data coming from a third party (this means user input, as well as external services, as well as data coming out of the database) should be treated as an infectious disease.
I have a question about security. I have a website programmed with HTML, CSS, PHP, Javascript(jQuery)...
Throughout the website, there are several forms (particularly with radio buttons).
Once a user selects and fills out the form, it takes the value from the selected radio button and sends that to the server for processing. The server also takes the values and plugs them into a database.
My concern is this:
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button? I'm afraid people will be able to manually change the value of a radio button input prior to submitting the form. If they can indeed do that, it will entirely defeat the purpose of the script I am building.
I hope this makes sense.
Thank you!
John
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button?
You can't. You have no control over what gets sent to the server.
Test that the data meets whatever requirements you set for it before inserting it into the database. If it isn't OK, reject it and explain the problem in the HTTP response.
Any data sent from the browser to the server can be manipulated outside of your control, including form data, url parameters and cookies. Your PHP code must know what sets of values are valid and reject the request if it doesn't look sensible.
When sending user input to the database you will want to ensure that a malicious user-entered string can't modify the meaning of the SQL query. See SQL Injection. And when you display the user-entered data (either directly in the following response, or later when you read it back out of the database) ensure that you encode it properly to avoid a malicious user-entered string executing as unwanted javascript in the user's browser. See Cross-site scripting and the prevention cheat sheet
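A minimal sketch of both halves of that advice, assuming an existing PDO connection $pdo and a hypothetical comments table:

<?php
// Writing: a prepared statement keeps the user-entered string from changing the SQL.
$stmt = $pdo->prepare('INSERT INTO comments (author, body) VALUES (?, ?)');
$stmt->execute(array($_POST['author'], $_POST['body']));

// Displaying (now or later, when read back out of the database): encode for HTML.
foreach ($pdo->query('SELECT author, body FROM comments') as $row) {
    echo '<p>' . htmlspecialchars($row['author'], ENT_QUOTES, 'UTF-8') . ': '
               . htmlspecialchars($row['body'], ENT_QUOTES, 'UTF-8') . '</p>';
}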
I'll go along with Quentin's answer on this.
Client-side validation should never stand alone, you'll need to have some sort of server-side validation of the input as well.
For most users, the client-side validation will save a round trip to the server, but as you both mention, there is no guarantee that "someone" won't send wrong data.
Therefore the mantra should be: Always have server-side validation
I would say that client-side validation should be used solely for the user's convenience (e.g., to alert them that they have forgotten to fill in a required field) before they have submitted the form and have to wait for it to go to the server, be validated, and then have it sent back to them for fixing. What a pain. Better to have javascript tell you right there on the spot that you've messed something up.
Server-side validation is for security.
The others said it already, you can't prevent users from tampering with data being sent to your server (Firebug, TamperData plugins, self-made tampering proxies...).
So on the server side, you must treat your input as if there were no client validation at all.
Never trust user input that enters your application from an external source. Always validate it, sanitize it, escape it.
OWASP even started a stub page for the vulnerability Client-side validation - which is funny - client-side validation seems to have confused so many people and been the cause of so many security holes that they now consider it a vulnerability instead of something good.
We don't need to be that radical - client-side validation is still useful, but regard it simply as an aid that saves the user a server round trip before being told that the data is wrong. That's right, client-side validation is merely a convenience for the user, nothing more, and certainly not an asset to the security of your server.
OWASP is a great resource for web security. Have a look at their section on data validation.
Some advice worth quoting:
Where to include validation
Validation must be performed on every tier. However, validation should be performed as per the function of the server executing the code. For example, the web / presentation tier should validate for web related issues, persistence layers should validate for persistence issues such as SQL / HQL injection, directory lookups should check for LDAP injection, and so on.
Follow this rule without exception.
In this scenario, I'd recommend that you use values as keys, and look those up on the server side.
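A minimal sketch of that lookup approach, using a hypothetical radio group named plan:

<?php
// The form only ever submits short keys; the real values live on the server.
$plans = array(
    'basic'   => 9.99,
    'premium' => 19.99,
);

$key = isset($_POST['plan']) ? $_POST['plan'] : '';
if (!array_key_exists($key, $plans)) {
    exit('Unknown plan.');   // tampered or missing value: reject the request
}
$price = $plans[$key];       // safe, server-controlled value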
Also, consider issuing a nonce in a hidden field every time someone loads a form - this will make it a bit more difficult for someone to submit data without going through your form.
If you have a lot of javascript, it's probably worth running it through an obfuscator - this not only makes it harder for hackers to follow your logic, it also makes scripts smaller and faster to load.
As always, there is no magic bullet to stop hacking, but you can try raising the bar enough to deter casual hackers, then worry about the ones who enjoy a challenge.
I'm currently writing a web application which uses forms and PHP $_POST data (so far so standard! :)). However (and this may be a noob query), I've just realised that, theoretically, if someone put together an HTML file on their computer with a fake form, set its action to one of the scripts used on my site and populated the form with their own random data, couldn't they then submit that data and cause problems?
I sanitise data etc so I'm not (too) worried about XSS or injection style attacks, I just don't want someone to be able to, for instance, add nonsense things to a shopping cart etc etc.
Now, I realise that for some of the scripts I can write in protection such as only allowing things into a shopping cart that can be found in the database, but there may be certain situations where it wouldn't be possible to predict all cases.
So, my question is - is there a reliable way of making sure that my php scripts can only be called by Forms hosted on my site? Perhaps some Http Referrer check in the scripts themselves, but I've heard this can be unreliable, or maybe some htaccess voodoo? It seems like too large a security hole (especially for things like customer reviews or any customer input) to just leave open. Any ideas would be very much appreciated. :)
Thanks again!
http://en.wikipedia.org/wiki/Cross-site_request_forgery
http://www.codewalkers.com/c/a/Miscellaneous/Stopping-CSRF-Attacks-in-Your-PHP-Applications/
http://www.owasp.org/index.php/PHP_CSRF_Guard
There exists a simple rule: Never trust user input.
All user input, no matter what the case, must be verified by the server. Forged POST requests are the standard way to perform SQL injection attacks or other similar attacks. You can't trust the referrer header, because that can be forged too. Any data in the request can be forged. There is no way to make sure the data has been submitted from a secure source, like your own form, because any and all possible checks require data submitted by the user, which can be forged.
The one and only way to defend yourself is to sanitize all user input. Only accept values that are acceptable. If a value, like an ID, refers to a database entity, make sure it exists. Never insert unvalidated user input into queries, etc. The list just goes on.
While it takes experience to recognize all the different cases, here are the most common ones that you should watch out for (see the sketch after this list):
Never insert raw user input into queries. Either escape it using functions such as mysql_real_escape_string() or, better yet, use prepared queries through an API like PDO. Using raw user input in queries can lead to SQL injection.
Never output user inputted data directly to the browser. Always pass it through functions like htmlentities(). Even if the data comes from the database, you shouldn't trust it, as the original source for all data is generally from the user. Outputting data carelessly to the user can lead to XSS attacks.
If any user submitted data must belong to a limited set of values, make sure it does. For example, make sure that any ID submitted by the user exists in the database. If the user must select value from a drop down list, make sure the selected value is one of the possible choices.
Any and all input validation, such as allowed letters in usernames, must be done on the server side. Any form validation on the client, such as javascript checks, are merely for the convenience of the user. They do not provide any data security to you.
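As a sketch of the "make sure the ID exists" case from the list above (assuming an existing PDO connection $pdo and a hypothetical products table):

<?php
$id = isset($_POST['product_id']) ? (int) $_POST['product_id'] : 0;

// Look the ID up before trusting it anywhere else in the script.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM products WHERE id = ?');
$stmt->execute(array($id));

if ($stmt->fetchColumn() == 0) {
    exit('Unknown product.');   // forged or stale ID: reject the request
}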
Take a look at the Nettuts tutorial on the topic.
Just updating my answer with a previously accepted answer on the same topic.
The answer to your question is short and unambiguous:
is there a reliable way of making sure that my php scripts can only be called by Forms hosted on my site?
Of course not.
In fact, no scripts are ever called by forms hosted on your site. All scripts are called by forms hosted in the client's browser.
Knowing that will help to understand the matter.
it wouldn't be possible to predict all cases.
On the contrary, it would.
All good sites do that.
There is nothing hard about it, though.
Each form contains a limited number of parameters, and you just have to check every parameter - that's all.
As you have said ensuring that products exist in the database is a good start. If you take address information with a zip or post code make sure it's valid for the city that is provided. Make countries and cities a drop down and check that the city is valid for the country provided.
If you take email addresses make sure that they are valid email address and maybe send a confirmation email with a link before the transaction is authorised. Same for phone numbers (confirmation code in a text), although validating a phone number may be hard.
Never store credit card or payment details if it can be avoided at all (I'm inclined to believe that there are very few situations where it is needed to store details).
Basically the rule is make sure that all inputs are what you are expecting them to be. You're not going to catch everything (names and addresses will have to accept virtually any character) but that should get most of them.
I don't think that there is any way of completely ensuring that it is your form that they are coming from. HTTP Referrer and perhaps hidden fields in your form may help but they are not reliable. All you can do is validate everything as strictly as possible.
I don't see the problem as long as you trust your way of sanitizing data... and you say you sanitize it.
You do know about http://php.net/manual/en/function.strip-tags.php , http://www.php.net/manual/en/function.htmlentities.php and http://www.php.net/manual/en/filter.examples.validation.php
right?
I'm creating a wizard-based series of forms for taking user inputs. One of the requirements for that wizard is that the script (PHP) cannot save the inputs into the database (MySQL) until the user clicks the 'Save' button, so I have to devise a mechanism to transport user inputs from one form to the next when the user clicks the 'Previous' or 'Next' buttons. I looked into various methods including cookies, sessions, temporary files etc., but I settled on embedding base64-encoded serialized data in a hidden field that exists in all the forms in the series. The value in this field is decoded on form submission and re-encoded for the next form after the values from the current form are inserted.
Here is a sample of how the hidden field looks:
<input type="hidden" name="wizard:presave" value="YTo2OntzOjU6InRpdGxlIjtzOjEwOiJRdWVzdGlvbiAyIjtzOjQ6InRleHQiO3M6MTk6IlllcyBpdCdzIGEgcXVlc3Rpb24iO3M6NDoidHlwZSI7czo2OiJjaG9pY2UiO3M6NzoiY2hvaWNlcyI7YTowOnt9czo1OiJwb2ludCI7aToxO3M6Mjoib3AiO3M6MTM6ImVkaXRfZXhlcmNpc2UiO30=" />
So the questions are:
Is it considered a good/bad practice?
Is there any length limit for hidden fields in HTML forms?
What are the possible security issues?
And are there better alternatives? (with explanations, preferably without using javascript)
Thanks in advance!
I've never seen this particular method of parameter passing in my career, so I can't say whether it's good or bad. It's certainly not "standard". Standard methods would be either passing the submitted values along (unencoded, as ordinary hidden inputs) or storing them in the session. I think you might be making work for yourself, so in that sense it would lean towards "not ideal".
As long as you are using POST for your forms, there is no defined limit for data sizes that I'm aware of in the HTTP specifications. Older servers may have practical limits, but unless you're doing something extreme such as media file uploads, they shouldn't be a worry.
Possible security issues are the normal web security flaws. Anything you take from a user and re-output to a page could contain cross-site scripting vulnerabilities and would have to be properly sanitized (this is somewhat moot if you're encoding everything). Users can craft their own data and submit it if they like. Basically, assume all the data you handle is unsafe and tainted.
Sessions would work much better here. The data the user submits wouldn't have to go through a lengthy encoding process. As well, you'd only have to validate it once. After it's been submitted and validated, you can simply store it on the server in $_SESSION and leave it alone until the final button is clicked. Otherwise, you have to worry about re-outputting it, re-receiving it, and re-validating it at each step. A malicious user could submit one set of data, have it checked and re-output as encoded data, but then craft the next form submission by unencoding, changing data, and re-encoding.
I would highly recommend that you reconsider sessions, as it simplifies all your data operations into a "do-once" scenario.
Is it considered a good/bad practice?
Depends on the purpose. So far I've only seen such constructs used as a client-side URL hash to remember the state of the selections in large Ajax-based applications (so that they are bookmarkable), often also gzipped to make them shorter. In your specific case I'd say: make use of the HTTP session and only pass a request-based identifier (also called a token) in the hidden field, so that you can get the associated information from the session.
Is there any length limit for hidden fields in HTML forms?
In GET, the complete query string (all parameter names, values and separators together) is usually limited to around 2048 characters, and it is safer to stick to a conservative limit of 256 characters. In POST the limit depends on server configuration (in PHP, for example, on the post_max_size directive).
What are the possible security issues?
Well, it is obviously decodable - and therefore readable and modifiable by the user.
And are there better alternatives? (with explanations, preferably without using javascript)
You could Gzip it to make it shorter and less obvious. Or, as already said, make use of the session in combination with a request based identifier.
Well, you can store it in the session, either serialized or simply as-is, for each step. When the user clicks Save, you grab and validate the data from all the steps in the session.
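A minimal sketch of that session-based approach, using hypothetical step and field names:

<?php
session_start();

// On each 'Next'/'Previous' submission: keep this step's (validated) input server-side.
$step = isset($_POST['step']) ? $_POST['step'] : '';
if (in_array($step, array('step1', 'step2', 'step3'), true)) {
    $_SESSION['wizard'][$step] = $_POST;   // validate the individual fields before storing in real code
}

// On 'Save': pull everything together and write it to the database.
if (isset($_POST['save'])) {
    $allSteps = isset($_SESSION['wizard']) ? $_SESSION['wizard'] : array();
    // ... validate once more and INSERT via prepared statements ...
    unset($_SESSION['wizard']);
}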
tsk tsk :)
Is it considered a good/bad practice?
Subjectively - bad practice; you're using the wrong hammer for the job.
Is there any length limit for hidden fields in HTML forms? - Not sure if there is a limit.
What are the possible security issues? - Possibly quite a few, but you can sanitize the data received on every request. Besides, the data is pretty easy to decode and can easily be modified on the client side (I can see that it's some sort of serialized data that you are using :) )
And are there better alternatives? (with explanations, preferably without using javascript) - Use the right tool... sessions, perhaps?
And yes... You are most likely going to face performance and scalability issues (should you have a substantial user load) with all that sanitizing, parsing, formatting and security code running for every request.