What are the consequences of not validating a simple email form on the server?
Keep in mind that:
javascript validation is being carried out
there is no database in question, this is a simple email form
The PHP code I would like to use is this:
<?php
$post_data = filter_input_array( INPUT_POST, FILTER_SANITIZE_SPECIAL_CHARS );
$full_name = $post_data["full_name"];
$email_address = $post_data["email_address"];
$gender = $post_data["gender"];
$message = $post_data["message"];
$formcontent = "Full Name: $full_name \nEmail Address: $email_address \nGender: $gender \nMessage: $message \n";
$formcontent = wordwrap($formcontent, 70, "\n", true);
$recipient = "myemail#address.com"; $subject = "Contact Form"; $mailheader = "From: $email_address \r\n";
mail($recipient, $subject, $formcontent, $mailheader);
echo 'Thank You! - Return Home';
?>
Would a simple captcha solve the issue of security?
UPDATE:
A few questions I would really like answered:
If I am not worried about invalid data being sent, what is the absolute minimum I can do to improve security? Basically, avoid disasters.
I should probably mention that this code is generated by a form generator, and I would like to avoid my users getting attacked. Spamming might be sorted out by adding a CAPTCHA.
UPDATE:
What is the worst case scenario?
UPDATE:
Really appreciate all the answers!
A couple of things I plan to do:
add this as Alex mentioned:
filter_var("$post_data['email_address']", FILTER_VALIDATE_EMAIL);
add simple captcha
If I did add simple server-side validation, what should I validate for? Can't the user still send invalid data even if I am validating it?
Also, will the above stop spam?
No validation occurs when JavaScript is not present, so you might as well not have it...
Do you think spam bots have javascript enabled?
Extra note on would be attackers
If I were attacking your form, I would first look at your JavaScript validation... Next I would turn JavaScript off and try again...
The consequences of only using client-side validation are the same as not using validation at all...
Would a simple captcha solve the issue of security?
Not unless it's server side...
In general if you are just playing around and don't care, you don't need validation at all.
Having only client-side validation is pointless and you will just be wasting your time. The client-side-only approach will get you in trouble; you can't trust your users that much.
If you plan to actually release this or really use it in a live environment, you must have server-side validation. It is well worth the time: this is a simple form now, but it may grow to be much more than that. In addition, if you take care of your validation now, you can reuse it later with other components of your application/site. If you try thinking in terms of reusability, you will save yourself countless hours of development.
There are also obvious issues such as injections and JavaScript issues, as mentioned by other users.
In addition, a simple CAPTCHA does not cut it anymore. There are some nice resources regarding CAPTCHA.
Take a look at those.
Coding Horror
Decapther
So the simple answer to your questions is that you are certainly vulnerable in your current situation. I know that more development takes more time, but if you follow good development practices such as reusability and orthogonal/modular design you can save yourself a lot of time and still produce robust applications.
Good luck!
UPDATE:
You can add FILTER_VALIDATE_EMAIL to take care of the email validation and you can read more about the email injection and how to take care of it here: damonkohler.
As for the CAPTCHA, it could solve the problem, but it really depends on how valuable of a target your form/site is. I would recommend using non-linear transforms or something that is widely used and proven. If you are writing your own you may get yourself in trouble.
Summary:
Validate Email
Still make sure you are safe from injections
Make sure the CAPTCHA is strong enough
Really Consider server-side validation
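A minimal sketch of the first two summary points, reusing the field names from the form in the question (this is an illustration, not drop-in code):
<?php
// Minimal sketch: validate the From: address and refuse anything that could
// smuggle extra headers into the mail. Field names match the question's form.
$post_data = filter_input_array(INPUT_POST, FILTER_SANITIZE_SPECIAL_CHARS);
$email_address = isset($post_data['email_address']) ? $post_data['email_address'] : '';
// A single well-formed address only; this also rules out embedded newlines.
if (filter_var($email_address, FILTER_VALIDATE_EMAIL) === false) {
    exit('Please go back and enter a valid email address.');
}
// Belt and braces: never let CR/LF reach the headers.
if (preg_match('/[\r\n]/', $email_address)) {
    exit('Invalid input.');
}
$mailheader = "From: $email_address\r\n";
?>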
UPDATE:
#kht Did you get your questions answered? Let us know if something was unclear.
Good Luck!
UPDATE:
OK, I think we have made you a bit confused here with this whole client-side/server-side fiasco. I will try to break it down now so it makes more sense. The first part explains some basic concepts, and the second answers your questions.
First, PHP is a server-side language. It runs on the server and when a page request is sent, the server will "run" the PHP script, make any changes to the requested page, and then send it to the user who is requesting the page. The user has no access/control over that PHP script. On the contrary, as discussed earlier, the client-side scripts, such as JavaScript can be manipulated. However, just because you have some PHP script running and checking something on a form, that does not mean that the form is secure. It only means that you are doing some server-side processing of the form. Having it there, and making it secure are two different things as I am sure you have already figured out.
Now when we say that you need server-side validation we mean that you need a good one. Also, in this hectic Q&A format nobody really mentioned that there is a difference between validating data and sanitizing data.
sanitizing - making the data meet some criteria
validating - checking whether the data meets some criteria
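For example, the difference shows up clearly with PHP's filter functions (illustrative value only):
<?php
$raw = 'john.doe@example.com<script>';
// Sanitizing changes the data so it meets the criteria (illegal chars removed).
$clean = filter_var($raw, FILTER_SANITIZE_EMAIL);  // "john.doe@example.comscript"
// Validating only reports whether the data meets the criteria.
$valid = filter_var($raw, FILTER_VALIDATE_EMAIL);  // false
var_dump($clean, $valid);
?>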
Take a look at phpnightly for a better explanation and examples.
There are also some nice simple tutorials describing how to create basic validation of a form.
nettuts
Very basic, but you should get the idea.
So how do you approach your current problem?
To begin with, you should keep what you have in terms of client-side validation and add the CAPTCHA as you mentioned (check my post, or you can research some good ones).
What should you validate?
a. you should validate the data: all fields such as email, name, subject...
check if the data matches what you expect: is the field empty? is it an email? does it contain numbers? etc. You can validate the data on the server side for the same things you are validating on the client side. The only difference is that the client cannot manipulate that validation.
b. you could sanitize the data as well
make it lower case and compare it, trim it, or even cast it into a type if you need to. If you have time to check it out, the article from phpnightly has a decent explanation of the two and when not to use both.
Can the users still send invalid data?
sure they can, but now they have no access to the validation algorithm; they can't just disable it or go around it (strictly speaking)
when the data is invalid or malicious, just inform the user that there has been an error and make them do it again. That is the point of server-side validation: you can prevent the user from circumventing the rules, and you can alert them that their input is not valid
be very careful with the error messages too; don't reveal too much of the rules you are using for validation to your user, just inform them what you are expecting
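Here is a rough sketch of (a) and (b) together, not your actual code; the field names are assumptions based on the form in your question:
<?php
$errors = array();
// Pull the raw values and apply simple sanitizing (trim) where it makes sense.
$full_name = isset($_POST['full_name']) ? trim($_POST['full_name']) : '';
$email     = isset($_POST['email_address']) ? trim($_POST['email_address']) : '';
$gender    = isset($_POST['gender']) ? $_POST['gender'] : '';
$message   = isset($_POST['message']) ? trim($_POST['message']) : '';
// (a) validate: is the field empty? is it an email? is it one of the allowed choices?
if ($full_name === '' || strlen($full_name) > 100) {
    $errors[] = 'Please enter your name (100 characters max).';
}
if (filter_var($email, FILTER_VALIDATE_EMAIL) === false) {
    $errors[] = 'Please enter a valid email address.';
}
if (!in_array($gender, array('male', 'female'), true)) {
    $errors[] = 'Please choose a gender from the list.';
}
if ($message === '') {
    $errors[] = 'Please enter a message.';
}
if (!empty($errors)) {
    // Re-display the form with the error messages instead of sending the mail.
}
?>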
Also, will the above stop spam?
If you make sure the form is not vulnerable to email injection, and you have client-side validation, a CAPTCHA, and server-side validation of some form (it does not have to be super complex), it will stop spam. (Keep in mind that today's great solution is not so great tomorrow.)
Why the hell do I need that server-side bull* when my client-side validation works just fine?
Think of it as having a safety net. If a spammer goes around the client-side security, the server-side security will still be there.
This validation thing sounds like a lot of work, but it is actually pretty simple. Take a look at the tutorial I included and I am sure the code will make things click. If you make sure no unwanted information is being sent through the form, and the clients cannot manipulate the form to send to more than one email, then you are pretty much safe.
I just wrote this off the top of my head, so if it is confusing just post some more questions or shoot me a message.
Good Luck!
Without validation they could use injection to add values such as CC: and BCC: to send emails to multiple other people via your form. So I would recommend that, at the very least, you add:
filter_var($post_data['email_address'], FILTER_VALIDATE_EMAIL);
If you check the email is valid, the worst they could do is send you invalid data.
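To make the risk concrete, here is roughly what such an injection looks like (addresses made up for the illustration):
<?php
$evil = "attacker@example.com\r\nBCC: victim1@example.com, victim2@example.com";
// Dropped straight into the headers, the BCC line becomes a real header and
// the form would mail the extra recipients:
$mailheader = "From: $evil\r\n";
// FILTER_VALIDATE_EMAIL rejects it, because it is not a single valid address:
var_dump(filter_var($evil, FILTER_VALIDATE_EMAIL)); // bool(false)
?>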
JS validation is excellent, as long as the browser supports JS. Unfortunately there are still browsers that lack JS support.
But when you do not perform server-side validation, you expose your mail() call to injection. I can create my own nastyFile.html in which I place a form with action="http://yousite.com/yourEmailHandler.php", and by doing that I may be able to bypass your JS validation entirely.
Anyone can spoof their HTTP GET/POST requests:
netcat is enough to build a simple GET request
simple python/perl scripts are enough to craft modified POST submissions
Firefox (and likely many others) have plugins that do just that from the browser (e.g. TamperData)
javascript can be readily enabled/disabled/altered at will using convenient tools like
IE developer toolbar
Chrome dev toolbar
FireBug
Opera DragonFly
It's just making things too easy. There could be all kinds of motives. You risk getting tampered (inconsistent/unclean) data in your database. You risk getting DoS-ed (e.g. by receiving crafted requests that bring down the server, or that just hold a request handler (thread) for a long time, resulting in connection timeouts).
This is scratching the surface only, because once your site isn't secure, there is no telling what that can be leveraged for.
By using filter_input_array(), you are doing some server-side validation (or sanitization, which amounts to the same thing in this case). Also, by hardcoding the recipient e-mail address, you've plugged the worst and most common hole in typical e-mail forms. Whether by luck or by design, it looks like those might actually be enough in this case.
In general, vulnerabilities in e-mail forms can be divided into two broad categories:
tricking the script into doing something other than sending e-mail, or
tricking it into sending e-mail to the wrong address (spam).
In your case, all you're doing with the user-provided data (as far as the code you've shown us goes, anyway) is passing it to mail(), which is hopefully free of major security holes. (But then again, this is PHP, so...) As for tricking your script into spamming the wrong recipient, there are (again) basically two ways to do that:
if the client can supply the recipient address, a spammer can just pass in whatever they want;
otherwise, they can use e-mail header injection to insert bogus recipients (and other content), generally by injecting line breaks into the data.
Fortunately, you've hardcoded your e-mail address into the script, and the sanitization method you've chosen (FILTER_SANITIZE_SPECIAL_CHARS) replaces line breaks with HTML character entities. So it looks like you might be safe from both of these attacks. Of course, someone could still use the form to send (more or less) arbitrary e-mails to you, but presumably that's a risk you're willing to take.
All that said, though....
I'm hardly infallible, and there might be potential security issues that I've missed in my analysis above. In general, the stricter you are in checking your input, the less likely you are to be vulnerable to unexpected attacks. For example, I'd very much second the suggestion given by others here to apply an extra validation step to the user-supplied e-mail address (and, more generally, to any data that might end up in the e-mail headers).
One consequence may be that if the browser's JavaScript is off, or an expert user uses Firebug or some other tool to change the behavior of the JavaScript validation, then you may get useless information.
Javascript might be disabled
A person could put anything in the mail header in your code.
That is a security risk
EDIT:
$post_data contains the items from the form, $email_address being one that makes up $mailheader. An attacker can use that to inject stuff into the mail header.
Related
I have a question about security. I have a website programmed with HTML, CSS, PHP, Javascript(jQuery)...
Throughout the website, there are several forms (particularly with radio buttons).
Once a user selects and fills out the form, it takes the value from the selected radio button and sends that to the server for processing. The server also takes the values and plugs them into a database.
My concern is this:
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button? I'm afraid people will be able to manually change the value of a radio button input prior to submitting the form. If they can indeed do that, it will entirely defeat the purpose of the script I am building.
I hope this makes sense.
Thank you!
John
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button?
You can't. You have no control over what gets sent to the server.
Test that the data meets whatever requirements you set for it before inserting it into the database. If it isn't OK, reject it and explain the problem in the HTTP response.
Any data sent from the browser to the server can be manipulated outside of your control, including form data, url parameters and cookies. Your PHP code must know what sets of values are valid and reject the request if it doesn't look sensible.
When sending user input to the database you will want to ensure that a malicious user-entered string can't modify the meaning of the SQL query. See SQL Injection. And when you display the user-entered data (either directly in the following response, or later when you read it back out of the database) ensure that you encode it properly to avoid a malicious user-entered string executing as unwanted javascript in the user's browser. See Cross-site scripting and the prevention cheat sheet
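A sketch of both points in PHP, assuming PDO is available (the DSN, table and column names are invented for the example):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=example', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$choice = isset($_POST['choice']) ? $_POST['choice'] : '';
// Bound as data, the value can never change the meaning of the SQL.
$stmt = $pdo->prepare('INSERT INTO survey_answers (choice) VALUES (:choice)');
$stmt->execute(array(':choice' => $choice));
// When echoing user data back, encode it so it cannot execute as script.
echo 'You picked: ' . htmlspecialchars($choice, ENT_QUOTES, 'UTF-8');
?>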
I'll go along with Quentin's answer on this.
Client-side validation should never stand alone, you'll need to have some sort of server-side validation of the input as well.
For most users, the client-side validation will save a round trip to the server, but as you both mention, there is no guarantee that "someone" wouldn't send wrong data.
Therefore the mantra should be: Always have server-side validation
I would say that client-side validation should be used solely for the user's convenience (e.g., to alert them that they have forgotten to fill in a required field) before they have submitted the form and have to wait for it to go to the server, be validated, and then have it sent back to them for fixing. What a pain. Better to have javascript tell you right there on the spot that you've messed something up.
Server-side validation is for security.
The others said it already, you can't prevent users from tampering with data being sent to your server (Firebug, TamperData plugins, self-made tampering proxies...).
So on the server side, you must treat your input as if there were no client validation at all.
Never trust user input that enters your application from an external source. Always validate it, sanitize it, escape it.
OWASP even started a stub page for the vulnerability Client-side validation - which is funny - client-side validation seems to have confused so many people and been the cause of so many security holes that they now consider it a vulnerability instead of something good.
We don't need to be that radical - client-side validation is still useful, but regard it simply as an aid to prevent the user from having to do a server roundtrip first before being told that the data is wrong. That's right, client-side validation is merely a convenience for the user, nothing more, let alone an asset to the security of your server.
OWASP is a great resource for web security. Have a look at their section on data validation.
Some advice worth quoting:
Where to include validation
Validation must be performed on every tier. However, validation should be performed as per the function of the server executing the code. For example, the web / presentation tier should validate for web related issues, persistence layers should validate for persistence issues such as SQL / HQL injection, directory lookups should check for LDAP injection, and so on.
Follow this rule without exception.
In this scenario, I'd recommend that you use values as keys, and look those up on the server side.
Also, consider issuing a nonce in a hidden field every time someone loads a form - this will make it a bit more difficult for someone to submit data without going through your form.
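A sketch of the nonce idea, assuming sessions and a reasonably recent PHP (hash_equals() and random_bytes()); the field and session key names are made up:
<?php
session_start();
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Reject submissions that don't carry back the token we issued.
    if (!isset($_POST['form_token'], $_SESSION['form_token'])
        || !hash_equals($_SESSION['form_token'], $_POST['form_token'])) {
        exit('Invalid form submission.');
    }
    unset($_SESSION['form_token']); // single use
    // ... validate and process the rest of the form here ...
} else {
    // Issue a fresh token and embed it in the form as a hidden field.
    $_SESSION['form_token'] = bin2hex(random_bytes(16));
    $token = htmlspecialchars($_SESSION['form_token'], ENT_QUOTES, 'UTF-8');
    echo '<form method="post">'
       . '<input type="hidden" name="form_token" value="' . $token . '">'
       . '<!-- the real form fields go here -->'
       . '<input type="submit" value="Send">'
       . '</form>';
}
?>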
If you have a lot of javascript, it's probably worth running it through an obfuscator - this not only makes it harder for hackers to follow your logic, it also makes scripts smaller and faster to load.
As always, there is no magic bullet to stop hacking, but you can try raising the bar enough to deter casual hackers, then worry about the ones who enjoy a challenge.
I really like the idea of validating forms client-side before doing so server-side. If the client's validation passes, I can use Javascript to submit the form.
However, I have heard that some specialized browser, like browsers for the visually impaired, don't support Javascript. Therefore, those users won't be able to submit my forms. Should I therefore avoid what I just thought of doing, or is it alright?
EDIT: (In response to answers): I guess I didn't explain that, but I was planning on doing server-side validation in addition to client-side. Sorry!
Thanks
Javascript is a nice touch to validation. It lets the user know right away that something is wrong, plus it minimises potential calls to the database.
If there are browsers out there that disable javascript for accessibility reasons, you shouldn't worry too much. That's what the server-side checking helps with.
So you should use both, and test with javascript turned on or off. NEVER use javascript as a sole validator - you could just turn javascript off in your browser and the POST data would go through!
You should do both client-side validation and server-side validation. Everything you catch with client-side validation is an opportunity to improve the user experience for your users and tell them exactly what is missing or wrong before they submit the form. If, for any reason, javascript is not enabled, you will still validate on the server (as you always should) and can return errors through the form response from the server if you have to.
So, it's always a good idea to use client-side validation if available.
Is client-side validation smart? Yes, clean input is better for performance than input that will error out.
Great UX? Yes, it's important for a user to get quick, relevant feedback.
Safe? No. Not at all. Hackers don't use your interface to hack your site.
More and more browsers can be site-selective about running JS.
Lastly, if you are concerned about equal access, your best bet is to build accessible versions of the site.
Client side validation often improves user experience, as the user can immediately see whether his data is valid or not.
If it is some simple validation, like pattern matching or length checking for passwords, definitely do it. But of course it is not a substitution for server side validation, it is not a security means in any way. Never trust user input.
Integrate the client side validation in an unobtrusive way, so that form submission still works if JS is turned off.
"Both And" is the answer. Validate client side as a convenience and as a way to better the user experience, but you should always validate server side.
Browsers without JavaScript won't execute the JavaScript at all, so they will still be able to submit your form. Don't worry.
Client-side validation is done by intercepting the normal submit event and returning true or false based on whether the form is valid. This way, when JavaScript is not enabled, the submission is not intercepted and proceeds as normal.
It is one of the easiest things to degrade gracefully, fortunately :)
Not sure we can say it's smart to handle form "control" before submitting: this is "only" client comfort, as these controls are just not valid from the security standpoint. So this adds coding effort for no added value from the security perspective, but it does add client comfort. And THAT is smart.
The simple way:
No client-side control at all, only server-side. No need for JS to be enabled on the client side.
This is the layer that must always be in place and fully secure.
The intermediate way:
Implementing the simple way and adding some JavaScript "controls" on top, "hand coded" or using JS libraries. This is the tedious way, as it adds a layer on top of the existing server core code and generally means some server-side code changes or refactoring. So this is the worst way, in my opinion. But it is a good way to learn and understand client-server exchanges. Painful but useful.
The best way:
Base all your efforts on server-side validation, but make sure, from the very start of coding, that you can also embed the "nice to have", e.g. the nice client-side "controls". This means you have to think about your code architecture before writing any line. How to do that? Use Ajax-driven forms generated on the server side, ideally with dedicated PHP form classes. For example, Zend Framework provides this kind of possibility using either Dojo or jQuery.
It's always better to have "cleaner" data passed to the server.
It prevents errors and malicious data.
I'm currently writing a web application which uses forms and PHP $_POST data (so far so standard! :)). However, (and this may be a noob query) I've just realised that, theoretically, if someone put together an HTML file on their computer with a fake form, put in the action as one of the scripts that are used on my site and populate this form with their own random data, couldn't they then submit this data into the form and cause problems?
I sanitise data etc so I'm not (too) worried about XSS or injection style attacks, I just don't want someone to be able to, for instance, add nonsense things to a shopping cart etc etc.
Now, I realise that for some of the scripts I can write in protection such as only allowing things into a shopping cart that can be found in the database, but there may be certain situations where it wouldn't be possible to predict all cases.
So, my question is - is there a reliable way of making sure that my php scripts can only be called by Forms hosted on my site? Perhaps some Http Referrer check in the scripts themselves, but I've heard this can be unreliable, or maybe some htaccess voodoo? It seems like too large a security hole (especially for things like customer reviews or any customer input) to just leave open. Any ideas would be very much appreciated. :)
Thanks again!
http://en.wikipedia.org/wiki/Cross-site_request_forgery
http://www.codewalkers.com/c/a/Miscellaneous/Stopping-CSRF-Attacks-in-Your-PHP-Applications/
http://www.owasp.org/index.php/PHP_CSRF_Guard
There exists a simple rule: Never trust user input.
All user input, no matter what the case, must be verified by the server. Forged POST requests are the standard way to perform SQL injection attacks or other similar attacks. You can't trust the referrer header, because that can be forged too. Any data in the request can be forged. There is no way to make sure the data has been submitted from a secure source, like your own form, because any and all possible checks require data submitted by the user, which can be forged.
The one and only way to defend yourself is to sanitize all user input. Only accept values that are acceptable. If a value, like an ID refers to a database entity, make sure it exists. Never insert unvalidated user input into queries, etc. The list just goes on.
While it takes experience to recognize all the different cases, here are the most common ones that you should try to watch out for (a short sketch follows the list):
Never insert raw user input into queries. Either escape it using functions such as mysql_real_escape_string() or, better yet, use prepared queries through an API like PDO. Using raw user input in queries can lead to SQL injections.
Never output user inputted data directly to the browser. Always pass it through functions like htmlentities(). Even if the data comes from the database, you shouldn't trust it, as the original source for all data is generally from the user. Outputting data carelessly to the user can lead to XSS attacks.
If any user submitted data must belong to a limited set of values, make sure it does. For example, make sure that any ID submitted by the user exists in the database. If the user must select value from a drop down list, make sure the selected value is one of the possible choices.
Any and all input validation, such as allowed letters in usernames, must be done on the server side. Any form validation on the client, such as JavaScript checks, is merely for the convenience of the user. It does not provide any data security to you.
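A sketch that ties the first and third points together, with invented DSN and table names (the submitted ID must exist in the database, and the lookup itself uses a prepared query):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// Only accept an integer ID at all.
$product_id = filter_input(INPUT_POST, 'product_id', FILTER_VALIDATE_INT);
if ($product_id === false || $product_id === null) {
    exit('Invalid product.');
}
// Only accept an ID that the shop actually offers.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM products WHERE id = :id');
$stmt->execute(array(':id' => $product_id));
if ((int) $stmt->fetchColumn() === 0) {
    exit('Unknown product.');
}
// Still encode it when echoing it back (second point above).
echo 'Added product ' . htmlspecialchars((string) $product_id, ENT_QUOTES, 'UTF-8') . ' to your cart.';
?>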
Take a look at the nettuts tutorial on the topic.
Just updating my answer with a previously accepted answer also in the topic.
The answer to your question is short and unambiguous:
is there a reliable way of making sure that my php scripts can only be called by Forms hosted on my site?
Of course not.
In fact, NO scripts are called by forms hosted on your site. All scripts are called by forms hosted in the client's browser.
Knowing that will help to understand the matter.
it wouldn't be possible to predict all cases.
On the contrary, it would.
All good sites do that.
There is nothing hard in that, though.
There are a limited number of parameters each form contains, and you just have to check every parameter - that's all.
As you have said ensuring that products exist in the database is a good start. If you take address information with a zip or post code make sure it's valid for the city that is provided. Make countries and cities a drop down and check that the city is valid for the country provided.
If you take email addresses make sure that they are valid email address and maybe send a confirmation email with a link before the transaction is authorised. Same for phone numbers (confirmation code in a text), although validating a phone number may be hard.
Never store credit card or payment details if it can be avoided at all (I'm inclined to believe that there are very few situations where it is needed to store details).
Basically the rule is make sure that all inputs are what you are expecting them to be. You're not going to catch everything (names and addresses will have to accept virtually any character) but that should get most of them.
I don't think that there is any way of completely ensuring that it is your form that they are coming from. HTTP Referrer and perhaps hidden fields in your form may help but they are not reliable. All you can do is validate everything as strictly as possible.
I don't see the problem as long as you trust your way of sanitizing data... and you say you sanitize it.
You do know about http://php.net/manual/en/function.strip-tags.php , http://www.php.net/manual/en/function.htmlentities.php and http://www.php.net/manual/en/filter.examples.validation.php
right?
I have a form that sends info into a database table. I have it checked with a Javascript but what is the best way to stop spammers entering http and such into the database with PHP when Javascript is turned off?
You could implement a CAPTCHA on the form:
http://en.wikipedia.org/wiki/CAPTCHA
Edit: Also definitely verify form data on the server side and check for html tags etc as usual, but the CAPTCHA should help against automated spam attacks.
Never trust the client. Always validate all data on server side. JavaScript for form validation can just be an additional feature. You could start with basic PHP functions to check if the content contains certain strings you don't like, eg. "http://".
if (strpos($_POST['message'], 'http://') !== false) { /* refuse */ }
You can use CSRF protection to prevent spammers, I have found it quite effective.
What it is and how it works
Another sneaky method is to include a "honeypot" field - a hidden field that should never be submitted with content. If it's filled, you know it's spam. Neither of these methods require an annoying CAPTCHA.
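A sketch of the honeypot idea; the decoy field name "website" is made up, and the field is hidden from humans with CSS so only bots fill it in:
<?php
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // A non-empty decoy field means the submission came from a bot.
    if (!empty($_POST['website'])) {
        exit; // silently drop it
    }
    // ... normal server-side validation and processing continues here ...
}
?>
<form method="post">
    <!-- real fields go here -->
    <div style="display:none" aria-hidden="true">
        <label>Leave this empty: <input type="text" name="website"></label>
    </div>
    <input type="submit" value="Send">
</form>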
There are two things to consider which should be implemented in parallel (maybe there's more).
Captcha (as mentioned before)
Verify your data on the server side! You wrote that you do this with JavaScript. That is good, but the very same verification process should also be written in PHP.
Well, for the CAPTCHA you'll have to do its verification on the server side anyway. But even if you decide not to implement a CAPTCHA, you should still do data verification on the server side.
I suggest using the htmlentities() function before doing your insert.
Obviously your insert should be done using parameterized queries to interact with the database as well. A CAPTCHA is certainly an option, but it serves more to limit how often someone can post, not what they can post. Use HTML escaping (again, the htmlentities() function) to prevent the user from inputting things you don't want.
What ways are there for detecting exploits in PHP/MySQL web applications (checking for certain characters or pieces of codes in the GET, POST, COOKIE arrays / using a library with a database that has all the patterns for common exploits, if any exist?) and how should I proceed when one is detected?
For example, if someone tried to find a SQL injection in my PHP/MySQL web application using the GET request method, should I store the action performed by the user in the database, have the application send me an e-mail, IP ban the user and display him/her a message "Sorry, but we have detected a harmful action from your account that will be reviewed. Your account has been disabled and certain features may be disabled from your IP address. If this is a mistake, please e-mail us with all the details."
Thanks.
Three things come to mind:
defensive coding: sanitize all input, use prepared SQL statements, and use Suhosin
increase the security of your site by breaking into it with a vulnerability scanner
log hacking attempts with an Intrusion Detection System
If you feel a full-fledged IDS is too much, try PHP IDS, as it does pretty much what you are asking for out of the box. Note that detecting intrusions at the PHP level might already be too late to prevent an attack, though.
In case of a successful intrusion, I guess your best bet is taking the server offline and see what damage was done. You might have to consider hiring someone to do a forensic analysis of the machine in case you need to collect legally usable evidence.
If you feel you need to react to unsuccessful intrusion attempts and got the malicious user's IP, find out the ISP and inform him with as much details of the intrusion attempt as possible. Most ISPs have an abuse contact for these cases.
Your question is twofold and I'll answer the second part.
Log everything but do not ban or display any message. It will be embarrassing in case of a false positive. As a general rule, try to build an application that can deal with any sort of user input without a problem.
Just use strip_tags() on all $_REQUEST and $_COOKIE vars to take care of code showing up in these vars. As for SQL, you might have to write up a query-like regex or something, but this shouldn't be an issue, as you should always mysql_real_escape_string() all variables in your queries. Try something like this, though:
function look_for_code_and_mail_admin($str) {
    // Tags that are allowed to remain in the input.
    $allowed_tags = "<a><br><p>";
    // If stripping disallowed tags changes the string, it contained markup
    // we don't allow, so alert the admin by email.
    if ($str != strip_tags($str, $allowed_tags)) {
        $send_email_to = "some#bodys.email";
        $subject = "some subject";
        $body = "email body";
        mail($send_email_to, $subject, $body);
    }
    return $str;
}
Um, I can't remember the last time I saw a site that tries to log SQL injection attacks that I wasn't able to penetrate...
You don't need to worry about whether someone is attacking the site, as it is subjective at best whether something is an attack or not. What if the site base64-encodes some values and decodes them before it uses them? Your IDS is not going to catch that. What if a user wants to post a snippet of code and it gets detected as an exploit because it contains SQL? This is such a waste of time... If you really need to know if someone's attacking you, just install an IDS on a separate machine with read-only access to the incoming traffic. I say separate machine, because many IDS are vulnerable themselves, and will only make the situation worse.
Use standard secure programming methodologies: use parameterized SQL queries or an ORM.
Seems like too much work with the email bit and everything to me. Aside from that, I like to run around on sites I commonly use and try to find injectable points so I can warn the administrator about it. Would you IP ban me for that?
I suggest hunting down the exploitable parts yourself and sanitizing where necessary. Acunetix has a very very good program for that. If you don't feel like shelling out the giant price for Acunetix, there are some very good firefox addons called XSS Me and SQL Inject me you might want to look into.