I have a membership website where users can embed their own code into their profiles. I would like to allow them to include embed codes on their profile, such as YouTube and JavaScript embed codes.
I noticed JsFiddle.net can do this. Does anybody know how to duplicate this security?
Thank you for any help!
Set up a completely separate domain name (e.g. "exampleusercontent.com") exclusively for user-submitted HTML, CSS, and JavaScript. Do not allow this content to be loaded through your main domain name. Then embed the user content in your pages using iframes.
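As a rough sketch of what the embedding side might look like (a hedged example using PHP 7+ syntax; the "exampleusercontent.com" domain is the placeholder from above, and the profile_id parameter and URL layout are assumptions):
<?php
// Sketch: render user-submitted content from the separate domain suggested above.
// "exampleusercontent.com" is the placeholder domain; the profile_id parameter
// and URL layout are assumptions for illustration only.
function renderUserContentFrame(int $profileId): string
{
    $src = 'https://exampleusercontent.com/profile-embed.php?profile_id=' . $profileId;
    // The sandbox attribute adds a second layer on top of the separate origin:
    // allow-scripts lets the embed run JS, but without allow-same-origin it
    // cannot read the cookies or DOM of your main site.
    return '<iframe src="' . htmlspecialchars($src, ENT_QUOTES, 'UTF-8') . '"'
         . ' sandbox="allow-scripts" width="100%" height="400"></iframe>';
}
echo renderUserContentFrame(42);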
If you need tighter integration than simple framing, window.postMessage() may help, allowing scripts in different frames to communicate with each other in a controlled manner.
Alternatively, Google Caja is an open-source compiler for sandboxing third-party JavaScript, although vulnerabilities have been discovered in it from time to time.
You may not want to rely on Caja as your sole layer of defense. After all, Facebook did give up on a similar system (called FBML/FBJS) in favor of the iframe sandboxing approach.
I assume you want security from malicious code injection, SQL injection, etc.
When the form is submitted, you'll need to verify the input server side. As long as you validate it there against what you expect it to be, you should be okay. Any validation you do in JavaScript should only be there to help users fill in the form and see where they've gone wrong; in other words, it is assistance, not protection.
I've used the YouTube API in the past to validate video entries. I'm not sure whether Vimeo has an API, but I'd be surprised if it didn't.
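For example, a minimal server-side check of a submitted YouTube URL might look like the sketch below (the accepted URL patterns are an assumption; a call to the YouTube Data API can additionally confirm that the video really exists):
<?php
// Sketch: validate a user-submitted YouTube URL on the server before storing it.
function extractYouTubeId(string $url): ?string
{
    $patterns = [
        '~^https?://(?:www\.)?youtube\.com/watch\?v=([A-Za-z0-9_-]{11})~',
        '~^https?://youtu\.be/([A-Za-z0-9_-]{11})~',
    ];
    foreach ($patterns as $pattern) {
        if (preg_match($pattern, $url, $matches)) {
            return $matches[1];        // the 11-character video ID
        }
    }
    return null;                       // reject anything that does not match
}

$videoId = extractYouTubeId($_POST['youtube_url'] ?? '');
if ($videoId === null) {
    http_response_code(400);
    exit('Not a valid YouTube URL.');
}
// Build the embed markup from the whitelisted ID only, never from the raw input.
echo '<iframe src="https://www.youtube.com/embed/' . $videoId . '"></iframe>';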
I am using jQuery and JavaScript extensively in my new project, including for form validation, because I don't want to burden the server with PHP validation. So I am restricting my site from people who have disabled JavaScript in their browsers. I am trying to redirect them using a meta tag:
<meta http-equiv="refresh" content="2; URL=../../enablejs.html">
I assume that this is safe, because if JavaScript is not enabled they will not be able to access my site.
But I still have doubts about this and need your advice. Is it completely safe? If not, what are the areas I need to concentrate on?
This is a terrible, terrible idea.
because I don't want to burden the server with PHP validation
You mean, you don't want to burden yourself with implementing it :)
I can relate. Everyone hates doing stuff twice. But server side validation is not a negotiable extra; client side validation can be easily circumvented and is for user convenience only. Server side validation is always needed for safety and security.
Apart from it being a bad idea, there is no way of reliably excluding users who have JavaScript turned off. JavaScript runs on client side, and its presence or non-presence can be easily faked to the server.
Client-side anything is never ever safe. You always need server-side validation. It's not a "burden", it's a necessity. I don't even need your website to submit (unvalidated) data to your server, in the end it all just boils down to HTTP requests. If you do not validate everything the user does on the server, you have no security.
I am using jQuery and JavaScript extensively in my new project, including for form validation, because I don't want to burden the server with PHP validation.
That shouldn't save a significant burden. It should give faster feedback to users though, which is good.
So I am restricting my site from people who have disabled JavaScript in their browsers.
That is a waste of time. The proportion of those submissions that will come from users with JS disabled will be tiny.
I am trying to redirect them using a meta tag
That's a very user hostile thing to do.
I assume that this is safe, because if JavaScript is not enabled they will not be able to access my site.
If you mean that it avoids the need to write server-side validation routines, then you are wrong. If someone wants to attack the site (rather than submit bad data by accident), then they can construct HTTP requests manually.
No, that's not safe. Client-side validation is nowhere near safe. Even with JavaScript enabled, anyone can bypass your validation. Using the Chrome console, I can alter any text in your input boxes, or any other input, without your validation noticing it.
Use server side validation or you're screwed.
No, this is not safe. Never rely on the browser for form validation. Form validation in the browser should only be to improve user experience, not to protect your data. You need to add some PHP validation.
Also, are people who have JavaScript disabled not supposed to use your site? You should make JavaScript degrade gracefully so that your site is still usable without it.
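To illustrate the server-side validation mentioned above, here is a minimal sketch (the "email" and "age" field names and the rules are made up for illustration):
<?php
// Sketch: server-side validation that mirrors the client-side checks.
// The field names and rules are examples only.
$errors = [];

$email = filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL);
if ($email === false) {
    $errors[] = 'Please enter a valid email address.';
}

$age = filter_var(
    $_POST['age'] ?? '',
    FILTER_VALIDATE_INT,
    ['options' => ['min_range' => 13, 'max_range' => 120]]
);
if ($age === false) {
    $errors[] = 'Age must be a number between 13 and 120.';
}

if ($errors) {
    // Re-display the form with the errors; never process invalid input.
    foreach ($errors as $error) {
        echo htmlspecialchars($error, ENT_QUOTES, 'UTF-8'), '<br>';
    }
    exit;
}
// Only the validated $email and $age values are used from here on.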
Using only client-side validation is a recipe for disaster. Never, ever trust client input. Client input includes GET (the URL included), POST, Flash ...
All input should be validated by a server-side scripting language like PHP, ASP.NET, Java ...
If you use PHP, then check http://www.phpclasses.org/ and look for form validation scripts and cross-site scripting (XSS) protection. Or use the validation classes offered in frameworks like Zend or CodeIgniter.
http://en.wikipedia.org/wiki/Cross-site_scripting
I have a question about security. I have a website programmed with HTML, CSS, PHP, and JavaScript (jQuery)...
Throughout the website, there are several forms (particularly with radio buttons).
Once a user selects and fills out the form, it takes the value from the selected radio button and sends that to the server for processing. The server also takes the values and plugs them into a database.
My concern is this:
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button? I'm afraid people will be able to manually change the value of a radio button input prior to submitting the form. If they can indeed do that, it will entirely defeat the purpose of the script I am building.
I hope this makes sense.
Thank you!
John
How can I prevent someone from using a developer tool/source editor (such as Google Chrome's Debugging/Developer Tool module) and changing the value of the radio button manually, prior to hitting the submit button?
You can't. You have no control over what gets sent to the server.
Test that the data meets whatever requirements you set for it before inserting it into the database. If it isn't OK, reject it and explain the problem in the HTTP response.
Any data sent from the browser to the server can be manipulated outside of your control, including form data, url parameters and cookies. Your PHP code must know what sets of values are valid and reject the request if it doesn't look sensible.
When sending user input to the database you will want to ensure that a malicious user-entered string can't modify the meaning of the SQL query. See SQL Injection. And when you display the user-entered data (either directly in the following response, or later when you read it back out of the database) ensure that you encode it properly to avoid a malicious user-entered string executing as unwanted javascript in the user's browser. See Cross-site scripting and the prevention cheat sheet
I'll go along with Quentin's answer on this.
Client-side validation should never stand alone, you'll need to have some sort of server-side validation of the input as well.
For most users, client-side validation will save a round trip to the server, but as you both mention, there is no guarantee that "someone" won't send wrong data.
Therefore the mantra should be: Always have server-side validation
I would say that client-side validation should be used solely for the user's convenience (e.g., to alert them that they have forgotten to fill in a required field) before they have submitted the form and have to wait for it to go to the server, be validated, and then have it sent back to them for fixing. What a pain. Better to have javascript tell you right there on the spot that you've messed something up.
Server-side validation is for security.
The others said it already, you can't prevent users from tampering with data being sent to your server (Firebug, TamperData plugins, self-made tampering proxies...).
So on the server side, you must treat your input as if there were no client validation at all.
Never trust user input that enters your application from an external source. Always validate it, sanitize it, escape it.
OWASP even started a stub page for the vulnerability Client-side validation - which is funny - client-side validation seems to have confused so many people and been the cause of so many security holes that they now consider it a vulnerability instead of something good.
We don't need to be that radical - client-side validation is still useful, but regard it simply as an aid to prevent the user from having to do a server roundtrip first before being told that the data is wrong. That's right, client-side validation is merely a convenience for the user, nothing more, let alone an asset to the security of your server.
OWASP is a great resource for web security. Have a look at their section on data validation.
Some advice worth quoting:
Where to include validation
Validation must be performed on every tier. However, validation should be performed as per the function of the server executing the code. For example, the web / presentation tier should validate for web related issues, persistence layers should validate for persistence issues such as SQL / HQL injection, directory lookups should check for LDAP injection, and so on.
Follow this rule without exception.
In this scenario, I'd recommend that you use values as keys, and look those up on the server side.
Also, consider issuing a nonce in a hidden field every time someone loads a form - this will make it a bit more difficult for someone to submit data without going through your form.
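A rough sketch of both ideas (the plan names, prices and field names are made up for illustration):
<?php
// Sketch of both suggestions above: values-as-keys plus a one-time hidden nonce.
session_start();

// 1. Values as keys: the client only submits a key; the real data is looked
//    up server-side, so a tampered radio value simply will not match anything.
$plans = [
    'basic'   => 9.99,
    'premium' => 19.99,
];
$planKey = $_POST['plan'] ?? '';
if (!array_key_exists($planKey, $plans)) {
    exit('Invalid selection.');          // tampered or missing radio value
}
$price = $plans[$planKey];               // trusted, server-side value

// 2. One-time nonce that was issued in a hidden field when the form was rendered.
if (!isset($_SESSION['form_nonce'])
    || !hash_equals($_SESSION['form_nonce'], $_POST['nonce'] ?? '')) {
    exit('Form token invalid or expired.');
}
unset($_SESSION['form_nonce']);          // a nonce is only good once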
If you have a lot of javascript, it's probably worth running it through an obfuscator - this not only makes it harder for hackers to follow your logic, it also makes scripts smaller and faster to load.
As always, there is no magic bullet to stop hacking, but you can try raising the bar enough to deter casual hackers, then worry about the ones who enjoy a challenge.
I'm developing a website where people will be able to register and access different data via Ajax (powered by jQuery). This is all simple and I shall have no problems doing it. The issue is that the data shown via Ajax needs to be secure and not available to be parsed by remote scripts. I can encrypt the data with AES (in PHP) and decrypt it successfully in JavaScript, but the JavaScript code will always be visible to everyone (after login). I can use an obfuscator and JavaScript encryption, but both ways, even mixed, are not secure enough and can be decrypted. I would prefer to avoid SSL connections, since I am trying to prevent registered users from accessing the information, and an SSL connection would only prevent unregistered users from accessing the data.
Registered users will be able to earn money and are therefore very interested in cheating the code, which is why it has to be bulletproof.
Unfortunately, the system definitely needs Ajax (the whole working principle is based on Ajax). The ideal solution would be a way to save the encryption key in a place that can be written by PHP and accessed by JavaScript, but not by users, remote script parsers, etc.
Does anyone know a way to create a secure Ajax connection for this purpose?
I really appreciate all your help.
You want something that browsers do not do.
You've asked for: "The ideal solution would be a way to save the encryption key in a place that can be written by PHP and accessed by JavaScript, but not by users, remote script parsers, etc."
The design of the web browser and javascript engine in the browser is such that any Javascript that the web browser can execute can be seen by a human who wants to look at it, steal it, borrow it, whatever. Period. There is NO such place that can be accessed by Javascript, but not by users or remote script parsers. You will have to rethink how your app works if this is a problem. Most likely, you need to keep the secret stuff on the server and do more work on the server and less work on the client in order to protect what you want to protect. If you think about it, a browser is just a remote script parser so if you prevent remote script parsing, you prevent a browser. If you allow a browser, you allow a remote script parser.
You can obfuscate your Javascript to your heart's content if you want. That will make it a little more work for a human to understand and do something useful with it, but it will only be an additional obstacle that any determined and competent person can defeat if they really want to. If this secrecy is really important to you, then you need to rethink the design of the app so that secret information is not required in the browser and the browser just works as a display and interaction engine.
Just so I'm clear here. Any code that can be executed by a browser must, by definition, be something that any user or any tool can download and inspect. You can use SSL to protect data from snoopers in transport, but it ultimately has to be readable as Javascript for the browser to be able to execute it.
You can't do exactly what you want. It's like cheat-proof game design: you CAN make it HARDER, even MUCH harder, but NOT 100% secure. You've got to solve the problem from a different approach: for example, examine the actions at the server side (e.g. in a stateful manner) and try to detect any non-human behavior. But then it's only a matter of someone creating a realistic bot that mimics the behavior of humans. Encryption is for preventing third parties -- other than the server and the client -- from eavesdropping on or capturing data, NOT for hiding it from the client. I'm not saying give up on the whole thing, but try a different approach to secure the system. I want to help more, but I don't know exactly what you are trying to achieve.
Authentication is the only way to do it.
Just get your users to authenticate (log in) and send them the random seed and salt you've used to encrypt their data.
Without the seed/salt, even if a malicious user can decrypt your data, it will still be garbage.
If you want JavaScript to use a piece of data, then the client necessarily has that data.
If you don't want data to be re-used, set up a server-side system where each chunk of data is only valid once.
Proper authentication should solve all these problems.
I want the users to be able to see the data only when Ajax displays them
Then load the data only when Ajax gets it, and not before. Or only partially load the data and offload any sensitive work to the server.
I think the best practice is to make your code (production code) too complex to read and edit.
You should rename all your variables to single letters [a-z], and you should not declare any named functions; always use function(){} inside another function to make the logic more complex to follow.
The client can still see the code but can't do anything useful with it.
EDIT: I realize now that this is terrible advice.
I'm creating a PHP CMS, one that I hope will be used by the public. Security is a major concern and I'd like to learn from some of the popular PHP CMSs like WordPress, Joomla, Drupal, etc. What are some security flaws or vulnerabilities that they have had in the past that I can avoid in my application, and what strategies can I use to avoid them? What are other issues that I need to be concerned with that they perhaps didn't face as a vulnerability because they handled them correctly from the start? What additional security features or measures would you include, anything from minute details to system-level security approaches? Please be as specific as possible. I'm generally aware of most of the usual attack vectors, but I want to make sure that all the bases are covered, so don't be afraid to mention the obvious as well. Assume PHP 5.2+.
Edit: I'm changing this to a community wiki. Even though Arkh's excellent answer is accepted, I'm still interested in further examples if you have them.
Cross-Site Request Forgery (CSRF)
Description :
The basic idea is to trick a user into visiting a page where his browser will initiate a POST or GET request to the CMS you are attacking.
Imagine you know the email address of the administrator of a site powered by the CMS. Email him some funny webpage with whatever you want in it. In this page, you craft a form with the data used by the admin panel of the CMS to create a new admin user. Send that data to the website's admin panel, with the result loaded in a hidden iframe of your webpage.
Voilà, you've got your own administrator account.
How to prevent it :
The usual way is to include a random, short-lived (15 minutes to an hour) nonce in all your forms. When your CMS receives form data, it first checks whether the nonce is valid. If it is not, the data is not used.
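A rough sketch of that mechanism (random_bytes() needs PHP 7; on the PHP 5.2+ baseline from the question you would substitute another cryptographically secure source):
<?php
// Sketch of the CSRF nonce approach described above.
session_start();

// When rendering a form: create a token and remember when it was issued.
function csrfField(): string
{
    $_SESSION['csrf_token']   = bin2hex(random_bytes(32));
    $_SESSION['csrf_expires'] = time() + 900;   // valid for 15 minutes
    return '<input type="hidden" name="csrf_token" value="'
         . $_SESSION['csrf_token'] . '">';
}

// When receiving form data: refuse it unless the token matches and is still fresh.
function csrfCheck(): bool
{
    return isset($_SESSION['csrf_token'], $_POST['csrf_token'], $_SESSION['csrf_expires'])
        && time() < $_SESSION['csrf_expires']
        && hash_equals($_SESSION['csrf_token'], $_POST['csrf_token']);
}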
CMS examples :
CMS made simple
Joomla!
Drupal
ModX
More information :
On the Wikipedia page and on the OWASP project.
Bad password storing
Description :
Imagine your database gets hacked and published on something like WikiLeaks. Knowing that a big part of your users use the same login and password for a lot of websites, do you want those passwords to be easy to recover?
No. You need to mitigate the damage done if your database contents become public.
How to prevent it :
A first idea is to simply hash them, which is a bad idea because of rainbow tables (even if the hash is not MD5 but, for example, SHA-512).
Second idea: add a unique random salt before hashing, so the attacker has to brute-force each password individually. The problem is that an attacker can still compute a lot of hashes very fast.
So the current idea is to make hashing passwords slow: you don't care, because you don't do it often, but the attacker will cry when he goes from 1000 hashes generated per millisecond to 1.
To ease the process, you can use the phpass library, developed by a password guru.
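On PHP 5.5 and newer there is also the built-in password API, which does the slow, salted hashing for you (a sketch; phpass remains the choice for the older versions the question targets):
<?php
// Sketch using PHP's built-in password API (PHP 5.5+).
// password_hash() generates a unique random salt and uses a deliberately slow
// algorithm (bcrypt by default), which is exactly the "make it slow" idea above.
$hash = password_hash('correct horse battery staple', PASSWORD_DEFAULT);
// ... store $hash in the users table ...

// At login time, compare the submitted password against the stored hash.
$submitted = 'correct horse battery staple';
if (password_verify($submitted, $hash)) {
    echo "Password OK\n";
    // Rehash if the cost parameters have changed since the hash was stored.
    if (password_needs_rehash($hash, PASSWORD_DEFAULT)) {
        $hash = password_hash($submitted, PASSWORD_DEFAULT);
        // ... update the stored hash ...
    }
}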
CMS examples :
Joomla! : salted md5
ModX : md5
Typo3 : cleartext
Drupal : switched to phpass after this discussion.
More information :
The phpass page.
Cross Site Scripting (XSS)
Description
The goal of these attacks is to make your website display some script which will then be executed in your legitimate users' browsers.
There are two kinds: persistent and non-persistent. The first usually comes from something your users can save; the other relies on parameters passed in the request. Here is a non-persistent example:
<?php
if (!is_numeric($_GET['id'])) {
    die('The id ('.$_GET['id'].') is not valid');
}
?>
Now your attacker can just send links like http://www.example.com/vulnerable.php?id=<script>alert('XSS')</script>
How to prevent it
You need to filter everything you output to the client. The easiest way is to use htmlspecialchars if you don't want to let your users save any HTML. But when you do let them output HTML (either their own HTML or HTML generated from something else like BBCode), you have to be very careful. Here is an old example using the "onerror" event of the img tag: the vBulletin vulnerability. Or there is the old MySpace Samy worm.
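Applied to the snippet above, the fix is simply to escape the reflected value before echoing it:
<?php
// Same check as above, but the reflected value is escaped before output,
// so an injected <script> tag is rendered as harmless text.
if (!is_numeric($_GET['id'])) {
    die('The id (' . htmlspecialchars($_GET['id'], ENT_QUOTES, 'UTF-8') . ') is not valid');
}
?>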
CMS examples :
CMS made simple
Mura CMS
Drupal
ModX
More information :
You can check Wikipedia and OWASP. You will also find a lot of XSS vectors on the ha.ckers page.
Mail header injection
Description :
Mail headers are separated by the CRLF (\r\n) sequence. When you use user data to send mail (for example in the From: or To: headers), users can inject additional headers. With these, they can send anonymous mail from your server.
How to prevent it :
Filter all \n, \r, %0a and %0d characters out of user data before using it in mail headers.
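A minimal sketch of that filtering before handing user data to mail() (the "reply_to" field name and addresses are made up):
<?php
// Sketch: strip header-injection characters from user data before it goes
// anywhere near a mail header.
function sanitizeHeaderValue(string $value): string
{
    // Remove raw CR/LF as well as their URL-encoded forms.
    return str_replace(["\r", "\n", '%0a', '%0d', '%0A', '%0D'], '', $value);
}

$replyTo = sanitizeHeaderValue($_POST['reply_to'] ?? '');
$headers = "From: noreply@example.com\r\n"
         . 'Reply-To: ' . $replyTo . "\r\n";

mail('admin@example.com', 'Contact form', 'Message body here', $headers);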
CMS examples :
Jetbox CMS
More information :
Wikipedia is a good start as usual.
SQL Injection
Description :
The old classic. It happens when you build a SQL query using raw user input. If this input is crafted as needed, a user can do exactly what he wants.
How to prevent it :
Simple. Don't form SQL queries with user input. Use parameterized queries.
Consider anything that is not hard-coded by yourself as user input, whether it comes from the filesystem, your own database or a web service, for example.
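A minimal parameterized query with PDO (the connection details and table layout are placeholders):
<?php
// Sketch: a parameterized query with PDO; connection details are placeholders.
$pdo = new PDO(
    'mysql:host=localhost;dbname=cms;charset=utf8mb4',
    'db_user',
    'db_password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// The user input never becomes part of the SQL text; it is bound as data.
$stmt = $pdo->prepare('SELECT id, title FROM articles WHERE author_id = :author');
$stmt->execute([':author' => (int) ($_GET['author_id'] ?? 0)]);
$articles = $stmt->fetchAll(PDO::FETCH_ASSOC);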
CMS example :
Drupal
Joomla!
ModX
Pars CMS
More information :
Wikipedia and OWASP have really good pages on the subject.
HTTP response splitting
Description :
Like e-mail headers, HTTP headers are separated by the CRLF sequence. If your application uses user input to build response headers, an attacker can use this to craft their own.
How to prevent it :
Like for emails, filter \n, \r, %0a and %0d characters from user input before using it as part of a header. You can also urlencode your headers.
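For example, a redirect built from user input could be guarded like this (a sketch; the "page" parameter is made up):
<?php
// Sketch: guard a Location header built from user input. Stripping CR/LF stops
// the client from injecting extra headers, and rawurlencode() keeps the
// remaining value inert inside the URL.
$page = str_replace(["\r", "\n", '%0a', '%0d', '%0A', '%0D'], '', $_GET['page'] ?? 'home');
header('Location: /index.php?page=' . rawurlencode($page));
exit;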
CMS examples :
Drake CMS
Plone CMS
Wordpress
More information :
I'll let you guess a little as to where you can find a lot of info about this kind of attack: OWASP and Wikipedia.
Session hijacking
Description :
In this one, the attacker wants to use the session of another legitimate (and, hopefully, authenticated) user.
For this, he can either change his own session cookie to match the victim's, or he can make the victim use his (the attacker's) own session ID.
How to prevent it :
Nothing can be perfect here:
- If the attacker steals the victim's cookie, you can check that the user's session matches the user's IP. But this can render your site useless for legitimate users behind proxies that change IP often.
- If the attacker makes the victim use his own session ID, just use session_regenerate_id to change the session ID of a user whenever his rights change (login, logout, entering the admin part of the website, etc.); a typical login flow is sketched below.
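A minimal sketch (checkCredentials() is a hypothetical helper standing in for your own authentication logic):
<?php
// Sketch: regenerate the session ID whenever privileges change (here: login),
// so a session ID planted on the victim beforehand becomes worthless.
session_start();

if (checkCredentials($_POST['username'] ?? '', $_POST['password'] ?? '')) {
    session_regenerate_id(true);          // true = delete the old session data
    $_SESSION['user'] = $_POST['username'];
}

// Hypothetical placeholder for your own authentication logic.
function checkCredentials(string $user, string $pass): bool
{
    return false;   // look the user up and verify the password hash here
}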
CMS examples :
Joomla! and Drupal
Zen Cart
More information :
Wikipedia page on the subject.
Other
User DoSing: if you prevent brute-forcing of login attempts by locking the usernames that were tried rather than the IP the attempts come from, anyone can lock out all your users in two minutes. Same thing when generating new passwords: don't disable the old one until the user confirms the new one (by logging in with it, for example).
Using user input to do something on your filesystem. Filter this ruthlessly. This concerns the use of include and require on files whose path is built in part from user input; a whitelist sketch follows at the end of this answer.
Using eval, system, exec or anything of that kind with user input.
Don't put files you don't want to be web accessible in a web-accessible directory.
You have a lot of things you can read on the OWASP page.
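On the filesystem point above, a hedged sketch of the whitelist approach for user-influenced include paths (the page names and file layout are made up):
<?php
// Sketch: never build an include path directly from user input; map a request
// parameter onto an explicit whitelist instead.
$pages = [
    'home'    => 'pages/home.php',
    'about'   => 'pages/about.php',
    'contact' => 'pages/contact.php',
];

$requested = $_GET['page'] ?? 'home';

// Only ever include a file we explicitly listed; anything else falls back.
$file = $pages[$requested] ?? $pages['home'];
include __DIR__ . '/' . $file;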
I remember a rather funny one from phpBB. The autologin cookie contained a serialized array containing a userId and an encrypted password (no salt). Change the password to a boolean with value true and you could log in as anyone you wanted to be. Don't you love weakly typed languages?
Another issue phpBB had was in a regular expression for highlighting search keywords that used a callback (the e modifier), which enabled you to execute your own PHP code - for example, system calls on insecure systems, or simply outputting the config file to get the MySQL login/password.
So to sum this story up:
Watch out for PHP being weakly typed (md5("secretpass") == true); see the sketch below.
Be careful with all code that could be used in a callback (or worse, eval).
And of course there are the other issues already mentioned before me.
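A short sketch of the loose-comparison trap behind that phpBB cookie bug:
<?php
// In PHP, any non-empty string loosely equals true, so a boolean true smuggled
// into the serialized cookie "matched" whatever password hash was stored.
$storedHash = md5('secretpass');

var_dump($storedHash == true);    // bool(true)  - the phpBB-style check passes
var_dump($storedHash === true);   // bool(false) - strict comparison catches it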
Another application level security issue that I've seen CMSes deal with is insufficiently authorizing page or function level access. In other words, security being set by only showing links when you are authorized to view those links, but not fully checking that the user account is authorized to view the page or use the functionality once they are on the page.
In other words, an admin account has links displayed to go to user management pages. But the user management page only checks that the user is logged in, not that they are logged in and admin. A regular user then logs in, manually types in the admin page URI, then has full admin access to the user management pages and makes their account into an admin account.
You'd be surprised how many times I've seen things like that even in shopping cart applications where user CC data is viewable.
The biggest one that so many people seem to either forget or not realise is that anyone can post any data to your scripts, including cookies and sessions etc. And don't forget, just because a user is logged in, doesn't mean they can do any action.
For example, if you had a script that handles the adding/editing of a comment, you might have this:
if ( userIsLoggedIn() ) {
    saveComment( $_POST['commentid'], $_POST['commenttext'] );
}
Can you see what's wrong? You checked that the user is logged in, but you didn't check if the user owns the comment, or is able to edit the comment. Which means any logged-in user could post a comment ID and content and edit others' comments!
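A sketch of the missing check (commentOwnedBy() and currentUserId() are hypothetical helpers, alongside the userIsLoggedIn() and saveComment() functions from the snippet above):
// Sketch: authorize the action, not just the login. commentOwnedBy() and
// currentUserId() are hypothetical helpers for illustration.
if ( userIsLoggedIn() ) {
    $commentId = (int) ($_POST['commentid'] ?? 0);

    // Check that this particular user owns this particular comment
    // (or has a moderator role) before touching it.
    if ( commentOwnedBy($commentId, currentUserId()) ) {
        saveComment( $commentId, $_POST['commenttext'] ?? '' );
    } else {
        http_response_code(403);   // logged in, but not allowed to edit this comment
    }
}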
Another thing to remember when providing software to others is that server setups vary wildly. When data is posted you may want to do this, for example:
if (get_magic_quotes_gpc()) {
    $var = stripslashes($_POST['var']);
} else {
    $var = $_POST['var'];
}
So so many..
A number of answers here list specific vulnerabilities they remember, or generic "things I worry about when writing a webapp", but if you want a reasonably reliable list of the majority of historically reported vulnerabilities, you could do a lot worse than to search the National Vulnerability Database.
There are 582 vulnerabilities reported in Joomla or Joomla add-ons, 199 for WordPress and 345 for Drupal for you to digest.
For generic understanding of common webapp vuls, the OWASP Top Ten project has recently been updated and is an essential read for any web developer.
A1: Injection
A2: Cross-Site Scripting (XSS)
A3: Broken Authentication and Session Management
A4: Insecure Direct Object References
A5: Cross-Site Request Forgery (CSRF)
A6: Security Misconfiguration
A7: Insecure Cryptographic Storage
A8: Failure to Restrict URL Access
A9: Insufficient Transport Layer Protection
A10: Unvalidated Redirects and Forwards
Four big ones in my mind:
- using exec on untrusted data/code (or in general)
- include-ing files from remote URLs for local execution
- enabling register_globals, so that GET and POST variables are automatically assigned to variables
- not escaping data entered into the DB / allowing SQL injection attacks (usually happens when not using a DB API layer)
Disallow POST requests from other domains/IPs so bots can't log in or submit forms.
People, the biggest security breach is human error. Don't just trust code: review it. You need a dedicated team that reviews anything added as extra code to your application. The CMS's problem is code coming in from outside: WordPress, Drupal, Joomla and other popular CMSs are, as default installations, actually in a very good, secure state. The problem comes when you let people add extra code to your application without a good review (or better, without penetration testing). This is where WordPress and Joomla show their weakness: there are so many plugin and theme developers, so many approvals, and hundreds of outdated plugins and themes out there... So IMHO, if you can build a strong team, a good security plan, train your contributors and teach them how to code securely, in addition to all the other comments before mine, then you will be able to move on and say: hey, that's my CMS, and it's a bit more secure than all the other CMSs on the net ;)
Here's a potential pitfall for forum admins especially, but also anyone who codes up a form with a dropdown selector but doesn't validate that the posted response was actually one of the available options.
In college, I realized that the user's 'country' selector in phpBB had no such validation.
On our school forum, instead of 'United States' or 'Afghanistan', my country could be ANYTHING, no matter how silly or filthy. All I needed was an HTML POST form. It took my classmates a few days to figure out how I had done it, but soon all the 'cool kids' had funny phrases instead of countries displayed under their usernames.
Going to a geek college was awesome. :D