Hi, I am using the CKEditor plugin to format the text entered by the user. It was working properly, but recently I tried to improve the security of my website, so I applied the htmlentities() function everywhere echo is used.
The problem is that text coming from CKEditor is now displayed as raw HTML tags on my website because of the htmlentities() call. This is the output I am getting on my website:
<p><strong><span style="color:#008080">Superhero</span></strong></p>
So the look of the website is broken. I want to show the CKEditor text rendered as it was entered, but htmlentities() still has to be used.
I searched Stack Overflow and found many issues related to this, so I tried the following settings in my ckeditor/config.js:
config.entities = false;
config.basicEntities = false;
config.entities_greek = false;
config.entities_latin = false;
But it is not working in my code.
Thanks in advance!
Well, as far as I am aware there is no built-in way in PHP to distinguish between maliciously injected script tags and normal HTML tags.
This leads to the problem where you want to block malicious scripts, but not valid HTML tags.
When I have to accept user input that may contain HTML tags and display it again, I use HTML Purifier instead of htmlentities(). Another one I am aware of is SafeHTML.
However, there might be better solutions than this, and I am interested in hearing about them as well; unfortunately I haven't come across one.
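For the CKEditor case above, a rough sketch of the HTML Purifier approach could look like the following (assuming the library is installed via Composer as ezyang/htmlpurifier; the allowed tag and CSS property lists, and the $ckeditor_html variable, are only illustrations and should match what your editor actually produces):
require_once __DIR__ . '/vendor/autoload.php'; // assumed Composer install of ezyang/htmlpurifier

$config = HTMLPurifier_Config::createDefault();
// Keep only the markup CKEditor generates for you; everything else is stripped.
$config->set('HTML.Allowed', 'p,br,strong,em,ul,ol,li,a[href],span[style]');
$config->set('CSS.AllowedProperties', 'color,font-weight,text-decoration');

$purifier = new HTMLPurifier($config);

// $ckeditor_html is the raw HTML submitted from the CKEditor textarea.
echo $purifier->purify($ckeditor_html); // safe to echo without htmlentities()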
Related
I'm testing one of my web applications using Acunetix. To protect this project against XSS attacks, I used HTML Purifier. This library is recommended by most PHP developers for this purpose, but my scan results show that HTML Purifier cannot protect us from XSS attacks completely. The scanner found two ways of attacking by sending different harmful inputs:
1<img sRc='http://attacker-9437/log.php? (See HTML Purifier result here)
1"onmouseover=vVF3(9185)" (See HTML Purifier result here)
As you can see from the results, HTML Purifier could not detect these attacks. I don't know whether there is a specific option in HTML Purifier to handle such problems, or whether it is really unable to detect these methods of XSS attack.
Do you have any idea? Or any other solution?
(This is a late answer since this question is becoming the place duplicate questions are linked to, and previously some vital information was only available in comments.)
HTML Purifier is a contextual HTML sanitiser, which is why it seems to be failing on those tasks.
Let's look at why in some detail:
1<img sRc='http://attacker-9437/log.php?
You'll notice that HTML Purifier closed this tag for you, leaving only an image injection. An image is a perfectly valid and safe tag (barring, of course, current image library exploits). If you want it to throw away images entirely, consider adjusting the HTML Purifier whitelist by setting HTML.Allowed.
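For example, a whitelist with no img tag at all might look roughly like this (the exact tag list is up to you):
// Hypothetical whitelist: basic formatting and links only, no <img>.
$config = HTMLPurifier_Config::createDefault();
$config->set('HTML.Allowed', 'p,b,strong,i,em,a[href],ul,ol,li');
$purifier = new HTMLPurifier($config);
$clean = $purifier->purify($dirty_html);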
That the image from the example is now loading a URL that belongs to an attacker, thus giving the attacker the IP of the user loading the page (and nothing else), is a tricky problem that HTML Purifier wasn't designed to solve. That said, you could write a HTML Purifier attribute checker that runs after purification, but before the HTML is put back together, like this:
// a bit of context
$htmlDef = $this->configuration->getHTMLDefinition(true);
$image = $htmlDef->addBlankElement('img');
// HTMLPurifier_AttrTransform_CheckURL is a custom class you've supplied,
// and checks the URL against a white- or blacklist:
$image->attr_transform_post[] = new HTMLPurifier_AttrTransform_CheckURL();
The HTMLPurifier_AttrTransform_CheckURL class would need to have a structure like this:
class HTMLPurifier_AttrTransform_CheckURL extends HTMLPurifier_AttrTransform
{
    public function transform($attr, $config, $context) {
        $destination = $attr['src'];
        if (is_malicious($destination)) {
            // ^ is_malicious() is something you'd have to write
            $this->confiscateAttr($attr, 'src');
        }
        return $attr;
    }
}
Of course, it's difficult to do this 'right':
if this is a live check with some web-service, this will slow purification down to a crawl
if you're keeping a local cache you run risk of having outdated information
if you're using heuristics ("that URL looks like it might be malicious based on indicators x, y and z"), you run risk of missing whole classes of malicious URLs
1"onmouseover=vVF3(9185)"
HTML Purifier assumes the context your HTML is set in is a <div> (unless you tell it otherwise by setting HTML.Parent).
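As a quick sketch, changing that assumption looks like this (the 'span' value here is only an illustration):
$config = HTMLPurifier_Config::createDefault();
$config->set('HTML.Parent', 'span'); // purify as if the fragment will end up inside a <span>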
If you just feed it an attribute value, it's going to assume you're going to output this somewhere so the end-result looks like this:
...
<div>1"onmouseover=vVF3(9185)"</div>
...
That's why it appears not to be doing anything about this input - it's harmless in this context. You might not even want to strip this information in that context. I mean, we're talking about this snippet here on Stack Overflow, and that's valuable (and not causing a security problem).
Context matters. Now, if you instead feed HTML Purifier this snippet:
<div class="1"onmouseover=vVF3(9185)"">foo</div>
...suddenly you can see what it's made to do:
<div class="1">foo</div>
Now it's removed the injection, because in this context, it would have been malicious.
What to use HTML Purifier for and what not
So now you're left to wonder what you should be using HTML Purifier for, and when it's the wrong tool for the job. Here's a quick run-down:
you should use htmlspecialchars($input, ENT_QUOTES, 'utf-8') (or whatever your encoding is) if you're outputting into an HTML document and aren't interested in preserving HTML at all - HTML Purifier would be unnecessary overhead there, and it would let some things through
you should use HTML Purifier if you want to output into an HTML document and allow formatting, e.g. if you're a message board and you want people to be able to format their messages using HTML
you should use htmlspecialchars($input, ENT_QUOTES, 'utf-8') if you're outputting into an HTML attribute (HTML Purifier is not meant for this use-case)
You can find some more information about sanitising / escaping by context in this question / answer.
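Putting that run-down into code, here is a sketch of the two output contexts (the variable names are purely illustrative):
// HTML attribute context: escape, don't purify.
echo '<input type="text" value="' . htmlspecialchars($user_input, ENT_QUOTES, 'utf-8') . '">';

// HTML body context where formatting is allowed: purify.
$config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
echo '<div class="post-body">' . $purifier->purify($user_html) . '</div>';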
All HTML Purifier seems to be doing, from the brief look that I gave it, is HTML-encoding certain characters such as < and >. However, there are other means of invoking JS without using the normal HTML characters:
javascript:prompt(1) // In image tags
src="http://evil.com/xss.html" // In iFrame tags
Please review comments (by #pinkgothic) below.
Points below:
This would be HTML injection which does effectively lead to XSS. In this case, you open an <img> tag, point the src to some non-existent file which in turn raises an error. That can then be handled by the onerror handler to run some JavaScript code. Take the following example:
<img src=x onerror=alert(document.domain)>
The entry point for this is generally accomplished by prematurely closing another tag in an input. For example (URL-decoded for clarity):
GET /products.php?type="><img src=x onerror=prompt(1)> HTTP/1.1
This, however, is easily mitigated by HTML-escaping meta-characters (i.e. < and >).
Same as above, except this could be closing off an HTML attribute instead of a tag and inserting its own attribute. Say you have a page where you can upload the URL for an image:
<img src="$USER_DEFINED">
A normal example would be:
<img src="http://example.com/img.jpg">
However, by inserting the above payload, we cut the src attribute short (so it points to a non-existent file) and inject an onerror handler:
<img src="1"onerror=alert(document.domain)">
This executes the same payload mentioned above.
Remediation
This is heavily documented and tested in multiple places, so I won't go into detail. However, the following two articles are great on the subject and will cover all your needs:
https://www.acunetix.com/websitesecurity/cross-site-scripting/
https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
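As a minimal illustration of the attribute case above (assuming $user_url is the user-supplied image URL), quoting the attribute and escaping it with ENT_QUOTES neutralises the onerror injection:
// The payload 1"onerror=alert(document.domain) becomes harmless text inside the quoted, escaped attribute.
echo '<img src="' . htmlspecialchars($user_url, ENT_QUOTES, 'UTF-8') . '">';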
The Idea
I have a forum in my project and I'm trying to implement a WYSIWYG editor for posting questions and answers. The project is actually already completed, and now I'm adding the NicEdit WYSIWYG editor to all the required textareas. I'm using the following lines prior to database insertion of the posted data:
$post_body = $_POST['post_body'];
$post_body = nl2br(htmlspecialchars($post_body));
$post_body = mysqli_real_escape_string($db_conx,$post_body);
Problem
When I insert the posted data, I'm getting the following output:
Conclusion
I want to know how to insert the WYSIWYG-edited content into the database. Apart from this, please suggest some nice WYSIWYG plugins that are easy to embed in my project. Currently I'm using NicEdit, which is pretty easy to embed but has limited functionality.
Why don't you decode it with htmlspecialchars_decode() before echoing?
That may solve your problem.
Edit:
You could use strip_tags() and allow only a few markup tags.
Eg:
// Allow <p> and <a>
echo strip_tags($text, '<p><a>');
Thus you could strip risky tags like <script>
Don't do this:
$post_body = nl2br(htmlspecialchars($post_body));
That is how you create an HTML representation of plain text. You are asking people to submit HTML, not plain text.
(Do, however, implement a whitelist-based, HTML-aware XSS filter such as HTML Purifier.)
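A rough sketch of that approach for the NicEdit question above: run the submitted HTML through HTML Purifier, then insert it with a prepared statement ($db_conx comes from the question; the posts table and post_body column names are assumptions):
// Purify the submitted HTML instead of flattening it with nl2br(htmlspecialchars()).
$config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
$post_body = $purifier->purify($_POST['post_body']);

// Insert with a prepared statement instead of mysqli_real_escape_string().
$stmt = mysqli_prepare($db_conx, "INSERT INTO posts (post_body) VALUES (?)");
mysqli_stmt_bind_param($stmt, 's', $post_body);
mysqli_stmt_execute($stmt);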
I've been hunting around the net now for a few days trying to figure this out but getting conflicting answers.
Is there a library, class or function for PHP that securely sanitizes/encodes a string against XSS? It needs to be updated regularly to counter new attacks.
I have a few use cases:
text <script>alert(111)</script>
The most advanced library is http://htmlpurifier.org. It allows you to specify which tags you want to allow.
All you need to do is sanitize text/data before using it.
<?php
$given_text = '<script>alert("you are hacked")</script>';
//before using it
$given_text = htmlspecialchars($given_text);
//now the text will look like this:
//&lt;script&gt;alert(&quot;you are hacked&quot;)&lt;/script&gt;
?>
Note: cookies should also not be made accessible to scripts.
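That last note refers to the HttpOnly cookie flag; a quick example of setting it (PHP 7.3+ options-array syntax, the cookie name is illustrative):
// Cookies flagged HttpOnly cannot be read via document.cookie, limiting what an XSS payload can steal.
setcookie('session_id', $session_id, [
    'httponly' => true,
    'secure'   => true,  // only send over HTTPS
    'samesite' => 'Lax',
]);
// For PHP's own session cookie, set session.cookie_httponly = 1 in php.ini.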
I have a form with 2 textareas; the first one allows the user to submit HTML code, the second allows them to submit CSS code. I have to verify with a PHP function that the language is correct.
If the language is correct, then for security I have to check that there is no PHP code, SQL injection, or anything like that.
What do you think? Is there a way to do that?
Where can I find this kind of function?
Is "HTML Purifier" http://htmlpurifier.org/ a good solution?
If you have to validate the data to insert it into the database, then you just have to use the mysql_real_escape_string() function before inserting it into the DB.
//Safe database insertion
mysql_query("INSERT INTO table(column) VALUES(".mysql_real_escape_string($_POST['field']).")");
If you want to output the data to the end user as plain text, then you have to escape all HTML-sensitive characters with htmlspecialchars(). If you want to output it as HTML, then you have to use the HTML Purifier tool.
//Safe plain text output
echo htmlspecialchars($data, ENT_QUOTES);
//Safe HTML output
$data = purifyHtml($data); //Or however it is specified in the purifier documentation
echo $data; //Safe html output
For something primitive you can use a regex, BUT it should be noted that using a parser to fully exhaust all possibilities is recommended.
/(<\?(?:php)?(.*)\?>)/i
Example: http://regexr.com?2t3e5 (change the &lt; in the expression back to a < and it will work; for some reason regexr changes it to HTML formatting)
EDIT
/(<\?(?:php)?(.*)(?:\?>|$))/i
That's probably better, so they can't place PHP at the end of the document (as PHP doesn't actually require the closing ?> tag).
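A quick usage sketch of that pattern (the added s modifier, so the dot also matches newlines inside the PHP block, is my assumption about the intent):
// Reject input that appears to contain a PHP open tag.
if (preg_match('/(<\?(?:php)?(.*)(?:\?>|$))/is', $user_input)) {
    die('PHP code is not allowed.');
}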
The SHJS syntax highlighter for JavaScript has files with regular expressions for the languages it highlights (http://shjs.sourceforge.net/lang/); you can check how SHJS parses code.
HTMLPurifier is the recommended tool for cleaning up HTML. And as luck has it, it also includes CSSTidy and can sanitize CSS as well.
... that there is not PHP code or SQL Injection or whatever.
You are basing your question on a wrong premise. While HTML can be cleaned, this is no safeguard against other exploits. PHP "tags" are most likely to be filtered out anyway. If you are doing something else weird (include-ing or eval-ing the content partially), that's no real help.
And SQL exploits can only be prevented by meticulously using the proper database escape functions. There is no magic solution to that.
Yes, HTML Purifier is a good tool to remove malicious scripts and validate your HTML. I didn't think it handled CSS, but apparently it works with CSS too. Thanks Briedis.
Ok thanks you all.
Actually, I realize that I need human validation. Users can post HTML + CSS, and I can verify in PHP that the language and syntax are correct, but that doesn't prevent people from posting iframes, HTML redirections, or a big black div that covers the whole screen.
:-)
Is there a PHP implementation of markdown suitable for using in public comments?
Basically it should only allow a subset of the markdown syntax (bold, italic, links, block-quotes, code-blocks and lists), and strip out all inline HTML (or possibly escape it?)
I guess one option is to use the normal markdown parser, and run the output through an HTML sanitiser, but is there a better way of doing this..?
We're using PHP Markdown Extra for the rest of the site, so we'd have to use a secondary parser anyway (the non-"Extra" version, since things like footnote support are unnecessary). It also seems nicer to parse only the *bold* text and have everything else escaped to &lt;a href="etc"&gt;, than to generate <b>bold</b> text and try to strip the bits we don't want.
Also, on a related note, we're using the WMD control for the "main" site, but for comments, what other options are there? WMD's javascript preview is nice, but it would need the same "neutering" as the PHP markdown processor (it can't display images and so on, otherwise someone will submit and their working markdown will "break")
Currently my plan is to use the PHP Markdown -> HTML sanitiser method, and edit WMD to remove the image/heading syntax from showdown.js - but it seems like this has been done countless times before.
Basically:
Is there a "safe" markdown implementation in PHP?
Is there a HTML/javascript markdown editor which could have the same options easily disabled?
Update: I ended up simply running the markdown() output through HTML Purifier.
This way the Markdown rendering was separate from output sanitisation, which is much simpler (two mostly-unmodified code bases), more secure (you're not trying to do both rendering and sanitisation at once), and more flexible (you can have multiple sanitisation levels, say a more lax configuration for trusted content, and a much more stringent version for public comments).
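A minimal sketch of that pipeline, assuming PHP Markdown's Markdown() function and HTML Purifier are both loaded (the whitelist shown is just an example of a stricter profile for public comments):
// 1. Render the Markdown to HTML.
$html = Markdown($comment_text);

// 2. Sanitise the result with a restrictive whitelist.
$config = HTMLPurifier_Config::createDefault();
$config->set('HTML.Allowed', 'p,em,strong,a[href],blockquote,code,pre,ul,ol,li');
$purifier = new HTMLPurifier($config);
echo $purifier->purify($html);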
PHP Markdown has a sanitizer option, but it doesn't appear to be advertised anywhere. Take a look at the top of the Markdown_Parser class in markdown.php (starts on line 191 in version 1.0.1m). We're interested in lines 209-211:
# Change to `true` to disallow markup or entities.
var $no_markup = false;
var $no_entities = false;
If you change those to true, markup and entities, respectively, should be escaped rather than inserted verbatim. There doesn't appear to be any built-in way to change those (e.g., via the constructor), but you can always add one:
function do_markdown($text, $safe=false) {
    $parser = new Markdown_Parser;
    if ($safe) {
        $parser->no_markup = true;
        $parser->no_entities = true;
    }
    return $parser->transform($text);
}
Note that the above function creates a new parser on every run rather than caching it like the provided Markdown function (lines 43-56) does, so it might be a bit on the slow side.
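Usage for untrusted comments would then be as simple as (the call site here is hypothetical):
echo do_markdown($_POST['comment'], true); // escape raw markup/entities in public comments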
JavaScript Markdown Editor Hypothesis:
Use a JavaScript-driven Markdown Editor, e.g., based on showdown
Remove all icons and visual clues from the Toolbar for unwanted items
Set up a JavaScript filter to clean up unwanted markup on submission
Test and harden all JavaScript changes and filters locally on your computer
Mirror those filters in the PHP submission script, to catch the same on the server side.
Remove all references to unwanted items from Help/Tutorials
I've created a Markdown editor in JavaScript, but it has enhanced features. That took a big chunk of time and SVN revisions. But I don't think it would be that tough to alter a Markdown editor to limit the HTML allowed.
How about running htmlspecialchars on the user-entered input before processing it through Markdown? It should escape anything dangerous, but leave everything that Markdown understands.
I'm trying to think of a case where this wouldn't work but can't think of anything off hand.
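In code, that suggestion looks something like this (a sketch; whether the pre-escaped entities interact cleanly with Markdown's own entity handling is worth testing):
// Escape raw HTML first, then let Markdown generate its own (trusted) tags.
$safe_text = htmlspecialchars($_POST['comment'], ENT_QUOTES, 'UTF-8');
echo Markdown($safe_text);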