When I enter https://example.com/?trigger=*/</script><script>alert(1)</script> directly in the URL bar, the alert box pops up.
How can I fix this on a WordPress site?
I've tried some plugins such as security headers and HTML Purifier, but with no success.
How can I prevent this kind of vulnerability?
Try the Prevent XSS Vulnerability plugin. It encodes certain characters and removes common XSS patterns from the URL.
Prevent XSS Vulnerability
You can choose to encode or block such entities. You can also supply, as a comma-separated list, the entities that you do not want blocked or removed from the URL.
I personally use the free version of the Wordfence Security plugin. I tried your payload on my own site: https://www.majlovesreg.one/?trigger=*/</script><script>alert(1)</script> and no alert appeared.
Looks like security plug-ins can do the job. :-)
There are many ways to prevent XSS on your website. You can block or encode certain input, such as the special characters (<, >, /), and only allow public users to submit information where you expect it. Here is a great article to read to increase your knowledge, because you can't always find a plugin to fix your issue :)
https://www.owasp.org/index.php/XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet
P.S. It also lists many more prevention methods.
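As a minimal illustration of the output-encoding rule the cheat sheet describes, here is a hedged PHP sketch (the trigger parameter name is just taken from the question); any theme or plugin code that echoes request data should escape it like this rather than printing it raw:
<?php
// Minimal sketch: escape request data on output so injected markup is
// rendered as text instead of being executed by the browser.
$trigger = isset($_GET['trigger']) ? $_GET['trigger'] : '';

echo '<p>Trigger value: '
   . htmlspecialchars($trigger, ENT_QUOTES, 'UTF-8')
   . '</p>';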
I have had issues with XSS. Specifically, someone injected a JS alert showing that my input had vulnerabilities. I have researched XSS and found examples, but for some reason I can't get them to work.
Can I get some examples of XSS that I can throw into my input so that, when I output it back to the user, I see some visible change (like an alert) that tells me it's vulnerable?
I'm using PHP and I'm going to implement htmlspecialchars(), but first I'm trying to reproduce the vulnerabilities.
Thanks!
You can use this Firefox add-on:
XSS Me
XSS-Me is the Exploit-Me tool used to test for reflected Cross-Site Scripting (XSS). It does NOT currently test for stored XSS.
The tool works by submitting your HTML forms and substituting the form value with strings that are representative of an XSS attack. If the resulting HTML page sets a specific JavaScript value (document.vulnerable=true) then the tool marks the page as vulnerable to the given XSS string. The tool does not attempt to compromise the security of the given system. It looks for possible entry points for an attack against the system. There is no port scanning, packet sniffing, password hacking or firewall attacks done by the tool.
You can think of the work done by the tool as the same as the QA testers for the site manually entering all of these strings into the form fields.
For example:
<script>alert("XSS")</script>
"><b>Bold</b>
'><u>Underlined</u>
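If you want a local page to try those strings against, here is a deliberately vulnerable sketch (purely illustrative, for testing on your own machine only; the field name q is made up). Submitting any of the strings above into the form should produce an alert or injected markup:
<?php
// Deliberately vulnerable test page: it reflects a query parameter
// without escaping, both inside an attribute and inside the page body.
$q = isset($_GET['q']) ? $_GET['q'] : '';
?>
<form method="get">
    <input type="text" name="q" value="<?php echo $q; /* unsafe on purpose */ ?>">
    <input type="submit" value="Search">
</form>
<p>You searched for: <?php echo $q; /* unsafe on purpose */ ?></p>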
It is fine to use some of the automated tools; however, you won't gain any insight or experience from them.
The point of an XSS attack is to execute JavaScript in a browser window that is not supplied by the site. So first you must look at the context in which the user-supplied data is printed on the website: it might be inside a <script></script> block, inside a <style></style> block, used as an attribute of an element such as <input type="text" value="USER DATA" />, or, for instance, inside a <textarea>. Depending on that, you will see what syntax you need to escape the context (or use it). For instance, if you are within <script> tags, it might be sufficient to close the parenthesis of a function and end the line with a semicolon, so the final injection will look like ); alert(555);. If the data supplied is used as an HTML attribute, the injection might look like " onclick="alert(1)", which will cause JS execution when you click on the element (this area is especially rich to play with in HTML5).
The point is, the context of the XSS is just as important as any filtering/sanitizing functions that might be in place, and there are often small nuances that an automated tool will not catch. As you can see above, even without quotes and HTML tags, in a limited set of circumstances you might be able to bypass the filters and execute JS.
Browser encoding also needs to be considered: for instance, you might be able to bypass filters if the target browser uses UTF-7 encoding (and you encode your injection that way). Filter evasion is a whole other story; however, the current PHP functions are pretty bulletproof if used correctly.
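To make the attribute-context example concrete, here is a hedged PHP sketch (the parameter name "name" is invented for illustration): the first echo is vulnerable to the " onclick="alert(1) payload above, while the second escapes correctly for that context:
<?php
$name = isset($_GET['name']) ? $_GET['name'] : '';

// Vulnerable: user data lands inside an attribute without escaping,
// so "onclick="alert(1) breaks out of the value="" context.
echo '<input type="text" value="' . $name . '" />';

// Escaped for the attribute context: quotes are encoded, so the
// payload stays inside the attribute value as plain text.
echo '<input type="text" value="'
   . htmlspecialchars($name, ENT_QUOTES, 'UTF-8')
   . '" />';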
Also, here is a fairly long list of XSS vectors.
As a last thing, here is an actual example of an XSS string that was found on a site, and I guarantee you that not a single scanner would have found it (there were various filters and word blacklists; the page allowed inserting basic HTML formatting to customize your profile page):
<a href="Boom"><font color=a"onmouseover=alert(document.cookie);"> XSS-Try ME</span></font>
Ad-hoc testing is OK; however, I also recommend trying a web application vulnerability scanning tool to ensure you haven't missed anything.
Acunetix is pretty good and has a free trial of its application:
http://www.acunetix.com/websitesecurity/xss.htm
(Note I have no affiliation with this company, however I have used the product to test my own applications).
I always run user-supplied input through both htmlentities() and mysql_real_escape_string().
But now I am building a CMS which has a WYSIWYG editor in the admin section. I noticed that using htmlentities() on the WYSIWYG-edited user content removes all styles and throws a bunch of quotes onto the front-end article page (as can be expected).
So is it OK not to clean the HTML/JavaScript entered by the user in this situation? I will still use mysql_real_escape_string(), which doesn't conflict.
Although the admin is the only one who will have access to the back end, I can think of at least one scenario: suppose a hacker somehow got access to the create-post page. Although they could wreak havoc by deleting posts and so on, they might instead use it as an opportunity to send visitors to their own site by making this post:
<script>window.location = "http://evilsite.com"</script>
So what should I do? Also, are there any functions that will disable JavaScript but not HTML and inline CSS?
The WYSIWYG is TinyMCE, by the way.
It is never OK to not clean user input. Anybody can sabotage your system, just like you hypothesized. This kind of risk is simply not worth taking.
That said, for your case it will depend on the WYSIWYG editor you use. Look through TinyMCE's documentation, or ask around, and see what it says about displaying/rendering HTML output in its rich text editor with regard to XSS vulnerabilities.
I'm working on an app that would allow people to enter arbitrary URL's that would be included in <a href="ARBITRARY URL"> and <img src="ARBITRARY URL" /> tags.
What type of security risks am I looking at?
The app is coded in PHP, and the only security countermeasure I currently perform is using PHP's htmlentities() function on the input URL before rendering it as HTML. I'm also checking that the URL text starts with either http:// or https://, but I don't know whether that accomplishes anything, security-wise.
What else should I be doing to ensure the security of my end users?
Take a look at the XSS Checklist.
It is possible to construct an image that is also a valid javascript file, and get a browser to execute it. See http://www.thinkfu.com/blog/?p=15
SVG images (mime-type image/svg+xml) can contain javascript. See http://www.w3.org/TR/SVG/interact.html
You should sanitize at all times; img tags are vulnerable to cross-site scripting.
You will want to read about XSS (cross-site scripting) and XSRF (cross-site request forgery).
EDIT:
As pointed out by ryeguy, you can pretty much copy and paste any of the examples in the XSS (Cross Site Scripting) Cheat Sheet and work out the best way to prevent each of them accordingly.
In addition, it is possible to embed whole images in URLs using inline data in newer browsers. It might be possible to inject something through there; however, that would require a gaping browser-side security hole, and I would not know how to sanitize something like that.
Maybe you just want to restrict access to certain domains, or check whether an image physically exists? That might already help a lot.
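A rough sketch of that scheme/domain restriction idea (hedged: the function name and the allowed-hosts list are purely illustrative, not part of the original answers):
<?php
// Accept only http/https URLs, optionally restricted to a host whitelist.
function is_acceptable_url($url, array $allowed_hosts = null)
{
    $parts = parse_url($url);
    if ($parts === false || !isset($parts['scheme'], $parts['host'])) {
        return false;
    }
    if (!in_array(strtolower($parts['scheme']), array('http', 'https'), true)) {
        return false;
    }
    if ($allowed_hosts !== null
        && !in_array(strtolower($parts['host']), $allowed_hosts, true)) {
        return false;
    }
    return true;
}

// Usage: $url is the user-supplied URL; escape it for the attribute
// only after it passes the check.
if (is_acceptable_url($url, array('i.imgur.com', 'photobucket.com'))) {
    echo '<img src="' . htmlspecialchars($url, ENT_QUOTES, 'UTF-8') . '" />';
}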
CSRF:
<img src="http://example.org/accounts/123/delete" />
In addition to the great answers so far, the XSS cheat sheet doesn't really cover event attributes such as onmouseover, onclick, etc. These are all designed to let someone run some JavaScript when something happens.
I developed a web application that lets my users manage some aspects of a web site dynamically (yes, some kind of CMS) in a LAMP environment (Debian, Apache, PHP, MySQL).
For example, they create a news item in their private area on my server, and then it is published on their website via a cURL request (or by Ajax).
The news item is created with a WYSIWYG editor (FCKeditor at the moment, probably TinyMCE in the near future).
So I can't disallow HTML tags, but how can I stay safe?
What kind of tags MUST I delete (JavaScript?)?
That covers being server-safe... but how can I be 'legally' safe?
If a user uses my application to perform XSS, can I get into legal trouble?
If you are using PHP, an excellent solution is to use HTML Purifier. It has many options to filter out bad stuff and, as a side effect, guarantees well-formed HTML output. I use it to view spam, which can be a hostile environment.
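As a rough sketch of how it is typically wired up (assuming the library is already installed and autoloadable; the allowed-tag list here is just an example, not a recommendation from the answer):
<?php
require_once 'HTMLPurifier.auto.php';

// Whitelist a small set of tags/attributes; everything else is removed
// and the output is guaranteed to be well-formed HTML.
$config = HTMLPurifier_Config::createDefault();
$config->set('HTML.Allowed', 'p,br,b,i,em,strong,ul,ol,li,blockquote,a[href]');

$purifier   = new HTMLPurifier($config);
$clean_html = $purifier->purify($user_submitted_html);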
It doesn't really matter what you're looking to remove; someone will always find a way around it. As a reference, take a look at this XSS Cheat Sheet.
As an example, how are you ever going to remove this valid XSS attack:
<IMG SRC=javascript:alert('XSS')>
Your best option is to allow only a subset of acceptable tags and remove everything else. This practice is known as whitelisting and is the best method for preventing XSS (besides disallowing HTML altogether).
Also use the cheat sheet in your testing; fire as much as you can at your website and try to find some ways to perform XSS.
The general best strategy here is to whitelist specific tags and attributes that you deem safe, and escape/remove everything else. For example, a sensible whitelist might be <p>, <ul>, <ol>, <li>, <strong>, <em>, <pre>, <code>, <blockquote>, <cite>. Alternatively, consider human-friendly markup like Textile or Markdown that can be easily converted into safe HTML.
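As a rough illustration of that whitelist (hedged: strip_tags keeps attributes on allowed tags, so a real implementation should also strip attributes or use a library such as HTML Purifier):
<?php
// Keep only the whitelisted tags; everything else is stripped.
// Note: attributes on allowed tags survive strip_tags, so pair this
// with attribute filtering in real code.
$allowed = '<p><ul><ol><li><strong><em><pre><code><blockquote><cite>';
$clean   = strip_tags($user_html, $allowed);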
Rather than allowing HTML, you should have some other markup that can be converted to HTML. Trying to strip rogue HTML out of user input is nearly impossible. For example:
<scr<script>ipt etc="...">
Removing the inner <script> from this will leave
<script etc="...">
Kohana's security helper is pretty good. From what I remember, it was taken from a different project.
However, I tested out
<IMG SRC=javascript:alert('XSS')>
from LFSR Consulting's answer, and it escaped it correctly.
For a C# example of the whitelist approach, which Stack Overflow uses, you can look at this page.
If removing the tags is too difficult, you could reject the whole HTML submission until the user enters valid input.
I would reject the HTML if it contains any of the following tags: frameset, frame, iframe, script, object, embed, applet.
Tags you also want to disallow are head (and its sub-tags), body, and html, because you want to provide those yourself and you do not want the user to manipulate your metadata.
But generally speaking, allowing users to provide their own HTML code always introduces some security issues.
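A rough sketch of such a rejection check (purely illustrative: the function name is made up, and the simple pattern match on the raw markup is crude, but it fits the "reject the whole submission" approach described above):
<?php
// Return true if the submitted HTML contains any disallowed tag,
// in which case the whole submission is rejected.
function contains_disallowed_tags($html)
{
    $disallowed = array('frameset', 'frame', 'iframe', 'script',
                        'object', 'embed', 'applet',
                        'head', 'body', 'html');

    foreach ($disallowed as $tag) {
        if (preg_match('/<\s*' . $tag . '\b/i', $html)) {
            return true;
        }
    }
    return false;
}

if (contains_disallowed_tags($_POST['content'])) {
    die('Please remove the disallowed tags and try again.');
}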
You might want to consider, rather than allowing HTML at all, implementing some stand-in for HTML like BBCode or Markdown.
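For instance, here is a hedged sketch of the Markdown route using the Parsedown library (one possible choice, not something prescribed in the answer): safe mode escapes any raw HTML the user types, so only the markup generated by the parser reaches the page.
<?php
require_once 'Parsedown.php';

$parsedown = new Parsedown();
$parsedown->setSafeMode(true);   // escape raw HTML and unsafe links

// The user writes Markdown; only the HTML generated by the parser is output.
echo $parsedown->text($user_markdown);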
I use PHP's strip_tags() function because I want users to be able to post safely. I allow just a few tags that can be used in posts; this way nobody can inject scripts into your website, so I think strip_tags() is the best option.
Click here for the code for this PHP function.
It is a very good function in PHP; you can use it like this:
$string = strip_tags($_POST['comment'], "<b>");
I run a website (sort of like a social network) that I wrote myself. I allow members to send comments to each other. I take the comment and call this line before saving it in the DB:
$com = htmlentities($com);
When I want to display it, I call this piece of code:
$com = html_entity_decode($com);
This works out well most of the time. It allows users to copy/paste YouTube/imeem embed code and send each other videos and songs. It also allows them to upload images to Photobucket and copy/paste the embed code to send picture comments.
The problem is that some people are also putting in JavaScript code that does nasty stuff such as opening alert boxes, changing the location of the webpage, and things like that. I am trying to find a good solution to this problem once and for all. How do other sites allow this kind of functionality?
Thanks for your feedback
First: htmlentities (or just htmlspecialchars) should be used for escaping strings that you embed into HTML. You shouldn't use it for escaping strings when you insert them into a SQL query; use mysql_real_escape_string (for MySQL), or better yet, use prepared statements with bound parameters. Make sure that magic_quotes is turned off or disabled when you manually escape strings.
Second: You don't unescape strings when you pull them out again. E.g. there is no mysql_real_unescape_string. And you shouldn't use stripslashes either; if you find that you need to, then you probably have magic_quotes turned on. Turn it off instead, and fix the data in the database before proceeding.
Third: What you're doing with html_entity_decode completely nullifies the intended use of htmlentities. Right now you have absolutely no protection against a malicious user injecting code into your site (you're vulnerable to cross-site scripting, a.k.a. XSS). Strings that you embed into an HTML context should be escaped with htmlspecialchars (or htmlentities). If you absolutely have to embed HTML into your page, you have to run it through a cleaning solution first. strip_tags does this in theory, but in practice it's very inadequate. The best solution I currently know of is HTML Purifier. However, whatever you do, it is always a risk to let random users embed code into your site. If at all possible, try to design your application so that it isn't needed.
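A minimal sketch of the store-raw/escape-on-output pattern described above (hedged: it assumes a PDO connection in $pdo and a comments table with a body column, neither of which comes from the question):
<?php
// Store the raw comment with a prepared statement; no htmlentities()
// and no manual escaping before the INSERT.
$stmt = $pdo->prepare('INSERT INTO comments (body) VALUES (?)');
$stmt->execute(array($com));

// Escape only at output time, when embedding into HTML.
foreach ($pdo->query('SELECT body FROM comments') as $row) {
    echo '<p>' . htmlspecialchars($row['body'], ENT_QUOTES, 'UTF-8') . '</p>';
}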
I really hope you are scrubbing the data before you send it to the database. It sounds like you are a prime target for a SQL injection attack. I know this is not your question, but it is something that you need to be aware of.
Yes, this is a problem. A lot of sites solve it by only allowing their own custom markup in user fields.
But if you really want to allow HTML, you'll need to scrub out all "script" tags. I believe there are libraries available that do this. But that should be sufficient to prevent JS execution in user-entered code.
This is how Stack Overflow does it, I think, over at RefactorMyCode.
You may want to consider Zend_Filter; it offers a lot more than strip_tags, and you do not have to include the entire Zend Framework to use it.