Is there any secure way to allow cross-site AJAX requests? - php

I am currently working on a script that website owners could install that would allow users to highlight a word and see the definition of the word in a small popup div. I am only doing this as a hobby in my spare time and have no intention of selling it or anything, but nevertheless I want it to be secure.
When the text is highlighted it sends an AJAX request to my domain to a PHP page that then looks up the word in a database and outputs a div containing the information. I understand that the same-origin policy prohibits me from accomplishing this with normal AJAX, but I also cannot use JSONP because I need to return HTML, not JSON.
The other option I've looked into is adding
header("Access-Control-Allow-Origin: *");
to my PHP page.
Since I really don't have much experience in security, being that I do this as a hobby, could someone explain to me the security risks in using Access-Control-Allow-Origin: * ?
Or is there a better way I should look into to do this?

Cross-Origin Resource Sharing (CORS), the specification behind the Access-Control-Allow-Origin header field, was established to allow cross-origin requests via XMLHttpRequest while protecting users from malicious sites reading the responses, by providing an interface that lets the server define which cross-origin requests are allowed and which are not. So CORS is more than simply Access-Control-Allow-Origin: *, which denotes that XHR requests are allowed from any origin.
Now to your question: Assuming that your service is public and doesn't require any authentication, using Access-Control-Allow-Origin: * to allow XHR requests from any origin is secure. But make sure to send that header field only from the scripts you want that access policy to apply to.
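For illustration, here's a minimal sketch of such a script (the file name, parameter name, and the in-memory lookup are assumptions standing in for your real database query):
<?php
// define.php -- hypothetical public dictionary endpoint
// Send the CORS header only from this script, not site-wide.
header('Access-Control-Allow-Origin: *');
header('Content-Type: text/html; charset=utf-8');

// Stand-in for the real database lookup.
$definitions = array('example' => 'a thing characteristic of its kind');
$word = isset($_GET['word']) ? strtolower($_GET['word']) : '';
$definition = isset($definitions[$word]) ? $definitions[$word] : 'No definition found.';

echo '<div class="definition">' . htmlspecialchars($definition) . '</div>';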

"When the text is highlighted it sends an AJAX request to my domain to a PHP page that then looks up the word in a database and outputs a div containing the information. I understand that the same-origin policy prohibits me from accomplishing this with normal AJAX, but I also cannot use JSONP because I need to return HTML, not JSON."
As hek2mgl notes, JSONP would work fine for this. All you'd need to do is wrap your HTML in a JSONP wrapper, like this:
displayDefinition({"word": "example", "definition": "<div>HTML text...</div>"});
where displayDefinition() is a JS function that shows a popup with the given HTML code (and maybe caches it for later use).
"The other option I've looked into is adding header("Access-Control-Allow-Origin: *"); to my PHP page. Since I really don't have much experience in security, being that I do this as a hobby, could someone explain to me the security risks in using Access-Control-Allow-Origin: *?"
The risks are essentially the same as for JSONP; in either case, you're allowing any website to make arbitrary GET requests to your script (which they can actually do anyway) and read the results (which, using normal JSON, they generally cannot, although older browsers may have some security holes that can allow this). In particular, if a user visits a malicious website while being logged into your site, and if your site may expose sensitive user data through JSONP or CORS, then the malicious site could gain access to such data.
For the use case you describe, either method should be safe, as long as you only use it for that particular script, and as long as the script only does what you describe it doing (looks up words and returns their definitions).
Of course, you should not use either CORS or JSONP for scripts you don't want any website to access, like bank transfer forms. Such scripts, if they can modify data on the server, generally also need to employ additional defenses such as anti-CSRF tokens to prevent "blind" CSRF attacks, where the attacker doesn't really care about the response, only about the side effects of the request. Obviously, the anti-CSRF tokens themselves are sensitive user-specific data, and so should not be obtainable via CORS, JSONP or any other method that bypasses same-origin protections.
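For context, here is a minimal sketch of the anti-CSRF token pattern mentioned above (file names are illustrative; hash_equals() requires PHP 5.6 and random_bytes() requires PHP 7):
<?php
// form.php -- embed a per-session secret token in the form
session_start();
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}
echo '<form method="post" action="transfer.php">';
echo '<input type="hidden" name="csrf_token" value="' . $_SESSION['csrf_token'] . '">';
echo '<input type="text" name="amount"> <button>Send</button>';
echo '</form>';

<?php
// transfer.php -- reject any request whose token doesn't match the session
session_start();
if (!isset($_POST['csrf_token'])
    || !hash_equals($_SESSION['csrf_token'], $_POST['csrf_token'])) {
    http_response_code(403);
    exit('Invalid CSRF token');
}
// ... perform the state-changing action ...
Because an attacker's page cannot read the victim's session or the form markup cross-origin, it cannot supply a valid token.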
"Or is there a better way I should look into to do this?"
One other (though not necessarily better) way could be for your PHP script to return the definitions as HTML, and for the popups to consist of just an iframe element pointing to the script.

JSONP should fit your needs. It is a widely deployed technique that aims to solve cross-domain issues. You should also know about CORS, which addresses some disadvantages of JSONP. The links I gave you also contain information about the security considerations for these techniques.
You wrote:
but I also cannot use JSONP because I need to return HTML, not JSON.
Why not? You could use a JSONP response like this:
callback({"content": "<div class=\"myclass\">...</div>"});
and then inject result.content into the current page using DOM manipulation.
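On the PHP side, a minimal sketch of such an endpoint might look like this (the file name is an assumption; note the whitelist on the callback name to avoid reflected-script injection):
<?php
// jsonp.php -- hypothetical endpoint returning HTML wrapped in JSONP
header('Content-Type: application/javascript; charset=utf-8');

// Only accept plain identifier-like callback names.
$callback = isset($_GET['callback']) ? $_GET['callback'] : 'callback';
if (!preg_match('/^[a-zA-Z_][a-zA-Z0-9_.]*$/', $callback)) {
    http_response_code(400);
    exit;
}

$payload = array('content' => '<div class="myclass">...</div>');
echo $callback . '(' . json_encode($payload) . ');';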

The concept of CSRF (Cross-Site Request Forgery) may be a concern here:
http://en.wikipedia.org/wiki/Cross-site_request_forgery
There are multiple ways to limit this issue; the most commonly used technique is a CSRF token.
Further, you should also put an IP-based rate limiter in place to limit the number of requests made from any one IP, which mitigates the DoS attacks you could face if you become a target; see How do I throttle my site's API users? for some help.
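As a rough illustration of IP-based rate limiting, here is a naive file-backed throttle (the thresholds, file path, and storage choice are assumptions; production code would use something like Redis or memcached instead of temp files):
<?php
// throttle.php -- naive per-IP limiter: at most 60 requests per 60 seconds
$ip     = $_SERVER['REMOTE_ADDR'];
$bucket = sys_get_temp_dir() . '/rate_' . md5($ip);
$window = 60;  // seconds
$limit  = 60;  // requests per window

// Load this IP's recent request timestamps, dropping expired ones.
$hits = array();
if (is_file($bucket)) {
    $hits = array_filter(
        (array) json_decode(file_get_contents($bucket), true),
        function ($t) use ($window) { return $t > time() - $window; }
    );
}
if (count($hits) >= $limit) {
    http_response_code(429);
    exit('Too many requests');
}
$hits[] = time();
file_put_contents($bucket, json_encode(array_values($hits)));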

CORS issues are simple - do you want anyone to be able to remotely AJAX your stuff on your domain? This could be extremely dangerous if you've got forms that are prone to CSRF. Here is an example plucked straight out of my head.
The set-up:
A bank whose online banking server has CORS headers set to accept all (ACAO: *) (call it A)
A legitimate customer who is logged in (call them B)
A hacker who happens to be able to make the client run anything (call it E)
A<->B conversation is deemed lawful. However, if the hacker can manage to make the mark (B) load a site with a bit of JS that can fire off AJAX requests (easy through permanent XSS flaws on big sites), he/she can get B's browser to fire AJAX requests to A, which will be allowed and treated as normal requests!
You could do so many horrible things with this. Imagine that the bank has a form whose inputs are as follows:
POST:
* targetAccountID -> the account that will receive money
* money -> the amount to be transferred
If the mark is logged in, I could inject:
$.ajax({ url: "http://A/money.transfer.php", type: "POST", data: { targetAccountID: 123, money: 9999 } });
And suddenly, anyone who visits the site and is logged in to A will see their account drained of 9999 units.
THIS is why CORS is to be taken with a pinch of salt - in practice, DO NOT open more than you need to open. Open your API and that is it.
A cool side note, CORS does not work for anything before IE9. So you'll need to build a fallback, possibly iframes or JSONP.
I wrote about this very topic a short while back: http://www.sebrenauld.co.uk/en/index/view/access-json-apis-remotely-using-javascript in a slightly happier form than Wikipedia, by the way. It's a topic I hold dear to my heart, as I've had to contend with API development a couple of times.

Related

How to prevent file access from cURL using PHP [duplicate]

I have a webserver, and certain users have been retrieving my images using an automated script. I wish to redirect them to an error page or give them an invalid image, but only if it's a cURL request.
My image resides at http://example.com/images/AIDd232320233.png. Is there some way I can route it with .htaccess to my controller's index function, where I can check whether it's an authentic request?
And my other question: how can I check browser headers to distinguish between most likely authentic requests and ones made with cURL?
Unfortunately, the short answer is 'no.'
cURL provides all of the necessary options to "spoof" any browser. That is to say, more specifically, browsers identify themselves via specific header information, and cURL provides all of the tools to set header data in whatever manner you choose. So, directly distinguishing two requests from one another is not possible.*
*Without more information. Common methods to determine if there is a Live Human initiating the traffic are to set cookies during previous steps (attempts to ensure that the request is a natural byproduct of a user being on your website), or using a Captcha and a cookie (validate someone can pass a test).
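To illustrate the point about spoofing, here is a minimal sketch of a cURL request that presents browser-like headers (the URL and header values are placeholders):
<?php
// spoof.php -- a cURL request dressed up as an ordinary browser request
$ch = curl_init('http://example.com/images/AIDd232320233.png');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT,
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36');
curl_setopt($ch, CURLOPT_REFERER, 'http://example.com/gallery');
curl_setopt($ch, CURLOPT_COOKIE, 'session=abc123'); // even cookies can be replayed
$image = curl_exec($ch);
curl_close($ch);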
The simplest is to set a cookie, which will really only ensure that bad programmers don't get through, or programmers who don't want to spend the time to tailor their scraper to your site.
The more tried and true approach is a Captcha, as it requires the user to interact to prove they have blood in their veins.
If the image is not a "download" but more of a piece of a greater whole (say, just an image on your site), a Captcha could be used to validate a human before giving them access to the site as a whole. Or if it is a download, it would be presented before unlocking the download.
Unfortunately, Captchas are "a pain," both to set up and for the end-user. They don't make a whole lot of sense for general-purpose access; they are a little overboard.
For general-purpose stuff, you can really only throttle IPs, download limits and the like. And even there, you have nothing you can do if the requests are distributed. Them's the breaks, really...

Detect when request is for preview generation

We have certain action links which are one-time use only. Some of them do not require any action from a user other than viewing them. And here comes the problem: when you share one in, say, Viber, Slack or anything else that generates a preview of the link (or unfurls the link, as Slack calls it), it gets counted as used, since it was requested.
Is there a reliable way to detect these preview generating requests solely via PHP? And if it's possible, how does one do that?
Not possible with 100% accuracy in PHP alone, as PHP deals with HTTP requests, which are quite abstracted from the client. Strictly speaking, you cannot even guarantee that the user has actually seen the response, even though it was legitimately requested by the user.
The options you have:
use checkboxes like "I've read this" (violates the no-action requirement)
use JavaScript to send an "I've read this" request without user interaction (violates the PHP-alone requirement)
rely on cookies: redirect the user with a Set-Cookie header, then redirect back to show the content and mark the URL as consumed (still not 100% guaranteed, and it may result in infinite redirects for bots that follow 302 redirects but do not persist cookies)
rely on request headers (could work if you had a finite list of supported bots and all of them provided a signature; see the sketch below)
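For that last option, a hedged sketch of a User-Agent check against a hand-maintained signature list (the list here is illustrative, not exhaustive, and bots can change their signatures at any time):
<?php
// Returns true if the request looks like a known link-preview bot.
function isPreviewBot() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $signatures = array('Slackbot', 'facebookexternalhit', 'Twitterbot',
                        'WhatsApp', 'TelegramBot', 'Viber');
    foreach ($signatures as $sig) {
        if (stripos($ua, $sig) !== false) {
            return true;
        }
    }
    return false;
}

if (isPreviewBot()) {
    // Serve preview metadata without consuming the one-time link.
}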
I've looked all over the internet to solve this problem, and I've found some workarounds to verify whether a request is for link-preview generation.
Then, I've created a tool to solve it. It's on GitHub:
https://github.com/brunoinds/link-preview-detector
You only need to call a single method from the class:
<?php
require('path/to/LinkPreviewOrigin.php');
$response = LinkPreviewOrigin::isForLinkPreview();
//true or false
I hope this solves your question!

Sharing Sessions with 302 Redirects/IMG SRC/ JSON-P and implications with Google SEO/Pagerank or Other Problems

I am currently researching the best way to share the same session across two domains (for a shared shopping cart / shared account feature). I have decided on two of three different approaches:
Every 15 minutes, send a one time only token (made from a secret and user IP/user agent) to "sync the sessions" using:
img src tag
<img src="http://domain-two.com/sessionSync.png?token=urlsafebase64_hash">
displays an empty 1x1-pixel image and starts a remote session with the same session ID on the remote server. The PNG is actually a PHP script with some mod_rewrite action.
Drawbacks: what if images are disabled?
a succession of 302 redirect headers (almost the same as above, just sending the token using 302s instead):
redirect to domain-2.com/sessionSync.php?token=urlsafebase64_hash
then from domain-2.com/sessionSync, set(or refresh) the session and redirect back to domain-1.com to continue original request.
Question: What does Google think about this in terms of SEO/PageRank? Will their bots have issues crawling my site properly? Will they think I am trying to trick the user?
Drawbacks: 3 requests before a user gets a page load, which is slower than the IMG technique.
Advantages: Almost always works?
use jsonp to do the same as above.
Drawbacks: won't work if JavaScript is disabled. I am avoiding this option particularly because of this.
Advantages: callback function on success may be useful (but not really in this situation)
My questions are:
What will google think of using 302's as stated in example 2 above? Will they punish me?
What do you think the best way is?
Are there any security considerations introduced by any of these methods?
Am I not realizing something else that might cause problems?
Thanks for all the help in advance!
Just some ideas:
You could use the JSONP approach and use the <noscript> tag to set the 302-chain mode.
You won't find many JS-disabled clients among your human visitors.
But the web crawlers will mostly fall into the 302-chain mode, and if you care about them you could maybe implement some user-agent checks in sessionSync to give them specific instructions, for example a 301 permanent redirect. Your session synchronisation needs probably don't apply to web crawlers, so maybe you can redirect them permanently (so only the first time) without handling any session synchronisation for them. It depends on your implementation of the 302 chain, but you could also set something in a crawler's session to let it crawl domain-1 without any check on domain-2; that depends on the URLs you generate on the page, and you could keep something in the session to prevent the domain-2 redirect during URL generation.
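A minimal sketch of such a user-agent branch in sessionSync (the signature list and URLs are assumptions):
<?php
// sessionSync.php -- give crawlers a permanent redirect instead of the 302 chain
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

if (preg_match('/Googlebot|bingbot|Slurp/i', $ua)) {
    header('Location: http://domain-1.com/', true, 301); // no session sync for bots
    exit;
}
// ... normal token validation, then 302 back to domain-1.com ...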

Securing JSONP?

I have a script that uses JSONP to make cross-domain ajax calls. This works great, but my question is: is there a way to prevent other sites from accessing and getting data from these URLs? I basically would like to make a list of sites that are allowed and only return data if they are on the list. I am using PHP and figured I might be able to use "HTTP_REFERER", but have read that some browsers will not send this info... ??? Any ideas?
Thanks!
There really is no effective solution. If your JSON is accessible through the browser, then it is equally accessible to other sites. To the web server, requests originating from a browser or from another server are virtually indistinguishable aside from the headers. Like ILMV commented, referrers (and other headers) can be falsified. They are, after all, self-reported.
Security is never perfect. A sufficiently determined person can overcome any security measures in place, but the goal of security is to create a deterrent high enough that most people would be dissuaded from putting in the time and resources necessary to compromise it.
With that thought in mind, you can create a barrier of entry high enough that other sites would probably not bother making requests with the barriers of entry put into place. You can generate single use tokens that are required to grab the json data. Once a token is used to grab the json data, the token is then subsequently invalidated. In order to retrieve a token, the web page must be requested with a token embedded within the page in javascript that is then put into the ajax call for the json data. Combine this with time-expiring tokens, and sufficient obfuscation in the javascript and you've created a high enough barrier.
Just remember, this isn't impossible to circumvent. Another website could extract the token out of the javascript, and or intercept the ajax call and hijack the data at multiple points.
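As a rough sketch of the single-use token idea (file names, session-based storage, and the five-minute expiry are all assumptions):
<?php
// page.php -- issue a short-lived, single-use token and embed it in the page
session_start();
$token = bin2hex(openssl_random_pseudo_bytes(16));
$_SESSION['json_tokens'][$token] = time() + 300; // valid for 5 minutes
echo "<script>var jsonToken = '" . $token . "';</script>";

<?php
// data.php -- honor each token exactly once, then invalidate it
session_start();
$t = isset($_GET['token']) ? $_GET['token'] : '';
if (empty($_SESSION['json_tokens'][$t]) || $_SESSION['json_tokens'][$t] < time()) {
    http_response_code(403);
    exit;
}
unset($_SESSION['json_tokens'][$t]); // single use
header('Content-Type: application/json');
echo json_encode(array('secret' => 'data'));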
Do you have access to the servers/sites that you would like to give access to the JSONP?
What you could do, although it's not ideal, is to record in a database on page load the IP that is allowed to view the JSONP, then on the JSONP load check whether that record exists. Perhaps have an expiry on the record if appropriate.
e.g.
http://mysite.com/some_page/ - user loads page, add their IP to the database of allowed users
http://anothersite.com/anotherpage - as above, add to database
load JSONP, check the IP exists in the database.
After one hour delete the record from the db, so another page load would be required for example
Although this could quite easily be worked around if the scraper (or other sites) managed to work out what method you are using to allow users to view the JSONP, they'd only have to hit the page first.
How about using a cookie that holds a token used with every jsonp request?
Depending on the setup you can also use a variable if you don't want to use cookies.
Working with importScripts from a Web Worker is much the same as JSONP.
Make a double check like theAlexPoon said: main script to web worker, web worker to server and back, with a security query. If the web worker answers the main script without being asked, or with the wrong token, it's better to forward your website to nirvana. If the server is asked with the wrong token, don't answer. Cookies will not be sent with an importScripts request, because document is not available at the web worker level. Always send security-relevant cookies with a POST request.
But there are still a lot of risks. The man in the middle knows how.
I'm certain you can do this with htaccess -
Ensure your headers are sending "HTTP_REFERER" - I don't know of any browser that won't send it if you tell it to. (If you're still worried, fall back gracefully.)
Then use htaccess to allow/deny access from the right referer.
# deny all except requests whose Referer matches domain.com
SetEnvIf Referer "domain\.com" allowed_referer
Order Deny,Allow
Deny from all
Allow from env=allowed_referer

How do I restrict JSON access?

I have a web application that pulls data from my newly created JSON API.
My static HTML pages dynamically call the JSON API via JavaScript.
How do I restrict access to my JSON API so that only I (my website) can call from it?
In case it helps, my API is something like: http://example.com/json/?var1=x&var2=y&var3=z... which generates the appropriate JSON based on the query.
I'm using PHP to generate my JSON results ... can restricting access to the JSON API be as simple as checking the $_SERVER['HTTP_REFERER'] to ensure that the API is only being called from my domain and not a remote user?
I think you might be misunderstanding the part where the JSON request is initiated from the user's browser rather than from your own server. The static HTML page is delivered to the user's browser, then it turns around and executes the Javascript code on the page. This code opens a new connection back to your server to obtain the JSON data. From your PHP script's point of view, the JSON request comes from somewhere in the outside world.
Given the above mechanism, there isn't much you can do to prevent anybody from calling the JSON API outside the context of your HTML page.
The usual method for restricting access to your domain is to prepend the content with something that runs infinitely.
For example:
while(1);{"json": "here"} // google uses this method
for (;;);{"json": "here"} // facebook uses this method
So when you fetch this via XMLHttpRequest or any other method that is restricted solely to your domain, you know that you need to parse out the infinite loop. But if it is fetched via script node:
<script src="http://some.server/secret_api?..."></script>
It will fail because the script will never get beyond the first statement.
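Emitting the guarded response server-side is trivial; here is a minimal sketch (the file name is an assumption):
<?php
// api.php -- prefix the JSON with an unparseable guard so <script> inclusion hangs
header('Content-Type: application/json');
echo 'while(1);' . json_encode(array('json' => 'here'));

// A same-origin client fetches this via XMLHttpRequest and strips the prefix
// before parsing, e.g. JSON.parse(body.slice('while(1);'.length)) in JavaScript.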
In my opinion, you can't restrict the access, only make it harder. It's a bit like access-restriction by obscurity. Referrers can be easily forged, and even with the short-lived key a script can get the responses by constantly refreshing the key.
So, what can we do?
Identify the weakness here:
http://www.example.com/json/getUserInfo.php?id=443
The attacker now can easily request all user info from 1 to 1.000.000 in a loop. The weak point of auto_increment IDs is their linearity and that they're easy to guess.
Solution: use non-numeric unique identifiers for your data.
http://www.example.com/json/getUserInfo.php?userid=XijjP4ow
You can't loop over those. True, you can still parse the HTML pages for all kinds of keys, but that type of attack is a different (and more easily avoidable) problem.
Downside: of course you can't use this method to restrict queries that aren't key-dependent, e.g. search.
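Generating such identifiers is straightforward; a minimal sketch (the length and helper name are assumptions):
<?php
// Generate an unguessable, URL-safe public identifier for a new record.
function makePublicId($bytes = 6) {
    // base64url-encode random bytes: no +, / or = in the result
    $raw = openssl_random_pseudo_bytes($bytes);
    return rtrim(strtr(base64_encode($raw), '+/', '-_'), '=');
}

echo makePublicId(); // e.g. "XijjP4ow" -- 8 characters from 6 random bytes
Store this alongside the auto_increment primary key and expose only the public ID in URLs.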
Any solution here is going to be imperfect if your static pages that use the API need to be on the public Internet. Since you need the client's browser to send the request and have it be honored, it's possible for just about anyone to see exactly how you are forming that URL.
You can have the app behind your API check the http referrer, but that is easy to fake if somebody wants to.
If it's not a requirement for the pages to be static, you could try something where you have a short-lived "key" generated by the API and included in the HTML response of the first page which gets passed along as a parameter back to the API. This would add overhead to your API though as you would have to have the server on that end maintain a list of "keys" that are valid, how long they are valid for, etc.
So, you can take some steps which won't cost a lot but aren't hard to get around if someone really wants to, or you can spend more time to make it a tiny bit harder, but there is no perfect way to do this if your API has to be publically-accessible.
The short answer is: anyone who can access the pages of your website will also be able to access your API.
You can attempt to make using your API more difficult by encrypting it in various ways, but since you'll have to include JavaScript code for decrypting the output of your API, you're just going to be setting yourself up for an arms race with anyone who decides they want to use your API through other means. Even if you use short-lived keys, a determined "attacker" could always just scrape your HTML (along with the current key) just before using the API.
If all you want to do is prevent other websites from using your API on their web pages then you could use Referrer headers but keep in mind that not all browsers send Referrers (and some proxies strip them too!). This means you'd want to allow all requests missing a referrer, and this would only give you partial protection. Also, Referrers can be easily forged, so if some other website really wants to use your API they can always just spoof a browser and access your API from their servers.
Are you using, or can you use, cookie-based authentication? My experience is based on ASP.NET forms authentication, but the same approach should be viable in PHP with a little code.
The basic idea is, when the user authenticates through the web app, a cookie that has an encrypted value is returned to the client browser. The json api would then use that cookie to validate the identity of the caller.
This approach obviously requires the use of cookies, so that may or may not be a problem for you.
Sorry, maybe I'm wrong but... can it be made using HTTPS?
You can (?) have your API accessible via https://example.com/json/?var1=x&var2=y, so that only authenticated consumers can get your data...
Sorry, there's no DRM on the web :-)
You cannot treat HTML as a trusted client. It's a plain-text script interpreted on other people's computers as they see fit. Whatever you allow your "own" JavaScript code to do, you allow anyone to do. You can't even define how long it's "yours" with Greasemonkey and Firebug in the wild.
You must duplicate all access control and business logic restrictions in the server as if none of it were present in your JavaScript client.
Include the service in your SSO, restrict the URLs each user has access to, design the service keeping wget as the client in mind, not your well behaved JavaScript code.
