I have a PHP proxy script which uses file_get_contents to fetch web sites and output them.
Everything works as long as the web sites are static, but on sites that use AJAX requests to update their content, like Twitter, 9gag, or YouTube, the new content never gets added.
I get this error in the console:
XMLHttpRequest cannot load http://9gag.com/new/json?list=hot&id=6408098. Origin is not allowed by Access-Control-Allow-Origin.
Since the 9gag site is now served by my local proxy, it can't access new content from the original 9gag site; this is a cross-domain issue.
So my question is: how do I take the AJAX requests and route them through my local proxy server?
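For reference, here is a minimal sketch of the kind of proxy endpoint the question describes; the 'url' parameter name and the absence of header forwarding are assumptions, not the asker's actual script:

    <?php
    // Minimal pass-through proxy sketch; the 'url' parameter name is hypothetical.
    // A real proxy would also validate the target and forward headers/cookies.
    $url = isset($_GET['url']) ? $_GET['url'] : '';
    if (!preg_match('#^https?://#i', $url)) {
        http_response_code(400);
        exit('Invalid URL');
    }
    echo file_get_contents($url);

The AJAX calls fail because the proxied page's scripts still point at the original domain, which the browser then blocks under the same-origin policy.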
This is a security feature; it exists precisely to prevent the kind of request you are trying to make. As far as I can see, you have only two options:
1. Add the site to your hosts file so that it points at your proxy (see the example entry after this list). This way you have to ensure that your proxy responds correctly under the original hostname. I don't know whether the browser performs any checks besides comparing the domain, but if only the domain is taken into account, everything will be OK.
2. Configure the OS to use your proxy site as the system proxy. In that case you have to make it respond like a regular proxy server.
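For option 1, the hosts entry would look something like this (assuming the proxy runs on the local machine and listens on port 80):

    127.0.0.1    9gag.com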
P.S. Maybe it would be better to use a ready-made transparent proxy utility?
I have an F5 reverse proxy server standing in front of the Apache web server that hosts my application, http://example.net/documents/.
The F5 is my organization's reverse proxy and is out of my reach, as it is managed by admins; they recently implemented HTTPS for my site, which changed it from HTTP to HTTPS.
Everything worked well, except that my entire site content was also rewritten to HTTPS, including any absolute HTTP external URL references that were part of the content. My biggest concern now is to preserve the original protocol of my site content (specifically, external link references like jQuery).
I am confused as to how I can address this, since the reverse proxy server is not within my reach. Is there anything that can be done on the Apache web server to preserve the original protocol of absolute URLs, bypassing the reverse proxy's HTTPS rewriting? I am not an expert on this. Please help!
First: you should of course use HTTPS for everything. If your site is accessible through HTTPS (through the F5) and you reference JS files through http://, many browsers will throw warnings about mixed and insecure content. jQuery and virtually all other JavaScript modules are available through HTTPS, so I don't really see an issue here.
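For example, simply referencing jQuery over HTTPS avoids the mixed-content warning entirely (the exact CDN URL below is illustrative):

    <!-- https:// instead of http://; the CDN URL is illustrative -->
    <script src="https://code.jquery.com/jquery-1.12.4.min.js"></script>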
Second: no, you won't be able to bypass the content rewrite of the F5 load balancer. You could of course hack something that dynamically concatenates the URLs through JavaScript, but that is hacky and prone to failure.
Maybe I could help you better if you gave an actual example of where you have a problem with HTTPS.
I am using Django as the backend API and AJAX for making the API calls. My main site runs on HTTPS but the API runs on HTTP, and I am unable to make the API calls from the site whose SSL cert is loaded onto Nginx.
Is it possible to make AJAX calls from HTTPS to HTTP?
Any leads will be appreciated. Thanks in advance!
The only difference between HTTP and HTTPS is the SSL security layer. If your server is able to handle HTTPS requests, they will be sent through to the API just like any other HTTP request; only the actual data communication between the client socket and the server socket is affected. Once the data is received, it's back in plain text (or its original format) again.
However, your browser will stop this and/or give an insecure warning and show a warning padlock symbol for your HTTPS connection.
HTTPS indicates the site is secure, which gives certain guarantees to the visitor - namely that the site is for the given domain (authentication), that it's not been intercepted and changed (integrity) and that no one else is able to listen in to your messages to and from the server (confidentiality).
When you add an insecure resource like an API call, those guarantees are no longer there, so the browser will give an "insecure" warning, typically with a yellow warning padlock (instead of green) and/or a pop-up.
Browsers used to differentiate between passive content (e.g. images), which was seen as less of a risk and so allowed, and active content (e.g. JavaScript), which was potentially dangerous and so blocked, though I don't think they do any more. Even if they did, Ajax XHR calls are definitely in the latter category.
The best option is to proxy-pass the API requests through your main site's domain in Nginx (e.g. forward requests for https://example.com/api to your API using the Nginx config).
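A minimal sketch of such a location block, assuming the Django API listens locally on port 8000 (the upstream address and the /api/ path are assumptions):

    # In the HTTPS server block for example.com: forward /api/ requests
    # to the HTTP-only Django backend.
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

The browser then makes same-origin HTTPS calls to /api/, so no mixed-content warning is triggered.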
Looking for some PHP help. What I'd like to try (and find out if it's feasible) is to redirect all traffic coming from the origin back to the Akamai CDN URL. Obviously, if I did this globally I would run into a loop. So instead I've set up a header sent only by Akamai; when my app sees that header, it skips the redirect.
What I'm looking for is the best method to accomplish this in PHP in my app. Something along the lines of:
    // Header name is illustrative; use whatever custom header Akamai sends.
    if (!isset($_SERVER['HTTP_X_FROM_AKAMAI']) && $_SERVER['HTTP_HOST'] === 'origin.site.com') {
        // 301 redirect to the www.site.com version of the same request URL
        header('Location: http://www.site.com' . $_SERVER['REQUEST_URI'], true, 301);
        exit;
    }
This would let me make sure that any request coming in from outside of Akamai is properly redirected. Is this method sound? Does anyone currently have a code sample using a similar method?
This is completely the wrong approach. What you need to do is implement Site Shield in Akamai. Site Shield gives you a fixed set of Akamai IPs; if you allow only those IPs, that should solve your problem. Akamai will make sure all requests to the origin are sent from one of the IPs in the Site Shield map, so any request sent directly to the origin will be denied while requests from Akamai will be allowed. Contact Akamai support to help you create and map a Site Shield for your domains. No code changes are required for this.
Additionally, you can allow your office IP if you want the origin domain to stay open for testing purposes.
I have a file that is being linked to from other websites.
The file: http://site.com/file.img
Website A links to it: <img src="http://site.com/file.img">
Website B links to it: <img src="http://site.com/file.img">
I need to reliably identify which of these websites has accessed the file, but I know that $_SERVER['HTTP_REFERER'] can be spoofed. What other ways do I have to reliably confirm the requesting site? By IP? Get them to register an IP? Set up an API key? What options are there?
If a website is only linking to a file, the "website" itself will never actually access your image. Instead, the client who's viewing the site will make a request for the image.
As such, you're depending on information sent by the client, which is completely out of your control and not reliable at all. If you have the opportunity to set some sort of unique cookie on the client, you may be able to use this in some fashion for extended identification, but even that won't be reliable.
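If you go the cookie route, it would amount to something like this (the cookie name is arbitrary, and as noted, this only tags the client, not the linking site):

    <?php
    // Tag the client on first visit; later image requests carry the cookie.
    if (!isset($_COOKIE['visitor_id'])) {
        setcookie('visitor_id', bin2hex(random_bytes(16)), time() + 31536000);
    }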
There is no 100% reliable solution.
Getting the referrer is the best you can do without getting into complicated territory.
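As a rough sketch in PHP, the referrer check could look like the following; keep in mind the header can be spoofed or simply omitted, and the allowed hostnames below are just the hypothetical sites from the question:

    <?php
    // Referer-based check: trivially spoofable, but the simplest option.
    $allowed  = array('websitea.com', 'websiteb.com');   // hypothetical hosts
    $referrer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    if (!in_array(parse_url($referrer, PHP_URL_HOST), $allowed, true)) {
        http_response_code(403);
        exit;
    }
    readfile('file.img');   // in practice, also send a proper Content-Type header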
If you don't mind complicated, then read on: set up your Web server to serve file.img only to Website A and Website B, then require that Website A and Website B set up a proxy configuration on their end that will retrieve file.img on behalf of their visitors.
Example:
A visitor to Website A loads a page that contains an image tag like <img src="http://websiteA.com/file.img"/> (note reference to Website A rather than your site). Client requests file.img from WebsiteA.com accordingly. Website A is configured to proxy requests for the path /file.img to your server, http://site.com/file.img. Your site verifies that it is in fact Website A that is requesting the image and then serves it to Website A's proxy. Website A then serves it to the visitor.
Basically, that makes it a pain for Websites A and B, gives you a performance hit, and also requires further configuration on your part. But I imagine that would satisfy your requirement.
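With that setup, the verification on your server becomes straightforward, because the request now comes from Website A's or B's proxy server rather than from an arbitrary browser. A minimal sketch, assuming you verify by source IP (the addresses are hypothetical):

    <?php
    // Only the two registered proxy servers may fetch the image.
    $proxies = array(
        '203.0.113.10' => 'Website A',   // hypothetical proxy IPs
        '203.0.113.20' => 'Website B',
    );
    $ip = $_SERVER['REMOTE_ADDR'];
    if (!isset($proxies[$ip])) {
        http_response_code(403);
        exit;
    }
    // $proxies[$ip] now reliably identifies the requesting site.
    readfile('file.img');

An API key shared with each site would work the same way, with the key checked instead of the source address.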
Have a look at how an OpenID relying party is implemented; it allows one site to authenticate against another. The protocol specification will give you a hint of the effort and overhead required to reliably implement such a scheme.
http://googlecode.blogspot.com/2010/11/googles-sample-openid-relying-party.html
I need to fetch a URL with JavaScript/jQuery, not PHP.
I've read that you can do that if you have a PHP proxy, but that means it is still going through PHP, so it's still the server's IP that fetches the URL.
Could one fetch the URL entirely on the front end, and thus fetch it with the client's IP?
There is a same-origin policy for AJAX requests. This prevents JavaScript on, say, this site from making a request to gmail.com (with your cookies), reading your e-mails, and uploading them to the Stack Overflow server. JavaScript on stackoverflow.com can only make AJAX requests to pages on that domain.
As you can see, this is essential for security. Requests must instead be made by a proxy running on your web server - PHP can be used, but there are other solutions. For example, Ajax Cross Domain is an AJAX library that communicates with a Perl script running on the server to emulate AJAX requests for other domains.
It is also possible to make requests to other domains via a JavaScript include (script tag), an image tag, etc., but in those cases you cannot read the contents of the response.
You cannot do this with an iframe either: scripts cannot see the internals of iframes unless they are on the same domain as the script.
So in short, use a proxy.
The problem is that jQuery would fetch the URL with AJAX, and AJAX won't operate cross-domain for security reasons (as per the same-origin policy).
There are, however, ways to emulate this: if you load the page in an iframe, you can retrieve the data by reading the iframe's innerHTML. Here's an example script that uses jQuery: http://code.google.com/p/jquery-crossframe/