Redirect from Origin Server to Akamai instance - PHP

Looking for some PHP help. What I'd like to try (and find out whether it's feasible) is to redirect all traffic that hits the origin back to the Akamai CDN URL. Obviously, if I did this globally I would run into a loop, so instead I've set up a header that is sent only by Akamai; when my app sees it, it skips the redirect.
What I'm looking for is the best method to accomplish this in PHP. Something along the lines of:
// 'X-From-Akamai' stands in for whatever header you configure Akamai to send
if (empty($_SERVER['HTTP_X_FROM_AKAMAI']) && $_SERVER['HTTP_HOST'] === 'origin.site.com') {
    // 301 redirect to the www.site.com version of the same request URL
    header('Location: https://www.site.com' . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
This would ensure that all requests coming from outside Akamai are properly redirected. Is this method sound? Does anyone currently have a code sample using a similar method?

This is a completely wrong approach. What you need to do is implement Site Shield in Akamai. Site Shield gives you a fixed set of Akamai IPs; if you allow only those IPs at the origin, that should solve your problem. Akamai will make sure all requests to the origin are sent from one of the Site Shield map IPs, so any request sent directly to the origin is denied while requests from Akamai are allowed. Contact Akamai support to help you create and map Site Shield for your domains. No code changes are required for this.
Additionally, you can allow your office IP if you want the origin domain to stay open for your own testing.
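For illustration only (the answer's point stands that this belongs at the firewall or CDN level, not in code), here is the same allowlist idea sketched in PHP; the CIDR ranges and office IP below are placeholders for the real Site Shield map that Akamai provides:
<?php
// Sketch only: deny origin hits that don't come from the Site Shield map.
// The ranges below are placeholders; Akamai supplies the real map.
$allowedCidrs = ['198.51.100.0/24', '203.0.113.0/24']; // hypothetical map
$officeIp     = '192.0.2.10';                          // hypothetical office IP

function ip_in_cidr($ip, $cidr) {
    list($subnet, $bits) = explode('/', $cidr);
    $mask = -1 << (32 - (int) $bits);
    return (ip2long($ip) & $mask) === (ip2long($subnet) & $mask);
}

$client  = $_SERVER['REMOTE_ADDR'];
$allowed = ($client === $officeIp);
foreach ($allowedCidrs as $cidr) {
    $allowed = $allowed || ip_in_cidr($client, $cidr);
}
if (!$allowed) {
    http_response_code(403); // deny anything that didn't come via Akamai
    exit;
}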

Related

How to implement HSTS in my website

I have a website (domain from GoDaddy, hosted on HostGator). Having updated the certificates manually, I can redirect my site to HTTPS, but it always goes to HTTP from Google search. After searching online, I learned that having Strict-Transport-Security: max-age=15768000 appear in the result of curl -i -L on the target domain should meet my need, as it will force a browser to open the website over HTTPS. But I'm confused about how to implement this on my website.
Can anyone help me with this?
I'm not sure this is right for Stack Overflow; then again, it covers so many topics that it doesn't fit neatly into any other Stack Exchange site either. So anyway, I'll attempt an answer.
Redirects
What do you mean by "I can redirect my site to https"? You should redirect your site to https now that you've gone through the hassle of setting it up, so are you actually doing that? Or can you access the site over both http and https? If the latter, find out how to force https even when the user requests http.
This is set up with a redirect rule on your web server. I'm not sure whether you have direct access to your config (e.g. the .htaccess file if you're using Apache) or need your hosting provider to set this up for you.
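If you only have application-level access, a minimal sketch of the same redirect in PHP (assuming your pages run through PHP; it must run before any output is sent):
<?php
// Redirect any plain-http request to the https version of the same URL.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}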
Google search
Regarding Google Search: once you have the redirect set up, it will take some time for Google to recognise it and update the links in its search index to show the https version of the pages.
That said, there are ways you can tell Google about the change to hurry the process along:
Do you force a redirect to https? If not, Google will decide which version to show (http or https) based on a number of factors.
Do you have a sitemap, and have you updated those links to https?
Do you have a rel="canonical" setting in the HTML of any of your pages and is it set to the https version? This tells Google which is the real version of the page if, for example, you allow both http and https versions of the page (not recommended).
Have you registered the https version of your site with Google Search Console? If so are there any errors in there? You can also kick off a re-index request in here.
Have you set all internal links to be https or, better yet, relative links?
Can you update any external links to be https instead of http?
HTTP Strict Transport Security (HSTS)
This is an advanced topic, so I really wouldn't recommend it until you understand it better. Basically, it's an HTTP header you send back with your webpage over https to tell web browsers: "hey, I'm an https-only site; from now on, automatically translate any http requests to https before you even send them to me".
It is a good security addition on top of redirects, but crucially it does not replace the need for them. Redirects need to be in place first to send the user to https, at which point your web server can send the HSTS HTTP header (which the browser will cache so it knows to switch to https next time).
To set it up you send an HTTP header like this (but only over https requests):
Strict-Transport-Security: max-age=16070400
This can be set up in your web server, in your PHP files, or any other way you can send HTTP headers.
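For instance, a minimal PHP sketch; the https check matters because the header should only ever be sent on https responses:
<?php
// Send HSTS only over https so browsers never cache it from a plain-http page.
if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    header('Strict-Transport-Security: max-age=16070400');
}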
Be aware that this will prevent your site from being available over http, so if you decide to turn off https for whatever reason, you've basically blocked your site for up to the max-age time for any browser that has cached that setting.
For more information on HSTS see here:
301 Redirect and HSTS in .htaccess
But I really don't think HSTS is what you're looking for here. It tells web browsers (like Google Chrome) to force https and has nothing to do with search engines (like Google Search), which at present ignore this header.

Can I make a call to http://api.example.com from https://example.com?

I am using Django for the backend API and Ajax for making API calls. My main site runs on HTTPS but the API runs on HTTP, and I am unable to make API calls from the site, which is served over SSL by Nginx.
Is it possible to make Ajax calls from HTTPS to HTTP?
Any leads will be appreciated.
Thanks in advance!
The only difference between HTTP and HTTPS is the SSL security layer. If your server is able to handle HTTPS requests, they will be sent through to the API just like any other HTTP request; it's only the actual data transport from the client socket to the server socket that is affected, and once the data is received it's back in plain text (or its original format) again.
Your browser will stop this, however, and/or give an insecure warning on the padlock symbol for your HTTPS connection.
HTTPS indicates the site is secure, which gives certain guarantees to the visitor - namely that the site is for the given domain (authentication), that it's not been intercepted and changed (integrity) and that no one else is able to listen in to your messages to and from the server (confidentiality).
When you add an insecure resource like an API call, those guarantees are no longer there, so the browser will give an "insecure" warning, typically with a yellow warning padlock (instead of green) and/or a pop-up.
Browsers used to differentiate between passive content (e.g. images), which was seen as less of a risk and so allowed, and active content (e.g. JavaScript), which was potentially dangerous and so blocked; however, I don't think they do that any more. Even if they did, Ajax XHR calls are definitely in the latter category.
The best option is to proxy the request through your main site's domain in Nginx (e.g. forward requests to https://example.com/api from Nginx to your API using the Nginx config).
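A rough sketch of that Nginx approach (the /api/ path and the upstream address are assumptions, not your actual setup):
# Inside the https server block: the browser sees a same-origin, secure URL,
# and Nginx forwards the call to the plain-http backend internally.
location /api/ {
    proxy_pass http://127.0.0.1:8000/;   # hypothetical Django backend
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}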

file_get_contents and ajax requests

I have a PHP proxy script which uses file_get_contents to fetch web sites and output them.
Everything works as long as the sites are static, but as soon as I use sites that rely on Ajax requests to update their content, like Twitter, 9gag or YouTube, the new content doesn't get added.
I get this error in the console:
XMLHttpRequest cannot load http://9gag.com/new/json?list=hot&id=6408098. Origin is not allowed by Access-Control-Allow-Origin.
Since the 9gag site is now my local site served by my local proxy, it can't access new content from the original 9gag site; this is a cross-domain issue.
So my question is: how do I route these Ajax requests through my local proxy server?
This is a security feature; it is made to prevent exactly the kind of request you are trying to make. As I see it, you have only two possibilities:
Add the site to your hosts file to forward it to your proxy (see the sketch below). This way you have to ensure that your proxy responds correctly. I don't know whether browsers make other checks besides the domain, but if only the domain is taken into account, everything will be OK.
Set your OS to use your proxy site as the system proxy. This way you would have to make it respond like a regular proxy server.
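For the hosts-file approach, a sketch of what the proxy script would have to do; the IP is a placeholder, and fetching by IP matters because, with the hosts entry in place, a fetch by hostname would just loop back to the proxy itself:
<?php
// Sketch: /etc/hosts maps 9gag.com to this machine, so every request
// (page loads and Ajax calls alike) arrives here, same-origin as far as
// the browser can tell. Fetch from the site's real IP to avoid a loop.
$realIp  = '203.0.113.7'; // placeholder for the site's actual IP
$context = stream_context_create([
    'http' => ['header' => "Host: 9gag.com\r\n"],
]);
echo file_get_contents('http://' . $realIp . $_SERVER['REQUEST_URI'], false, $context);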
P.S. Maybe it would be better to use some ready-made transparent proxy utility?

Providing Access-Control-Allow-Origin with a Wildcard

I am making a page that responds to an AJAX request with a certain string when another certain string is provided as a GET variable. In order to avoid problems with the "same origin" policy, I have found that I can include this line of PHP at the top of the page:
header('Access-Control-Allow-Origin: *');
There's no sensitive data being passed whatsoever; it's actually keywords passed back and forth between several different domains (it's an SEO-related application). Because of this, hundreds of different domains will be using it, so if possible I would like to avoid specifying each one. Are there any risks to using this line? If so, what are they?
Also, if this page were located at an HTTPS URL, would it still be accessible?
Any advice, suggestions or concerns are most welcome. Thank you!
If the access truly is public, I'd say this is a good solution. However, if you want to limit the access to your site, you'll probably want to list explicitly each domain origin allowed.
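Since Access-Control-Allow-Origin accepts only a single origin (or the * wildcard), the usual PHP pattern for an explicit whitelist is to echo the request's Origin back only when it's on the list; a sketch with placeholder domains:
<?php
// Reflect the Origin header only for whitelisted domains (placeholders here).
$allowed = ['https://partner-one.example', 'https://partner-two.example'];
$origin  = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

if (in_array($origin, $allowed, true)) {
    header('Access-Control-Allow-Origin: ' . $origin);
    header('Vary: Origin'); // stop caches reusing the header across origins
}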
Since you say your response doesn't include any sensitive information, you probably don't need to worry about hosting your service over HTTPS. The one reason you might is if a client HTTPS page tries to access your non-HTTPS service: in that case the browser would likely warn about insecure content being sent/received when your AJAX service is called, or even fail silently. If that's a common enough case, I'd look into an HTTPS service. Make sure your HTTPS certificate is properly signed, because if the client's browser cannot verify the certificate, the AJAX request will fail silently (as opposed to prompting, as happens when you navigate directly to an HTTPS page)! Also, I don't know how it will go in your case, but whenever I've worked with HTTPS I've usually had to tweak things to get them to function properly.
Long story short, I'd start with HTTP and then evaluate the need of HTTPS. Good luck!

How to ensure the HTTP request is coming from the right place?

I've learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So how can I ensure an incoming HTTP request is coming from a website that I whitelist?
For example, I have JS code that will be sent from a client site to my server (something like a sniper, cross-platform); however, I only want this to work from certain websites and not others, so that even if other people copy the code and put it on their own website, it won't work.
In the general case you simply can't do it: you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check each token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
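A sketch of that token idea, assuming each whitelisted site holds a secret it shares with your server; every name here, and the scheme itself, is hypothetical:
<?php
// Hypothetical shared-secret scheme: the whitelisted site mints a
// short-lived signed token, and your endpoint verifies it before responding.
function make_token($userId, $secret) {
    $payload = $userId . '|' . (time() + 300); // valid for five minutes
    return $payload . '|' . hash_hmac('sha256', $payload, $secret);
}

function check_token($token, $secret) {
    $parts = explode('|', $token);
    if (count($parts) !== 3 || (int) $parts[1] < time()) {
        return false; // malformed or expired
    }
    $payload = $parts[0] . '|' . $parts[1];
    return hash_equals(hash_hmac('sha256', $payload, $secret), $parts[2]);
}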
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I wasn't counting on, but I thought I'd contribute the idea anyway; there's a code sketch after the steps. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see step 2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
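A sketch of those steps in PHP, with storage simplified to the session; a shared cache or database would serve better in practice, and all names here are made up:
<?php
session_start();

// Steps 1-2: when serving the page, mint an ID and record it with the IP.
function issue_page_id() {
    $id = bin2hex(random_bytes(16));
    $_SESSION['page_ids'][$id] = ['ip' => $_SERVER['REMOTE_ADDR'], 'ts' => time()];
    return $id; // embed this in the served page for the JS to send back
}

// Steps 3-5: honour the JS request only if the ID/IP pair is on file
// and has not expired.
function page_id_is_valid($id, $maxAge = 600) {
    $entry = isset($_SESSION['page_ids'][$id]) ? $_SESSION['page_ids'][$id] : null;
    return $entry !== null
        && $entry['ip'] === $_SERVER['REMOTE_ADDR']
        && (time() - $entry['ts']) <= $maxAge;
}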
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real time, but it will at least prevent someone from making another web page that piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
All HTTP headers can be faked.
If you are just accepting communication from the remote server (rather than having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
"For example, I have JS code that will be sent from a client site to my server (something like a sniper, cross-platform)."
I assume what you're saying is that you have some JavaScript code served up on a website on your whitelist which redirects the user to your website, and it's on your website that you want to check that the user came from the whitelisted site?
Aside from setting a cookie (which might not be possible across domains) you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
"So how can I ensure an incoming HTTP request is coming from a website that I whitelist?"
I think you could sign every request (from the whitelist) with a token that is valid for that request only (used once). I assume using uniqid() for this is safe (enough?).
