Providing Access-Control-Allow-Origin with a Wildcard - php

I am making a page that responds to an AJAX request with a certain string when another certain string is provided as a GET variable. In order to avoid problems with the "same origin" policy, I have found that I can include this line of PHP at the top of the page:
header('Access-Control-Allow-Origin: *');
There's no sensitive data being passed whatsoever; it's actually keywords being passed back and forth between several different domains (it's an SEO-related application). Because of this, hundreds of different domains will be using it, so if possible I would like to avoid specifying each one. Are there any risks to using this line? If so, what are they?
Also, if this page were located under an HTTPS URL, would it still be accessible?
Any advice, suggestions or concerns are most welcome. Thank you!

If the access truly is public, I'd say this is a good solution. However, if you want to limit access to your site, you'll probably want to explicitly list each allowed origin.
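For illustration, here's a minimal sketch of what an explicit whitelist could look like in PHP (the domain names and variable names below are placeholders, not anything from the question):

<?php
// Hypothetical list of origins that are allowed to call this endpoint.
$allowedOrigins = array('https://example-a.com', 'https://example-b.com');
$origin = isset($_SERVER['HTTP_ORIGIN']) ? $_SERVER['HTTP_ORIGIN'] : '';

if (in_array($origin, $allowedOrigins, true)) {
    // Echo back only the origin that matched, instead of the wildcard.
    header('Access-Control-Allow-Origin: ' . $origin);
    // Keep caches from reusing a response across different origins.
    header('Vary: Origin');
}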
Since you say your response doesn't include any sensitive information, you probably don't need to worry about hosting your service over HTTPS. The one reason you might is if a client's HTTPS page tries to access your non-HTTPS service. In that case, I would guess they'd get a warning about insecure content being sent/received when your AJAX service is called, or maybe even just a silent failure. If that is a common enough case, then I'd say look into an HTTPS service. Make sure your HTTPS certificate is properly signed, because if the client's browser cannot verify the certificate, the AJAX request will fail silently (as opposed to prompting, as happens when you navigate directly to an HTTPS page)! Also, I don't know how it will go in your case, but whenever I've worked with HTTPS, I've usually had to tweak things to get them to function properly.
Long story short, I'd start with HTTP and then evaluate the need for HTTPS. Good luck!

Related

Preventing calls to php scripts from a localhost or from another domain

I have a website with some PHP scripts; some of them are called via AJAX.
I'd like to protect my site from malicious users who would try calling and using those scripts from another site, or from a dummy localhost site.
I thought about filtering by domain name, but with tools like EasyPHP and virtual host managers, you can run a local website that spoofs the "domain" name.
I also thought about filtering by the caller's IP address, but I guess that if you can fake the "domain" name, you can also fake the localhost IP.
So, how can I do this so that the security actually works?
What you are referring to is called Cross-Site Request Forgery.
Calling one of your scripts from another website will be forbidden by the same-origin policy. Taking this into consideration, plus the fact that an AJAX request can contain only a few headers without the consent of the server via Cross-Origin Resource Sharing, you can send a custom HTTP header and check that header on the server side, from PHP. If the header is missing, most likely the request is not coming from your own application.
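As a rough sketch, the server-side check for such a custom header might look like this (X-Requested-With is used only as an example; any header name works as long as your JS sends it):

<?php
// PHP exposes the "X-Requested-With" header as HTTP_X_REQUESTED_WITH in $_SERVER.
if (!isset($_SERVER['HTTP_X_REQUESTED_WITH'])
    || $_SERVER['HTTP_X_REQUESTED_WITH'] !== 'XMLHttpRequest') {
    // Header missing or wrong, so the request probably isn't from our own app.
    http_response_code(403);
    exit('Forbidden');
}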
You could also require each client to send a unique token with each request in order to fetch the data. The most commonly used token method is called the synchronizer token pattern.
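A very condensed sketch of the synchronizer token pattern in PHP (assuming PHP 7+ for random_bytes; session handling and token rotation are simplified here):

<?php
session_start();

// Generate a token once per session and embed it in the page or JS config.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// On each AJAX request, compare the submitted token with the stored one.
$sent = isset($_POST['csrf_token']) ? $_POST['csrf_token'] : '';
if (!hash_equals($_SESSION['csrf_token'], $sent)) {
    http_response_code(403);
    exit('Invalid token');
}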
Sorry for the long list of links included in this answer, but I consider the subject a delicate one, and like any security problem, I think it is crucial to read as much as you can from many sources in order to understand the problem from different perspectives and the available solutions, and to pick the right one for your use case.
Resources to read:
How to stop other website to send cross domain ajax requests?
What's the point of X-Requested-With header?
Using CORS

Directing HTTP requests to HTTPS if initial connection is HTTPS but not if it is HTTP

I have a site running WordPress on an Apache server, and I am attempting to provide both HTTP and HTTPS connections via the same site. I want to allow connections over HTTP without forcing a redirect to HTTPS; however, if the client initially connects via HTTPS, I want all subsequent HTTP requests to be forwarded to HTTPS to avoid issues with CORS and unsecured-content warnings.
I am having some trouble turning up results on how to do this effectively with mod_rewrite alone. Most solutions I find try to force connections to redirect to HTTPS regardless and will not allow an HTTP connection, or vice versa. I have tried a few mod_rewrite conditions, including making use of the referer string, but none have worked so far. I must be missing something, because I feel that this is indeed possible, but I and my search engines alone are stumped.
Maybe I'm just doing something wrong, or is this kind of functionality beyond mod_rewrite?
I was thinking of using a PHP script, but was worried it wouldn't work for some static files, since WordPress doesn't handle those requests.
Update:
I have made a PHP script to detect the protocol. It sets a cookie which expires 20 seconds after being set; this is read by mod_rewrite, and if the cookie is present it redirects the URLs to HTTPS. This works for most of the subsequent requests after an initial HTTPS request. A few URLs seem to be unaffected by it; I'm not sure exactly why, since the cookie hasn't expired by the time of those file requests and the relevant rules come before the static-file bypass rules in the .htaccess file. At any rate, that was easy enough to fix by switching those file URLs to protocol-relative versions.
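For reference, the detection part of that script amounts to something like the following sketch (the cookie name and the 20-second lifetime are my own placeholders):

<?php
// If this page was requested over HTTPS, leave a short-lived cookie for mod_rewrite to read.
if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
    setcookie('came_via_https', '1', time() + 20, '/');
}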
Some third-party resources need their domains rewritten too, as they serve HTTPS from other domains. On that note, I don't think this is actually possible without buffering the whole page and rewriting the URLs.
It is possible to detect the initial connection, but this must be done using server-side code, like a PHP script. The redirection based on that detection can then be done at the mod_rewrite level.
Add in the WordPress constraint and things get complicated.
WordPress isn't built to let a single install serve its content over both protocols. So accomplishing this would require a custom plugin that uses the detection mentioned earlier; instead of using mod_rewrite to direct requests on the server, we have to buffer the WordPress output and rewrite the URLs in the page before they go to the user, if and only if the initial connection for the page was over SSL.
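In plugin form, the buffering idea boils down to something like this sketch (the function names are placeholders and the string replacement is deliberately naive; a real plugin would need to handle edge cases):

<?php
// Start buffering early (the 'init' hook is used here) when the page was requested over SSL.
function my_ssl_buffer_start() {
    if (!empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off') {
        ob_start('my_ssl_rewrite_urls');
    }
}

// Rewrite same-site http:// URLs to https:// before the page is sent to the user.
function my_ssl_rewrite_urls($html) {
    $host = $_SERVER['HTTP_HOST'];
    return str_replace('http://' . $host, 'https://' . $host, $html);
}

add_action('init', 'my_ssl_buffer_start');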
There is only one plugin I have found which does something similar to this; however, it doesn't do dynamic detection, it only gives admins/editors a checkbox option to make a page SSL-secured. The plugin is called WordPress HTTPS.
Dynamic detection and redirection isn't something SSL was meant for anyway; it's either on or off, and most pages need it that way.
I was originally trying to provide both so I could use a self-signed certificate without worrying that users would get "unsecured connection" warnings from their browsers, which is what would happen if I forced them to use only SSL connections.
So I'll be purchasing a cert or making a custom plugin.
tkausl is right: you don't really need mod_rewrite. You should be able to format links without the protocol, and the browser will automagically select it for you.
You can see that google does this with their hosted libraries:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
Note the lack of http: or https: in the URL; the request will follow the protocol used to load the page.

Can we detect if a root CA certificate is installed?

Is this possible with JavaScript or PHP? I want to be able to detect whether my private CA is installed on the user's iOS or Android device. From there I can decide whether or not to provide installation instructions. I've been googling and haven't found anything useful. Has anyone tried this before? I want to find out what I should spend my time learning. If it's not possible, could you suggest an in-browser alternative?
EDIT : I don't have a choice here and it's not my decision. A private CA certificate is going to be used for other security reasons.
I doubt there will be any sort of device query to test this.
I haven't actually done this, but you could probably come up with a test where the JavaScript makes an AJAX request to an HTTPS server that uses the certificate you want to test for. If the request succeeds, then the certificate is working. (This question seems to imply that AJAX requests will, correctly, fail if the SSL certificate doesn't validate.)
Note that, because the scheme (http or https) of the URL will be different (and maybe the domain depending on how you set this up), your test site will have to use the CORS Access-Control-Allow-Origin header to allow the browser to make the request. See: AJAX calls to untrusted (self-signed) HTTPS fail silently
EDIT:
I had some time and put together a very simple example. Go to http://ssl_test.gjp.cc . That page will attempt to make an AJAX request to https://ssl_test2.gjp.cc, which uses a self-signed certificate. Before you trust ssl_test2, you will see "Failed" on the ssl_test page; however, once you trust the certificate for ssl_test2, you should always see "Success" on ssl_test.
Note that this doesn't prove that your user has the CA cert installed - all it proves is that they have configured their browser to trust the test site (ssl_test2). If you never directly point the user to the test site, then they will never have the chance to trust only that site, so this should work reasonably well.
Maybe this will help:
<img src="https://the_site/the_image" onerror="redirectToCertPage()" />

Disable cookies on certain PHP pages

Is there a way to disable PHP sending cookies in a specific page's header, on a server that normally uses cookies?
The scenario is that I have a server that uses cookies, but some pages actually serve programmatic API calls that don't need cookies, and in fact the user's API request is slowed down by sending this irrelevant data.
The way that many sites use to serve their static resources without the cookie overhead is using a different domain. For Stack Overflow, for example, that domain is http://sstatic.net
In a web app, you can restrict cookies to a specific path. By default, they will be restricted to the directory in which they were set. You can also explicitly specify it using the $path parameter in setcookie().
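For example, a cookie can be limited to one part of the site like this (the name and path are purely illustrative), so it is never sent along with requests to other paths such as the API:

<?php
// This cookie will only be sent back for URLs under /app/, not for /api/ calls.
setcookie('session_pref', 'value', time() + 3600, '/app/');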
I agree with Pekka's answer and Dagon's comment. If you look at what goes into an HTTP request with a tool like Firebug, you'll see that cookies are only set when there is a setcookie() call; however, the browser will always send any valid cookies it has for the domain.
The way around this is to use a separate domain or subdomain for your API. You can also configure the web server behind the API to disable any support for cookies; however, if your domain has set a domain-wide cookie anywhere, you can't stop clients from sending all that cookie data in the headers of their requests. Thus it's probably best to use an entirely different domain for your API, and avoid cookies entirely in doing so. If you can ensure that no domain-wide cookies exist, then a subdomain is the next best solution.

How to ensure the HTTP_REQUEST is coming from the right place?

I have learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable though.
So, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I have white-listed?
For example, I have a js code that will send from client site to server (something like a sniper, cross platform). However, I only want to allow this to happen from certain websites, not others, so that even if other people copy the code and put it onto their website, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all of those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check each token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all of the sites.
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I wasn't counting on, but I thought I'd contribute the idea anyway. If I'm interpreting it correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
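A rough PHP sketch of that handshake might look like the following; store_page_token() and token_matches() are hypothetical storage helpers (a database table or cache would back them), not existing functions:

<?php
// When serving the page: issue an ID tied to the visitor's IP and embed it in the page.
$pageId = bin2hex(random_bytes(16));
store_page_token($pageId, $_SERVER['REMOTE_ADDR'], time()); // hypothetical storage helper

// When handling the JS request: only respond if the ID/IP pair is on file and not expired.
$sentId = isset($_GET['page_id']) ? $_GET['page_id'] : '';
if (!token_matches($sentId, $_SERVER['REMOTE_ADDR'])) { // hypothetical lookup helper
    http_response_code(403);
    exit;
}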
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
For example, I have a js code that will send from client site to server. (something like a sniper, cross platform).
I assume what you're saying is that you have some JavaScript code served up on a website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
so, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
I think you could sign every request (from the whitelist) with a token that is valid for that request only (used once). I assume using uniqid for this is safe (enough?).
