As the title says, I intend to create a web app that uses an iframe to lock all my web sessions within the server itself. That way, when accessing it from a client, I can still visit other sites while staying on the main browser page.
Since the website itself is making the connection through the page, security-wise, am I technically going through a VPN, given that the connection goes like this:
Client -> Server Hosting the Main Webpage -> facebook.com
Will my connection to facebook.com come from the client, or the server?
And is this type of solution even feasible?
Will my connection to facebook.com come from the client, or the server? And is this type of solution even feasible?
If you're just using an IFrame, then the request will come from the browser.
If you've made a proxy handler on your site which makes a back-end HTTP request, then it will come from the server, e.g. the handler could take a query string parameter like url: http://example.com?url=https://facebook.com.
Three relevant security issues spring to mind with this approach.
Server-Side Request Forgery - ensuring an attacker cannot browse to things in your DMZ like http://127.0.0.1 or http://192.168.2.4 (see the sketch after this list).
X-Frame-Options - lots of sites use this header, or the newer CSP2 frame-ancestors directive, to prevent themselves from being framed. You could, though, strip such headers out in your proxy code.
Browser trust. If I'm on your website at http://example.com (or even https://example.com), how do I know I'm logging into Facebook? There is no assurance other than the fact that the iframed page looks like Facebook. In any case, if you're proxying the request to Facebook, how do I know you're not capturing my credentials?
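To make that concrete, here is a minimal PHP sketch of such a ?url= proxy handler. It is an assumption-laden illustration, not a hardened implementation (cookies, POST bodies and redirects are ignored); it combines the back-end fetch with a basic guard for point one and the header stripping from point two:

```php
<?php
// Hypothetical sketch of the ?url= proxy handler described above.
// Not production-hardened: cookies, POSTs and redirects are ignored.
$url  = $_GET['url'] ?? '';
$host = parse_url($url, PHP_URL_HOST);

// SSRF guard (point 1): resolve the host and refuse private/reserved
// ranges such as 127.0.0.1 or 192.168.x.x. gethostbyname() returns its
// input unchanged on failure, which then fails IP validation below.
$ip = $host ? gethostbyname($host) : '';
if (filter_var($ip, FILTER_VALIDATE_IP,
        FILTER_FLAG_NO_PRIV_RANGE | FILTER_FLAG_NO_RES_RANGE) === false) {
    http_response_code(403);
    exit('Target not allowed');
}

// Fetch server-side: the outbound request originates from the server.
// Redirects are deliberately not followed, since a redirect to an
// internal address would bypass the check above.
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,  // keep headers so we can filter them
    CURLOPT_TIMEOUT        => 10,
]);
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$rawHeaders = substr((string) $response, 0, $headerSize);
$body       = substr((string) $response, $headerSize);

// Relay headers, stripping anti-framing ones (point 2). Length and
// encoding headers are dropped too, as cURL has already decoded the body.
foreach (explode("\r\n", $rawHeaders) as $line) {
    if ($line === '' || stripos($line, 'HTTP/') === 0) {
        continue;
    }
    if (preg_match('/^(X-Frame-Options|Content-Security-Policy|'
            . 'Content-Length|Transfer-Encoding|Content-Encoding)\s*:/i', $line)) {
        continue;
    }
    header($line);
}
echo $body;
```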
If this is just for yourself, then you can somewhat ignore points one and three; however, you have no way of verifying the security yourself using your browser. You'd have to trust your server-side code, and how will you be aware if a MITM downgrades your connection from HTTPS to plain HTTP (sslstrip)?
The rest of it is feasible, ignoring the security issues. Handling session cookies and the like will result in some complex code (particularly if you're going to deal with certain cookies being set in JavaScript too because they'll all share an Origin with your main site's domain).
Related
I am using Django as a backend API and Ajax for making API calls. My main site runs on HTTPS, but the API runs on HTTP, and I am unable to make API calls from the SSL site (the certificate is loaded onto Nginx).
Is it possible to make Ajax calls from HTTPS to HTTP?
Any leads will be appreciated. Thanks in advance!
The only difference between HTTP and HTTPS is the SSL security part. If your server is able to handle HTTPS requests, they will be sent through to the API just like any other HTTP request; it's only the actual data communication from the client socket to the server socket that is affected. Once the data is received, it's back in plain text (or its original format) again.
Your browser will stop this, however, and/or give an "insecure" warning and a warning padlock symbol for your HTTPS connection.
HTTPS indicates the site is secure, which gives certain guarantees to the visitor - namely that the site is for the given domain (authentication), that it's not been intercepted and changed (integrity) and that no one else is able to listen in to your messages to and from the server (confidentiality).
When you add an insecure resource like an API call, those guarantees are no longer there, and so the browser will give an "insecure" warning, typically with a yellow warning padlock (instead of a green one) and/or a pop-up.
Browsers used to differentiate between passive content (e.g. images), which was seen as less of a risk and so allowed, and active content (e.g. JavaScript), which was potentially dangerous and so not allowed; however, I don't think they do any more. Even if they did, Ajax XHR calls are definitely in the latter category.
The best option is to proxy the request through your main site's domain with Nginx (e.g. forward requests for https://example.com/api to your API using Nginx config), as sketched below.
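A minimal sketch of that Nginx config, assuming the Django API listens locally on port 8000 (all names and paths here are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # Forward https://example.com/api/... to the plain-HTTP backend.
    # The HTTP hop stays on localhost, so it never crosses the network.
    location /api/ {
        proxy_pass http://127.0.0.1:8000/;
        proxy_set_header Host              $host;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

The browser then only ever talks to https://example.com, so no mixed-content warning is triggered.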
I have a website with some PHP scripts, some of which are called via Ajax.
I'd like to protect my site from malicious users who would try calling and using those scripts from another site, or from a dummy localhost site.
I thought about filtering on the domain name, but with tools like EasyPHP and virtual host managers, you can run a local website that spoofs the "domain" name.
I also thought about filtering on the IP address of the caller, but I guess that if you can spoof the "domain" name, you can also spoof the localhost IP.
So, how can I make this kind of protection work reliably?
What you are referring to is called Cross-Site Request Forgery (CSRF).
Calling one of your scripts from another website will be forbidden by the same-origin policy. Taking this into consideration, along with the fact that an Ajax request can contain only a few headers without the consent of the server via Cross-Origin Resource Sharing, you can send a custom HTTP header and check that header on the server side, from PHP, as sketched below. If the header is missing, most likely the request is not coming from your own application.
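A minimal sketch of that server-side check in PHP (the header name is just a convention; any custom name your own JavaScript sends will do):

```php
<?php
// Reject requests that lack the custom header our own front-end sends.
// A cross-origin page cannot add this header without a CORS preflight,
// which the server would refuse.
if (($_SERVER['HTTP_X_REQUESTED_WITH'] ?? '') !== 'XMLHttpRequest') {
    http_response_code(403);
    exit('Forbidden');
}
// ... normal script logic continues here ...
```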
You could also require each client to send a unique token with each request in order to fetch the data. The most commonly used token method is the synchronizer token pattern; a sketch follows.
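A sketch of the synchronizer token pattern in PHP, assuming session storage:

```php
<?php
session_start();

// When rendering the page: mint one token per session and embed it
// in the page (hidden form field, or a JS variable for Ajax calls).
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// When handling the request: compare the submitted token with the
// stored one using a timing-safe comparison.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $sent = $_POST['csrf_token'] ?? '';
    if (!hash_equals($_SESSION['csrf_token'], $sent)) {
        http_response_code(403);
        exit('Invalid token');
    }
    // ... process the request ...
}
```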
Sorry for the long list of links included in this answer, but I consider the subject to be a delicate one and like any security problem, I think it is crucial to read as much as you can, from many sources, in order to understand the problem from different perspectives, available solutions and pick the right one for your use case.
Resources to read:
How to stop other website to send cross domain ajax requests?
What's the point of X-Requested-With header?
Using CORS
I have used SSL to secure my pages, but one of my scripts has stopped working.
I was using a script on the page to show the visit count on this website.
It was working fine earlier without SSL but now shows the error message:
blocked insecure content.
When using a secure connection, all content should be loaded over secure connections too. That includes images, scripts, iframes, stylesheets, SWF and other media, from both your own server and third-party ones.
Some browsers allow configuration changes so they can fetch and display this content anyway, but you can't force your users to change their configuration (especially to a less secure one).
If this service does not provide its API over SSL, you may have to swap it for another one, or give up on this counter on secured pages.
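If the counter service does offer HTTPS, the fix can be as small as changing the script URL (the hostname below is hypothetical):

```html
<!-- Before: loaded over plain HTTP, blocked on HTTPS pages -->
<script src="http://counter.example.net/count.js"></script>

<!-- After: the same script requested over HTTPS -->
<script src="https://counter.example.net/count.js"></script>
```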
It's a deliberate security feature, to prevent a page from looking secure while it uses resources from less secure sites.
See if you can host the script under your SSL domain, or you could proxy the response if it's an API, for example.
Be aware though that you are circumventing a security feature and you should be confident that you trust the resource.
This feature was recently enabled by default in Firefox 23. That's probably the reason it stopped working now (Chrome has been doing this for longer), but it has always been bad practice because of several security implications: even if the page itself is protected from tampering, it gives the end user a false sense of security when they see the connection is encrypted with HTTPS. After all, the insecurely served script could still be tampered with through a MITM attack and, for example, introduce password-sniffing callbacks or redirect form postback targets.
I have learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable though.
So, how can I ensure that an incoming HTTP request is coming from a website that I have white-listed?
For example, I have some JS code that will send from a client site to my server (something like a sniper, cross-platform). However, I only want this to work from several whitelisted websites and not others, so even if other people copy the code and put it onto their website, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all of those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all of the sites.
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I'm not counting on, but I thought I'd contribute the idea anyway; a sketch follows the list below. If I interpret it correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
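A rough PHP sketch of this scheme, using the session store for simplicity (a database table would work the same way; all names are made up):

```php
<?php
session_start();

// Steps 1-2: when serving the page, mint an ID and record it together
// with the client's IP address and an expiry time.
function issue_page_id(): string
{
    $id = bin2hex(random_bytes(16));
    $_SESSION['page_ids'][$id] = [
        'ip'      => $_SERVER['REMOTE_ADDR'],
        'expires' => time() + 600,   // step 5: expire after 10 minutes
    ];
    return $id;                      // embed this in the page for the JS
}

// Step 4: when the JS request arrives, honour it only if the ID is
// known, unexpired, and bound to the same IP it was issued to.
function validate_page_id(string $id): bool
{
    $entry = $_SESSION['page_ids'][$id] ?? null;
    return $entry !== null
        && $entry['expires'] > time()
        && $entry['ip'] === $_SERVER['REMOTE_ADDR'];
}
```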
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be spoofed by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
For example, I have some JS code that will send from a client site to my server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your "whitelist" which redirects the user to your website, and it's on your website that you want to check that the user came from the whitelisted site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
So, how can I ensure that an incoming HTTP request is coming from a website that I have white-listed?
I think you could sign every request (from the whitelist) with a value that is valid for that request only (used once). I assume uniqid is safe enough for this, though a signed token, as sketched below, is harder to forge.
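A hypothetical sketch of such one-time signed values in PHP, using an HMAC with a key shared between your server and each whitelisted site rather than a bare uniqid:

```php
<?php
// $secret is a long random key shared with one whitelisted site.

// The whitelisted site generates a nonce and signs it.
function make_token(string $secret): string
{
    $nonce = bin2hex(random_bytes(16));
    return $nonce . ':' . hash_hmac('sha256', $nonce, $secret);
}

// Your server verifies the signature and rejects replayed nonces.
// $seen would be a persistent store (database/cache) in practice.
function verify_token(string $token, string $secret, array &$seen): bool
{
    $parts = explode(':', $token);
    if (count($parts) !== 2 || isset($seen[$parts[0]])) {
        return false;                // malformed or already used
    }
    $seen[$parts[0]] = true;         // burn the nonce: valid once only
    return hash_equals(hash_hmac('sha256', $parts[0], $secret), $parts[1]);
}
```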
I know the general definition, but I need more details on how to implement SSL in general, and in PHP specifically, and what exactly are the features I gain from it?
SSL stands for "Secure Socket Layer", and it's a method of encrypted HTTP communication (among other things). It encrypts the traffic between a web browser and a server, making it possible to send secure data without fear of eavesdropping.
SSL is a web-server level technology, and has nothing to do with PHP. You can enable any web server with SSL, whether it has PHP on it or not, and you don't have to write any special PHP code in order to make your PHP pages show up over SSL.
There are many, many guides to be found on the internet about how to set up SSL for whatever webserver you might be using. It's a broad subject. You could start here for Apache.
Some web servers are configured to mirror the whole site, so you can get every page over HTTP or HTTPS, depending on what you prefer or how the browser sends you around. HTTPS is secure, but a bit slower, and it puts more strain on your hardware.
So you might implement your site and shop as usual, but decide to put everything from the cart to the checkout, payment and so on under HTTPS. To accomplish this, all links to the shopping cart are absolute and prefixed with https:// instead of http://. Now, if people click on the shopping cart icon, they're transferred to the secure version, and because all links from there on are relative again, they stay there.
But! They might replace the https with http manually, or land on the unencrypted version via a malicious link, etc.
In this case, you might want to check whether your script was called over HTTPS ($_SERVER['HTTPS'], rather than SERVER_PROTOCOL) and deny execution if not (good practice), or issue a redirect to the secure site, as sketched below.
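A minimal sketch of that check and redirect in PHP:

```php
<?php
// $_SERVER['HTTPS'] is set to a non-empty value when the request came
// in over TLS (IIS sets it to 'off' for plain HTTP, hence the check).
$isHttps = !empty($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== 'off';

if (!$isHttps) {
    // Redirect to the same URL over HTTPS instead of serving the page.
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
```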
On a side note: HTTPS does not use SSL exclusively any more; TLS (the successor to SSL, see RFC 2818) is more modern.
Rule of thumb: users should have the choice between HTTP and HTTPS in noncritical environments, but should be forced to use HTTPS on the critical parts of your site (login/cart/payment/...) to prevent malicious attacks.