I am in need of an authentication system that would work in harmony with the current authentication system my client's server uses.
The current system works as follows:
A page requiring authentication invokes an in-house developed mod_auth Apache module in the .htaccess file.
The user is redirected to a generic login page.
After entering valid credentials, a cookie is created containing the client's IP address, a public key, and other helpful info about the user, all base64-encoded.
Any page requiring authentication after this point checks the public key and the requesting IP address. If the user's IP has changed, they are redirected to the login screen. If the cookie is tampered with, they are redirected.
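For illustration, a minimal PHP sketch of such a check; the real system is an in-house Apache module, so the cookie name, its layout, and the lookup_key() helper here are assumptions, not the actual implementation:

    <?php
    // Hypothetical PHP re-creation of the cookie check (the real check lives
    // in the Apache module); assumes the cookie is base64-encoded JSON.
    function is_authenticated(): bool
    {
        if (!isset($_COOKIE['AUTH'])) {
            return false;
        }
        $data = json_decode(base64_decode($_COOKIE['AUTH']), true);
        if (!is_array($data) || !isset($data['ip'], $data['key'], $data['user'])) {
            return false; // malformed or tampered-with cookie
        }
        if ($data['ip'] !== $_SERVER['REMOTE_ADDR']) {
            return false; // the user's IP has changed
        }
        // lookup_key() is a hypothetical helper returning the key issued at login.
        return hash_equals(lookup_key($data['user']), $data['key']);
    }

    if (!is_authenticated()) {
        header('Location: /login');
        exit;
    }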
The benefit of the above system is that a cookie cannot be used on another machine (other than one on the same LAN, but other measures guard against man-in-the-middle attacks), as the IP address won't match.
The downside is that this method prevents the user's session from being extended server-side. In other words, a server-side script can't get information on behalf of the user since the IP address won't match.
This limitation makes sense under most circumstances, as it prevents the server from "stealing" the user's cookie. However, it also means that a web service can't be protected by the same authentication system, since requests will always come from the server's IP, never from the client's (unless AJAX is used, which is a very limited way to consume a web service).
What I would like is for the web service client (server-side) to pass the cookie to the web service server and have the web service server verify the authenticity of the cookie directly with the end-user's client.
My basis for this is how sites like Stack Overflow use OpenID to check login status at the browser level without the end-user being involved unless the check fails.
A quick Wikipedia search leads me to understand that the underlying system involved is a protocol called Yadis.
So I would like to know if I am missing any pieces to this puzzle and if I'm leaving myself open to major security flaws:
User logs in as normal
Page user requests needs web-service
Page passes user's authentication cookie to web service
Web service uses the same cookie to request a generic "confirm authentication" page via the user's browser (without the user seeing this).
"Confirm authentication" page returns a "user logged in" message, or the browser opens a new window with the login page.
Upon receiving the "all clear" message above, the web service returns whatever info was requested by the original page on behalf of the logged-in user.
Am I missing any details? Is Yadis just a name given to this idea, or will I need to install something to make sure it works correctly?
The term "Yadis" can be a little murky because it's referred to different things over the years, but more than anything it refers to the discovery phase of the protocol. That is, it answers this question: given an identifier (like http://keturn.example.com/ or xri://=keturn*example or whatever), what is the authentication server to use for this user? What version of the protocol does it support?
Which, if I read your situation correctly, is not at all what you're trying to address.
What you describe, authorizing one web service to act on the user's behalf with another, is more the domain of what OAuth is meant to address. If you're stuck with your client's currently implemented auth protocol, I'm not sure that helps you either, but it's probably worth a look; it's not dissimilar from the solution you propose.
Related
I am creating a plugin for my website where logged-in users can view their emails. The email server I am developing against is Zimbra. So far, I have been able to successfully fetch and display user emails using PHP's imap_open function:
$inbox = imap_open($server, $email, $password); // e.g. $server = '{mail.example.com:993/imap/ssl}INBOX'
When a user clicks on an email link on the website, they are taken to the Zimbra web client. However, users then have to re-enter their login credentials. I have checked my browser's cookie information and noticed that Zimbra sets a cookie, ZM_AUTH_TOKEN, when a user is logged in; I believe Zimbra uses this cookie to detect whether a user is already logged in. In essence, my task is to eliminate this extra step of re-logging in; if there are open-source solutions, I would like to know about these as well.
You can check the official documentation here:
http://wiki.zimbra.com/index.php?title=Preauth
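For reference, the preauth mechanism documented there boils down to signing "account|by|expires|timestamp" with a per-domain preauth key (generated with zmprov gdpak) and redirecting the browser to Zimbra's /service/preauth URL, which verifies the signature and sets ZM_AUTH_TOKEN itself. A rough PHP sketch; the hostname and key are placeholders:

    <?php
    // Zimbra preauth sketch: sign "account|by|expires|timestamp" with the
    // domain's preauth key and redirect the browser to /service/preauth.
    $zimbraHost = 'https://mail.example.com';         // placeholder
    $preauthKey = 'domain-preauth-key-from-zmprov';   // placeholder (zmprov gdpak)
    $account    = 'user@example.com';
    $by         = 'name';
    $expires    = 0;                 // 0 = use Zimbra's default token lifetime
    $timestamp  = time() * 1000;     // current time in milliseconds

    $preauth = hash_hmac('sha1', "$account|$by|$expires|$timestamp", $preauthKey);

    $url = $zimbraHost . '/service/preauth?' . http_build_query([
        'account'   => $account,
        'by'        => $by,
        'timestamp' => $timestamp,
        'expires'   => $expires,
        'preauth'   => $preauth,
    ]);

    header('Location: ' . $url);   // Zimbra verifies the HMAC and sets ZM_AUTH_TOKEN
    exit;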
This is half of a solution -- sorry, I've never programmed with Zimbra, but I've implemented single sign-on across PHP projects several times.
Are your domain and the Zimbra web server's domain the same? If they are, the two sites can see and manipulate each other's cookies. Find the Zimbra code that handles the login and sets a cookie, then write a little web-service page on the Zimbra server that calls that code and returns the cookie token. When a user logs in, your website can do a cURL request behind the scenes over to Zimbra, get the token contents for the cookie, and then set the appropriate cookie so the user is logged into Zimbra. I secure the web-service page with a password that only my plugin website knows.
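Very roughly, the hand-off described above could look like the following on your site; the endpoint URL, the shared password, and the plain-text response format are assumptions made up for illustration:

    <?php
    // On your site, after the user authenticates locally: ask a small endpoint
    // you added on the Zimbra server for an auth token. URL, shared secret and
    // plain-text response are all hypothetical.
    $ch = curl_init('https://zimbra.example.com/sso/token.php');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS     => [
            'secret'  => 'password-only-your-plugin-knows',
            'account' => $userEmail,   // the logged-in user's email address
        ],
    ]);
    $response = curl_exec($ch);
    curl_close($ch);

    if ($response !== false && trim($response) !== '') {
        // Same parent domain, so the browser will also send this cookie to Zimbra.
        setcookie('ZM_AUTH_TOKEN', trim($response), 0, '/', '.example.com', true, true);
    }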
If your site and the Zimbra server are not on the same domain, you can still do it, but instead of going through cURL on the server you'll have to use frames or JavaScript on the client. A simple password to secure the login web service will also not work, since it is accessed by the browser and everyone can see the password. You'll have to make it more secure, for example by hashing the user's email address (assuming it is the same on both servers) with a predefined secret.
I want to make a php webpage accessible from only one computer.
IP checking isn't suitable for that (Dynamic IP).
I could set a cookie (with no expiration date) with a token. Then I could check if the cookie has the correct token and display the page, else I could die(). I think that this isn't a secure solution, because a cookie can be stolen, can't it?
So, what to do?
P.S. Obviously I can't log in every time.
So here are a couple of options:
Client-side certificates
Create a client-side certificate and configure your web server to authenticate using client certificates. Problem solved. In the future, if you need more computers to connect to the server, give them client certificates as well.
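If the web server is Apache with mod_ssl configured with SSLVerifyClient require and SSLOptions +StdEnvVars, the PHP page itself can double-check the presented certificate. A minimal sketch (the expected CN is a placeholder):

    <?php
    // Assumes Apache mod_ssl with "SSLVerifyClient require" and
    // "SSLOptions +StdEnvVars", so the certificate details appear in $_SERVER.
    $verified    = ($_SERVER['SSL_CLIENT_VERIFY'] ?? '') === 'SUCCESS';
    $allowedCN   = 'my-single-computer';                  // CN of the issued client cert
    $presentedCN = $_SERVER['SSL_CLIENT_S_DN_CN'] ?? '';

    if (!$verified || $presentedCN !== $allowedCN) {
        http_response_code(404);   // see the note on plain 404s further down
        exit;
    }
    // ... render the protected page ...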
IP-based: using Dynamic DNS
Give your computer a dynamic-DNS name (myclient.dyndns.com) and install a dyndns client on it. The dyndns client keeps checking its own IP and updates the nameserver entry whenever your computer's IP changes. On the server side, all you need to check is whether the requester's IP is the same as what myclient.dyndns.com resolves to, and allow access if it is.
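On the PHP side that check is only a few lines, since gethostbyname() resolves the dynamic-DNS name on every request (the hostname is a placeholder):

    <?php
    // Resolve the client's dyndns name and compare it with the requesting IP.
    // gethostbyname() returns the hostname unchanged if resolution fails.
    $resolved = gethostbyname('myclient.dyndns.com');

    if ($resolved === 'myclient.dyndns.com' || $resolved !== $_SERVER['REMOTE_ADDR']) {
        http_response_code(404);
        exit;
    }
    // ... protected page ...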
A slight gotcha in this one is that there is a small (configurable) window of time between when the IP changes and when the dyndns client propagates it to the nameserver. So, whenever your IP changes, until the dyndns client on your computer detects it and updates the nameserver, your server will not allow any requests from your computer. That's because your computer will present the new IP while myclient.dyndns.com still resolves to your old IP. This time window can be made as small as you want (even 1 second). The other small gotcha is that in this n-second window, any random computer that gets your old IP assigned by the ISP could access your server. The probability of this is very small, but it's worth mentioning as a possibility.
There are many free dynamic dns services out there. You can google them.
Cookie-based
You could use cookies. However as you correctly identified, cookies can be stolen. Now, there are two ways they can be stolen:
Copying the cookie off the computer: someone who has access to the computer can copy that specific cookie and impersonate your computer to your web server. If this is possible (if potential malicious users can remote-desktop into or physically access your computer), then a cookie-based solution is not for you.
Sniffing over the network: cookies can easily be sniffed over the network. An easy way to prevent sniffing is to enable SSL. Given that you are confident the cookie cannot be stolen off the computer by copying it, the cookie+SSL option works in your case. At that point it is just a shared secret key; whether you pass it via a cookie or a query string doesn't really matter, though cookies are preferred because they aren't normally logged in browser history or web server logs.
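A sketch of the cookie-as-shared-secret variant, assuming the token was issued once out of band and a copy is kept on the server; the file and cookie names are arbitrary:

    <?php
    // Compare the long random token in the cookie against the copy stored on
    // the server; hash_equals() avoids timing leaks.
    $expected  = @file_get_contents(__DIR__ . '/secret_token.txt'); // issued once, out of band
    $presented = $_COOKIE['access_token'] ?? '';

    if ($expected === false || $presented === '' || !hash_equals(trim($expected), $presented)) {
        http_response_code(404);   // plain 404, as suggested below
        exit;
    }
    // ... protected page, served over HTTPS only ...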
Also just a thought: For all the computers that aren't authenticated, send a standard 404 response rather than some custom "Access denied" page. This way anyone who is running a crawler/bot/scanner on your site will not be intrigued by this custom response and will not attempt to circumvent your security controls.
Couldn't you just use a unique passphrase as a parameter in the uri?
e.g. http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
and check to see if it matches the one stored in the server?
Well, I get it: if it's not you as the user, it's someone else... either way, you need only that specific client (computer) to be able to access the page.
Either way, the first time there must be some sort of registration. Maybe the example URI above works like this:
you request: http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
the passphrase is checked for correctness, and a boolean flag is stored on the server so that it is never possible to "register" again.
If it is correct, a cookie is generated with a unique key.
This same key is also stored on the server (in a file, a database, or similar).
On subsequent requests you then just compare the key stored on the server with the key in the cookie, and you know which client it is.
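A rough PHP sketch of that one-time registration flow; the passphrase, file names, and cookie name are made up for illustration:

    <?php
    // index.php: one-time "registration" via passphrase, then a per-client key.
    $passphrase = 'sfauh452h8243nf2489ht8924t48nf3984';
    $flagFile   = __DIR__ . '/registered.flag';
    $keyFile    = __DIR__ . '/client.key';

    if (isset($_GET['passphrase'])) {
        // The passphrase may only ever be used once.
        if (file_exists($flagFile) || !hash_equals($passphrase, (string) $_GET['passphrase'])) {
            http_response_code(404);
            exit;
        }
        $key = bin2hex(random_bytes(32));            // unique key for this client
        file_put_contents($keyFile, $key);
        file_put_contents($flagFile, date('c'));
        setcookie('client_key', $key, time() + 10 * 365 * 86400, '/', '', true, true);
        exit('Registered.');
    }

    // Subsequent requests: compare the cookie against the stored key.
    $key = file_exists($keyFile) ? trim(file_get_contents($keyFile)) : '';
    if ($key === '' || !hash_equals($key, (string) ($_COOKIE['client_key'] ?? ''))) {
        http_response_code(404);
        exit;
    }
    // ... protected page ...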
I run a computer lab for grade schoolers (3-14 y.o.) and would like to create a desktop/dashboard page consisting of a number of iframes, each pointing at a different external website
(for which we have created individual accounts for each child); and when a kid logs in (to the dashboard) a script will log her in to those websites, so she does not have to.
I have 1 server and 20 workstations, I'll refer to them as 'myserver' and 'mybrowser'(s) respectively. All these behind the same router (dynamic IP).
A kid gets on a 'mybrowser' workstation, fires up Firefox and runs desktop.php (hosted in 'myserver') and gets a login screen (for 'myserver')
'mybrowser' ---http---> 'myserver'
Once logged in, 'myserver' will retrieve a set of usernames and passwords stored in its database and run a cURL script to send them to an 'external web server'.
'mybrowser' ---http---> 'myserver' ---curl---> 'external web server'
Successful, or so I thought.
Turns out cURL, being run from 'myserver', logs in 'myserver' instead of 'mybrowser'.
The session inside the iframe, after refresh, is still NOT logged in. Now I know.
Then I thought of capturing the cookies from 'myserver' and setting them in 'mybrowser' so that 'mybrowser' can now browse (within the iframe)
as a logged-in user. After all, we (all the 'mybrowsers') are behind the same router as 'myserver', and thus share the same IP address.
So in other words, I only need 'myserver' to log a user in to several external websites all at once, and once done, pass control back to the individual users' browsers.
I hope the answer does not resort to using cURL to display and control the external websites for the whole session; aside from being a drag, that would lead to some other sticky issues.
I am getting the nuance that this is not permitted due to security issues, but what if all the 'mybrowsers' and 'myserver' are behind the same router? Assuming there's a way to copy the login cookies from 'myserver' to 'mybrowsers', would 'external web server' know that a request came from different machines?
Can this be done?
Thanks.
The problem you are facing relates to the security principles of cookies. You cannot set cookies for other domains, which means that myserver cannot set a cookie for facebook.com, for example.
You could set up your server to run an HTTP proxy and make all queries run through it, doing some kind of URL translation (e.g. facebook.com => facebook.myserver). This in turn allows you to set cookies for the clients (since you're running on facebook.myserver), and to translate cookies you receive from the clients and feed them to the third-party websites.
An example of a non-transparent proxy that you could begin with: http://www.phpmyproxy.com/
Transparent proxies (in which URLs remain "correct" / untranslated) might be worth considering too. Squid is a pretty popular one. Can't say how easy this would be, though.
After all that you'll still need to build a local script for myserver that takes care of the login process, but at least a proxy should make it all possible.
If you have any say in the login process itself, it might be easier to set up all the services to use OpenID or similar login services, StackOverflow and its sister sites being a prime example on how easy login on multiple sites can be achieved.
My web application is receiving increased attention and I need to provide additional security to protect my customers.
The biggest problem as I see it is that the user login data is sent as plain text. My goal with this question is to discern if the following approach is an improvement or not.
Eventually I will need to get dedicated servers for my service; this proposed solution is temporary until then.
I am currently running my web application on a shared hosting web server which only provides SSL through their own domain.
http://mydomain.com
is equivalent to
https://mydomain-com.secureserver.com
My thought is to have:
http://mydomain.com/login.php
...in which an iframe opens a page from the secure server, something like this:
<iframe src="https://mydomain-com.secureserver.com/ssllogin.php"></iframe>
I authenticate the user in ssllogin.php with the (hashed + per-user randomly salted) passwords from the database.
After proper session ID regeneration, a session is set that verifies the authentication.
This session is then somehow transferred to and verified on http://mydomain.com.
Is this approach even possible to achieve? Would it improve my login security, or just move the attacker's "point of password interception" somewhere else?
All feedback is appreciated.
You don't need an iframe. Just make the action of the login form point to https://yourdomain.com/login.php. There you can check whether the username and password are correct, and then redirect back to plain HTTP.
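In practice that is just the form's action attribute plus a redirect after a successful check. A minimal sketch, where credentials_are_valid() stands in for your own hashed-and-salted password check:

    <?php
    // https://yourdomain.com/login.php -- sketch only; credentials_are_valid()
    // stands in for your own hashed + per-user-salted password check.
    session_start();

    if ($_SERVER['REQUEST_METHOD'] === 'POST'
        && credentials_are_valid($_POST['user'] ?? '', $_POST['pass'] ?? '')) {
        session_regenerate_id(true);                 // fresh session ID after login
        $_SESSION['user'] = $_POST['user'];
        header('Location: http://yourdomain.com/');  // back to plain HTTP (see caveat below)
        exit;
    }
    ?>
    <!-- The form may live on a plain-HTTP page, as long as it posts to HTTPS -->
    <form method="post" action="https://yourdomain.com/login.php">
        <input type="text" name="user">
        <input type="password" name="pass">
        <input type="submit" value="Log in">
    </form>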
BUT this is not 100% secure. Sending the username and password via HTTPS does prevent an attacker or sniffer from capturing them. But if you later revert to plain HTTP, it is possible for that attacker/sniffer to hijack the session of any logged-in user by sniffing that user's session cookies.
If you want more security (not 100%, but more than the previous option), stay in HTTPS at all times, for all resources (CSS, JS, and images too, not just your PHP/HTML files), and serve even the login page via HTTPS.
For some reasoning behind these points, see Firesheep (for the session-hijacking problems) or the recent Tunisian government attack on Tunisian Facebook/Yahoo/Gmail users (for why even the login page should be served via HTTPS).
edit: sorry, I misread your question. If the SSL domain is different from the non-SSL domain, you may have problems, as the session cookie will only work against the same domain or its subdomains. So, if you do the login and send the session cookie from https://yourdomain.secure-server.com, it will only be sent back by the browser to yourdomain.secure-server.com (or *.secure-server.com if you will), but not to yourdomain.com. I think it's possible to make a wildcard cookie valid for all *.com subdomains, but it's better not to do this (do you want your users' session cookie to be sent to evil.com?).
I have learned that HTTP_REFERER, like any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable though.
So, how can I ensure that an incoming HTTP request is coming from a website that I white-list?
For example, I have some JS code that will send a request from the client site to my server (something like a sniper, cross-platform). However, I only want to allow this to happen from certain websites, not others, so even if other people copy the code and put it on their website, it won't work.
In the general case you simply can't do it. You are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but not impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token's authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
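If you did control all the sites, the token could be as simple as an HMAC over the requesting site and a timestamp, computed server-side with a shared secret; a sketch (the secret and field names are made up):

    <?php
    // On a whitelisted site (server-side): embed a signed token in the page.
    $secret = 'secret-shared-by-all-whitelisted-sites';   // placeholder
    $site   = 'https://partner.example.com';
    $ts     = time();
    $token  = hash_hmac('sha256', $site . '|' . $ts, $secret);
    // The page's JS then sends $site, $ts and $token along with its request.

    // On your server: recompute and compare, and reject stale timestamps.
    function token_is_valid(string $site, int $ts, string $token, string $secret): bool
    {
        $expected = hash_hmac('sha256', $site . '|' . $ts, $secret);
        return abs(time() - $ts) < 300 && hash_equals($expected, $token);
    }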
See also this question on HTTP_REFERER
Haven't used this in practice, so there might be practicality issues I wasn't counting on, but I thought I'd contribute the idea anyway (a rough sketch follows the steps below). If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
Your Server generates a unique ID for each page-serve and embeds the ID in the page.
Server stores the ID and the Client's IP address.
The js on the client places the ID in its request to the Server and sends the request.
When the Server receives the js request from the Client, it only responds if the IP/ID pair matches one that is on-file (see #2).
After some specified time (and/or when the browser session ends), the ID/IP entries expire.
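A rough sketch of that ID/IP bookkeeping; a flat JSON file stands in for whatever database or cache you would really use:

    <?php
    // Page-serving side (steps 1 and 2): create an ID, remember it with the IP.
    $store = __DIR__ . '/ids.json';
    $ids   = is_file($store) ? (json_decode(file_get_contents($store), true) ?: []) : [];

    $id = bin2hex(random_bytes(16));
    $ids[$id] = ['ip' => $_SERVER['REMOTE_ADDR'], 'expires' => time() + 600];
    file_put_contents($store, json_encode($ids));
    // Embed $id in the served page; its JS sends the ID back with every request (step 3).

    // Request-handling side (steps 4 and 5): only answer if the pair is on file and fresh.
    function id_is_valid(string $id, array $ids): bool
    {
        return isset($ids[$id])
            && $ids[$id]['ip'] === $_SERVER['REMOTE_ADDR']
            && $ids[$id]['expires'] > time();
    }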
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
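The application-level equivalent of that firewall rule is a plain REMOTE_ADDR check (the addresses below are placeholders), with the same caveat that a determined attacker can spoof source IPs:

    <?php
    // Allow only requests whose source IP is on the whitelist.
    $whitelist = ['203.0.113.10', '203.0.113.11'];   // placeholder addresses

    if (!in_array($_SERVER['REMOTE_ADDR'], $whitelist, true)) {
        http_response_code(403);
        exit('Forbidden');
    }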
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
For example, I have some JS code that will send a request from the client site to my server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on a website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (might not be possible - cross domains) you might find it tough. Have you taken a look at OpenID? If you can post more details a solution may be more obvious.
so, how can I ensure that an incoming HTTP request is coming from a website that I white-list?
I think you could sign every request (from the whitelist) with a token that is valid for that request only (used once). I assume using uniqid for this is safe (enough?).
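A sketch of such single-use tokens; note that uniqid() is predictable, so random_bytes() is a safer source for the nonce (the storage directory and parameter name are made up):

    <?php
    // Issue a single-use nonce when serving the whitelisted page.
    $nonceDir = __DIR__ . '/nonces';
    if (!is_dir($nonceDir)) {
        mkdir($nonceDir, 0700, true);
    }
    $nonce = bin2hex(random_bytes(16));       // prefer this over uniqid(), which is predictable
    file_put_contents("$nonceDir/$nonce", '1');
    // Embed $nonce in the page; the JS sends it back with its request.

    // On the receiving endpoint: accept each nonce exactly once.
    $presented = basename((string) ($_GET['nonce'] ?? ''));
    if ($presented === '' || !is_file("$nonceDir/$presented")) {
        http_response_code(403);
        exit;
    }
    unlink("$nonceDir/$presented");            // burn it so it cannot be replayed
    // ... handle the request ...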