The issue I'm having, which may not be solvable, is as follows:
I have a client that is a large organization of 1,500+ users at 7-8 different locations. The application is a PHP application built on the Kohana v3.0 framework. The organization sits behind a proxy filtering server at the ISP level. Each location has one main public IP address that funnels through the proxy and then to the web. Each user has a Mac or Windows workstation issued by the employer.
What they are experiencing appears to be cookie collisions. Example: one user logs in at their workstation, then another user logs in from the same location on a different workstation with the same OS and browser type. The second user receives the first user's active session by being issued a newly generated cookie (token) that matches the first user's. This appears to be related only to the 'authautologin' cookie (set when the remember-me check-box is engaged on the login screen), but I'm keeping my options open to caching by the proxy (I can't prove that the proxy is caching yet).
Because of the network setup, the server sees hundreds of users logging in from the same IP address with the same user agent. My initial thought is that the Kohana v3's way of generating cookies that are unique to the browser (user agent) is not unique enough for this real-world application.
Has anyone ever experienced anything like this? And what would be the proper actions to take in cookie and session generation? Would managing cookies and active sessions in the database be better?
Kohana Modules: Jelly-Auth, Jelly, and Auth
Server: Apache/2.2.9 (Debian) mod_fastcgi/2.4.6 mod_jk/1.2.26 PHP/5.2.6-1+lenny8 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g
Known Browsers: IE 8 & 9, Firefox (OS X and Win), and Safari (OS X)
It's just an idea, but there is / used to be (depending on your Debian and PHP version) a bug with PHP sessions. What I suggest you try:
Check this link - this may not be related to your problem but it's worth a try
Switch to the database driver - I'd give a 90% chance that this will fix everything (a config sketch follows this list)
Test on a server other than Debian - this may not be easy to accomplish, though
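For the database-driver suggestion, here is a minimal sketch of what the switch looked like around Kohana 3.x; the exact config keys may differ between releases, so verify them against your version's documentation:

<?php defined('SYSPATH') or die('No direct script access.');
// application/config/session.php (sketch)
return array(
    'database' => array(
        'name'     => 'session_db',   // session cookie name
        'lifetime' => 43200,          // session lifetime in seconds
        'group'    => 'default',      // database config group to use
        'table'    => 'sessions',     // table that stores the sessions
        'columns'  => array(
            'session_id'  => 'session_id',
            'last_active' => 'last_active',
            'contents'    => 'contents',
        ),
        'gc' => 500,                  // garbage-collect roughly 1 in 500 requests
    ),
);

// Wherever the session is first needed:
$session = Session::instance('database');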
Wow, that's a nasty vulnerability, good catch!
By far the best way to generate cookies under PHP is to let PHP do it:
session_start(). And that's all! If you are generating your own cookie, then you really messed up somewhere. Now you can use the $_SESSION[] superglobal. The best practice is to call session_start() in a common header file before you access $_SESSION in your application.
There are probably other problems you should take into consideration, such as OWASP A9, CSRF, and the cookie flags: HttpOnly, and the Secure flag (forcing the cookie over HTTPS).
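To make those flags concrete, here is a minimal sketch of hardening PHP's session cookie before it is issued (the parameter values are illustrative):

<?php
// Must run before session_start() so the flags apply to the session cookie.
// Arguments: lifetime, path, domain, secure (HTTPS only), httponly (no JS access).
// Set 'secure' to false if the site is not served entirely over HTTPS.
session_set_cookie_params(0, '/', '', true, true);
session_start();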
I'm not sure if I understood you correctly, but... I understand the request goes like this:
user (workstation) ==> proxy ==> internet ==> company website (and the response in the reverse direction).
Check if the proxy sets "HTTP_X_FORWARDED_FOR" (in the $_SERVER superglobal). It could be the only way to determine the user's workstation IP address. If so, you're done.
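A minimal sketch of that check (remember that X-Forwarded-For is supplied by the client/proxy and can be spoofed, so treat it as a hint, not proof):

<?php
// The proxy's IP arrives in REMOTE_ADDR; the workstation's IP, if the
// proxy forwards it, arrives in the X-Forwarded-For header.
if (isset($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    // May contain a comma-separated chain; the first entry is the client.
    $parts = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $workstationIp = trim($parts[0]);
} else {
    $workstationIp = $_SERVER['REMOTE_ADDR'];   // no forwarding header: fall back
}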
Related
I want to make a PHP webpage accessible from only one computer.
IP checking isn't suitable for that (Dynamic IP).
I could set a cookie (with no expiration date) containing a token. Then I could check if the cookie has the correct token and display the page; otherwise I could die(). I think this isn't a secure solution, because a cookie can be stolen, can't it?
So, what to do?
P.S. Obviously I can't log in every time.
So here are a couple of options:
Client side certificates
Create a client side certificate and configure your webserver to authenticate using client certificates. Problem solved. In future, if you need to have more computers connect to the server, give them client certificates as well.
IP based : using Dynamic DNS
Give your computer a dynamic-dns name (myclient.dyndns.com) and install a dyndns client on your computer. The dyndns client keeps checking its own IP and updates the nameserver entry whenever your computer's IP changes. On server side all you need to check is if the IP that the requester presents is same as myclient.dyndns.com and allow access if it is.
A slight gotcha in this one is that there is a small (configurable) window of time between when the IP changes and when the dyndns client propagates it to the nameserver. So, whenever your IP changes, until the dyndns client on your computer detects it and updates the nameserver, your server will not allow any requests from your computer. That's because your computer will present the new IP while myclient.dyndns.com still resolves to your old IP. This time window can be made as small as you want (even 1 second). The other small gotcha is that in this n-second window, any random computer that gets your old IP assigned by the ISP can access your server. The probability of this is very small, but I mention it as a possibility.
There are many free dynamic DNS services out there; you can Google them.
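A minimal sketch of the server-side check described above, assuming the hypothetical hostname myclient.dyndns.com (note that OS-level DNS caching can widen the update window mentioned above):

<?php
// Resolve the dyndns name on each request and compare with the requester.
$allowedIp = gethostbyname('myclient.dyndns.com');   // hypothetical name
if ($_SERVER['REMOTE_ADDR'] !== $allowedIp) {
    header('HTTP/1.0 404 Not Found');   // plain 404, per the advice below
    exit;
}
// ...serve the page...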
Cookie Based
You could use cookies. However as you correctly identified, cookies can be stolen. Now, there are two ways they can be stolen:
Copying the cookie off the computer: Someone who has access to the computer can copy that specific cookie and impersonate your computer to your webserver. If this is possible (if potential malicious users can remote desktop to or physically access your computer), then a cookie-based solution is not for you.
Sniffing over the network: Cookies can easily be sniffed over the network. An easy way to prevent sniffing is enabling SSL. Given that you are confident that cookies cannot be stolen off the computer by copying, the cookie+SSL option works in your case. In this case it's just like a shared secret key. Whether you do it via a cookie or the querystring doesn't matter; cookies are obviously preferred over the querystring because they aren't normally logged in browser history or webserver logs.
Also just a thought: For all the computers that aren't authenticated, send a standard 404 response rather than some custom "Access denied" page. This way anyone who is running a crawler/bot/scanner on your site will not be intrigued by this custom response and will not attempt to circumvent your security controls.
Couldn't you just use a unique passphrase as a parameter in the URI?
e.g. http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
and check to see if it matches the one stored in the server?
Well, I get it: if you are not the user, it is someone else... so you need only that specific client (computer) to be able to access the page.
Either way, the first time there must be some sort of registration. Maybe the example URI above works like this:
you request: http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
the passphrase is checked for correctness, and a boolean value is stored on the server so that it is never possible to "register" again.
If it is correct, a cookie is generated with a unique key.
This same key is also stored on the server (in a file, a database, or something similar).
Therefore, on subsequent requests, when you compare the key stored on the server with the key in the cookie, you know who the client is.
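A minimal sketch of that flow, with hypothetical file-based storage standing in for whatever the server actually uses:

<?php
// One-time registration: trade the passphrase for a device key.
// File paths and the passphrase value are hypothetical placeholders.
$registeredFile = '/var/data/registered.flag';
$keyFile        = '/var/data/device.key';

if (isset($_GET['passphrase']) && !file_exists($registeredFile)) {
    if ($_GET['passphrase'] === 'sfauh452h8243nf2489ht8924t48nf3984') {
        $key = sha1(uniqid(mt_rand(), true));       // unique device key
        file_put_contents($keyFile, $key);          // remember it server-side
        file_put_contents($registeredFile, '1');    // block any re-registration
        setcookie('device_key', $key, time() + 315360000);   // ~10 years
        exit('Registered. Reload the page.');
    }
}

// Every subsequent request: compare the cookie with the stored key.
if (!file_exists($keyFile)
    || !isset($_COOKIE['device_key'])
    || $_COOKIE['device_key'] !== file_get_contents($keyFile)) {
    die();
}
// ...render the page for the registered client...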
I run a computer lab for grade schoolers (3-14 y.o.) and would like to create a desktop/dashboard page consisting of a number of iframes, each pointing at a different external website
(for which we have created individual accounts for each child); when a kid logs in (to the dashboard), a script will log her in to those websites, so she does not have to.
I have 1 server and 20 workstations; I'll refer to them as 'myserver' and 'mybrowser'(s) respectively. All of these are behind the same router (dynamic IP).
A kid gets on a 'mybrowser' workstation, fires up Firefox and runs desktop.php (hosted in 'myserver') and gets a login screen (for 'myserver')
'mybrowser' ---http---> 'myserver'
Once logged in, 'myserver' will retrieve a set of usernames and passwords stored in its database and run a cURL script to send those to an 'external web server'.
'mybrowser' ---http---> 'myserver' ---curl---> 'external web server'
SUCCESSFUL, or so I thought.
Turns out cURL, being run off 'myserver', logs in 'myserver' instead of 'mybrowser'.
The session inside the iframe, after refresh, is still NOT logged in. Now I know.
Then I thought of capturing the cookies from 'myserver' and setting them in 'mybrowser' so that 'mybrowser' can now browse (within the iframe)
as a logged-in user. After all, we (all the 'mybrowsers') are behind the same router as 'myserver', thus the same IP address.
So in other words, I only need 'myserver' to log a user in to several external websites all at once, and once done, pass control back over to the individual users' browsers.
I hope the answer will not resort to using cURL to display and control the external websites for the whole session; aside from being a drag, that will lead to some other sticky issues.
I am getting the nuance that this is not permitted due to security issues, but what if all the 'mybrowsers' and 'myserver' are behind the same router? Assuming there's a way to copy the login cookies from 'myserver' to 'mybrowsers', would 'external web server' know that a request came from different machines?
Can this be done?
Thanks.
The problem you are facing relates to the security principles of cookies. You cannot set cookies for other domains, which means that myserver cannot set a cookie for facebook.com, for example.
You could set your server to run an HTTP proxy and make it so that all queries run through your server, doing some kind of URL translation (e.g. facebook.com => facebook.myserver). This in return allows you to set cookies for the clients (since you're running on facebook.myserver); you then translate the cookies you receive from the clients and feed them to the third-party websites.
An example of a non-transparent proxy that you could begin with: http://www.phpmyproxy.com/
Transparent proxies (in which URLs remain "correct" / untranslated) might be worth considering too; Squid is a pretty popular one. I can't say how easy this would be, though.
After all that you'll still need to build a local script for myserver that takes care of the login process, but at least a proxy should make it all possible.
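To illustrate the cookie-translation step, here is a very reduced sketch of such a proxy endpoint in PHP; the hostnames and routing are hypothetical, and a real proxy would also have to rewrite links, handle POSTs, redirects, and so on:

<?php
// Serve facebook.com through facebook.myserver, rewriting cookie domains.
$target = 'https://www.facebook.com' . $_SERVER['REQUEST_URI'];

$ch = curl_init($target);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HEADER         => true,    // keep headers so Set-Cookie is reachable
));
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$rawHeaders = substr($response, 0, $headerSize);
$body       = substr($response, $headerSize);

// Re-issue every Set-Cookie with the domain rewritten to our own, so the
// browser accepts it for facebook.myserver and returns it on later requests.
foreach (explode("\r\n", $rawHeaders) as $line) {
    if (stripos($line, 'Set-Cookie:') === 0) {
        header(preg_replace('/domain=[^;]+/i', 'domain=facebook.myserver', $line), false);
    }
}
echo $body;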
If you have any say in the login process itself, it might be easier to set up all the services to use OpenID or similar login services, StackOverflow and its sister sites being a prime example on how easy login on multiple sites can be achieved.
I used to work for a bank that had a very cool feature in its intranet. Once you logged in to your computer, there were global variables set in PHP through Apache, and they contained the identity of the user that was logged on to the computer. Now I'm at a new job, and I'm wondering how this thing worked! I would like to implement this kind of thing once again.
What I'm working with here:
FreeBSD server, version is unknown to me.
Apache 2.2 web server
PHP 5, some custom compilation that, for various reasons, I can't upgrade or modify.
MS AD
All of the users logging on to their computers are using active directory, all are in the same domain.
What I used to have was something like this:
echo $_SERVER['username'];
which would print the username of the user currently logged in.
Could someone explain, how this could be done?
P.S. If any of my server settings are not what is required, say so, because then I will have a reason to ask the bosses to give me one of my own, with more control.
There's lots of ways this might be implemented. However a lot of them depend on having control over the client as well as the server.
Obvious sources of data include:
NTLM
Client side certificates
The Ident protocol (not very secure without the encryption extensions)
A long lasting cookie (again, not secure)
HTTP authentication methods
However, none of these explains how the value appeared in $_SERVER - this must have been implemented within the PHP code.
So without knowing how it was implemented at your previous site we can't tell you:
Whether it was secure and correctly implemented
How to replicate the behaviour
Given your resource list, while it would be possible to implement authentication based on direct LDAP calls, passing the username and password through your application, I would strongly recommend using (e.g.) OpenID - restricting the providers to just your own OpenID provider - which would use the MS AD as its backend.
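If you do go the direct-LDAP route, here is a minimal sketch against AD; the host and domain names are hypothetical, and the bind itself doubles as the credential check:

<?php
// Authenticate the submitted credentials with a simple LDAP bind against AD.
session_start();
$user = $_POST['username'];
$pass = $_POST['password'];

$conn = ldap_connect('ldap://ad.example.local');         // hypothetical DC
ldap_set_option($conn, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($conn, LDAP_OPT_REFERRALS, 0);           // usually needed for AD

// Guard against empty passwords: AD accepts them as anonymous binds.
if ($pass !== '' && @ldap_bind($conn, $user . '@example.local', $pass)) {
    $_SESSION['username'] = $user;   // stands in for the old $_SERVER['username']
} else {
    // invalid credentials: reject the login
}
ldap_unbind($conn);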
I did not understand the question correctly, so I have edited my post...
You could use Apache auth; you can restrict access by IPs or hostnames:
http://httpd.apache.org/docs/2.0/en/howto/auth.html
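A minimal sketch using the Order/Deny/Allow directives from that document (Apache 2.0/2.2 syntax; the path and addresses below are placeholders):

# Restrict a directory to one client machine.
<Directory "/var/www/restricted">
    Order Deny,Allow
    Deny from all
    Allow from 203.0.113.42              # the workstation's IP
    # Allow from myclient.example.com    # or a hostname (costs reverse DNS lookups)
</Directory>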
How can I identify a remote machine uniquely in a proxy server environment? I have used $_SERVER['REMOTE_ADDR'], but all machines in the proxy network have the same IP address. Is there any way?
Don't ever depend on information that is coming from the client. In this case, you're running up against simple networking problems (you can never be sure the client's IP address is correct), in other cases the client may spoof information on purpose.
If you need to uniquely identify your clients, hand them a cookie upon their first visit; that's the best you can do.
Your best bet would be:
$uid = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR']);
However, there's no way to know if they changed their user agent or switched to a different browser.
You could use some other headers to help, like these ones (ones that come to mind when looking at a dump of $_SERVER):
HTTP_USER_AGENT
HTTP_ACCEPT
HTTP_ACCEPT_LANGUAGE
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_CHARSET
Using several pieces of information coming from the client will help differentiate clients (the more information you use, the better the chance that at least one item differs between two clients)...
... But it will not be a perfect solution :-(
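A sketch of combining those headers into one fingerprint (all of these are client-controlled, so this only reduces collisions; it proves nothing):

<?php
// Build a best-effort client fingerprint from several request headers.
$keys = array(
    'REMOTE_ADDR',
    'HTTP_USER_AGENT',
    'HTTP_ACCEPT',
    'HTTP_ACCEPT_LANGUAGE',
    'HTTP_ACCEPT_ENCODING',
    'HTTP_ACCEPT_CHARSET',
);
$parts = array();
foreach ($keys as $k) {
    $parts[] = isset($_SERVER[$k]) ? $_SERVER[$k] : '';
}
$uid = md5(implode('|', $parts));   // any header change yields a new id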
Depending on the kind of proxy software and its configuration, there might be a header called X-Forwarded-For that you could use:
The X-Forwarded-For (XFF) HTTP header is a de facto standard for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. This is a non-RFC-standard request header which was introduced by the Squid caching proxy server's developers.
But I wouldn't rely on that either: it will probably not always be present (I don't think it's required).
Good luck!
I do not think there are other ways to do what you want. This is because the proxy server proxies the clients' requests and acts on their behalf. So, the clients are virtually hidden from the server's point of view. However, I may be wrong.
If you are aware of the proxy server, I think that implies this is some kind of company LAN. Are you in control of the LAN? Perhaps building and installing some ActiveX plugin which sends a machine-unique ID to the server might be the solution.
In general, HTTP proxy servers are not required to send the IP of their client. So every request sent by a proxy looks like it came from the proxy's IP. (Although the wikipedia has some mention of custom headers some proxies send to forward the client's ip.)
It gets even worse when an HTTP proxy is itself using another HTTP proxy - the server getting the request will only get the IP of the last proxy in the chain, and there's no guarantee that the 2nd proxy is even aware that the 1st proxy wasn't a regular client!
There is currently no way of doing this, as you don't get information about the MAC address, and even that could be ambiguous, since a machine may have two network cards (e.g. a wired one and a wireless one).
The best thing to do is to get JavaScript to write to and read from local storage on the client, and send that saved setting back to your server with an Ajax command. This still isn't perfect: if they clear their cache, the setting is lost.
JKS,
Remote machines do not have unique identifiers. This is impossible.
Usually developers like to track machines when the end-user visits a page with a form like a login for security reasons.
Here is what I do: I store a cookie, a session variable, and use the new HTML5 localStorage to track folks on my sensitive pages. This is really the only way to do this accurately. The nice thing about localStorage (when browsers support it) is that the end-user typically has no idea you are storing stuff on their machine, and deleting cookies has no effect.
So you might make a database table with tracking details like:
timestamp, ip_address, user_agent
then let's say you are tracking failed login attempts... I would do this:

session_start();   // needed before $_SESSION is available

if (isset($_SESSION['failed_logins'])) {
    $failed_logins = $_SESSION['failed_logins'];
    $_SESSION['failed_logins'] = $failed_logins + 1;
} else {
    $_SESSION['failed_logins'] = 1;
}
I would then do the same with setcookie(), and then with the localStorage script.
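The setcookie() counterpart is a one-liner (a sketch; unlike the session counter, the client can delete or edit this cookie at will, and it must be sent before any output):

$fails = isset($_COOKIE['failed_logins']) ? (int) $_COOKIE['failed_logins'] : 0;
setcookie('failed_logins', $fails + 1, time() + 86400);   // keep for one day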
Now I am tracking this person and know how many times they have failed a login.
I would then write this user's data to my failed_login table as described above.
I'm sure this isn't the answer you were looking for, but it really is the best way to track users on your site.
I roamed the site for this question using the search engine, and I don't think it's out there. If it is, apologies in advance and feel free to point me to it.
Here is my scenario:
I am setting up a web application, Moodle if anyone is familiar with it, with Apache, MySQL, and PHP on Windows. Moodle supports enabling SSL for login, but then reverts to regular HTTP after the login is established. This is running on an internal network with no connection to the outside world, so no Internet access through this network. All users who use the network have logins; however, there are some generic guest-type logins with certain restricted privileges. Currently the MySQL database is not encrypted.
My question is this:
If my users do an SSL login, and the system then reverts back to HTTP for the remainder of their session, how vulnerable is the data that is transferred back and forth between the browser interface and the database?
I would perhaps prefer to have all the data encrypted, but I am not sure how bad the performance hit would be, so any suggestions concerning that would be appreciated too. Although I will be extending the functionality in Moodle, I don't necessarily want to have to change it to encrypt everything if it already does.
I am new to the world of IT security, and my DBA skills are rusty, so if you give me an answer, type slowly so I can understand! ;)
Thanks in advance!
Carvell
A few things.
The fact that the data in the DB server is not encrypted is in no way a factor in the communication between the user and the web server. It is a concern, obviously, for communications between the web server and the database server.
Your risk point between user and web server is that packets could be sniffed if a person were able to insert themselves into the middle of the communication chain. However, this risk is mitigated by the fact that you're on an internal network.
Therefore, unless you are VERY concerned about the other people in your organization, you are more than likely OK. However, if it is really sensitive data, you might do ALL communications via SSL to ensure that they are transmitted securely. If you are that concerned, then I would also look at the security of the DB and the communications from the DB to the webserver.
My concern would be how your authenticated sessions are propagated.
Generally a session works by setting a cookie or appending a session id to any URLs presented by the web site. Once a log-in has been established, often the credentials aren't needed any more, as a session is then linked to the user and deemed to be authenticated, and the existence of the session itself is proof of a successful authentication.
However, as previous posters have mentioned, local network traffic can be available for sniffing. If someone sniffed a session id, they could recreate the cookie or URLs using that session id and simply access the site as that session's user, even changing the user's password if that option was available.
I would say that your decision here rests on the security of your sessions. If you have some mitigating factors in place to make sessions difficult to replicate even if a session id is compromised (e.g. comparison to IP addresses), or your user accounts are relatively secure against a compromised session (e.g. requiring the current password to change account settings), then perhaps SSL after login isn't required. However, if you have doubts and can afford the performance hit, then having SSL throughout the site will guarantee that your sessions can't be compromised (as far as you can guarantee SSL, anyway).
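A minimal sketch of the IP-comparison mitigation just mentioned (note it can log out legitimate users whose IP changes mid-session, e.g. behind rotating proxies; the login URL is a placeholder):

<?php
// Bind the session to the client's IP so a sniffed session id is harder
// to replay from another machine.
session_start();
if (!isset($_SESSION['bound_ip'])) {
    $_SESSION['bound_ip'] = $_SERVER['REMOTE_ADDR'];
} elseif ($_SESSION['bound_ip'] !== $_SERVER['REMOTE_ADDR']) {
    session_destroy();               // treat as a possible hijack
    header('Location: /login.php');  // hypothetical login URL
    exit;
}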
With no internet access to this network, the only thing that could potentially happen is someone else (who is already on the internal network) snooping on another user's HTTP traffic. If someone were to actually do that, and you aren't using SSL, they could read all the data that your website is sending/receiving from that user. But is that actually a concern?
Since you are on an internal network, turning on SSL for the whole site should not be that bad performance-wise, although it is probably unnecessary.
At the very least, you should encrypt the data in your database.
All sensitive data should be encrypted when transferred over an insecure wire. If you just transfer login details over SSL, all your data is still vulnerable to eavesdropping.
Since the data's not encrypted, anybody with sufficient network access (i.e. physical access) can read the data passing back and forth between server and browser. As long as everyone who has physical access to the network is also authorized to read the data, you're probably alright. If any of the information is sensitive and should be restricted to a subset of the people who have physical access to the network, then you need to encrypt it.
Anyone on your network would be able to see everyone else's traffic with a network packet sniffer like Wireshark. The connection between your web server and MySQL is also in cleartext. MySQL may not actually send passwords in cleartext; it may send a hash, for instance.
If you are really trying to be paranoid, you may not even need to run your app over HTTPS: there are other lower-level possibilities like IPsec. Since this is an internal network, you can probably get away with implementing this on all workstations.
Not much to add to the correct responses above. But one thing you can do is use a threat modeling tool for your application. That will inform you of the types of threats you are exposing your data to by not using transport-level encryption (TLS/SSL). Once you understand the threats, you can decide on an appropriate risk mitigation plan.