How to identify a remote machine uniquely in PHP?

How can I identify a remote machine uniquely in a proxy server environment? I have used $_SERVER['REMOTE_ADDR'], but all machines behind the proxy have the same IP address. Is there any way?

Don't ever depend on information that comes from the client. In this case you're running up against simple networking problems (you can never be sure the client's IP address is correct); in other cases the client may spoof information on purpose.
If you need to uniquely identify your clients, hand them a cookie upon their first visit; that's the best you can do.
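A minimal sketch of that, assuming an arbitrary cookie name ("client_uid") and a one-year lifetime; note that setcookie() has to run before any output is sent:

<?php
if (!isset($_COOKIE['client_uid'])) {
    $uid = md5(uniqid(mt_rand(), true)); // random-enough token for tracking
    setcookie('client_uid', $uid, time() + 365 * 24 * 3600, '/');
} else {
    $uid = $_COOKIE['client_uid'];
}
// $uid now identifies this browser until the user clears their cookies.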

Your best bet would be:
$uid = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR']);
However, there's no way to know if the client changed their user agent or switched to a different browser.

You could use some other headers to help, like these (ones that come to mind when looking at a dump of $_SERVER):
HTTP_USER_AGENT
HTTP_ACCEPT
HTTP_ACCEPT_LANGUAGE
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_CHARSET
Using several pieces of information coming from the client will help differentiate clients (the more information you use, the better the chance that at least one of them differs between two clients)...
... but it will not be a perfect solution :-(
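For instance, a sketch that folds those headers into a single fingerprint; treat the result as a heuristic, since none of these values is guaranteed to be present or truthful:

<?php
$keys = array('REMOTE_ADDR', 'HTTP_USER_AGENT', 'HTTP_ACCEPT',
              'HTTP_ACCEPT_LANGUAGE', 'HTTP_ACCEPT_ENCODING',
              'HTTP_ACCEPT_CHARSET');
$parts = array();
foreach ($keys as $k) {
    $parts[] = isset($_SERVER[$k]) ? $_SERVER[$k] : ''; // missing header => ''
}
$fingerprint = md5(implode('|', $parts));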
Depending on the kind of proxy software and its configuration, there might be a header called X-Forwarded-For that you could use:
The X-Forwarded-For (XFF) HTTP header is a de facto standard for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. This is a non-RFC-standard request header which was introduced by the Squid caching proxy server's developers.
But I wouldn't rely on that either: it will probably not always be present (I don't think it's required).
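If it is there, something like this sketch reads it; remember it is trivially forged, and may carry a comma-separated chain of hops ("client, proxy1, proxy2"), with the left-most entry being the claimed origin:

<?php
if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $hops = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $client_ip = trim($hops[0]); // the claimed originating client
} else {
    $client_ip = $_SERVER['REMOTE_ADDR'];
}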
Good luck!

I do not think there are other ways to do what you want. This is because the proxy server proxies the clients' requests and acts on their behalf. So, the clients are virtually hidden from the server's point of view. However, I may be wrong.

If you are aware of the proxy server, I think that implies this is some kind of company LAN. Are you in control of the LAN? Perhaps building and installing an ActiveX plugin which sends a machine-unique ID to the server might be a solution.
In general, HTTP proxy servers are not required to send the IP of their client, so every request sent by a proxy looks like it came from the proxy's IP. (Although Wikipedia mentions custom headers that some proxies send to forward the client's IP.)
It gets even worse when an HTTP proxy is itself using another HTTP proxy: the server getting the request will only see the IP of the last proxy in the chain, and there's no guarantee that the second proxy is even aware that the first proxy wasn't a regular client!

There is currently no way of doing this, as you don't get information about the MAC address, and even that can be ambiguous when a machine has two network interfaces (say, a wired and a wireless one).
The best thing to do is to have JavaScript on the client write and read a value in localStorage and send that saved setting back to your server with an Ajax call. This still isn't perfect: if they clear their browser storage, the setting is lost.

JKS,
Remote machines do not have unique identifiers. This is impossible.
Usually developers like to track machines when the end-user visits a page with a form, like a login, for security reasons.
Here is what I do: I store a cookie, a session variable, and use the new HTML5 localStorage to track folks on my sensitive pages. This is really the only way to do it accurately. The nice thing about localStorage (when browsers support it) is that the end-user typically has no idea you are storing stuff on their machine, and deleting cookies has no effect.
So you might make a database table with tracking details like:
timestamp, ip_address, user_agent
Then, let's say you are tracking failed login attempts. I would do this:
session_start(); // must run before $_SESSION is touched

if (isset($_SESSION['failed_logins'])) {
    $_SESSION['failed_logins'] = $_SESSION['failed_logins'] + 1; // one more failure
} else {
    $_SESSION['failed_logins'] = 1; // first failed attempt in this session
}
I would then do the same with setcookie(), and again with the localStorage script.
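A sketch of the cookie half (the name and the 30-day lifetime are arbitrary, and setcookie() must run before any output):

$failed = isset($_COOKIE['failed_logins']) ? (int) $_COOKIE['failed_logins'] : 0;
setcookie('failed_logins', $failed + 1, time() + 30 * 24 * 3600, '/');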
Now I am tracking this person and know how many times they have failed a login.
I would then write this user's data to my failed_login table as described above.
I'm sure this isn't the answer you were looking for, but it really is the best way to track users on your site.

Related

Restrict php script to only one computer without login

I want to make a php webpage accessible from only one computer.
IP checking isn't suitable for that (Dynamic IP).
I could set a cookie (with no expiration date) with a token. Then I could check if the cookie has the correct token and display the page, else I could die(). I think that this isn't a secure solution, because a cookie can be stolen, can't it?
So, what to do?
P.S. Obviously I can't login every time.
So here are a couple of options:
Client side certificates
Create a client-side certificate and configure your webserver to authenticate using client certificates. Problem solved. In the future, if you need more computers to connect to the server, give them client certificates as well.
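On the PHP side, a sketch of what the check can look like, assuming Apache's mod_ssl configured with "SSLVerifyClient require" and "SSLOptions +StdEnvVars" (the exact variables depend on your server setup, so treat this as illustrative):

<?php
if (!isset($_SERVER['SSL_CLIENT_VERIFY']) || $_SERVER['SSL_CLIENT_VERIFY'] !== 'SUCCESS') {
    header('HTTP/1.0 403 Forbidden'); // no valid client certificate presented
    exit;
}
$client_dn = $_SERVER['SSL_CLIENT_S_DN']; // subject DN of the presented certificate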
IP based: using Dynamic DNS
Give your computer a dynamic-DNS name (myclient.dyndns.com) and install a dyndns client on it. The dyndns client keeps checking its own IP and updates the nameserver entry whenever your computer's IP changes. On the server side, all you need to check is whether the IP the requester presents is the same as myclient.dyndns.com, and allow access if it is.
A slight gotcha here is that there is a small (configurable) window of time between when the IP changes and when the dyndns client propagates it to the nameserver. So whenever your IP changes, until the dyndns client on your computer detects it and updates the nameserver, your server will not allow any requests from your computer. That's because your computer will present the new IP while myclient.dyndns.com still resolves to your old IP. This window can be made as small as you want (even 1 second). The other small gotcha is that during this window, any random computer that gets your old IP assigned by the ISP could access your server. The probability of this is very small, but it's worth mentioning.
There are many free dynamic-DNS services out there; you can Google them.
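The server-side check itself is short; a sketch (myclient.dyndns.com is a placeholder, and note that gethostbyname() returns its argument unchanged when the lookup fails):

<?php
$allowed_ip = gethostbyname('myclient.dyndns.com'); // current A record
if ($_SERVER['REMOTE_ADDR'] !== $allowed_ip) {
    header('HTTP/1.0 404 Not Found'); // generic 404, as suggested below
    exit;
}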
Cookie Based
You could use cookies. However, as you correctly identified, cookies can be stolen. There are two ways they can be stolen:
Copying the cookie off the computer: someone with access to the computer can copy that specific cookie and impersonate your computer to your webserver. If this is possible (if potential malicious users can remote-desktop into or physically access your computer), then a cookie-based solution is not for you.
Sniffing over the network: cookies can easily be sniffed over the network, and an easy way to prevent sniffing is enabling SSL. Given that you are confident the cookie cannot be copied off the computer, the cookie+SSL option works in your case. It is then just like a shared secret key. Whether you do it via cookie or query string doesn't matter, but cookies are preferred because they aren't normally logged in browser history or webserver logs.
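A sketch of issuing the shared-secret cookie with the Secure and HttpOnly flags, so it travels only over SSL and is hidden from JavaScript ($secret_token is a placeholder for whatever secret you issued):

<?php
setcookie(
    'access_token',          // cookie name (arbitrary)
    $secret_token,
    time() + 30 * 24 * 3600, // 30-day lifetime (arbitrary)
    '/',
    '',                      // default domain
    true,                    // Secure: send only over HTTPS
    true                     // HttpOnly: invisible to JavaScript
);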
Also, just a thought: for all the computers that aren't authenticated, send a standard 404 response rather than some custom "Access denied" page. That way, anyone running a crawler/bot/scanner against your site will not be intrigued by a custom response and will not attempt to circumvent your security controls.
Couldn't you just use a unique passphrase as a parameter in the URI?
e.g. http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
and check whether it matches the one stored on the server?
Well, as I understand it, the point is that only that specific client (computer) should be able to access the page.
Either way, the first time there must be some sort of registration. Maybe the example URI above works like this:
you request: http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
The passphrase is checked for correctness, and a boolean value is stored on the server so that it is never possible to "register" again.
If it is correct, a cookie is generated with a unique key.
This same key is also stored on the server (in a file, a database, or similar).
On subsequent requests, you just compare the key stored on the server with the key in the cookie, and you know who the client is.
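A sketch of that register-once flow. The flat file, cookie name, and passphrase are placeholders; a database would work just as well:

<?php
$store = dirname(__FILE__) . '/client.key';

// One-time registration: only possible while no key has been stored yet.
if (!file_exists($store) && isset($_GET['passphrase'])
        && $_GET['passphrase'] === 'sfauh452h8243nf2489ht8924t48nf3984') {
    $key = md5(uniqid(mt_rand(), true));
    file_put_contents($store, $key);                               // remember it server-side
    setcookie('client_key', $key, time() + 10 * 365 * 24 * 3600, '/');
    $_COOKIE['client_key'] = $key; // the new cookie isn't resent on this request
}

// Every request: the cookie must match the stored key.
$known = file_exists($store) ? trim(file_get_contents($store)) : null;
if ($known === null || !isset($_COOKIE['client_key']) || $_COOKIE['client_key'] !== $known) {
    header('HTTP/1.0 404 Not Found'); // generic response, as suggested earlier
    exit;
}
// ... the protected page continues here ...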

Securing Cookies and Sessions

The issue I'm having, which may not be solvable, is as follows:
I have a client that is a large organization of 1,500+ users at 7-8 different locations. The application is a PHP application built on the Kohana v3.0 framework. The organization sits behind a proxy filtering server at the ISP level. Each location has one main public IP address that funnels through the proxy and then to the web. Each user has a Mac or Windows workstation issued by the employer.
What they are experiencing appears to be cookie collisions. Example: one user logs in at their workstation, then another user logs in from the same location, on a different workstation, with the same OS and browser type. The second user receives the first user's active session by receiving a newly generated cookie (token) that matches the first user's. This appears to be related only to the 'authautologin' cookie (set when the remember-me checkbox is checked on the login screen), but I'm keeping my options open to caching by the proxy (I can't prove that the proxy is caching yet).
Because of the network setup, the server sees hundreds of users logging in from the same IP address with the same user agent. My initial thought is that Kohana v3's way of generating cookies unique to the browser (user agent) is not unique enough for this real-world application.
Has anyone ever experienced anything like this? And what would be the proper actions to take in cookie and session generation? Would managing cookies and active sessions in the database be better?
Kohana Modules: Jelly-Auth, Jelly, and Auth
Server: Apache/2.2.9 (Debian) mod_fastcgi/2.4.6 mod_jk/1.2.26 PHP/5.2.6-1+lenny8 with Suhosin-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g
Known Browsers: IE 8 & 9, Firefox (OS and Win), and Safari (OS)
It's just an idea, but there is (or used to be, depending on your Debian and PHP versions) a bug with PHP sessions. What I suggest you try:
Check this link - it may not be related to your problem, but it's worth a try
Switch to the database session driver - I'd give a 90% chance that this will fix everything
Test on a non-Debian server - this may not be easy to accomplish, though
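Kohana 3 ships its own database session driver you can switch on in its config; to illustrate the idea in plain PHP, here is a generic sketch of database-backed sessions. The table layout, connection details, and the MySQL-specific REPLACE INTO are all assumptions:

<?php
// Assumed table: sessions(id VARCHAR(64) PRIMARY KEY, data TEXT, updated INT)
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

function sess_open($path, $name) { return true; }  // nothing to open
function sess_close()            { return true; }  // nothing to close

function sess_read($id) {
    global $pdo;
    $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
    $stmt->execute(array($id));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);
    return $row ? $row['data'] : ''; // empty string means "new session"
}

function sess_write($id, $data) {
    global $pdo;
    $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated) VALUES (?, ?, ?)');
    return $stmt->execute(array($id, $data, time()));
}

function sess_destroy($id) {
    global $pdo;
    $stmt = $pdo->prepare('DELETE FROM sessions WHERE id = ?');
    return $stmt->execute(array($id));
}

function sess_gc($max_lifetime) {
    global $pdo;
    $stmt = $pdo->prepare('DELETE FROM sessions WHERE updated < ?');
    return $stmt->execute(array(time() - $max_lifetime));
}

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
session_start();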
Wow, that's a nasty vulnerability. Good catch!
By far the best way to generate session cookies in PHP is to let PHP do it: session_start(). And that's all! If you are generating your own session cookie, then you really messed up somewhere. Once the session is started, you can use the $_SESSION superglobal. Best practice is to call session_start() in a common header file, before you access $_SESSION anywhere in your application.
There are probably other problems you should take into consideration, such as OWASP A9, CSRF, and the cookie flags: HttpOnly and Secure (forcing the cookie over HTTPS).
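A sketch of letting PHP manage the session cookie with those flags set (the last two arguments are the Secure and HttpOnly flags; the Secure flag assumes the site is actually served over HTTPS):

<?php
session_set_cookie_params(0, '/', '', true, true); // Secure + HttpOnly
session_start();
// After a successful login, rotate the ID to defeat session fixation:
session_regenerate_id(true);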
I'm not sure if I understood you correctly, but... I understood that the request goes like this:
user (workstation) ==> proxy ==> internet ==> company website (and the response in the reverse direction).
Check whether the proxy sets "HTTP_X_FORWARDED_FOR" (in the $_SERVER superglobal). It could be the only way to determine the user's workstation IP address. If so, you're done.

How to ensure the HTTP_REQUEST Is coming from the right place?

I've learned that HTTP_REFERER, like any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So how can I ensure an incoming HTTP request is coming from a website that I white-list?
For example, I have JS code that is sent from a client site to my server (something like a sniper, cross-platform). However, I only want this to happen from several websites, not others, so that even if other people copy the code and put it on their website, it won't work.
In the general case you simply can't do it; you are entirely at the mercy of the client. You can make abuse more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check each token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
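A sketch of such a token, verifiable by both sides because they share a secret; $shared_secret and $user_id are placeholders you would provision on each whitelisted site and on your own server:

<?php
$shared_secret = 'long-random-secret-shared-out-of-band';
$user_id = '12345';

// The whitelisted site embeds this token in the snippet it serves:
$token = hash_hmac('sha256', $user_id, $shared_secret);

// Your server recomputes it and compares:
$valid = ($token === hash_hmac('sha256', $user_id, $shared_secret));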
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I'm not counting on, but I thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
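A sketch of that scheme, keeping the ID/IP pairs in the session store; the parameter name and the ten-minute lifetime are arbitrary:

<?php
session_start();

// Steps 1-2: while serving the page, mint an ID and record the pair.
$page_id = md5(uniqid(mt_rand(), true));
$_SESSION['page_ids'][$page_id] = array(
    'ip'      => $_SERVER['REMOTE_ADDR'],
    'expires' => time() + 600, // step 5: entries expire after ten minutes
);
// ... embed $page_id in the served page for the JS to send back ...

// Step 4: when the JS request arrives, demand a matching, unexpired pair.
$sent = isset($_GET['page_id']) ? $_GET['page_id'] : '';
$ok = isset($_SESSION['page_ids'][$sent])
    && $_SESSION['page_ids'][$sent]['ip'] === $_SERVER['REMOTE_ADDR']
    && $_SESSION['page_ids'][$sent]['expires'] > time();
if (!$ok) {
    header('HTTP/1.0 403 Forbidden');
    exit;
}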
All http headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be defeated by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request/response mechanism incorrectly.
For example, I have a js code that will send from client site to server. (something like a sniper, cross platform).
I assume what you're saying is that you have some JavaScript code served up on a website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the whitelisted site?
Aside from setting a cookie (which might not be possible across domains), you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.
so, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
I think you could sign every request (from the whitelist) so that the signature is valid for that request only (used once). I assume using uniqid() for this is safe (enough?).

How to use https and how things differ

How would you use HTTPS? Would sending information via GET and POST be any different while using HTTPS?
Any information and examples on how HTTPS is used in PHP for something simple like a secure login would be useful.
Thank you!
It will be no different for your PHP scripts; the encryption and decryption are done transparently at another layer.
Both GET and POST get encrypted, but GET will leave a trace in the web server log files.
HTTPS is handled at the SSL/TLS Layer, not at the Application Layer (HTTP). Your server will handle it as aularon was saying.
SSL and/or HTTPS is used to provide some level of confidentiality for data in transit between the web users and the web server. It can also be used to provide a level of confidence that the site the users are communicating with is in fact the one they intend to reach.
In order to use SSL, you'll need to configure these capabilities on the server itself, which includes either purchasing an authority-signed certificate or creating a self-signed one. If you create your own self-signed certificate, your users' confidence that the site is the intended one is significantly reduced.
PHP
Once your webserver is able to serve SSL-protected pages, PHP will continue to operate as usual. Things to look out for are port numbers (normal HTTP is usually on port 80, while HTTPS traffic is usually on port 443), if your code relies on them.
GET & POST Data
Pierre 303 is correct, GET data may end up in the logs, and POST data will not, but this is no different than a non-SSL web server. SSL is meant to protect data in transit, it does nothing to protect you and your customers from web servers and their administrators that you may not trust.
Secure Login
There is also (normally) a performance hit when using SSL, so some sites configure their pages to use HTTPS only when the user is sending sensitive information, for example their password or credit card details. Other traffic continues to use the normal HTTP server.
If this is the sort of thing you'd like to do, you'll want to ensure that your login form's ACTION points to the HTTPS server's pages. Once the server accepts the form submission, it can send a redirect to return the user to the page they requested over plain HTTP again.
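A sketch of enforcing that at the top of the login page; most servers set $_SERVER['HTTPS'] to "on" when the request came over SSL:

<?php
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    // Bounce plain-HTTP visitors to the HTTPS version of the same URL.
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
    exit;
}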
Just ensure you're sending the correct headers when allowing files to be downloaded over SSL; IE can be a bit quirky. See http://support.microsoft.com/kb/323308 for details of how to resolve this.
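A sketch of a download served over HTTPS. My understanding of the KB article above is that older IE versions choke when the response forbids caching outright, so use "private" rather than "no-cache"/"no-store"; the file name and path are placeholders:

<?php
header('Cache-Control: private'); // not 'no-cache'/'no-store', which trip up IE over SSL
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"');
readfile('/path/to/report.pdf');
exit;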

How do you detect CGIProxy?

I have CGIProxy (http://www.jmarshall.com/tools/cgiproxy/), which lets users navigate pages through it.
It seems like myspace.com detects it and forwards the user to google.com.
A quick test to determine my IP while using the proxy shows the proxy server's IP, so the proxy doesn't reveal my real one:
<?php
// Report the requester's IP, preferring X-Forwarded-For when a proxy sets it.
if (getenv("HTTP_X_FORWARDED_FOR")) {
    $ip = getenv("HTTP_X_FORWARDED_FOR");
} else {
    $ip = getenv("REMOTE_ADDR");
}
print $ip;
So the mystery is: how are sites out there detecting that I am using CGIProxy? Is it possible for CGIProxy to stay undetected?
BTW, CGIProxy is best because it renders JS.
Perhaps in your PHP test program you could dump out all the HTTP headers to see what's coming through and whether there is anything that looks like identifying information. It's hard for us to guess what MySpace is doing.
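A sketch of such a dump, printing every request header PHP received so you can spot anything the proxy might be adding (Via, X-Forwarded-For, and the like):

<?php
foreach ($_SERVER as $name => $value) {
    if (strpos($name, 'HTTP_') === 0) { // request headers appear as HTTP_* keys
        echo $name . ': ' . $value . "\n";
    }
}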
Totally a guess, but you may not be getting the MySpace cookies through CGIProxy.
CGIProxy states this as a known limitation:
If you browse to many sites with cookies, CGIProxy may drop some. If a site keeps telling you to enable cookies, delete your existing cookies (via the "Manage cookies" link) and try the site again.
One other option (assuming you have shell access to the machine running the proxy) is to use the SOCKS proxy included in SSH with the -D flag.
I believe what you want to install is PHProxy:
http://sourceforge.net/projects/poxy/
Back in my high school days this is what we used to get around the filters the school put in place. It worked fairly well as far as I remember; I haven't tried it recently, but it's worth a shot.
Some sites, like MySpace, don't want users connecting through a proxy, so they go to some lengths to detect this. By default, CGIProxy does not add any header that makes it detectable. An easy way to check your HTTP headers is to visit http://www.ioerror.us/ip/headers.
The usual method to detect this sort of thing is a bit of client-side JavaScript that inspects the URL of the page it's on and sends that to the server. Using nph-proxy.cgi I'm able to visit MySpace without any such redirections.
Other detection methods include embedding a Flash or Java object on the page and having that object attempt to connect to a hard-coded server.
