How do you detect CGIProxy? - PHP

I have CGIProxy (http://www.jmarshall.com/tools/cgiproxy/), which lets users navigate pages through it.
It seems like myspace.com detects it and forwards the user to google.com.
A quick test to determine my IP through the proxy shows the proxy server's IP rather than mine, so the proxy isn't revealing my real address:
<?php
// Prefer the X-Forwarded-For header if a proxy set it,
// otherwise fall back to the direct connection address.
if (getenv("HTTP_X_FORWARDED_FOR")) {
    $ip = getenv("HTTP_X_FORWARDED_FOR");
} else {
    $ip = getenv("REMOTE_ADDR");
}
print $ip;
So the mystery is: how are sites out there detecting that I am using CGIProxy? Is it possible for CGIProxy to stay undetected?
By the way, CGIProxy is the best option for me because it renders JavaScript.

Perhaps in your PHP test program, you could dump out all the HTTP headers to see what's coming through and whether there is anything that looks like identifying information. It's hard for us to guess what Myspace is doing.
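For example, a minimal sketch of such a header dump (just looping over $_SERVER, which works on any PHP setup):
<?php
// Dump every request header the proxy passed along, so any
// identifying header (Via, X-Forwarded-For, etc.) stands out.
foreach ($_SERVER as $key => $value) {
    if (strpos($key, 'HTTP_') === 0) {
        echo $key . ': ' . $value . "\n";
    }
}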

Totally a guess, but you may not be getting the MySpace cookies through CGIProxy.
CGIProxy states it as a known limitation:
If you browse to many sites with cookies, CGIProxy may drop some. If a site keeps telling you to enable cookies, delete your existing cookies (via the "Manage cookies" link) and try the site again.
One other option (assuming you have shell access to the machine running the proxy) is to use the SOCKS proxy built into SSH via the -D flag (e.g. ssh -D 1080 user@proxyhost opens a SOCKS proxy on local port 1080 that tunnels traffic through that machine).

I believe what you would want to install is PHProxy:
http://sourceforge.net/projects/poxy/
Back in high-school days this is what we used to get around the filters the school put in place. It worked fairly well as far as I remember; I haven't tried it recently, but it is worth a shot.

Some sites, like MySpace, don't want users connecting through a proxy, so they go to lengths to detect this. By default, CGIProxy doesn't add any headers that make it detectable. An easy way to check your HTTP headers is to visit http://www.ioerror.us/ip/headers .
The usual method to detect this sort of thing is for a bit of client-side JavaScript to inspect the URL of the page it's on and send that to the server. Using nph-proxy.cgi I'm able to visit MySpace without any such redirections.
Other detection methods include embedding a Flash or Java object on the page and having that object attempt to connect to a hard-coded server.
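To make the JavaScript-location trick concrete, here is a rough PHP sketch of the server side (the report-location.php endpoint name and the expected host are made up for the example; the page's own JavaScript would POST window.location.href to it):
<?php
// report-location.php (hypothetical endpoint): the page's own JS posts
// window.location.href here; if the reported host isn't ours, the visitor
// is almost certainly viewing the page through a URL-rewriting proxy.
$href     = isset($_POST['href']) ? $_POST['href'] : '';
$reported = parse_url($href, PHP_URL_HOST);

if ($reported !== 'www.myspace.com') {   // the expected host is an assumption
    header('Location: http://www.google.com/');
    exit;
}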

Related

PHP user-level session persistence

I have a use case where I need to be able to access my site from the local server. Specifically, it's for a HTML-to-PDF export of parts of various pages, but this would be nice for testing parts of the website as well.
The problem is that we have a login splash page, which needs to be dealt with before I can access any parts of the website. It would be really nice if I could just call a command "wkhtml2pdf 'localhost/[myurl]'" and have it PDF some stuff, but it hits this splash page.
Is there some way that I can perma-persist just one single session on the server? Or enable login-less access from localhost? Or could I just add a new Apache entry that accesses our site, whitelists only localhost and somehow circumvents the login?
What's the best solution?
You can pass your session cookie as a parameter to wkhtmltopdf to solve your problem.
You can also execute it from a PHP file like this:
exec("wkhtmltopdf --cookie " . escapeshellarg($cookieName) . " "
     . escapeshellarg($cookieValue) . " http://example.com output.pdf");
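If the login is based on standard PHP sessions, the cookie name and value can be pulled from the active session (a sketch, assuming the export is triggered from a page where the user is already logged in):
<?php
// Reuse the current PHP session cookie for the wkhtmltopdf call above.
session_start();
$cookieName  = session_name();   // usually "PHPSESSID"
$cookieValue = session_id();
// ...then run the exec() line above with these two values.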
Soliciting feedback on this solution now:
I whitelisted localhost via $_SERVER['REMOTE_ADDR'] in the login scripts to bypass the usual user authentication and get an automatic localhost-user login. The server, however, is running on a university LAN, so the LAN may be really big, possibly enabling bidirectional TCP spoofing.
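For reference, the whitelist check amounts to something like this (a sketch; the two helper functions are placeholders for whatever the real login script does):
<?php
// Only trust loopback addresses; everything else gets the normal login flow.
$localAddresses = array('127.0.0.1', '::1');

if (in_array($_SERVER['REMOTE_ADDR'], $localAddresses, true)) {
    log_in_as_localhost_user();     // hypothetical helper
} else {
    show_login_splash_page();       // hypothetical helper
}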
Should I be worried about this, or does someone need admin rights on the routers or something? I trust the IT folks, but not others.
I realize that this sounds like a separate question, but I feel that security relates to whether or not this is a good solution.

Glype Proxy configuration

I am trying to set up a web proxy so I can bypass the web filter. I am using the Glype proxy PHP script, but the web filter detected it and blocked it. Is there any way I can configure or edit the script so the web filter would not be able to detect my proxy?
You have to rename the file browse.php and all references to it. How to do that is explained here: http://glypetemplates.com/rename-browse.php-to-prevent-abuse-from-automated-scripts.html
Two useful hints:
Avoid the name 'proxy' in your URL. It might draw someone's attention (or that of a script scanning for the term), which will get the URL added to the blacklist quickly.
If you have already been blocked, you are probably on a blacklist from which you won't be removed. Therefore, put your proxy on a new URL (e.g. if your previous URL was example.com/secret, now try example.com/randomWord).
A bit of background information: The following article gives a nice overview of what companies can do to detect proxy sites: http://www.sans.org/reading-room/whitepapers/detection/detecting-preventing-anonymous-proxy-usage-32943
It is a bit outdated (2008), but what it says is that Glype is detected by looking at the link the user is browsing to, since this is always of the same format (browse.php?u=...):
To block or detect any usage of a Glype anonymous proxy server, use the following regular expression: (browse\.php\?u=).+(&b).*
Renaming browse.php will solve this problem. This is assuming your webfilter is not using any more advanced techniques...
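For illustration, this is roughly how a filter might apply that regular expression to requested URLs (a sketch, not how any particular web filter actually works):
<?php
// Flag URLs that match Glype's default browse.php?u=...&b=... pattern.
function looks_like_glype($url) {
    return preg_match('/(browse\.php\?u=).+(&b).*/', $url) === 1;
}

var_dump(looks_like_glype('http://example.com/browse.php?u=http%3A%2F%2Fmyspace.com&b=12')); // bool(true)
var_dump(looks_like_glype('http://example.com/p.php?u=http%3A%2F%2Fmyspace.com&b=12'));      // bool(false)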

How to ensure the HTTP_REQUEST is coming from the right place?

I have learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
For example, I have JS code that sends from the client site to the server (something like a sniper, cross-platform). However, I only want to allow this from several websites and not others, so even if other people copy the code and put it onto their website, it won't work.
In the general case you simply can't do it. You are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.
The only way to do this reliably is to have all those several websites generate unique tokens for every user, similarly to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
Haven't used this in practice, so there might be practicality issues I wasn't counting on, but thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted. (A rough PHP sketch of these steps follows below.)
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
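A minimal PHP sketch of steps 1-4, assuming a shared store such as a database table ($db is a PDO handle and page_tokens is a made-up table; step 5 would just be a periodic DELETE of old rows):
<?php
// page.php -- steps 1 and 2: generate an ID, remember it together with the
// client's IP, and embed it in the served page for the JS to send back.
$id = md5(uniqid(mt_rand(), true));
$db->prepare("INSERT INTO page_tokens (id, ip, created_at) VALUES (?, ?, NOW())")
   ->execute(array($id, $_SERVER['REMOTE_ADDR']));
echo '<script>var pageToken = "' . $id . '";</script>';

// service.php -- step 4: only answer if the ID/IP pair is on file.
$stmt = $db->prepare("SELECT 1 FROM page_tokens WHERE id = ? AND ip = ?");
$stmt->execute(array($_GET['token'], $_SERVER['REMOTE_ADDR']));
if (!$stmt->fetchColumn()) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
// ...serve the JS request here...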
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or you can change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request-response mechanism incorrectly.
For example, I have JS code that sends from the client site to the server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your 'whitelist' which redirects the user to your website. It's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (might not be possible - cross domains) you might find it tough. Have you taken a look at OpenID? If you can post more details a solution may be more obvious.
So how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
I think you could sign every request (from the whitelist) with a token that is valid for that request only (used once). I assume using uniqid() for this is safe (enough?).

How to reliably identify a website

I have a file that is being linked to from other sub websites.
The file: http://site.com/file.img
Website A linking to it <img src="http://site.com/file.img"></img>
website B linking to it <img src="http://site.com/file.img"></img>
I need to reliably identify which of these websites has accessed the file, but I know that $_SERVER['HTTP_REFERER'] can be spoofed. What other ways do I have to reliably confirm the requesting site? By IP? Getting them to register an IP? Setting up an API key? What options are there?
If a website is only linking to a file, the "website" itself will never actually access your image. Instead, the client who's viewing the site will make a request for the image.
As such, you're depending on information sent by the client, which is completely out of your control and not reliable at all. If you have the opportunity to set some sort of unique cookie on the client, you may be able to use this in some fashion for extended identification, but even that won't be reliable.
There is no 100% reliable solution.
Getting the referrer is the best you can do without getting into complicated territory.
If you don't mind complicated, then read on: set up your Web server to serve file.img only to Website A and Website B, then require that Website A and Website B set up a proxy configuration on their end that will retrieve file.img on behalf of their visitors.
Example:
A visitor to Website A loads a page that contains an image tag like <img src="http://websiteA.com/file.img"/> (note reference to Website A rather than your site). Client requests file.img from WebsiteA.com accordingly. Website A is configured to proxy requests for the path /file.img to your server, http://site.com/file.img. Your site verifies that it is in fact Website A that is requesting the image and then serves it to Website A's proxy. Website A then serves it to the visitor.
Basically, that makes it a pain for Websites A and B, gives you a performance hit, and also requires further configuration on your part. But I imagine that would satisfy your requirement.
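As a sketch of what Website A's side of that proxy could look like in PHP (the X-Site-Key header and its value are made up; your server would check for them before serving file.img):
<?php
// proxy-file.php on websiteA.com -- fetches file.img from your server on
// behalf of the visitor, authenticating itself with a pre-shared key.
$context = stream_context_create(array(
    'http' => array(
        'header' => "X-Site-Key: secret-shared-with-site.com\r\n",
    ),
));
$image = file_get_contents('http://site.com/file.img', false, $context);

if ($image === false) {
    header('HTTP/1.1 502 Bad Gateway');
    exit;
}
header('Content-Type: image/png');   // adjust to the real image type
echo $image;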
Have a look at how an OpenID relying party is implemented; it allows one site to authenticate against another. The protocol specification will give a hint at the effort and overhead required to reliably implement such a scheme.
http://googlecode.blogspot.com/2010/11/googles-sample-openid-relying-party.html

How to identify a remote machine uniquely in PHP?

How do I identify a remote machine uniquely in a proxy server environment? I have used $_SERVER['REMOTE_ADDR'], but all machines on the proxied network have the same IP address. Is there any way?
Don't ever depend on information that is coming from the client. In this case, you're running up against simple networking problems (you can never be sure the client's IP address is correct), in other cases the client may spoof information on purpose.
If you need to uniquely identify your clients, hand them a cookie upon their first visit, that's the best you can do.
Your best bet would be:
$uid = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR']);
However, there's no way to know if they changed their user agent or switched to a different browser.
You could use some other headers to help, like these (ones that come to mind when looking at a dump of $_SERVER; see the sketch below):
HTTP_USER_AGENT
HTTP_ACCEPT
HTTP_ACCEPT_LANGUAGE
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_CHARSET
Using several pieces of information coming from the client will help differentiate clients (the more information you use, the better the chance that at least one of them differs between two clients)...
... But it will not be a perfect solution :-(
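A quick sketch of combining those headers into a single fingerprint (any of them may be missing, hence the fallbacks):
<?php
// Build a rough client fingerprint from several request headers plus the IP.
// None of these values can be trusted individually; together they merely
// reduce collisions between clients sitting behind the same proxy.
$keys = array(
    'HTTP_USER_AGENT',
    'HTTP_ACCEPT',
    'HTTP_ACCEPT_LANGUAGE',
    'HTTP_ACCEPT_ENCODING',
    'HTTP_ACCEPT_CHARSET',
    'REMOTE_ADDR',
);

$raw = '';
foreach ($keys as $key) {
    $raw .= (isset($_SERVER[$key]) ? $_SERVER[$key] : '') . '|';
}
$uid = md5($raw);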
Depending on the kind of proxy software and its configuration, there might be a header called X-Forwarded-For that you could use:
The X-Forwarded-For (XFF) HTTP header is a de facto standard for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. This is a non-RFC-standard request header which was introduced by the Squid caching proxy server's developers.
But I wouldn't rely on that either: it will probably not always be present (I don't think it's required).
Good luck!
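If you do want to look at it, something like this reads the leftmost address in the chain and falls back to REMOTE_ADDR when the header is absent (a sketch; remember the header can be forged by anyone):
<?php
// X-Forwarded-For may hold a comma-separated chain of addresses;
// the leftmost one is supposedly the original client.
if (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $chain    = explode(',', $_SERVER['HTTP_X_FORWARDED_FOR']);
    $clientIp = trim($chain[0]);
} else {
    $clientIp = $_SERVER['REMOTE_ADDR'];
}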
I do not think there are other ways to do what you want. This is because the proxy server proxies the clients' requests and acts on their behalf. So, the clients are virtually hidden from the server's point of view. However, I may be wrong.
If you are aware of the proxy server, I think that implies this is some kind of company LAN. Are you in control of the LAN? Perhaps building and installing some ActiveX plugin which sends a machine-unique ID to the server might be the solution.
In general, HTTP proxy servers are not required to send the IP of their client. So every request sent by a proxy looks like it came from the proxy's IP. (Although Wikipedia mentions custom headers some proxies send to forward the client's IP.)
It gets even worse when an HTTP proxy is itself using another HTTP proxy - the server getting the request will only get the IP of the last proxy in the chain, and there's no guarantee that the 2nd proxy is even aware that the 1st proxy wasn't a regular client!
There is currently no way of doing this, as you don't get information about the MAC address, and even that can be ambiguous if there are two network cards (e.g. a wired one and a wireless one).
The best thing you can do is have JavaScript write and read an identifier in local storage and send that saved value back to your server with an Ajax call. This still isn't perfect: if they clear their browser data, the setting is lost.
JKS,
Remote machines do not have unique identifiers. This is impossible.
Usually developers like to track machines when the end-user visits a page with a form like a login for security reasons.
Here is what I do: I store a cookie, a session variable, and use the HTML5 localStorage to track folks on my sensitive pages. This is really the only way to do this accurately. The nice thing about localStorage (when browsers support it) is that the end-user typically has no idea you are storing stuff on their machine, and deleting cookies has no effect on it.
So you might make a database table with tracking details like:
timestamp, ip_address, user_agent
Then, let's say you are tracking failed login attempts; I would do this:
// Assumes session_start() has already been called.
if (isset($_SESSION['failed_logins'])) {
    $_SESSION['failed_logins'] = $_SESSION['failed_logins'] + 1;
} else {
    $_SESSION['failed_logins'] = 1;
}
I would then do the same with setcookie() and then the localStorage script.
Now I am tracking this person and know how many times they have failed a login.
I would then write this user's data to my failed_login table as described above.
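A sketch of the cookie counterpart and the table write (the failed_login table and its columns follow the example above; $db is assumed to be a PDO handle, and setcookie() must run before any output):
<?php
// Mirror the session counter in a longer-lived cookie.
$failed = isset($_COOKIE['failed_logins']) ? (int) $_COOKIE['failed_logins'] + 1 : 1;
setcookie('failed_logins', (string) $failed, time() + 60 * 60 * 24 * 30, '/');

// Record the attempt in the tracking table described above.
$stmt = $db->prepare(
    "INSERT INTO failed_login (timestamp, ip_address, user_agent) VALUES (NOW(), ?, ?)"
);
$stmt->execute(array($_SERVER['REMOTE_ADDR'], $_SERVER['HTTP_USER_AGENT']));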
I'm sure this isn't the answer you were looking for, but it really is the best way to track users on your site.
