Glype Proxy configuration - php

I am trying to set up a web proxy so I can bypass a web filter. I am using the Glype proxy PHP script, but the web filter detected it and blocked it. Is there any way I can configure or edit the script so that the web filter cannot detect my proxy?

You have to rename the file browse.php and all references to it. How to do that is explained here: http://glypetemplates.com/rename-browse.php-to-prevent-abuse-from-automated-scripts.html
Two useful hints:
Avoid the word 'proxy' in your URL. It might catch someone's attention (or be picked up by a script that scans for the term), which will get your URL added to the blacklist quickly.
If you have already been blocked, you are probably on a blacklist from which you won't be removed. Therefore, put your proxy on a new URL (e.g. if your previous URL was example.com/secret, now try example.com/randomWord).
A bit of background information: The following article gives a nice overview of what companies can do to detect proxy sites: http://www.sans.org/reading-room/whitepapers/detection/detecting-preventing-anonymous-proxy-usage-32943
It is a bit outdated (2008), but it explains that Glype is detected by looking at the link the user is browsing to, since this always has the same format (browse.php?u=....):
To block or detect any usage of a Glype anonymous proxy server, use the following regular expression: (browse\.php\?u=).+(&b).*
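For illustration, this is roughly what that filter-side check looks like in PHP (a hypothetical sketch; the sample URL is made up):
<?php
// Flag any requested URL that matches the Glype pattern above.
$url = 'http://example.com/browse.php?u=http%3A%2F%2F9gag.com%2F&b=4';
if (preg_match('/(browse\.php\?u=).+(&b).*/', $url)) {
    echo "Glype proxy URL detected\n";
}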
Renaming browse.php will defeat this check. That is assuming, of course, that your web filter is not using any more advanced techniques...

Related

How to offer calls over HTTPS exclusively

I'm writing some APIs for another website to be able to interact with my website. They say this in their documentation:
All calls made over HTTPS.
I don't know what that means for me on my end. Does it just mean I need to be hosted on a
httpS://www.mywebsite.com
page instead of
http://www.mywebsite.com
What do I need to do on my end (PHP-based code) to accept "calls over HTTPS"?
I don't need any code written or anything like that, I just need to understand the scope of what I'm trying to do. Is it my code that deciphers the HTTPS call? Is it the server that I'm hosted on? What does this mean?
You need an SSL certificate installed on your server, which you can get from a Certificate Authority like Thawte or Verisign. Once that is done, your site will be able to serve the same content over https://... and http://...
You can then restrict it via the web server's configuration to allow only https://... (the simplest thing to do). Or you can leave it at the default, which allows both, and decide in each script being called whether it accepts both or only one of them.
But for simplicity (especially if you don't yet have a clear picture of when SSL is needed and when it isn't), you should probably just restrict your web server to serve only https://... How you do that depends on whether you are using IIS, Apache HTTPD, etc.
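If you cannot (or would rather not) change the server configuration, a common fallback is to enforce HTTPS at the top of the PHP script itself. A minimal sketch, assuming a typical setup where PHP populates $_SERVER['HTTPS']:
<?php
// Redirect any plain-HTTP request to the HTTPS version of the same URL.
if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] === 'off') {
    header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'], true, 301);
    exit;
}
For an API endpoint you might prefer to return an error instead of redirecting, since some HTTP clients won't resubmit a POST body after a redirect.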

file_get_contents and ajax requests

I have a PHP proxy script which uses file_get_contents to fetch web sites and output them...
Everything works as long as the web sites are static, but as soon as I visit sites that use AJAX requests to update their content, like Twitter, 9gag, YouTube... the new content doesn't get added.
I get this error in the console:
XMLHttpRequest cannot load http://9gag.com/new/json?list=hot&id=6408098. Origin is not allowed by Access-Control-Allow-Origin.
Since the 9gag site is now my local site, served by my local proxy, it can't access new content from the original 9gag site; this is a cross-domain issue...
So my question is: how do I take those AJAX requests and put them through my local proxy server?
This is a security feature. It exists precisely to prevent the kind of request you are trying to make. As far as I can see, you have only two possibilities:
Add the site to your hosts file to forward it to your proxy. This way you have to make sure your proxy responds correctly under the original domain name. I don't know whether browsers perform any checks beyond the domain, but if only the domain is taken into account, everything will be fine.
Set the OS to use your proxy site as the system proxy. In that case you need to make it respond like a regular proxy server.
P.S. It may be better to use a ready-made transparent proxy utility.
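Alternatively, staying with the PHP approach: rewrite the AJAX URLs in the pages you serve so they point at your own script, and have that script forward the request. A minimal sketch of such a forwarding endpoint (the file name ajax_proxy.php and the u parameter are made-up names):
<?php
// ajax_proxy.php?u=<url-encoded target>
// Forwards a GET request to the target and relays the body back,
// so the browser only ever talks to our own origin.
$target = isset($_GET['u']) ? $_GET['u'] : '';
if (!preg_match('#^https?://#', $target)) {
    header('HTTP/1.0 400 Bad Request');
    exit;
}
$body = @file_get_contents($target);
if ($body === false) {
    header('HTTP/1.0 502 Bad Gateway');
    exit;
}
header('Content-Type: application/json'); // assuming a JSON endpoint
echo $body;
Because the response now comes from the same origin as the page, the Access-Control-Allow-Origin error disappears. The hard part is rewriting the URLs inside the fetched HTML and JavaScript so the AJAX calls actually go through this endpoint.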

Website Administration Location + PHP CURL

I'm building an online dating website at the moment.
There needs to be an admin backend to the site to approve users/photos etc.
I can add this admin part of the site/login etc to the same domain.
eg: www.domainname.com/admin
Or, from my experience with PHP cURL, I could put the admin site on a different domain and pass the requests through with cURL.
Question: is it more secure to put the admin code/site on a completely different domain, or does it really not matter if it sits on the same domain? Hacking/security is the whole point of this question.
thx
Technically it might be more secure if you ran it on a different server, hosted on a subdomain with a different IP/vhost, or used a proxy module for your web server (see Apache mod_proxy) to proxy requests from yourdomain.com/admin to admin.otherdomain.com, enforcing additional IP or access control with .htaccess or equivalent on the proxy URL.
Of course, if those other domains are web accessible, then they are only as secure as the users and passwords that use them.
For corporate applications, you may want to make the admin interface accessible from a VPN connection, but I don't know if that applies to you.
If there is a vulnerability on your public web server that lets someone get shell access, keeping the admin code elsewhere may make it slightly harder for them to gain administrative access, since they won't have that part of the code.
In other words, it can provide additional security depending on the lengths you go to, but is not necessarily a solid solution.
Using something like cURL is a possibility, but you'd have far less troubleshooting to do with a more conventional method such as a proxy or a subdomain on another server.
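Whichever layout you choose, one cheap extra layer is to restrict the admin entry point to known client addresses. A minimal PHP sketch (the IP list here is, of course, hypothetical):
<?php
// Allow the admin area only from a short list of trusted IPs.
$allowed = array('203.0.113.10', '198.51.100.22'); // replace with your own
if (!in_array($_SERVER['REMOTE_ADDR'], $allowed, true)) {
    header('HTTP/1.0 403 Forbidden');
    exit('Access denied');
}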

How can I restrict / authorize access to PHP script?

There is this PHP script on my website which I don't want people to be able to run by just typing its name in the browser.
Ideally I would like this script to be run only by registered users and only from within a Windows app (which I will have to provide). Can this be done?
Alternatively, how can I protect this script so that it can only be called from a specific page or script?
Also, how can I hide the exact URI from appearing in the address bar?
Thanks!
If you are running Apache for your webserver, you can protect it with a username/password combo using .htaccess. It takes a little configuration if your server is not already configured to allow .htaccess. Here are the Apache docs.
If you need authentication based on application-specific factors, you can put something at the top of your script like
<?php
if (!$user->isLoggedIn()) {
    // Pretend the page doesn't exist, and stop executing the script.
    header('HTTP/1.0 404 Not Found');
    exit;
}
Do you have a question about how you would implement isLoggedIn?
You can also use mod_rewrite to rewrite URIs, and those directives can go inside your .htaccess as well. mod_rewrite can rewrite incoming requests transparently (from the browser's perspective) so a request for /foo/bar can be translated into secret_script.php/foo/bar. Docs for mod_rewrite.
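For instance, a minimal .htaccess sketch of that transparent translation, using the same hypothetical secret_script.php:
# Serve /foo/bar through secret_script.php while the browser still
# shows /foo/bar in the address bar.
RewriteEngine On
RewriteRule ^foo/(.*)$ secret_script.php/$1 [L]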
However you decide to implement this, I would urge you to not rely solely on the fact that your script's name is obscure as a means to secure your application. At the very least, use .htaccess with some per-user authentication, and consider having your application authenticate users as well.
As Jesse says, it's possible to restrict your script to logged in users. There are a large number of questions on this already. Search for PHP authentication.
However, it is not possible to restrict it to a single application. It is fairly simple to use a program like Wireshark to see exactly how the application logs in and makes its requests. At that point, someone can reproduce its behavior manually or in their own application.
There are a variety of ways you could go about securing a script. All have pluses and minuses, and it's likely that the correct answer for your situation will be a combination of several.
As mentioned, you could lock things down with Apache... it's a good start. Similarly, you could build a powerful 'salted' security system such as this: http://www.devarticles.com/c/a/JavaScript/Building-a-CHAP-Login-System-An-ObjectOriented-Approach/ If you use SSL as well, you're essentially getting the kind of security banks use on their websites: not perfect, but certainly not easy to break into.
But there are other ideas to consider too. Park your script in a class file that is inaccessible via a direct URI, then call its functions from an intermediary view script. Not perfect, but it does limit the ways someone could access the file directly. Consider adding a "qualifier" to the URL via a simple GET parameter, and have the script check for the qualifier or fail; again, not a great solution on its own, but one additional layer to dissuade the bad guys. If you control who gets access (and know exactly which networks they come from), you could even limit the IPs or HTTP referers that are allowed to access the file. Consider setting and checking cookies with a clear expiration. And don't forget to set your robots file so crawlers don't stumble upon the script you're trying to protect.
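As a concrete illustration of the qualifier idea combined with a referer check, here is a minimal sketch (the parameter name, secret value, and referer are all made up; both the query string and the Referer header can be forged, so this only dissuades casual snooping):
<?php
// Reject requests that lack the expected qualifier or that come
// from an unexpected referring page.
$expected_key = 'some-long-random-string';      // shared with legitimate callers
$allowed_ref  = 'https://www.example.com/app';  // page allowed to link here

$key = isset($_GET['k']) ? $_GET['k'] : '';
$ref = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

if ($key !== $expected_key || strpos($ref, $allowed_ref) !== 0) {
    header('HTTP/1.0 404 Not Found');
    exit;
}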
A while back my company did a membership app using Delphi on the front end, talking to PHP and MySQL on the back end... it was a bit clunky, given that we were all web application developers. If you're so inclined, perhaps Adobe Flex might be an option. But ultimately you'll have to open a door for the application to talk to, and if someone is determined, they could theoretically dig through your app, find the credentials, and use them to gain instant access to the site. If you're going the desktop app route, perhaps it's time to consider having the app skip the intermediary script altogether, do its work on the local machine, and talk directly to the remote database.
You can also deny access in .htaccess on the folder and use PHP authentication that redirects users to those PHP files.

How do you detect CGIProxy?

I have CGIProxy (http://www.jmarshall.com/tools/cgiproxy/), which lets users navigate pages through it.
It seems like myspace.com detects it and forwards the user to google.com.
A quick test to determine my IP while using the proxy shows the proxy server's IP, not mine, so it doesn't reveal my IP:
<?php
// Report the client IP as the server sees it; if the request came
// through a proxy that sets X-Forwarded-For, show that address instead.
if (getenv("HTTP_X_FORWARDED_FOR")) {
    $ip = getenv("HTTP_X_FORWARDED_FOR");
} else {
    $ip = getenv("REMOTE_ADDR");
}
print $ip;
So the mystery is: how are sites out there detecting that I am using CGIProxy? Is it possible for CGIProxy to stay undetected?
By the way, CGIProxy is the best option because it renders JS.
Perhaps in your PHP test program, you could dump out all the HTTP headers to see what's coming through and whether there is anything that looks like identifying information. It's hard for us to guess what Myspace is doing.
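For example, a short PHP sketch that dumps every request header it received (PHP exposes them in $_SERVER with an HTTP_ prefix); compare a direct visit with one made through the proxy:
<?php
// Print all request headers so proxy-added ones stand out.
foreach ($_SERVER as $name => $value) {
    if (strpos($name, 'HTTP_') === 0) {
        echo $name . ': ' . $value . "\n";
    }
}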
Totally a guess, but you may not be getting the MySpace cookies through CGIProxy.
CGIProxy lists this as a known limitation:
If you browse to many sites with cookies, CGIProxy may drop some. If a site keeps telling you to enable cookies, delete your existing cookies (via the "Manage cookies" link) and try the site again.
One other option (assuming you have shell access to the machine running the proxy) is to use the SOCKS proxy included in SSH with the -D flag.
I believe what you would want to install is PHProxy:
http://sourceforge.net/projects/poxy/
Back in high school this is what we used to get around the filters the school put in place. It worked fairly well as far as I remember; I haven't tried it recently, but it's worth a shot.
Some sites, like MySpace, don't want users connecting through a proxy, so they go to some lengths to detect this. By default, CGIProxy does not add any header that would make it detectable. An easy way to check your HTTP headers is to visit http://www.ioerror.us/ip/headers .
The usual method of detecting this sort of thing is a bit of client-side JavaScript that inspects the URL of the page it's on and sends it to the server. Using nph-proxy.cgi, I'm able to visit Myspace without any such redirections.
Other detection methods include embedding a Flash or Java object in the page and having that object attempt to connect to a hard-coded server.
