How to ensure the HTTP_REQUEST is coming from the right place? - php

I've learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So how can I ensure the incoming HTTP_REQUEST call is coming from a website that I whitelist?
For example, I have JS code that sends from the client site to my server (something like a sniper, cross-platform). However, I only want to allow this from several websites, not others, so that even if other people copy the code and put it on their website, it won't work.

In the general case you simply can't do it: you are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible.

The only way to do this reliably is to have all those several websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
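If you do control all the sites, here is a minimal sketch of what the verification could look like, using a shared secret per whitelisted site (all names, parameters and secrets below are illustrative, not a prescribed API):

<?php
// Each whitelisted site holds a secret shared with your server and signs a
// per-user token with it. Hypothetical sketch; adapt names and transport.
$sharedSecrets = array(
    'partner-a.example' => 'long-random-secret-1',
    'partner-b.example' => 'long-random-secret-2',
);

$site  = $_GET['site']  ?? '';
$token = $_GET['token'] ?? '';
$sig   = $_GET['sig']   ?? '';

if (!isset($sharedSecrets[$site])) {
    http_response_code(403);
    exit('Unknown site');
}

// The partner site computed: hash_hmac('sha256', $token, $secret)
$expected = hash_hmac('sha256', $token, $sharedSecrets[$site]);
if (!hash_equals($expected, $sig)) {
    http_response_code(403);
    exit('Bad signature');
}
// Signature verified: the token was issued by a whitelisted site.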

Haven't used this in practice, so there might be practicality issues I wasn't counting on, but I thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
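Here is a minimal PHP sketch of the steps above, using the session as the on-file store (a database table would work just as well; the parameter names are illustrative):

<?php
session_start();

// Steps 1-2: when serving the page, generate an ID and store it with the IP.
$id = bin2hex(random_bytes(16));
$_SESSION['js_ids'][$id] = array(
    'ip'      => $_SERVER['REMOTE_ADDR'],
    'expires' => time() + 600,          // step 5: expire after ten minutes
);
// ...embed $id in the served page so the JS can send it back...

// Steps 3-4: when the JS request arrives with ?id=..., check the IP/ID pair.
$sent  = $_GET['id'] ?? '';
$entry = $_SESSION['js_ids'][$sent] ?? null;
if ($entry === null
    || $entry['ip'] !== $_SERVER['REMOTE_ADDR']
    || $entry['expires'] < time()) {
    http_response_code(403);
    exit;
}
unset($_SESSION['js_ids'][$sent]);      // each ID services one request
// ...service the JS request...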

All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or you can change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.

As @Billy says, this simply isn't possible; you're thinking about the internet's request-response mechanism incorrectly.
For example, I have a js code that will send from client site to server. (something like a sniper, cross platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your 'whitelist' which redirects the user to your website, and it's on your website that you want to check that the user came from the 'whitelisted' site?
Aside from setting a cookie (which might not be possible cross-domain) you might find it tough. Have you taken a look at OpenID? If you can post more details, a solution may be more obvious.

so, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
I think you could sign every request (from the whitelist) with a signature that is valid for that request only (once). I assume using uniqid for this is safe (enough?).
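A rough sketch of that idea, storing each uniqid-generated value server-side and invalidating it on first use (names are illustrative; uniqid is not cryptographically strong, as the answer itself hints):

<?php
session_start();

// Issue a one-time value when serving the page.
$nonce = uniqid('', true);
$_SESSION['nonces'][$nonce] = true;
// ...embed $nonce in the page so the request carries it...

// Verify and immediately invalidate it when the request comes back.
$sent = $_POST['nonce'] ?? '';
if (!isset($_SESSION['nonces'][$sent])) {
    http_response_code(403);
    exit;
}
unset($_SESSION['nonces'][$sent]);   // valid for that request only (once)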

Related

Security of using a browser session within a webpage iframe

As the title says, I intend to create a web-app that uses an iframe to lock all my web sessions within the server itself, so that when accessing it from a client, I can still visit other sites while staying in the main browser page.
Since the website itself is making the connection through the page, security-wise, am I technically going through a VPN, since the connection goes like
Client -> Server Hosting the Main Webpage -> facebook.com
Will my connection to facebook.com come from the client, or the server?
And is this type of solution even feasible?
Will my connection to facebook.com come from the client, or the server? And is this type of solution even feasible?
If you're just using an IFrame, then the request will come from the browser.
If you've made a proxy handler on your site which makes a back-end HTTP request, then it will come from the server. E.g. the handler could take a query string parameter like url - http://example.com?url=https://facebook.com.
Three relevant security issues spring to mind with this approach.
Server-side Request Forgery - ensuring an attacker cannot browse to things in your DMZ like http://127.0.0.1 or http://192.168.2.4.
X-Frame-Options - lots of sites use this header, or the newer CSP2 frame-ancestors directive, to prevent themselves from being framed. You could, though, strip out such headers in your proxy code.
Browser trust. If I'm on your website at http://example.com (or even https://example.com), how do I know I'm logging into Facebook? There is no assurance other than the fact that the IFramed page looks like Facebook. In any case, if you're proxying the request to Facebook, how do I know you're not capturing my credentials?
If this is just for yourself, then you can ignore points one and three somewhat. However, you have no way of verifying the security yourself using your browser; you'd have to trust your server-side code. And how will you be aware if a MITM downgrades your connection from HTTPS to plain HTTP (sslstrip)?
The rest of it is feasible, ignoring the security issues. Handling session cookies and the like will result in some complex code, particularly if you're going to deal with certain cookies being set in JavaScript too, because they'll all share an origin with your main site's domain.
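For illustration, a minimal sketch of such a proxy handler with the first two issues in mind; the allowlist and every name here are assumptions, not a vetted implementation:

<?php
// Hypothetical handler for ?url=... Point one is covered by a strict host
// allowlist; point two falls out naturally, because only the response body
// is relayed, so the remote X-Frame-Options/CSP headers never reach the
// browser.
$allowedHosts = array('www.facebook.com');

$url    = $_GET['url'] ?? '';
$host   = parse_url($url, PHP_URL_HOST);
$scheme = parse_url($url, PHP_URL_SCHEME);

if ($scheme !== 'https' || !in_array($host, $allowedHosts, true)) {
    http_response_code(403);   // blocks http://127.0.0.1 and other internal targets
    exit('Host not allowed');
}

echo file_get_contents($url);  // the back-end request comes from the server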

Preventing calls to php scripts from a localhost or from another domain

I have a website with some php scripts, some of them are called in ajax.
I'd like to protect my site from malicious users who would try calling and using those scripts from another site, or from a dummy localhost site.
I thought about filtering the domain name, but with some tools like EasyPHP and virtual host managers, you can run a local website that tricks the "domain" name.
I also thought about filtering the IP address of the caller, but I guess that if you can trick the "domain" name, you can also trick the localhost IP.
So how can I set this up so that the security actually works?
What you are referring to is called Cross-Site Request Forgery.
Calling one of your scripts from another website will be forbidden by the same-origin policy. Taking this into consideration, and the fact that an AJAX request can contain only a few headers without the consent of the server via Cross-Origin Resource Sharing, you can send a custom HTTP header and check that header on the server side, from PHP. If the header is missing, most likely the request is not coming from your own application.
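For example, a minimal sketch of such a check; the X-Requested-With value follows the convention discussed in the links below:

<?php
// A page on another origin cannot attach this header to a cross-site request
// without triggering a CORS preflight, which your server can simply refuse.
$header = $_SERVER['HTTP_X_REQUESTED_WITH'] ?? '';
if ($header !== 'XMLHttpRequest') {
    http_response_code(403);
    exit('Request did not originate from this application');
}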
You could also require each client to send a unique token for each request in order to fetch the data. Most common used token method is called Synchronizer token pattern.
Sorry for the long list of links included in this answer, but I consider the subject a delicate one, and like any security problem, I think it is crucial to read as much as you can, from many sources, in order to understand the problem from different perspectives, see the available solutions, and pick the right one for your use case.
Resources to read:
How to stop other website to send cross domain ajax requests?
What's the point of X-Requested-With header?
Using CORS

Restrict php script to only one computer without login

I want to make a php webpage accessible from only one computer.
IP checking isn't suitable for that (Dynamic IP).
I could set a cookie (with no expiration date) with a token. Then I could check if the cookie has the correct token and display the page, else I could die(). I think that this isn't a secure solution, because a cookie can be stolen, can't it?
So, what to do?
P.S. Obviously I can't login every time.
So here are a couple of options:
Client side certificates
Create a client-side certificate and configure your webserver to authenticate using client certificates. Problem solved. In the future, if you need more computers to connect to the server, give them client certificates as well.
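On the PHP side, assuming Apache's mod_ssl with SSLVerifyClient require and SSLOptions +StdEnvVars, the check could look like this sketch:

<?php
// The web server performs the actual certificate verification and exposes
// the result to the script.
if (($_SERVER['SSL_CLIENT_VERIFY'] ?? '') !== 'SUCCESS') {
    http_response_code(403);
    exit;
}
// Optionally pin the one allowed certificate by its subject DN, e.g.:
// if ($_SERVER['SSL_CLIENT_S_DN'] !== 'CN=my-computer') { ... }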
IP-based: using Dynamic DNS
Give your computer a dynamic-dns name (myclient.dyndns.com) and install a dyndns client on your computer. The dyndns client keeps checking its own IP and updates the nameserver entry whenever your computer's IP changes. On server side all you need to check is if the IP that the requester presents is same as myclient.dyndns.com and allow access if it is.
A slight gotcha in this one is that there is a small (configurable) window of time between when the IP changes and when the dyndns client propagates it to the nameserver. So, whenever your IP changes, until the dyndns client on your computer detects it and updates the nameserver, your server will not allow any requests from your computer in that window. That's because your computer will present the new IP while myclient.dyndns.com still resolves to your old IP. This time window can be made as small as you want (even 1 second). The other small gotcha is that in this n-second window, any random computer that gets your old IP assigned by the ISP could access your server. The probability of this is very small, but it's worth mentioning as a possibility.
There are many free dynamic dns services out there. You can google them.
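A minimal sketch of the server-side check (the hostname is illustrative):

<?php
// Resolve the dynamic-DNS name on every request and compare it with the
// requester's IP. Note the time-window caveats described above.
$allowedIp = gethostbyname('myclient.dyndns.com');
if ($_SERVER['REMOTE_ADDR'] !== $allowedIp) {
    http_response_code(404);   // a plain 404, per the note at the end
    exit;
}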
Cookie Based
You could use cookies. However as you correctly identified, cookies can be stolen. Now, there are two ways they can be stolen:
Copying the cookie off the computer: someone who has access to the computer can copy that specific cookie and impersonate your computer to your webserver. If this is possible (if potential malicious users can remote-desktop into or physically access your computer), then a cookie-based solution is not for you.
Sniffing over the network: cookies can be easily sniffed over the network. An easy way to prevent sniffing is enabling SSL. Given that you are confident the cookie cannot be stolen off the computer by copying it over, the cookie+SSL option works in your case. In that case it's just like a shared secret key; whether you do it via cookie or query string doesn't matter, though cookies are obviously preferred over query strings because they aren't normally logged in browser history or webserver logs.
Also just a thought: For all the computers that aren't authenticated, send a standard 404 response rather than some custom "Access denied" page. This way anyone who is running a crawler/bot/scanner on your site will not be intrigued by this custom response and will not attempt to circumvent your security controls.
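A minimal sketch combining the cookie check with the 404 suggestion (the cookie name and token are illustrative; this assumes HTTPS so the cookie cannot be sniffed):

<?php
// A long random value issued to the one allowed computer, out of band.
$expected = 'long-random-secret-issued-once';

$presented = $_COOKIE['machine_token'] ?? '';
if (!hash_equals($expected, $presented)) {
    http_response_code(404);   // standard 404, not a custom "Access denied"
    exit;
}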
Couldn't you just use a unique passphrase as a parameter in the uri?
e.g. http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
and check to see if it matches the one stored in the server?
Well, as I understand it: if it is not you as the user, it is someone else; you only need that specific client (computer) to be able to access the page.
Either way, the first time there must be some sort of registration. Maybe the example URI above works like this:
1. You request: http://www.example.com/index.php?passphrase=sfauh452h8243nf2489ht8924t48nf3984
2. The passphrase is checked for correctness, and a boolean value is stored on the server so that it is never possible to "register" again.
3. If it is correct, a cookie is generated with a unique key.
4. This same key is also stored on the server (in a file, database or something).
5. Therefore, on subsequent requests, when you compare the key stored on the server with the key in the cookie, you know who the client is.
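A rough sketch of that flow; the file-based storage and all names are just for illustration:

<?php
$passphrase = 'sfauh452h8243nf2489ht8924t48nf3984';

if (isset($_GET['passphrase'])) {
    // Steps 1-3: the first correct visit registers the machine, exactly once.
    if ($_GET['passphrase'] === $passphrase && !file_exists('registered.flag')) {
        $key = bin2hex(random_bytes(32));
        file_put_contents('registered.flag', '1');  // never allow registering again
        file_put_contents('client.key', $key);      // step 4: key kept on the server
        setcookie('client_key', $key, time() + 10 * 365 * 24 * 3600);
        exit('Registered');
    }
}

// Step 5: subsequent requests compare the cookie with the stored key.
$stored = @file_get_contents('client.key');
if ($stored === false || !hash_equals($stored, $_COOKIE['client_key'] ?? '')) {
    http_response_code(403);
    exit;
}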

Submitting a form to another server: Do I need an API?

So I was asked to look at reconstructing a section of a website which I didn't build. One of the issues I'm running into is a contact form which is being loaded through an iFrame from another server. Obviously, the form's action submits to the other server, and the information is stored in a database for the client to see later.
I've never had to deal with something like this before and I'm wondering if I need to go through some sort of API the host may be able to provide, or can I recreate the form so I can style it and just have it submit to the same server. Sorry for the noob level of this question, but I'm just looking to be pointed in the right direction.
While what you are planning to do technically works (I have done it myself on several occasions), it is possible the remote host might reject POST data from locations other than itself.
For example, if your site is running at www.example.com and the host site is running at www.host.com, the server at host.com will be able to determine that you are sending POST data from example.com. Again, this is only a problem if they are doing cross-site checking.
Since you don't have access to their server to know, you will just have to try it and see.
Actually, this type of rejection might or might not happen: the server needs to read the referrer to reject the request, but the referrer isn't sent by each and every browser.
Additionally, beware of protection mechanisms like session IDs, or some kind of authorization hash injected into the form as a hidden field.

how to identify remote machine uniquely in php?

How can I identify a remote machine uniquely in a proxy server environment? I have used $_SERVER['REMOTE_ADDR'], but all machines in the proxy network have the same IP address. Is there any way?
Don't ever depend on information that is coming from the client. In this case, you're running up against simple networking problems (you can never be sure the client's IP address is correct), in other cases the client may spoof information on purpose.
If you need to uniquely identify your clients, hand them a cookie upon their first visit, that's the best you can do.
Your best bet would be :
$uid = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR']);
However, there's no way to know if they changed their user agent or switched to a different browser.
You could use some other headers to help, like these ones (the ones that come to mind when looking at a dump of $_SERVER):
HTTP_USER_AGENT
HTTP_ACCEPT
HTTP_ACCEPT_LANGUAGE
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_CHARSET
Using several pieces of information coming from the client will help differentiate between clients (the more information you use, the better the chance that at least one of them differs between two clients)...
... But it will not be a perfect solution :-(
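For example, a sketch extending the md5 idea above to several headers; any of them may be missing or spoofed:

<?php
// Hash several request headers together. This differentiates clients better
// than IP+user-agent alone, but it still does not identify them reliably.
$parts = array(
    $_SERVER['REMOTE_ADDR']          ?? '',
    $_SERVER['HTTP_USER_AGENT']      ?? '',
    $_SERVER['HTTP_ACCEPT']          ?? '',
    $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '',
    $_SERVER['HTTP_ACCEPT_ENCODING'] ?? '',
    $_SERVER['HTTP_ACCEPT_CHARSET']  ?? '',
);
$uid = md5(implode('|', $parts));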
Depending on the kind of proxy software and its configuration, there might be a header called X-Forwarded-For that you could use:
The X-Forwarded-For (XFF) HTTP header is a de facto standard for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. This is a non-RFC-standard request header which was introduced by the Squid caching proxy server's developers.
But I wouldn't rely on that either: it will probably not always be present (I don't think it's required).
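If you do want to look at it, a minimal sketch (the header may contain a comma-separated chain like "client, proxy1, proxy2", and any client can forge it):

<?php
// Treat X-Forwarded-For as a hint only; fall back to REMOTE_ADDR.
$xff = $_SERVER['HTTP_X_FORWARDED_FOR'] ?? '';
$clientIp = ($xff !== '')
    ? trim(explode(',', $xff)[0])    // first entry: the claimed original client
    : $_SERVER['REMOTE_ADDR'];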
Good luck!
I do not think there are other ways to do what you want. This is because the proxy server proxies the clients' requests and acts on their behalf. So, the clients are virtually hidden from the server's point of view. However, I may be wrong.
If you are aware of the proxy server, I think that implies this is some kind of company LAN. Are you in control of the LAN? Perhaps building and installing some ActiveX plugin which sends a machine-unique ID to the server might be the solution.
In general, HTTP proxy servers are not required to send the IP of their client, so every request sent by a proxy looks like it came from the proxy's IP. (Although Wikipedia has some mention of custom headers some proxies send to forward the client's IP.)
It gets even worse when an HTTP proxy is itself using another HTTP proxy - the server getting the request will only get the IP of the last proxy in the chain, and there's no guarantee that the 2nd proxy is even aware that the 1st proxy wasn't a regular client!
There is currently no way of doing this, as you don't get information about the MAC address; and even that could be ambiguous, since a machine can have two network cards, e.g. a wired one and a wireless one.
The best thing to do is to have JavaScript on the client write and read localStorage and send that saved setting back to your server with an Ajax call. This still isn't perfect, as the setting is lost if they clear their cache.
JKS,
Remote machines do not have unique identifiers. This is impossible.
Usually developers like to track machines when the end-user visits a page with a form like a login for security reasons.
Here is what I do: I store a cookie, a session variable, and use the new HTML5 localStorage to track folks on my sensitive pages. This is really the only way to do this accurately. The nice thing about localStorage (when browsers support it) is that the end-user typically has no idea you are storing stuff on their machine, and deleting cookies has no effect.
So you might make a database table with tracking details like:
timestamp, ip_address, user_agent
Then, let's say you are tracking failed login attempts. I would do this:
// Count failed logins in the session; start at 1 on the first failure.
if (isset($_SESSION['failed_logins'])) {
    $failed_logins = $_SESSION['failed_logins'];
    $_SESSION['failed_logins'] = $failed_logins + 1;
} else {
    $_SESSION['failed_logins'] = 1;
}
I would then do the same with setcookie() and the localStorage script.
Now I am tracking this person and know how many times they have failed a login.
I would then write this user's data to my failed_login table as described above.
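For illustration, writing that data to the failed_login table could look like this (the connection details are placeholders):

<?php
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare(
    'INSERT INTO failed_login (`timestamp`, ip_address, user_agent)
     VALUES (NOW(), :ip, :ua)'
);
$stmt->execute(array(
    ':ip' => $_SERVER['REMOTE_ADDR'],
    ':ua' => $_SERVER['HTTP_USER_AGENT'] ?? '',
));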
I'm sure this isn't the answer you were looking for, but it really is the best way to track users on your site.
