How are $_SERVER variables sent from the browser to the server in PHP?

I'm writing an app using PHP and have been looking into security issues. I'd like to know how the following code grabs browser information and how it is passed from the browser to the server:
$_SERVER['HTTP_USER_AGENT']
$_SERVER['REMOTE_ADDR']
gethostbyaddr($_SERVER['REMOTE_ADDR'])
Is this information encrypted when it's passed from the client PC to the server? Would it be easy for a hacker to steal this data?

Browser -> Apache -> PHP
Spoofing or faking $_SERVER variables other than the HTTP_* ones is difficult, because there are handshakes between Apache and the browser before any data is exchanged, so someone who spoofs those variables will not receive any response. For example, if someone tries to spoof REMOTE_ADDR, the response goes to the forged address, so the request will most likely never complete.
On the other hand, all the variables that start with HTTP_ are easy to spoof: they are passed to PHP exactly as Apache received them from the browser. So, for example, a user can write a cURL script with a custom User-Agent (HTTP_USER_AGENT) and your server will take it at face value.
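A minimal sketch of that kind of spoofing with PHP's cURL extension (the target URL is a placeholder):

// Any client can claim to be any browser: the User-Agent is just a
// request header. http://target.example/ is a placeholder URL.
$ch = curl_init('http://target.example/');
curl_setopt($ch, CURLOPT_USERAGENT, 'TotallyLegitBrowser/1.0');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
// The target script will now see
// $_SERVER['HTTP_USER_AGENT'] === 'TotallyLegitBrowser/1.0'.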

The $_SERVER superglobal is populated by the web server, not by PHP itself, but some of its entries are taken straight from the HTTP request: every key with the HTTP_ prefix is generated from a client-supplied request header, while REMOTE_ADDR is the address of the TCP connection, which is not arbitrary but also comes with no guarantee.
HTTP_USER_AGENT is plain text in the request header and easy to modify.
REMOTE_ADDR comes from the TCP-level IP address, so faking it requires special equipment or specific software.
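As an illustration of that mapping (the header name here is made up): a request carrying the header X-Custom-Thing: foo is exposed to PHP as

// The web server upper-cases the header name, turns hyphens into
// underscores, and adds the HTTP_ prefix:
echo $_SERVER['HTTP_X_CUSTOM_THING']; // prints "foo"
// This is exactly why every HTTP_* entry is client-controlled
// and must be treated as untrusted input.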

Essentially, the PHP script gets these variables from the web server. The manual page has a list of the variable names and their descriptions.
So, to answer your question briefly: they come from the web server you are using.
If someone were to try to fake one of them, like $_SERVER['REMOTE_ADDR'], there is information on how it can be done here, though I've never looked into it.
Hope this helps in some way :)

Related

How to use cookies on called remote page?

OK, I guess this question may be similar to others of the "remote cookies" kind, but I'm not sure the other answers I've read apply to my case, so here we go.
I have two applications, a client and a server. The server "has" (I know they're actually stored client-side) a cookie and a page which uses it to print out a computed data based on the cookie.
If I access the server page directly, the cookie is taken into account and the data is output correctly.
If I call the same server page from the client via a file_get_contents() the cookie on the server page doesn't get read, and I get an answer computed with an empty cookie.
How to make the server read its own cookies when answering a similar request? Is cURL the only option?
You need to:
Make a request that gets a Set-Cookie header in the response (assuming the cookies are HTTP cookies and not JS cookies)
Store the cookies
Include the cookies in the HTTP request to the page that displays them
cURL is probably the sanest way to act as an HTTP client in PHP when you need to pay attention to the headers. Another question gives some guidance on how to go about doing that.
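A minimal sketch of that flow with PHP's cURL extension (the URLs and the cookie-jar path are placeholders):

// Placeholder URLs and cookie-jar path; adjust for your setup.
$jar = '/tmp/cookies.txt';

// 1) First request: any Set-Cookie headers in the response are
//    written to the jar when the handle is closed.
$ch = curl_init('http://server.example/page-that-sets-cookie.php');
curl_setopt($ch, CURLOPT_COOKIEJAR, $jar);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);

// 2) Second request: send the stored cookies back to the page
//    that computes its answer from them.
$ch = curl_init('http://server.example/page-that-reads-cookie.php');
curl_setopt($ch, CURLOPT_COOKIEFILE, $jar);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($ch);
curl_close($ch);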
Note that there is no way to send the cookies that the browser accessing your PHP script would send to the remote server. They are a secret shared between that browser and that server, and will not be shared with your server.

Get soap server "referer" (in php)

How can I find out from what website the SOAP server was called? I already tried the $_SERVER variable, but no information about the source is saved there.
My SOAP server is accessed from various websites (e.g. site1.com, site2.com, etc.). How can I tell, inside the SOAP server class, whether the call came from site1.com or site2.com?
Thank you.
You can't.
You could get the IP address of the machine that the request came from, but multiple domains could resolve to it.
If you care about "who" is making the request, then you need to hand out identifying credentials and require they be included in the request.
You might try overriding SoapServer::handle() and looking at the raw SOAP request. Though, as @Quentin says, YMMV.
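A minimal sketch of that override (the class name is illustrative; this just captures the raw request body before dispatching it):

// Illustrative subclass: grab the raw POST body so you can inspect
// or log it, then let SoapServer do the normal dispatch.
class InspectingSoapServer extends SoapServer
{
    public function handle($request = null): void
    {
        if ($request === null) {
            // The raw SOAP envelope exactly as the client sent it.
            $request = file_get_contents('php://input');
        }
        error_log('Raw SOAP request: ' . $request);
        parent::handle($request);
    }
}

Note that the envelope will only tell you who the caller claims to be; as the answer above says, real identification needs credentials.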

jQuery getJSON() - What server is called?

When using PHP I can use file_get_contents or cURL to get a URL.
jQuery runs on the client
In jQuery there is a function called jQuery.getJSON(). JavaScript runs on the client. What server is used for the download of the JSON code of the external URL? What information does the called URL know about? Does it know the domain? The IP of the client user? It's a client language.
Preferred for many requests
To make many requests, is it safer to do this with JavaScript than PHP, because it runs on every client instead of one server point?
What server is used for the download of the JSON code of the external URL?
The one that the domain name in the URL passed to that function resolves to.
What information does the called URL know about?
It is an HTTP request, like any other. The usual information will be available.
Does it know the domain? The IP of the client user?
Of course.
It's a client language.
… making an HTTP request.
To make many requests, is it safer to do this with JavaScript than PHP, because it runs on every client instead of one server point?
You control the server. You don't control the client. JavaScript can be disabled. It is safer to make the request from your server.
(For a value of "safe" equal to "Less likely to fail assuming the service you are using doesn't impose rate limiting")
Because of the Same Origin Policy, all requests made in JavaScript must go to the domain from which the document was loaded. It's a standard HTTP request, so the server will have the same information it would if a user were just navigating around (including cookies, etc.). From the phrasing of your question it appears you need to make requests to some external site, in which case making those requests from your server, which is not subject to such a security policy, would likely be best.
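If you do go through your own server, a minimal proxy sketch looks like this (the file name and remote URL are placeholders):

// proxy.php -- fetch JSON from the external service server-side and
// relay it to the browser, so the browser only ever talks to your
// own origin. The remote URL is a placeholder.
$json = file_get_contents('http://api.example.com/data.json');
header('Content-Type: application/json');
echo $json;

The page's $.getJSON('proxy.php', ...) call then stays within the Same Origin Policy while the cross-site fetch happens server-side.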
In jQuery there is a function called jQuery.getJSON(). JavaScript runs on the client. What server is used for the download of the JSON code of the external URL? What information does the called URL know about? Does it know the domain? The IP of the client user? It's a client language.
The code that runs your web browser is only on your PC, too, yet it is perfectly capable of retrieving content via the HTTP protocol from a web server, and has done so for several decades.
AJAX requests are no different. jQuery creates an XMLHttpRequest object that performs an HTTP request in a manner uncoupled from the general page context. As far as the server's concerned, it's just an HTTP request like any other.
The text contents of the result you get back happen to be written in JSON format, but the HTTP layer neither knows nor cares about that.

How to ensure the HTTP_REQUEST Is coming from the right place?

I've learned that HTTP_REFERER, or any HTTP request header, can be faked and is not reliable.
REMOTE_ADDR is reliable, though.
So, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
For example, I have JS code that will be sent from the client site to the server (something like a sniper, cross-platform). However, I only want this to happen from several websites, not others, so that even if other people copy the code and put it on their website, it won't work.
In the general case you simply can't do it. You are entirely at the mercy of the client. You can make it more difficult by checking the referrer, but you can't make it impossible to fake.
The only way to do this reliably is to have all those several websites generate unique tokens for every user, similar to how you protect yourself from CSRF attacks. The tokens would then be sent along with the request by your script, and your server would need a way to check the token for authenticity against the other websites. Needless to say, this is very likely impossible unless you control all the sites.
See also this question on HTTP_REFERER
I haven't used this in practice, so there might be practicality issues I'm not counting on, but I thought I'd contribute the idea anyway. If I interpret correctly, this is similar to (if not the same as) the idea @Seldaek posted.
1. Your server generates a unique ID for each page-serve and embeds the ID in the page.
2. The server stores the ID and the client's IP address.
3. The JS on the client places the ID in its request to the server and sends the request.
4. When the server receives the JS request from the client, it only responds if the IP/ID pair matches one that is on file (see #2).
5. After some specified time (and/or when the browser session ends), the ID/IP entries expire.
This could perhaps be faked if a person sharing the visitor's IP address (perhaps both are behind the same NAT box) hijacks another visitor's session in real-time, but it will at least prevent someone from making another web page which piggybacks on your server's service.
There could also be issues if, for some reason, your visitor's IP address changes between when the page was served and when the js request was sent.
Basically, your server is saying "I will not service your js request unless you possess the data from a page I recently served and you are coming from (to the best of my knowledge) the place to which I served that page."
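A minimal sketch of that scheme, using the PHP session as the "on-file" store (a database table would work the same way; the names and lifetime are illustrative):

session_start();

// Steps 1-2: when serving the page, mint an ID and record who got it.
function issue_page_id(): string {
    $id = bin2hex(random_bytes(16));
    $_SESSION['page_ids'][$id] = [
        'ip'      => $_SERVER['REMOTE_ADDR'],
        'expires' => time() + 600, // step 5: entries expire
    ];
    return $id; // embed this in the page for the JS to send back
}

// Step 4: only service the JS request if the ID/IP pair is on file.
function check_page_id(string $id): bool {
    $entry = $_SESSION['page_ids'][$id] ?? null;
    return $entry !== null
        && $entry['ip'] === $_SERVER['REMOTE_ADDR']
        && $entry['expires'] > time();
}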
All HTTP headers can be faked.
If you are just accepting communication from the remote server (and not having a client browser be redirected to your server), then you can either set up a VPN between that remote server and yours, or change your firewall config to only allow communication from a specific set of IP addresses. However, even the latter can be faked by people willing to go that far.
If the client browser is the one either being redirected to your server or loading the file(s) from your server then there is absolutely nothing you can do.
As @Billy says, this simply isn't possible; you're thinking about the internet's request-response mechanism incorrectly.
For example, I have JS code that will be sent from the client site to the server (something like a sniper, cross-platform).
I assume what you're saying is that you have some JavaScript code served up on some website on your "whitelist" which redirects the user to your website. It's on your website that you want to check that the user came from the "whitelisted" site?
Aside from setting a cookie (might not be possible - cross domains) you might find it tough. Have you taken a look at OpenID? If you can post more details a solution may be more obvious.
So, how can I ensure the incoming HTTP_REQUEST call is coming from a website that I white-list?
I think you could sign every request (from the whitelist) so that the signature is valid for that request only (once). I assume using uniqid() for this is safe (enough?).
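A minimal sketch of one-time request signing with an HMAC (the shared secret is a placeholder, and random_bytes() is a stronger nonce source than uniqid(); PHP 7+):

// Shared secret known to your server and each whitelisted site.
$secret = 'replace-with-a-long-random-shared-secret';

// Whitelisted site: build a one-time signature for this request.
$nonce = bin2hex(random_bytes(16)); // stronger than uniqid()
$sig   = hash_hmac('sha256', $nonce, $secret);
// ...send $nonce and $sig along with the request...

// Your server: verify the signature and burn the nonce so it can
// only be used once. $seen stands in for a persistent replay store.
function verify(string $nonce, string $sig, string $secret, array &$seen): bool {
    if (isset($seen[$nonce])) {
        return false; // replay -- each nonce is valid exactly once
    }
    $seen[$nonce] = true;
    return hash_equals(hash_hmac('sha256', $nonce, $secret), $sig);
}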

How to identify a remote machine uniquely in PHP?

How can I identify a remote machine uniquely in a proxy-server environment? I have used $_SERVER['REMOTE_ADDR'], but all machines in the proxy network have the same IP address. Is there any way?
Don't ever depend on information that is coming from the client. In this case, you're running up against simple networking problems (you can never be sure the client's IP address is correct), in other cases the client may spoof information on purpose.
If you need to uniquely identify your clients, hand them a cookie upon their first visit, that's the best you can do.
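A minimal sketch of that approach (the cookie name and lifetime are illustrative; random_bytes() needs PHP 7+):

// Hand each new visitor a random ID cookie on their first visit.
if (!isset($_COOKIE['client_id'])) {
    $id = bin2hex(random_bytes(16));
    setcookie('client_id', $id, time() + 60 * 60 * 24 * 365, '/');
} else {
    $id = $_COOKIE['client_id'];
}
// $id now identifies this browser -- until the user clears cookies.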
Your best bet would be:
$uid = md5($_SERVER['HTTP_USER_AGENT'] . $_SERVER['REMOTE_ADDR']);
However, there's no way to know if they changed their user agent or switched to a different browser.
You could use some other headers to help, like these (ones that come to mind when looking at a dump of $_SERVER):
HTTP_USER_AGENT
HTTP_ACCEPT
HTTP_ACCEPT_LANGUAGE
HTTP_ACCEPT_ENCODING
HTTP_ACCEPT_CHARSET
Using several pieces of information coming from the client will help differentiate between clients (the more information you use, the better the chance that at least one of them differs between two clients)...
... but it will not be a perfect solution :-(
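A minimal sketch of such a broader fingerprint, built from the headers listed above (any of them can be absent or spoofed, hence the fallbacks; the ?? operator needs PHP 7+):

// Combine several client-supplied values into one fingerprint.
$parts = [
    $_SERVER['REMOTE_ADDR']          ?? '',
    $_SERVER['HTTP_USER_AGENT']      ?? '',
    $_SERVER['HTTP_ACCEPT']          ?? '',
    $_SERVER['HTTP_ACCEPT_LANGUAGE'] ?? '',
    $_SERVER['HTTP_ACCEPT_ENCODING'] ?? '',
    $_SERVER['HTTP_ACCEPT_CHARSET']  ?? '',
];
$uid = md5(implode('|', $parts)); // still not a unique identifier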
Depending on the kind of proxy software and its configuration, there might be a header called X-Forwarded-For that you could use:
The X-Forwarded-For (XFF) HTTP header is a de facto standard for identifying the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. This is a non-RFC-standard request header which was introduced by the Squid caching proxy server's developers.
But I wouldn't rely on that either: it will probably not always be present (I don't think it's required).
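A minimal sketch of reading it (remember that any client can forge this header, so treat the result as a hint only):

// Best-effort client IP: prefer X-Forwarded-For when a proxy set it.
$ip = $_SERVER['HTTP_X_FORWARDED_FOR'] ?? $_SERVER['REMOTE_ADDR'];
// The header may carry a comma-separated chain of proxies; the first
// entry is (supposedly) the original client.
$ip = trim(explode(',', $ip)[0]);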
Good luck !
I do not think there are other ways to do what you want. This is because the proxy server proxies the clients' requests and acts on their behalf. So, the clients are virtually hidden from the server's point of view. However, I may be wrong.
If you are aware of the proxy server, I think that implies this is some kind of company LAN. Are you in control of the LAN? Perhaps building and installing some ActiveX plugin which sends a machine-unique ID to the server might be the solution.
In general, HTTP proxy servers are not required to send the IP of their client, so every request sent by a proxy looks like it came from the proxy's IP. (Although Wikipedia has some mention of custom headers that some proxies send to forward the client's IP.)
It gets even worse when an HTTP proxy is itself using another HTTP proxy - the server getting the request will only get the IP of the last proxy in the chain, and there's no guarantee that the 2nd proxy is even aware that the 1st proxy wasn't a regular client!
There is currently no way of doing this, as you don't get information about the MAC address; and even that could be wrong, for example if there are two network cards (a wired one and a wireless one).
The best thing to do is to get JavaScript to write to and read from local storage, and send that saved setting back to your server with an Ajax call. This still isn't perfect: if they clear their cache, the setting is lost.
JKS,
Remote machines do not have unique identifiers. This is impossible.
Usually developers want to track machines for security reasons, for example when the end-user visits a page with a form like a login.
Here is what I do: I store a cookie, a session variable, and use the new HTML5 localStorage to track folks on my sensitive pages. This is really the only way to do this accurately. The nice thing about localStorage (when browsers support it) is that the end-user typically has no idea you are storing stuff on their machine, and deleting cookies has no effect on it.
So you might make a database table with tracking details like:
timestamp, ip_address, user_agent
Then, let's say you are tracking failed login attempts. I would do this:
session_start(); // the counter lives in the session

if (isset($_SESSION['failed_logins'])) {
    // Another failure: bump the per-session counter.
    $_SESSION['failed_logins'] += 1;
} else {
    // First failed attempt for this session.
    $_SESSION['failed_logins'] = 1;
}
I would then do the same with setcookie() and then the localStorage script.
Now I am tracking this person and know how many times they have failed a login.
I would then write this user's data to my failed_login table as described above.
I'm sure this isn't the answer you were looking for, but it really is the best way to track users on your site.
