Does 'php://input' include the client's device ID information? - php

When I print $data = file_get_contents('php://input'), it prints lots of messy code. I want to know, from the PHP server, which Android device has uploaded a certain file to the server. Thank you so much!

That isn't messy code; it's details about the request that you don't yet understand.
Go learn how the HTTP protocol works; you can't tackle this problem if you don't understand what you're working with.
You can only find out what a user agent chooses to provide. If the client is not sending an HTTP request header containing a device identifier or version number, then no, you will not get that.
Typically the OS is included in the User-Agent string. That is likely to be the only place you'll find anything close to what you're looking for.
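For what it's worth, here is a minimal sketch of reading that header in PHP (the string is entirely client-controlled, so this is informational only, never authentication):
<?php
// The User-Agent header, if the client sent one at all (PHP 7+ null coalescing).
$userAgent = $_SERVER['HTTP_USER_AGENT'] ?? 'unknown';

// Crude, illustrative sniffing: trivially spoofed by any client.
if (stripos($userAgent, 'Android') !== false) {
    error_log("Android user agent: $userAgent");
}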

If you want to identify the specific user, the best you can do (by design of many of the systems between you and the device) is to implement "sessions" using HTTP cookies. If the requests are coming from a browser, then once you set a cookie, every subsequent request from that individual user will carry the same cookie.
The general pattern is: look for an existing cookie; if there isn't one (i.e. a new user), create a unique random string, store it somewhere on the server associated with this "user", and then send it to the user as a cookie.
The PHP session features can greatly assist you in this.
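A minimal sketch of that pattern using PHP's built-in sessions (the visitor_id key is just an illustrative name):
<?php
session_start(); // reads or sets the session cookie for us

// First visit: mint a random identifier and remember it server-side.
if (!isset($_SESSION['visitor_id'])) {
    $_SESSION['visitor_id'] = bin2hex(random_bytes(16));
}

// Every later request from this browser carries the same cookie,
// so we get the same identifier back.
$visitor = $_SESSION['visitor_id'];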
Keep in mind you can only track a user in this anonymous way. You cannot get an actual device ID from the device unless you also wrote the app on the device that is sending the request. And that is a good thing.

Related

What should I do in my webpage so that only a user from a web browser can request a page and post some data, and not a user using Fiddler?

I recently created two pages, front-end.php and back-end.php.
front-end.php posts some data to back-end.php on mouse click (I am currently using Ajax for this).
Now if I use Fiddler, I am also able to post data to back-end.php. I do not want this to happen. What should I do?
I searched the Internet for an answer and came across the phrase 'setting the User-Agent', but no clear solution was given.
As for what I actually want: I do not want some bot or other automated program to get data from some source and post it to my back-end.php. I want to ensure that a user comes to my webpage and then posts the data.
User-Agent is a header that your browser sends to the web server with each request, identifying itself.
Fiddler sends "*" or "Fiddler" as the user agent, so you could ignore requests carrying those values. However, this is far from an optimal solution to your problem, because one can simply spoof the User-Agent header by sending whatever one likes.
Another non-secure option would be to check the Referer header, ignoring all requests except those coming from "front-end.php". Keep in mind that this, too, can be spoofed by the user.
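To illustrate, a sketch of both checks in PHP; as noted, both headers are client-supplied, so treat this as a speed bump, not security:
<?php
$agent   = $_SERVER['HTTP_USER_AGENT'] ?? '';
$referer = $_SERVER['HTTP_REFERER']    ?? '';

// Reject requests that identify themselves as Fiddler...
if ($agent === '*' || stripos($agent, 'fiddler') !== false) {
    http_response_code(403);
    exit;
}

// ...and requests that did not arrive from front-end.php.
// Both headers are trivially spoofed.
if (strpos($referer, 'front-end.php') === false) {
    http_response_code(403);
    exit;
}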
You should also keep in mind that since a user can send data to the web server using her browser, there is nothing stopping her from sending data or making requests any other way.
In general, web developers should respect the user's freedom and not force such tactics, so please be more specific and tell us what the real problem is that you want to solve; a more elegant/secure solution may exist.
EDIT: If you do not want crawlers to index some or all of your pages, you should add them to your robots.txt file.
Regarding all other automations/programs, I'm afraid there is no perfect way to tell whether a request was made by a human being or a robot. I would do two things: a) make sure to add validation rules to my backend, and b) as a last resort, implement a CAPTCHA test.
I would use a CAPTCHA only if absolutely necessary, because it irritates most users and makes their lives difficult.
You should add a hash of some internal secret and the value you want to send. Since you are the only one who knows how to compute the hash, someone using Fiddler cannot forge it.
For instance, you make a hash of "asdflj8######GJlk" concatenated with whatever the value in your form is. Now the hacker cannot change the form value without invalidating the hash. The remaining problem is that the same value (with the same hash) can be posted from another place. To stop this, make sure every hash can be used only once. The only thing a hacker can do now is post your request from Fiddler instead of from your site, but not a second time. As a final step, you can add a time limitation.
So what you need is a hash with:
a secret
a method to make the hash single-use
a method to make the hash time-limited
Add this as a field. The specific implementation is left as an exercise ;) (though a sketch follows below)
I would not use the User-Agent header; it can be easily faked.
(These are the same methods that payment providers use to ensure the data, e.g. the amount to be paid, is not tampered with.)
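A rough sketch of such a scheme in PHP; the secret below is the example string from above, and consumeNonce() is a hypothetical helper standing in for whatever single-use bookkeeping (database, cache) you choose:
<?php
const SECRET = 'asdflj8######GJlk'; // server-side only, never shipped to the client

// When rendering the form: sign the value together with a timestamp.
function signValue(string $value): array
{
    $ts  = time();
    $mac = hash_hmac('sha256', $value . '|' . $ts, SECRET);
    return ['ts' => $ts, 'mac' => $mac]; // emit these as hidden fields
}

// When the form comes back: recompute, compare, and burn the hash.
function verifyValue(string $value, int $ts, string $mac): bool
{
    if (time() - $ts > 300) {            // time-limited: 5 minutes
        return false;
    }
    $expected = hash_hmac('sha256', $value . '|' . $ts, SECRET);
    if (!hash_equals($expected, $mac)) { // constant-time comparison
        return false;
    }
    return consumeNonce($mac); // hypothetical helper: true only the first time it sees $mac
}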
The shortest answer is that anything your browser can do, Fiddler can do. It can send any header it wants using any value it wants.
If your goal is to be able to pass some values from one page to another, without ANYONE changing them (either the browser or Fiddler) you use a Message Authentication Code (a signed hash of the data).
ASP.NET builds this feature in for their "ViewState" data; see http://msdn.microsoft.com/en-us/library/system.web.configuration.pagessection.enableviewstatemac(v=vs.110).aspx
However, that precludes the client (e.g. your JavaScript) from changing the values at all; if JavaScript can change the values, it means that it has the key, and if it has the key, so does Fiddler.

Verify in a PHP server that a request was made from a specific client

I'm writing a mobile game where the user sends his highscore to a PHP server.
I want to verify in the server that the HTTP request comes only from the mobile devices. I want to refuse calls that a malicious user may send via curl or other HTTP clients with a fake score.
What is the standard, usual way of doing this?
I thought that I could encrypt the HTTP message in the mobile client, but then I would need to release the binary with the encryption key, which could be retrieved if decompiled.
Thank you.
Take a look at this:
https://github.com/serbanghita/Mobile-Detect
It is pretty accurate, however it won't stop clients who are faking their User-Agent.
Generally, though, the best way is not to let the client make any decisions.
Take a game like Eve Online for example. Every action you make is sent as a user action to the server, the server then validates the action and makes the appropriate decision.
If the server relied on the client to decide how much damage a ship is doing, the game would be subject to no end of trainer hacks.
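Translated to the high-score problem, a sketch of that server-authoritative idea in PHP (field names and the scoring rule are purely illustrative):
<?php
// The client reports raw gameplay facts, never a finished score.
$level   = (int)($_POST['level']   ?? 0);
$seconds = (int)($_POST['seconds'] ?? 0);

// The server's own rules decide what those facts are worth.
// This bound is illustrative; a real game would replay or
// sanity-check the reported events against its rules.
function scoreFor(int $level, int $seconds): int
{
    return $level * 1000 + min($seconds, 3600);
}

$score = scoreFor($level, $seconds);
// ...persist $score for the authenticated player...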
You can use JavaScript to fetch information about the user's client, such as Browser CodeName, Browser Name, Browser Version, Platform, User-agent header, User-agent language, and so on.
There are probably viable libraries out there (maybe something like the Mobile-Detect library Flosculu mentioned) that can aid you specifically with mobile detection, but you must understand that all of that information can be manipulated anyway. It mainly depends on HOW you transfer the data, so maybe you shouldn't over-think this and instead focus on safe data-transfer methods.
A quick search pointed to some mobile detection scripts, though.
But again: you can't rely on anything like this. Still, if it's not too time-consuming, then by all means add another validation step to the script if it makes you feel better :)

Can GET Requests Contain Hidden Data or Parameters?

For my school, we have to do these "Advisory Lessons" that tell you about college, etc. Having completed the lesson, I am wondering whether I could replicate the same process using a set of requests from a PHP script with cURL.
I went through the lesson again, this time with Firebug on and an HTTP Analyzer.
Much to my surprise, only GET requests were sent out during the entire lesson.
In case you're curious, here is what the "Lesson" window looks like: it's a sort of PowerPoint-type thing where you read a slide, and some slides have questions on them. At the end there is a quiz, and if you don't pass it, the lesson doesn't count.
My question is this: If I were to setup a PHP/cURL script that logged into my account, and then made every single one of those requests, would the lesson be counted as complete?
Now obviously it's impossible for you guys to know how their server works and such...
I guess what I am saying is: is there any hidden content, or are there fields, that can be passed through a GET request? It just doesn't seem like the lesson window passes enough info to the server for it to know whether the lesson was completed or not.
Thanks so much for any advice and tips on my project!
EDIT: Here is my official test run (please don't do it too many times):
As many of you hinted, it did not work... but I am still not completely sure why.
As you say, we can't speak to the details of their server, but it is possible to do these kinds of things with GET requests only, because servers can use cookies and store state (associated with those cookies) on the server side.
This probably gives the appearance of passing extra hidden information to the server.
You can research cookies, and even that jsessionid thing appearing in their URLs, which, by the way, tips you off that they are using at least some Java. :)
The lesson application may very well be storing data in a session or some other persistent data store server-side, using a token from your browser (usually a cookie or a GET parameter) to look up that data when needed.
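To make that concrete, here is a PHP sketch of how a server can appear to "remember" progress across plain GET requests (their server presumably does the Java equivalent, per the jsessionid hint above; the slide count is illustrative):
<?php
session_start();

// Each GET request only says which slide was viewed...
$slide = (int)($_GET['slide'] ?? 0);

// ...but the server quietly accumulates state against the session cookie.
$_SESSION['furthest_slide'] = max($_SESSION['furthest_slide'] ?? 0, $slide);

// The "lesson complete" decision never leaves the server.
$complete = ($_SESSION['furthest_slide'] >= 20);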
It's a rather complicated task. With cURL alone you can't emulate the execution of JavaScript code, AJAX requests, etc.
I am not sure what you are trying to do. For one thing, HTTP is a stateless protocol, meaning the server gets a request and gives a response to that particular request (which might be GET, POST or whatever, and might carry some request parameters).
Statefulness is usually achieved by the server creating a session and setting a cookie on the client side, so that a session ID is passed along in later requests. The session ID is used to recognize the client and track his session. Everything you send in a request is plain text, and the response you get, which will most likely depend on the session state, is plain text as well.
There is nothing hidden on the client side about the client side. You just don't get to know what information the server keeps in its session, nor how requests are processed based on that and on the information you provide.

Securing JSONP?

I have a script that uses JSONP to make cross-domain Ajax calls. This works great, but my question is: is there a way to prevent other sites from accessing and getting data from these URLs? I would basically like to keep a list of allowed sites and only return data if the caller is on that list. I am using PHP and figure I might be able to use "HTTP_REFERER", but I have read that some browsers will not send this info... ??? Any ideas?
Thanks!
There really is no effective solution. If your JSON is accessible through the browser, then it is equally accessible to other sites. To the web server, a request originating from a browser and one from another server are virtually indistinguishable aside from the headers. As ILMV commented, referrers (and other headers) can be falsified; they are, after all, self-reported.
Security is never perfect. A sufficiently determined person can overcome any security measure in place, but the goal of security is to create a deterrent high enough that most people are dissuaded from putting in the time and resources necessary to compromise it.
With that thought in mind, you can create a barrier to entry high enough that other sites would probably not bother making requests. You can generate single-use tokens that are required to grab the JSON data; once a token is used, it is invalidated. To retrieve a token, the web page must be requested, with a token embedded within the page's JavaScript that is then put into the Ajax call for the JSON data. Combine this with time-expiring tokens and sufficient obfuscation in the JavaScript, and you've created a high enough barrier.
Just remember, this isn't impossible to circumvent. Another website could extract the token from the JavaScript, or intercept the Ajax call and hijack the data at any of several points.
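A bare-bones sketch of the single-use, time-expiring token idea in PHP, using APCu purely as an illustrative store:
<?php
// Issued while rendering the page that embeds the JSONP call.
function issueToken(): string
{
    $token = bin2hex(random_bytes(16));
    apcu_store("jsonp_token_$token", true, 60); // expires in 60 seconds
    return $token;
}

// Checked, and burned, by the JSONP endpoint.
function consumeToken(string $token): bool
{
    // apcu_delete() returns true only if the key still existed,
    // which makes the token single-use.
    return apcu_delete("jsonp_token_$token");
}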
Do you have access to the servers/sites that you would like to give access to the JSONP?
What you could do, although it is not ideal, is to record in a database, on page load, the IP that is allowed to view the JSONP; then, when the JSONP loads, check whether that record exists. Perhaps put an expiry on the record if appropriate (a sketch follows below).
e.g.
http://mysite.com/some_page/ - user loads page, add their IP to the database of allowed users
http://anothersite.com/anotherpage - as above, add to database
load the JSONP, and check that the IP exists in the database.
After one hour, delete the record from the db, so that another page load would be required, for example.
Although this could quite easily be worked around if the scraper (or other sites) figured out what method you are using to allow users to view the JSONP; they'd only have to hit the page first.
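Roughly, with PDO and a hypothetical allowed_ips table (REPLACE INTO is MySQL-flavored; adjust for your database):
<?php
/** @var PDO $db  (an allowed_ips table with ip + expires_at columns is assumed) */

// On page load: remember this visitor's IP for an hour.
$db->prepare('REPLACE INTO allowed_ips (ip, expires_at) VALUES (?, ?)')
   ->execute([$_SERVER['REMOTE_ADDR'], time() + 3600]);

// On the JSONP request: answer only known, unexpired IPs.
$stmt = $db->prepare('SELECT 1 FROM allowed_ips WHERE ip = ? AND expires_at > ?');
$stmt->execute([$_SERVER['REMOTE_ADDR'], time()]);
if (!$stmt->fetchColumn()) {
    http_response_code(403);
    exit;
}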
How about using a cookie that holds a token used with every jsonp request?
Depending on the setup you can also use a variable if you don't want to use cookies.
Working with importScripts from a Web Worker is much the same as JSONP.
Do a double check, as theAlexPoon said: main script to web worker, web worker to server, and back, with a security query at each hop. If the web worker answers the main script without being asked, or with the wrong token, it's better to forward your website to nirvana. If the server is asked with the wrong token, don't answer. Cookies will not be sent with an importScripts request, because document is not available at the web worker level; always send security-relevant cookies with a POST request.
But there are still a lot of risks; the man in the middle knows how.
I'm fairly certain you can do this with htaccess:
Ensure clients are sending the Referer header ("HTTP_REFERER"); I don't know of any browser that won't send it. (If you're still worried, fall back gracefully.)
Then use htaccess to allow or deny access based on the referer.
# deny all except requests whose Referer matches our domain
SetEnvIf Referer "(www\.)?domain\.com" allowed_referer
Order Deny,Allow
Deny from all
Allow from env=allowed_referer

How do I restrict JSON access?

I have a web application that pulls data from my newly created JSON API.
My static HTML pages call the JSON API dynamically via JavaScript.
How do I restrict access to my JSON API so that only I (my website) can call from it?
In case it helps, my API is something like: http://example.com/json/?var1=x&var2=y&var3=z... which generates the appropriate JSON based on the query.
I'm using PHP to generate my JSON results ... can restricting access to the JSON API be as simple as checking the $_SERVER['HTTP_REFERER'] to ensure that the API is only being called from my domain and not a remote user?
I think you might be misunderstanding the part where the JSON request is initiated from the user's browser rather than from your own server. The static HTML page is delivered to the user's browser, then it turns around and executes the Javascript code on the page. This code opens a new connection back to your server to obtain the JSON data. From your PHP script's point of view, the JSON request comes from somewhere in the outside world.
Given the above mechanism, there isn't much you can do to prevent anybody from calling the JSON API outside the context of your HTML page.
The usual method of restricting access to your domain is to prepend the content with something that runs forever.
For example:
while(1);{"json": "here"} // google uses this method
for (;;);{"json": "here"} // facebook uses this method
So when you fetch this via XMLHttpRequest, or any other method restricted solely to your domain, you know you need to strip out the infinite-loop prefix before parsing. But if it is fetched via a script node:
<script src="http://some.server/secret_api?..."></script>
It will fail because the script will never get beyond the first statement.
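On the PHP side that prefix is a one-liner; a sketch:
<?php
header('Content-Type: application/json');

// The prefix makes a cross-site <script src=...> include hang forever,
// while our own same-origin XMLHttpRequest code strips it before parsing.
echo 'while(1);' . json_encode(['json' => 'here']);
Your own page, having fetched the response via a same-origin XMLHttpRequest, simply drops the while(1); prefix before handing the rest to JSON.parse.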
In my opinion, you can't restrict access, only make it harder; it's a bit like access restriction by obscurity. Referrers can be easily forged, and even with a short-lived key a script can keep getting responses by constantly refreshing the key.
So, what can we do?
Identify the weakness here:
http://www.example.com/json/getUserInfo.php?id=443
An attacker can now easily request all user info from 1 to 1,000,000 in a loop. The weak point of auto_increment IDs is their linearity and the fact that they're easy to guess.
Solution: use non-numeric unique identifiers for your data.
http://www.example.com/json/getUserInfo.php?userid=XijjP4ow
You can't loop over those. True, you can still parse the HTML pages for all kinds of keys, but that type of attack is a different (and more easily avoided) problem.
Downside: of course you can't use this method to restrict queries that aren't key-dependent, e.g. search.
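Generating such identifiers in PHP is cheap; a sketch (the function and column names are illustrative):
<?php
// 6 random bytes -> 8 URL-safe characters: 64^8 (about 2.8e14)
// possible identifiers, far too sparse to enumerate in a loop.
function publicId(): string
{
    return strtr(base64_encode(random_bytes(6)), '+/', '-_');
}

// Store it next to the auto_increment key and expose only this one:
//   SELECT * FROM users WHERE public_id = ?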
Any solution here is going to be imperfect if the static pages that use your API need to be on the public Internet. Since the client's browser must be able to send the request and have it honored, it's possible for just about anyone to see exactly how you are forming that URL.
You can have the app behind your API check the http referrer, but that is easy to fake if somebody wants to.
If it's not a requirement for the pages to be static, you could try something where a short-lived "key" is generated by the API and included in the HTML response of the first page, then passed along as a parameter back to the API. This would add overhead to your API, though, as the server on that end would have to maintain a list of valid "keys", how long they are valid for, and so on.
So you can take some steps which won't cost a lot but are easy to get around if someone really wants to, or you can spend more time making it a tiny bit harder; but there is no perfect way to do this if your API has to be publicly accessible.
The short answer is: anyone who can access the pages of your website will also be able to access your API.
You can attempt to make using your API more difficult by encrypting it in various ways, but since you'll have to include JavaScript code for decrypting the output of your API, you're just going to be setting yourself up for an arms race with anyone who decides they want to use your API through other means. Even if you use short-lived keys, a determined "attacker" could always just scrape your HTML (along with the current key) just before using the API.
If all you want to do is prevent other websites from using your API on their web pages then you could use Referrer headers but keep in mind that not all browsers send Referrers (and some proxies strip them too!). This means you'd want to allow all requests missing a referrer, and this would only give you partial protection. Also, Referrers can be easily forged, so if some other website really wants to use your API they can always just spoof a browser and access your API from their servers.
Are you using, or can you use, cookie-based authentication? My experience is with ASP.NET forms authentication, but the same approach should be viable in PHP with a little code.
The basic idea is that when the user authenticates through the web app, a cookie holding an encrypted value is returned to the client browser. The JSON API would then use that cookie to validate the identity of the caller.
This approach obviously requires the use of cookies, so that may or may not be a problem for you.
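In PHP, the equivalent check might look like this sketch (PHP's session cookie plays the role of the encrypted forms-auth cookie; fetchDataFor() is a hypothetical helper):
<?php
// The JSON endpoint refuses to answer unless the request carries the
// session cookie that the web app set when the user logged in.
session_start();

if (empty($_SESSION['user_id'])) {
    http_response_code(401);
    exit;
}

header('Content-Type: application/json');
echo json_encode(fetchDataFor($_SESSION['user_id'])); // hypothetical helper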
Sorry, maybe I'm wrong, but... could it be done using HTTPS?
You could (?) have your API accessible via https://example.com/json/?var1=x&var2=y, so that only an authenticated consumer could get your data...
Sorry, there's no DRM on the web :-)
You cannot treat HTML as a trusted client. It's plain-text script, interpreted on other people's computers as they see fit. Whatever you allow your "own" JavaScript code to do, you allow anyone to do. You can't even define how long it stays "yours", with Greasemonkey and Firebug in the wild.
You must duplicate all access control and business logic restrictions in the server as if none of it were present in your JavaScript client.
Include the service in your SSO, restrict the URLs each user has access to, and design the service with wget as the client in mind, not your well-behaved JavaScript code.
