In PHP, when you process GET/POST requests, you have to validate them. What should you do if a parameter is missing or bad? And if it looks like some kind of hack attempt, what then? Just die() with "dontHackMe"?
No, don't just die() with "dontHackMe". I see two outcomes for that:
If the user isn't malicious and it was a bug or an honest problem, you've offered the user no recourse for help.
If the user is malicious, it's a fairly amateurish response that may even provoke an actual attacker to look for more security holes.
HTTP defines a list of status codes for responses. Simply choose the correct one and respond with that code, along with a default response page of some kind.
For example, if the request is malformed or incorrect in some way, then the 400 (Bad Request) response code is the way to go. The actual page in the response should indicate that it was a bad request and perhaps even offer an option for the user to seek help if they need it (such as a link to a help section on the site or a link to a contact form).
The reason for this response structure is two-fold:
By offering a helpful page, you create a human-readable form of output which people can see and appreciate, making your application that much more user-friendly.
By returning the correct status codes in your responses (all of your responses, not just rejected potential hacking attempts), you create a machine-readable interface that automated clients can use to more effectively interact with your application. (A 400 response, for example, tells a non-malicious automated client that it shouldn't even bother making that request again... The request was received and processed and was found to be bad.)
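To make that concrete, here is a minimal PHP sketch of such a response; the parameter name and the /contact link are placeholders for illustration, not anything from the original question:

<?php
// Minimal sketch: reject a malformed request with a proper 400 status and a
// human-readable page. The "user_id" parameter is an invented example.
if (!isset($_GET['user_id']) || !ctype_digit($_GET['user_id'])) {
    http_response_code(400); // 400 Bad Request
    header('Content-Type: text/html; charset=utf-8');
    echo '<h1>Bad Request</h1>';
    echo '<p>Something was wrong with your request. If you believe this is an error, ';
    echo 'please <a href="/contact">contact us</a>.</p>';
    exit;
}
// ...normal processing continues here...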
Edit: To clarify... This is for responding to genuinely bad requests. If the data being submitted in a form is simply incorrect (doesn't meet business rules, uses a letter where a number was intended, etc.) then Michael Hampton offers a perfectly sound suggestion. Essentially the server would "play dumb" and just re-display the form.
Don't give a potential attacker any more information than they already have. (Keeping in mind that a bad error message is more information than they already have.) The application would simply be saying, "Hmm... You tried to submit this form, but it's wrong. Here, try again."
Don't send an HTTP 4xx response unless something was wrong with the HTTP headers themselves.
If you receive invalid input from the client, redisplay the form with the invalid fields highlighted as being invalid.
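As a rough sketch of that approach (the field name, CSS class and markup are all invented for illustration), the handler validates and, on failure, simply renders the form again with the offending field flagged:

<?php
// Sketch only: validate, then redisplay the form with invalid fields
// highlighted instead of returning an error page.
$errors = [];
$email  = trim($_POST['email'] ?? '');
if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
    $errors['email'] = true;
}
if ($errors) {
    $class = isset($errors['email']) ? 'invalid' : '';
    echo '<form method="post" action="">';
    echo '<input type="email" name="email" class="' . $class . '" value="'
        . htmlspecialchars($email, ENT_QUOTES) . '">';
    echo '<button type="submit">Submit</button>';
    echo '</form>';
    exit;
}
// ...process the valid submission...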
I'm creating a small application for double opt-in purposes. The user has to click on a link in an email, but this sends an HTTP GET request to my REST API. Logically, though, requesting a REST API with GET should result in getting data rather than setting data.
I have tried to use a
<form method="post" action="x.x.x.x/api/optin/double/"></form>
element to set the method to POST, and then creating an input element:
<input name="method" value="put" style="display:none">
to"set the method by a parameter.
But this does not seem to be the right solution.
I could create a file ("accepteddoubleoptin.php") for that purpose, but I'm not sure if that is the right solution. Or am I totally misunderstanding the purpose of REST?
There's no practical way to have a link in an email result in a POST request. The best you could do is send them to a page that displays a button they must click to generate the POST request, but it's debatable whether you would want that flow for the user as opposed to a single click in their email.
The request is basically idempotent, i.e. even if the user clicks multiple times, it still results in them simply being in the opted-in state, so no state is repeatedly modified (as opposed to a new post being generated every time you POST to /blog/posts, for example).
In conclusion, it's alright, just use the GET request.
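To make the idempotency point concrete, a rough sketch of what the GET handler might do is shown below; the table, column and token parameter names are invented, and a real handler would obviously need error handling:

<?php
// Sketch: an idempotent opt-in handler behind the emailed GET link.
// Whether this is the first click or the tenth, the end state is the same.
$token = $_GET['token'] ?? '';
$pdo   = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'dbuser', 'dbpass');
$stmt  = $pdo->prepare('UPDATE subscribers SET opted_in = 1 WHERE optin_token = ?');
$stmt->execute([$token]);
echo 'Your subscription has been confirmed.';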
GET is a perfectly acceptable way to accomplish what you're trying to accomplish.
It's not a bad instinct to want to use your GET/POST/PATCH/DELETE verbs in their most literal sense, but this is one case where the technology (i.e., modern email clients) all but makes the decision for you. And there's nothing wrong with that.
See this short post from Campaign Monitor explaining what it looks like when you try to generate a POST request in an email. In short, the user's email client gets weirded out at best.
In fact, if you take a look at account verification or password reset emails from any popular web service (even StackOverflow, for example), you'll find you're in good company in that they use links with query strings to pass tokens or account identifiers in order to drop the user into the right workflow on their sites.
If you're still uncomfortable with the idea of "setting" a value via GET, you might think of it more like your user is clicking their link to "get" the appropriate form through which they ultimately "set" their preference.
Is it correct to use “GET” as the method for changing data in a REST API?
The important thing to understand about the standard meaning of GET is that it has safe semantics. Here is how Fielding described the matter in 2002.
HTTP does not attempt to require the results of a GET to be safe. What it does is require that the semantics of the operation be safe, and therefore it is a fault of the implementation, not the interface or the user of that interface, if anything happens as a result that causes loss of property (money, BTW, is considered property for the sake of this definition).
Because the semantics of the request are supposed to be safe, the email client is allowed to send a request before the recipient clicks on the link! For instance, the client might pro-actively load the response into a cache so that the latency for the human being is reduced if the link does get clicked.
For a use case like "Opt In", you really need to be thinking about what liabilities you incur if that link is fetched without the explicit consent of the human being.
The right way to do it would be to use an unsafe request (like POST).
However, implementing Opt In the "right" way may have a significant impact on your acceptance rate; the business might prefer to accept the liability rather than losing the opportunities.
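If you do decide to go the POST route, the usual pattern is for the emailed link to GET a small confirmation page whose only job is to render a button that POSTs the token to the API. A hedged sketch follows; the endpoint path and the token parameter are assumptions based on the question:

<?php
// confirm.php - sketch of a GET landing page that turns the click into a POST.
$token = htmlspecialchars($_GET['token'] ?? '', ENT_QUOTES);
?>
<form method="post" action="/api/optin/double">
    <input type="hidden" name="token" value="<?= $token ?>">
    <button type="submit">Yes, confirm my subscription</button>
</form>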
I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMPP setup (localhost). I now want to venture into using a VPS, learning to set it up, and eventually opening up a new project for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger and go to "Network", I can see the data being sent, and also which backend PHP file it is being sent to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S). That is a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time from anywhere, HTTPS encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools… your concerns are entirely misdirected. There are many simpler ways to send any arbitrary crafted HTTP request to your server without even going to your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
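To illustrate how simple that is, the following sketch crafts a POST to a backend script without any browser involved; the URL and field names are placeholders:

<?php
// Sketch: anyone can send an arbitrary POST to your backend without ever
// visiting your site. Your server cannot tell this apart from a "real"
// form submission, which is why it must validate everything it receives.
$ch = curl_init('https://example.com/backend.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, ['name' => 'anything', 'amount' => '-9999']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);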
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is: never trust user input. Users can and will make mistakes, and some of them will try to damage you or steal from you.
Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you in any way if the POST or GET parameters you receive aren't what you expect, whether by mistake or with malicious intent. If your code, by the way it's designed, leaves you vulnerable to harm simply because someone sends specific POST values to one of your pages, then your design is at fault and you should redo it with that problem in mind.
Since this is a major issue when designing programs, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle. The fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code; there is no reason why you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts data in transit, so it won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server side when it is received. Digits should be digits, strip any backslashes or invalid special characters, etc.
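A minimal sketch of that kind of server-side validation, using PHP's built-in filter functions (the field names are hypothetical):

<?php
// Sketch: validate everything on the server, regardless of what the
// client-side JavaScript claims to have checked.
$age   = filter_input(INPUT_POST, 'age', FILTER_VALIDATE_INT, ['options' => ['min_range' => 0]]);
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$name  = trim($_POST['name'] ?? '');

if ($age === false || $age === null || $email === false || $email === null || $name === '') {
    http_response_code(400);
    exit('Invalid input.');
}
// Escape on output, use prepared statements for database access, etc.
echo htmlspecialchars($name, ENT_QUOTES, 'UTF-8');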
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
It is encrypted, but that is only meant to protect against man-in-the-middle (MITM) attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume any data it receives comes straight from a hacker.
Since you can't protect against unsavoury data being sent, you have to ensure that you handle all received data safely. Some steps to take involve ensuring that any uploaded files can't be executed (e.g. if someone uploads a PHP file instead of an image), ensuring that received data never directly interacts with the database (see https://xkcd.com/327/), and ensuring you don't trust someone just because they say they are logged in as a user.
To protect yourself further, research whatever you are doing with the received POST data and look up the best practices for it.
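For the database part specifically, prepared statements are the standard way to keep received data from ever being interpreted as SQL. A rough sketch with PDO (table and column names are invented):

<?php
// Sketch: user input is bound as a parameter, never concatenated into the SQL.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'dbuser', 'dbpass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$stmt = $pdo->prepare('SELECT id, name FROM students WHERE name = ?');
$stmt->execute([$_POST['name'] ?? '']); // the value travels separately from the query text
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);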
We have certain action links which are one-time use only. Some of them do not require any action from the user other than viewing them. And here comes the problem: when you share such a link in, say, Viber, Slack or anything else that generates a preview of the link (or "unfurls" it, as Slack calls it), it gets counted as used because it was requested.
Is there a reliable way to detect these preview generating requests solely via PHP? And if it's possible, how does one do that?
Not possible with 100% accuracy in PHP alone, as it deals with HTTP requests, which are quite abstracted from the client. Strictly speaking, you cannot even guarantee that the user has actually seen the response, even though it was legitimately requested by the user.
The options you have:
use checkboxes like "I've read this" (violates no-action requirement)
use javascript to send "I've read this" request without user interaction (violates PHP alone requirement)
rely on cookies: redirect user with set-cookie header, then redirect back to show content and mark the url as consumed (still not 100% guaranteed, and may result with infinite redirects for bots who follow 302 redirects, and do not persist cookies)
rely on request headers (could work if you had a finite list of supported bots, and all of them provide a signature)
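A rough sketch of the header-based option; the User-Agent signatures below are only examples, such a list would need to be maintained by hand, and this is a heuristic rather than a guarantee:

<?php
// Sketch: guess whether a request comes from a link-preview bot by its
// User-Agent. The signatures here are illustrative, not exhaustive.
function looks_like_preview_bot(): bool
{
    $ua = $_SERVER['HTTP_USER_AGENT'] ?? '';
    $signatures = ['Slackbot', 'facebookexternalhit', 'WhatsApp', 'TelegramBot', 'Viber'];
    foreach ($signatures as $signature) {
        if (stripos($ua, $signature) !== false) {
            return true;
        }
    }
    return false;
}

if (looks_like_preview_bot()) {
    // Serve something for the preview, but do NOT mark the link as used.
    exit('Link preview');
}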
I've looked all over the internet to solve this problem and found some workarounds to verify whether a request was made to generate a link preview.
Then, I've created a tool to solve it. It's on GitHub:
https://github.com/brunoinds/link-preview-detector
You only need to call a single method from the class:
<?php
require 'path/to/LinkPreviewOrigin.php';
$response = LinkPreviewOrigin::isForLinkPreview(); // true or false
I hope this solves your problem!
For my school, we have to do these "Advisory Lessons" that tell you about college, etc. After completing a lesson, I started wondering whether I could replicate the same process using a set of requests from a PHP script with cURL.
I went through the lesson again, this time with Firebug on and an HTTP Analyzer.
Much to my surprise, only GET requests were sent out during the entire lesson.
In case you're curious, here is what the "Lesson" window looks like. It's a sort of PowerPoint-type thing where you read the slides, and some slides have questions on them. At the end there is a quiz, and if you don't pass it, the lesson doesn't count.
My question is this: If I were to setup a PHP/cURL script that logged into my account, and then made every single one of those requests, would the lesson be counted as complete?
Now obviously it's impossible for you guys to know how their server works and such...
I guess what I am saying is, is there any hidden content or fields that you can pass through a GET request? It just doesn't seem like the lesson window is passing enough info to the server for it to know if the lesson was complete or not.
Thanks so much for any advice and tips on my project!
EDIT: Here is my official test run (please don't do it too many times):
As many of you hinted, it did not work....but I am still not completely sure why.
Like you say, we can't speak to the details of their server, but it is possible to do these kinds of things with only GET requests, because servers can use cookies and store state (associated with those cookies) on the server side.
This gives the appearance, probably, of passing extra hidden information to the server.
You can research cookies, and even that jsessionid thing that is appearing in their URLs. That BTW tips you off that they are using at least some Java. :)
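If you want to experiment, PHP's cURL extension can at least carry that session cookie (the jsessionid) across requests for you. A sketch with placeholder URLs, field names and credentials:

<?php
// Sketch: reuse one cookie jar across cURL requests so the server can tie
// them to a single session. URLs, field names and credentials are made up.
$cookieJar = tempnam(sys_get_temp_dir(), 'cookies');

$ch = curl_init('https://lessons.example.com/login');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => ['username' => 'me', 'password' => 'secret'],
    CURLOPT_COOKIEJAR      => $cookieJar, // write received cookies here
    CURLOPT_COOKIEFILE     => $cookieJar, // send them back on later requests
    CURLOPT_RETURNTRANSFER => true,
]);
curl_exec($ch);

// Later GETs reuse the same handle and cookie jar, so the server sees one session.
curl_setopt_array($ch, [
    CURLOPT_URL     => 'https://lessons.example.com/lesson/slide/1',
    CURLOPT_HTTPGET => true,
]);
$slide = curl_exec($ch);
curl_close($ch);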
The lesson application may very well be storing data in a session or some other persistent data store server-side and using a token from your browser (usually a cookie or a GET parameter) to look up that data when needed.
It's a rather complicated task. With cURL alone you can't emulate the execution of JavaScript code, AJAX requests, etc.
I am not sure what you are trying to do. For one, HTTP is a stateless protocol, meaning the server gets a request and gives a response to that particular request (which might be GET, POST or whatever, and might have some request parameters). Statefulness is usually achieved by the server creating a session and setting a cookie on the client side so that a session ID is passed in later requests. The session ID is used to recognize the client and track their session. Everything you send in a request is plain text, and the response you get will most likely depend on the session state and will also be plain text. There is nothing hidden on the client side; you just don't get to know what information the server keeps in the session and how requests are processed based on that information and on what you send with each request.
I have done extensive client-side validation with the help of jQuery. Now, coming to the server-side validation: if I find that some fields are not valid, can I simply return an error to the client without any useful message?
My assumption is that the user has to enable JavaScript in order to access my webpage. The user will not see the form if JavaScript is disabled in the browser; I can simply use noscript to handle that. If they submit the information through the form I designed, there should be no invalid fields submitted to the server. Thus, if I find any invalid field on the server side, my form has been tampered with by some hacker rather than used by a regular user.
Does my understanding make sense to you?
< The following message is appended based on comments from many experts here >
I am sorry that I didn't state my question clearly. I always do server-side validation because I should not trust any user input.
However, my point here is whether or not I should pass the server-side error message to the user. If a user uses my form and submits it to the server-side PHP, there should never be an invalid field. If such a thing happens, I assume that some hacker is playing with my PHP, so I would like to ignore them.
The major reason I try to avoid passing the server-side error messages back to the client is that I haven't found a good way to do so. I have posted several related questions here without getting good suggestions or examples.
< --- END ---- >
Thank you
You should always, always have server-side validation.
I would suggest you have a look at:
Validation, client-side vs. server-side
Client-side validation is always a good idea and you should go for it, but server-side validation is a must and good coding practice.
In your situation, it sounds like you can be fairly certain that legitimate users won't be hitting PHP validation errors, so you can respond with less polished error messages. You really ought to give some decent indication of what went wrong, though. If you don't, you'll regret it one day when the JavaScript fails for some reason (like you changed it and didn't test well enough).
Client-side validation is only for users who actually use your site. Spam bots, etc. can easily bypass it, so there should always be validation on the server side. When a validation error occurs, a message should be sent back to the user explaining what is wrong.
Never use only client-side validation. It can only ever be an extra.
Server-side validation is an absolute must for every web application because it's really easy to send a forged request. For example, your JavaScript may not allow letters to be entered into the form, but I can simply send a forged request containing letters, digits, etc.
Validation of anything on the client is only useful to help your users catch mistakes like "oh, you didn't give us a Last Name, please go fill that in". Someone such as myself can simply send any request whatsoever to your server, so be able to deal with it, or be ready to deal with a potentially corrupted database and a CD-ROM of your customers' credit card numbers floating around Estonia.
Having the server reply with the form depends on how you structure your code -- e.g., whether it's AJAX or whatever. But reporting the problem is always nice to have.
I always send a useful error message back. You will likely need a way for the server to report other error conditions anyway (database errors, etc.).
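For an AJAX-submitted form, one common way to send a useful error message back is a JSON response that the client-side script can render next to the offending fields. A minimal sketch; the field names and response shape are just one possible convention:

<?php
// Sketch: server-side validation that reports errors as JSON for an AJAX form.
$errors = [];
if (!filter_var($_POST['email'] ?? '', FILTER_VALIDATE_EMAIL)) {
    $errors['email'] = 'Please enter a valid email address.';
}
if (trim($_POST['name'] ?? '') === '') {
    $errors['name'] = 'Name is required.';
}

header('Content-Type: application/json');
if ($errors) {
    http_response_code(400);
    echo json_encode(['ok' => false, 'errors' => $errors]);
    exit;
}
echo json_encode(['ok' => true]);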