Getting this out of the way first: I don't do much PHP or database programming, so I'm just trying to be extra careful in this current project.
So basically, I have a site powered by a database, and I have some jQuery code that uses the "post" method to insert rows into that database. I had security concerns from the start, since people can't see my PHP but can see my JavaScript. As soon as I finished setting up the system I ran a test, and the jQuery inserts information even when run from a remote source... which can obviously lead to a major security problem. So, being as unrefined as I am, I just want to know: how should I prevent this and secure the system better?
And... Please don't insult me or be rude... I understand that I may be stupid or whatever, but I'm just trying to learn!
What you're describing is a Cross-Site Request Forgery (CSRF); CSRF protection and the same-origin policy sandbox are designed to prevent exactly this.
From another point of view, how do you stop someone from writing a form on their site and submitting it as a POST request to your script to add junk records?
Typically CSRF protection is provided by your framework. If you want to implement it directly in PHP, it is quite simple.
Generate a unique token in your server-side code, and store it in the session.
Embed it as a hidden field in the form.
On the script that accepts the form, check for the presence of this hidden field.
If the field exists, match it against what is in the session; if they match, then it is a legitimate request.
If they don't match (or the field is missing), then it's a remote request and you can reject it.
This page provides some PHP code to implement the above.
Once you have done that, you need to make sure that the jQuery script can be authenticated the same way. This snippet details the steps to implement the same technique in jQuery.
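A minimal sketch of the steps above in plain PHP (PHP 7+; the token name, field name and helper functions are my own choices, not from any framework):

```php
<?php
session_start();

// Step 1: generate a unique token once and keep it in the session.
if (empty($_SESSION['csrf_token'])) {
    $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
}

// Step 2: embed it as a hidden field when rendering the form.
function csrf_field(): string
{
    return '<input type="hidden" name="csrf_token" value="'
         . htmlspecialchars($_SESSION['csrf_token']) . '">';
}

// Steps 3-5: in the script that accepts the form, compare the posted
// field against the session value in constant time.
function csrf_token_is_valid(array $post): bool
{
    return isset($post['csrf_token'])
        && hash_equals($_SESSION['csrf_token'] ?? '', $post['csrf_token']);
}

// In the accepting script, for example:
//   if (!csrf_token_is_valid($_POST)) { http_response_code(403); exit; }
```

Using hash_equals() rather than == avoids timing side channels when comparing the tokens.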
Related
I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMPP setup (localhost). I now want to venture into using a VPS, learning to set it up, and eventually opening a new project up for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger and go to "Network", I can see the data being sent, and also which backend PHP file it is being sent to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S). That is a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time from anywhere, HTTPS encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools, your concerns are entirely misdirected: there are many simpler ways to send an arbitrarily crafted HTTP request to your server without even going to your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is: never trust user input! Users can and will make mistakes, and some of them will try to damage you or steal from you.
Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you in any way if the POST or GET parameters you receive aren't what you expect, be it by mistake or with malicious intent. If your code, by the way it's designed, leaves you vulnerable to harm simply because someone sends specific POST values to one of your pages, then your design is at fault and you should redo it taking that problem into account.
Since this problem is a major issue when designing programs, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle, and the fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code; there is no reason why you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts in transit, so it won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server side when it is received. Digits should be digits, strip any backslashes or invalid special characters, etc.
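A short sketch of that kind of server-side check in PHP, using the built-in filter functions (the field names and ranges here are illustrative, not from the question):

```php
<?php
// Validate untrusted input server-side. Returns a list of error messages;
// an empty list means the input passed.
function validate(array $input): array
{
    $errors = [];

    // Digits should be digits, and within a sane range.
    if (filter_var($input['age'] ?? '', FILTER_VALIDATE_INT,
            ['options' => ['min_range' => 0, 'max_range' => 150]]) === false) {
        $errors[] = 'Invalid age';
    }

    // An email field must actually be an email address.
    if (filter_var($input['email'] ?? '', FILTER_VALIDATE_EMAIL) === false) {
        $errors[] = 'Invalid email';
    }

    return $errors;
}

// In the endpoint, for example:
//   $errors = validate($_POST);
//   if ($errors) { http_response_code(422); exit(json_encode(['errors' => $errors])); }
```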
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
It is encrypted, but that is only meant to protect against man-in-the-middle (MITM) attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume any data it receives comes straight from a hacker.
Since you can't protect against unsavoury data being sent, you have to ensure that you handle all received data safely. Some steps to take: ensure that any uploaded files can't be executed (e.g. if someone uploads a PHP file instead of an image), ensure that received data never directly interacts with the database (i.e. https://xkcd.com/327/), and don't trust someone just because they say they are logged in as a particular user.
To protect things further, research whatever you are doing with the received POST data and look up the best practices for it.
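For the database point in particular, parameterised queries keep user input from ever being parsed as SQL. A sketch with PDO (an in-memory SQLite database is used here purely so the example runs anywhere; swap the DSN for your real MySQL connection string):

```php
<?php
// Prepared statements bind user input as data, never as SQL text.
$pdo = new PDO('sqlite::memory:', null, null, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$pdo->exec('CREATE TABLE students (name TEXT)');

$malicious = "Robert'); DROP TABLE students;--";   // the xkcd 327 payload

// The placeholder is filled in by the driver; the input stays a literal string.
$stmt = $pdo->prepare('INSERT INTO students (name) VALUES (:name)');
$stmt->execute([':name' => $malicious]);

// The table still exists, and the row contains the payload verbatim.
$count = $pdo->query('SELECT COUNT(*) FROM students')->fetchColumn();
```

Concatenating $_POST values into the SQL string is exactly what makes the comic's attack work; binding them through placeholders is what defuses it.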
When working in PHP, to avoid duplicate form submissions I used to generate a unique ID of some sort, store it in a session variable, and put the ID in the form, so that on submission I could compare the two values, regenerating the session value at that point. I never considered it a great solution, but I was never able to think of or find a better one.
Now I'm doing an Angular front end with a PHP backend (Lumen), and I'm struggling to think of a solution that doesn't involve writing to a database. Unless I'm misunderstanding something, I can't use sessions between Angular and PHP, right? So that solution won't work. The only other thing I can think of is a key/value pair in a DB, but I never quite understood how that prevents duplicates on something like an accidental double click, where the session/database may not update its key before the second submission starts processing. And as I learn more about stateless systems, it feels like a session isn't the best place for this sort of thing anyway.
Overall, I'm having trouble with creating a secure, backend system to avoid duplicate forms. With angular, I can always prevent duplicate submissions through preventing the button from being clicked, the API call from firing, etc, but I'd like to add backend protection too, and I'd love to hear how the experts do it.
In most cases this should be prevented in the front end. The backend protection depends on your definition of a unique request: you can create a composite key from the attributes that together make the request unique, e.g. email address, request ID, or any set of request parameters.
The DB will not accept duplicate keys, and you can catch the exception and handle it gracefully on the front end.
E.g. in a newsletter subscription application, the uniqueness of a request might be determined by email address and newsletter type.
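A sketch of that newsletter example in PHP with PDO. SQLSTATE 23000 is the integrity-constraint class that covers duplicate keys; an in-memory SQLite database is used here so the sketch runs anywhere, but the idea is identical with MySQL (the table and column names are illustrative):

```php
<?php
// A composite UNIQUE key makes the database itself reject duplicates;
// we turn that exception into a graceful response instead of a crash.
$pdo = new PDO('sqlite::memory:', null, null, [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$pdo->exec('CREATE TABLE subscriptions (
    email TEXT NOT NULL,
    newsletter_type TEXT NOT NULL,
    UNIQUE (email, newsletter_type)
)');

function subscribe(PDO $pdo, string $email, string $type): string
{
    try {
        $stmt = $pdo->prepare(
            'INSERT INTO subscriptions (email, newsletter_type) VALUES (?, ?)'
        );
        $stmt->execute([$email, $type]);
        return 'subscribed';
    } catch (PDOException $e) {
        if ($e->getCode() == 23000) {     // integrity constraint (duplicate key)
            return 'already subscribed';  // report it gracefully to the front end
        }
        throw $e;                          // anything else is a real error
    }
}
```

A double click then simply produces one "subscribed" and one "already subscribed" response, with no duplicate row.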
I'm sure there are ways to hack it, but I think the short answer is that you don't want to hack it. When you transition to a split back/front end, one thing you are doing is specifically making your API calls stateless. This is a good thing! The statelessness, lack of sessions, etc, can dramatically simplify your back-end application. In short, statelessness is half the reason why you do something like this.
Preventing double submits as you are used to doing is decidedly something that you need a stateful application to do. As a result, it is now the job of the front-end application exclusively.
Your best bet is to think about your application in a whole new way. Your PHP backend handles stateless REST requests, and as such it is not PHP's problem if it gets duplicate submissions. In practice, Angular should have no problem making sure duplicates don't get submitted (it is really easy to prevent on the front end). Your PHP backend does need to make sure it always returns appropriate responses. So, for instance, on a registration page, back-to-back duplicate registration requests would result in a successful registration followed by a failed registration with a message like "That email already exists". Otherwise, your PHP backend doesn't care: it can still do its job. It's the client's job to make sure the double submit doesn't happen, or to make sense of the conflicting answers if it does.
Statelessness is a desirable quality in API calls: you'll make your life much more difficult if you muck that up.
The logic is simple: disable the submit button and show a loader when the user submits the form, then re-enable the button and hide the loader when you get a response from the first API call; you can empty the form fields too. If the user then submits the form again with the same details, you can access the previously saved data, compare the two, and warn the user that those fields have already been submitted.
If you are using Angular 2 or 4: [disabled], *ngIf or [hidden] and Observables will help; to reset the form, call formRef.reset() (see https://codecraft.tv/courses/angular/forms/submitting-and-resetting/).
If you are using Angular 1: ng-disabled, ng-if and HTTP promises will help; for resetting the form, see "angularjs form reset error".
It's rare, but I have to pay MS a compliment: the ASP.NET WebMethod (AJAX) authorization is a dream with regard to my desire for both security and laziness.
Encosia's "ASP.NET page methods are only as secure as you make them" absolutely fits those needs. ASP.NET is actually workable for me now. Free at last! (From the noble but disastrous AJAXControlToolkit.)
Anyways, the problem is, that's for work. I'm not buying the MS architecture when LAMP's out there for free. I'm new to AJAX, and I can't seem to find a clear answer on how to authorize AJAX calls to PHP in the same way as Encosia above.
Can anyone suggest the PHP equivalent of what Encosia does in the link above?
Thanks in advance!
More Details
OK, let me be more specific. Encosia's solution above gives a 401 denied to anyone not logged in who tries to access a web method. Neat, clean, easy. Before that, I tried to use session data to grant access, but unknowingly to me it forced synchronous mode. No, no.
I need both for my site. I need to be able to give 401 denieds on certain pages if a user isn't logged in, and I need to be able to allow anyone to call other PHP scripts via AJAX regardless of login.
Clarity
Bottom line: I don't want anyone accessing certain AJAX PHP scripts unless they are logged in. I don't care about the response or any other details, as long as it's still AJAX. How to?
It's not really clear from the question, but if you want to allow access to your server-side AJAX listening scripts (maybe with XML or JSON output) only to users who have either authenticated or are on the related page, then how about adding a session identifier to your JS AJAX requests? In the server-side script you can check that identifier against, say, a DB table holding your current sessions.
For extra security, you could check against IP, a cookie etc. These are all values that you can set when the session is started.
The main thing you need to ask yourself is this:
If a user is either logged in or browsing, what kind of access to the database do you really want / need to give? Each application will have its own needs. If you are going to have AJAX listeners on your server, then all that's needed is a quick look at Firebug to see where your scripts are and the format of the requests, which could expose a potential security hole. Make sure all incoming requests are correctly sanitised so as to remove the possibility of injection attacks.
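A minimal PHP sketch of the "401 denied" gate described in this thread. The session key name ('user_id') is an assumption; use whatever your login code sets. Protected AJAX endpoints call the check at the top; endpoints open to everyone simply skip it:

```php
<?php
// auth_check.php — include this in any AJAX endpoint restricted to
// logged-in users. Mirrors the ASP.NET "401 denied" behaviour.
session_start();

function deny_unless_logged_in(): void
{
    if (empty($_SESSION['user_id'])) {      // assumed session key set at login
        http_response_code(401);
        header('Content-Type: application/json');
        echo json_encode(['error' => 'Not authenticated']);
        exit;
    }
}

// In a protected endpoint:
//   require 'auth_check.php';
//   deny_unless_logged_in();
//   ...do the real AJAX work and echo its JSON response...
```

Because the check happens server-side on every request, it doesn't matter what the visitor can see in their browser's debugger.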
I'd like to automate some administrative tasks for myself on my WPMU install. For example, I'm trying to write a PHP curl script for logging in and adding a new blog. I'm already logged in via curl, and now I want to post the form that's in wpmu-blogs.php, but it has a hidden wp nonce field. How do I get this value into a variable? I checked the source, but there is more than one wp nonce hidden field. I'm assuming there are different nonce values for different tasks on different forms. How do I get the one I need, for adding a new blog?
The point of a nonce is to protect against a cross site forgery attack. Because of this, a new nonce value is going to be generated on a regular basis. If the nonce was predictable, it wouldn't be effective.
To post to a nonce enabled form using curl, you'd need to
Turn on all cookie-handling options (both saving cookies to a cookie jar, and sending the cookies from that saved cookie jar)
Make a request to the page that contains your form
Using regular expressions or an HTML/XHTML parsing library, pull out the nonce value you want
With that value in hand, post to the page you want, sending the nonce along
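The four steps might look like this with PHP's curl extension. The URLs, cookie-jar handling and especially the nonce field name ('_wpnonce_add-blog') are placeholders; inspect the actual form to find the real name of the field you need:

```php
<?php
// Step 3 helper: pull a named hidden field's value out of fetched HTML.
// (A real HTML parser is more robust than a regex, but this shows the idea.)
function extract_hidden_value(string $html, string $field): ?string
{
    $pattern = '/name="' . preg_quote($field, '/') . '"\s+value="([^"]+)"/';
    return preg_match($pattern, $html, $m) ? $m[1] : null;
}

function add_blog(string $base, string $domain, string $title)
{
    $jar = tempnam(sys_get_temp_dir(), 'cookies');

    // Step 1: one handle with a cookie jar, so the logged-in session persists.
    $ch = curl_init();
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_COOKIEJAR      => $jar,
        CURLOPT_COOKIEFILE     => $jar,
    ]);

    // Step 2: fetch the admin page that contains the form.
    curl_setopt($ch, CURLOPT_URL, $base . '/wp-admin/wpmu-blogs.php');
    $html = curl_exec($ch);

    // Step 3: pull out the nonce belonging to the add-blog form.
    $nonce = extract_hidden_value($html, '_wpnonce_add-blog');

    // Step 4: post the form, sending the nonce along.
    curl_setopt_array($ch, [
        CURLOPT_URL        => $base . '/wp-admin/wpmu-edit.php?action=addblog',
        CURLOPT_POST       => true,
        CURLOPT_POSTFIELDS => [
            '_wpnonce_add-blog' => $nonce,
            'blog[domain]'      => $domain,
            'blog[title]'       => $title,
        ],
    ]);
    return curl_exec($ch);
}
```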
This kind of programming can be tedious. You're essentially trying to emulate a web browser. It's doable, but you may want to consider
Looking into the Wordpress XML-RPC API. This is the supported way of doing the kind of things you're trying to automate with CURL, and will be much more straight forward once you climb the learning curve.
There's also the AtomPub API. AtomPub is, in part, an attempt to come up with a standard way of performing common actions to weblogs and personal publishing sites. The advantage is, in theroy, scripts written for one system (Wordpress) will work on another system (MovableType). The disadvantage is AtomPub features tend to lag behind/differ from features supported in each engine's custom API.
Finally, if you're not up for leaning either API, you might want to give Selenium a try. Selenium IDE will allow you to script Firefox and have the nonce handled automatically, since you're actually using a browser to visit each page.
You can also use Greasemonkey to script Firefox.
This add-on lets you customise web pages and perform actions on them using JavaScript.
Why should I bother to use JavaScript for form validation when I still have to validate in PHP anyway, since the user could have JavaScript support turned off?
Isn't it unnecessary?
Update:
OK, thanks for your answers. It sounds like a good idea to have validation on the client side too. Where can I download good JavaScript validations?
Do you know where I can download a validation script like that one in yahoo when you register an account?
JavaScript validation allows your user to be informed of any errors prior to submitting the form to the server. This saves irritating page reloads (since on submit the JS catches the event, validates the form, and prevents submission if errors are found) and minimises the chances of their having to re-enter information again (and again and again...), or leaving before completing the form properly. JS validation is not a substitute for server-side validation (since the user can see the JS and, by saving the page and amending the JS, can do whatever they want); but it's a convenience for them.
This is simply part of the concept of progressive enhancement, whereby JS provides a mechanism for enhancing the experience for the user, if it's there and turned on, and hopefully makes their interaction with your site pleasant, or, at least, minimally irritating.
Edited in response to OP's question regarding 'where to download a JS validation tool.'
While I can't necessarily recommend any one library (I tend to write my own as required, or borrow from previously self-written examples), a Google search turned up these options:
http://www.jsvalidate.com/
Stephen Walther's page, discussing Microsoft's CDN and jQuery-validation, linking to jQuery Validation plug-in:
jQuery.validate (hosted at MS' ajax.microsoft.com subdomain)
jQuery.validate.min
jQuery validate plug-in homepage (bassistance.de).
You should ALWAYS validate in PHP on the SERVER SIDE; validation in JavaScript is CLIENT SIDE validation for the user's CONVENIENCE. Thanks to client-side validation, the user may find errors in the form without a page reload. But the user may send form data without the script validating it (for example, the browser may not support JS), so always validate on the server side.
... as a courtesy to the users, pretty much. It makes life easier for ordinary users who simply do human things from time to time.
I recommend using unified server-side and client-side validation via a framework, since that avoids confronting the user with data that is valid on the client side but rejected by the server, or the opposite (a client side that is too restrictive).
The following list of frameworks gives information about server/client-side validation:
http://en.wikipedia.org/wiki/Comparison_of_web_application_frameworks
It's a matter of whether you want your form (and website as a whole) to be interactive-cum-user-friendly or not. You can just let the server-side do the validations and throw the error back to the users but that would be less interactive and less user-friendly than warning the users before they submit the form (although you still need to validate the inputs on server-side no matter what). Just my 2 cents :P
I recommend using JavaScript for the client side and PHP for the server side.
This makes for a more interactive, user-friendly site and reduces the number of page reloads when the user submits wrong data.
Yes, it is best practice to validate the user's input values on both sides, client and server.
In some cases the client has JavaScript disabled, or a mobile browser doesn't support JavaScript; and remember, there are spammers too.
To my mind, client-side-only checking of form input does not work, for security reasons. Imagine you want to check a user password (if "yourpwd" == userinput): with JS, the user can see the password because it is in the browser's source code. With PHP it is not visible, because PHP runs on the server and its source is never sent to the browser.