Where should we put data validation in a website? - php

I have looked through many threads and questions on the internet looking for a clear answer, but so far I haven't found one.
Part of the question has already been answered (unless you tell me it is wrong): data validation should be done client side AND server side. Client side to notify the user of invalid data and to offload the server, and server side to prevent any kind of attack.
Validating on both sides can be tedious, though, and I wondered if there was some way to do it without so much duplicated code.
There is also something else I was wondering. I have a table whose rows contain the database id of that row. At the end of each row I have a delete button that removes the row from the HTML and from a JSON object holding an array of my values, which is then sent in an AJAX call to delete the link between two tables from the database.
This isn't safe in an untrusted environment like the internet, and I know it. I can always check client side that the id is only numbers, and if so, check server side that it exists. But what stops a user from opening the debugger, swapping two ids, and deleting rows that should not be deleted? What would be the best way to handle my ids and be safe from people swapping them?
Any suggestions appreciated

Data validation should be done client side AND server side.
It is not really needed client side, IMHO, but it is nice to be able to notify the user before submitting the data. So I would call it a pre-check.
Validating on both sides can be a tedious task though and I wondered if there was some way to do it so that you don't have so much duplicated code.
Because PHP is server side and JavaScript is client side, it isn't easy to avoid duplicating code. There is one way that I know of, however: XHR requests. This way you can reuse the PHP validation from JavaScript (client side).
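For illustration, a validation endpoint along those lines might look like the sketch below; the validate.php name, the 'email' field, and the rule are all assumptions, not a fixed API:

```php
<?php
// validate.php — a hypothetical endpoint that exposes the same
// server-side rules to the client via an XHR request.

function validate_email(string $value): ?string
{
    if (filter_var($value, FILTER_VALIDATE_EMAIL) === false) {
        return 'Please enter a valid email address.';
    }
    return null; // null means the value passed
}

$errors = [];
$email  = $_POST['email'] ?? '';

if (($message = validate_email($email)) !== null) {
    $errors['email'] = $message;
}

header('Content-Type: application/json');
echo json_encode(['valid' => $errors === [], 'errors' => $errors]);
```

The client can POST the field to this endpoint via XHR/fetch to show errors early, and the real form handler calls the same validate_email() again, so the server never trusts the client's answer.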
At the end of each row I have a delete button to delete it from the html
Please tell me you are using CSRF protection. :-)
I can always check client side to see if the id is only numbers and if so then check server side if it exists.
To prevent malicious SQL injection you should use prepared statements and parameterized queries.
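As a sketch only (the table and column names user_items, item_id and user_id are invented for the example), a prepared statement can also address the swapped-id worry from the question if the WHERE clause checks ownership:

```php
<?php
// Delete a row only if it belongs to the logged-in user.
$pdo = new PDO('sqlite::memory:'); // stand-in for your real DSN
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->exec('CREATE TABLE user_items (item_id INTEGER, user_id INTEGER)');
$pdo->exec('INSERT INTO user_items VALUES (42, 7), (43, 8)');

$userId = 7;                          // from the server-side session, never from the request
$itemId = (int) ($_POST['id'] ?? 42); // cast defends against non-numeric input (42 is demo data)

// The WHERE clause checks ownership, so a swapped id simply deletes nothing.
$stmt = $pdo->prepare('DELETE FROM user_items WHERE item_id = :item AND user_id = :user');
$stmt->execute([':item' => $itemId, ':user' => $userId]);

echo $stmt->rowCount(); // 1 if the row belonged to the user, 0 otherwise
```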

Validation should always be done on the server. Client-side validation is for user convenience only and is not strictly necessary from a security point of view (except when skipping it opens an XSS security hole).
Apart from that, you should sanitize every input you receive from the user. After the input has passed a sanity check (for example, that there isn't text where there should be a number), you should do a permission test: does the user have the appropriate permission to perform the requested action?
After you have done this, you can actually execute the transaction.

Since other answers have already been posted, I would recommend dedicating a utility class (call it Utility) where you store many useful functions to check data type and content on the server side. That way, even if you do extensive verification on both the client and server side, at least your code will be more readable and manageable.
Another benefit of creating and managing your own utility classes is that you can easily reuse them across all your projects.
For example, a utility class can contain methods ranging from verifying that a certain field is within a specific range of values, to checking that it doesn't contain special characters (use regular expressions), and so on. Your more advanced utility methods would focus on preventing script insertion and SQL injection, as mentioned before.
All in all, you'll do the work once and benefit from it all the time; I'm sure you don't want to start from scratch checking whether an age field is negative on every project you do. Using Ajax and XHR can be a bridge between your server-side and client-side code and would help you unify your validation and avoid duplication.
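A minimal sketch of such a class might look like this; the method names and rules are illustrative, not a prescribed design:

```php
<?php
// A small reusable validation helper, kept in one place across projects.
class Utility
{
    public static function isInRange(int $value, int $min, int $max): bool
    {
        return $value >= $min && $value <= $max;
    }

    public static function hasNoSpecialChars(string $value): bool
    {
        // Allow letters, digits, spaces and hyphens; adjust the pattern to taste.
        return preg_match('/^[\p{L}\p{N} \-]+$/u', $value) === 1;
    }

    public static function isValidAge(int $age): bool
    {
        return self::isInRange($age, 0, 130);
    }
}
```

Calls like Utility::isValidAge(-3) then read naturally at every validation site, and each rule lives in exactly one place.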

Related

Saving attempted inputs to replace front-end validation (PHP)

While looking for a way to speed up form validation on a large number of fields, I built my own library of PHP validation functions to be reused across multiple websites. I am now trying to avoid duplicating these rules in JavaScript without sacrificing user-friendliness.
I am thinking of storing attempted inputs in $_SESSION['attempted_inputs'].
Upon failed server-side validation, user would be redirected back to the original form where an error message will be printed and all fields will be prefilled with attempted inputs, thus eliminating the need for JS validation.
Assuming attempted inputs will be properly sanitized upon saving and server resources are not a concern on my clients small-scaled applications, what could be the downsides of using this method instead of a classic js client-side approach ?
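Roughly, the idea in code would be something like this sketch (the username field and helper names are just placeholders):

```php
<?php
session_start();

// After server-side validation fails: stash the raw attempt in the session.
function remember_attempt(array $input): void
{
    $_SESSION['attempted_inputs'] = $input;
}

// When re-rendering the form: read the attempt back, escaped for HTML output.
function prefill(string $field): string
{
    $value = $_SESSION['attempted_inputs'][$field] ?? '';
    return htmlspecialchars($value, ENT_QUOTES, 'UTF-8');
}

remember_attempt(['username' => '<Bob> & "Alice"']); // pretend this came from $_POST
echo prefill('username'); // safe to drop into a value="" attribute
```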
Thanks
Downsides:
Unnecessary start/save/delete of the session. The GC process could do a lot of unnecessary work. You could use a POST back style instead of session saving. I know you said you don't care about performance, but this could bite you back someday.
Slower validation. Going back and forth with the form isn't a nice UX. If you can say what's wrong before posting it, you should do it.
If I can think of more downsides I'll edit my answer.
While this is not the right place to ask for concepts, I just wanted to give you a quick heads-up.
You should always validate your inputs on the server side. Manually changing requests is easy these days with tools such as the developer console, which makes an application that relies on client-side checks vulnerable to many kinds of attacks.

Does HTTPS make POST data encrypted?

I am new to the world of programming and I have learnt enough about basic CRUD-type web applications using HTML-AJAX-PHP-MySQL. I have been learning to code as a hobby and as a result have only been using a WAMP/XAMP setup (localhost). I now want to venture into using a VPS and learning to set it up and eventually open up a new project for public use.
I notice that whenever I send form data to my PHP file using AJAX or even a regular POST, if I open the Chrome debugger, and go to "Network", I can see the data being sent, and also to which backend PHP file it is sending the data to.
If a user can see this, can they intercept this data, modify it, and send it to the same backend PHP file? If they create their own simple HTML page and send the POST data to my PHP backend file, will it work?
If so, how can I avoid this? I have been reading up on using HTTPS but I am still confused. Would using HTTPS mean I would have to alter my code in any way?
The browser is obviously going to know what data it is sending, and it is going to show it in the debugger. HTTPS encrypts that data in transit and the remote server will decrypt it upon receipt; i.e. it protects against any 3rd parties in the middle being able to read or manipulate the data.
This may come as a shock to you (or perhaps not), but communication with your server happens exclusively over HTTP(S). That is a simple text protocol. Anyone can send arbitrary HTTP requests to your server at any time from anywhere, HTTPS encrypted or not. If you're concerned about somebody manipulating the data being sent through the browser's debugger tools, your concerns are entirely misdirected. There are many simpler ways to send any arbitrarily crafted HTTP request to your server without even going to your site.
Your server can only rely on the data it receives and must strictly validate the given data on its own merits. Trying to lock down the client side in any way is futile.
This is even simpler than that.
Whether you are using GET or POST to transmit parameters, the HTTP request is sent to your server by the user's client, whether it's a web browser, telnet or anything else. The user can know what these POST parameters are simply because it's the user who sends them - regardless of the user's personal involvement in the process.
You are taking the problem from the wrong end.
One of the most important rules of programming is: never trust user input! Users can and will make mistakes, and some of them will try to damage you or steal from you. Welcome to the club.
Therefore, you must not allow your code to perform any operation that could damage you in any way if the POST or GET parameters you receive aren't what you expect, be it by mistake or by malicious intent. If your code, by the way it's designed, leaves you vulnerable to harm simply because specific POST values were sent to one of your pages, then your design is at fault and you should redo it with that problem in mind.
That problem being a major issue when designing programs, you will find plenty of documentation, tutorials and tips on how to prevent your code from turning against you.
Don't worry, it's not that hard to handle. The fact that you came up with this concern by yourself shows how good you are at figuring things out and how committed you are to producing good code; there is no reason you should fail.
Feel free to post another question if you are stuck regarding a particular matter while taking on your security update.
HTTPS encrypts data in transit, so it won't address this issue.
You cannot trust anything client-side. Any data sent via a webform can be set to whatever the client wants. They don't even have to intercept it. They can just modify the HTML on the page.
There is no way around this. You can, and should, do client side validation. But, since this is typically just JavaScript, it can be modified/disabled.
Therefore, you must validate all data server side when it is received. Digits should be digits, strip any backslashes or invalid special characters, etc.
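For example (the field names here are assumed), ctype_digit() and filter_var() cover the common cases:

```php
<?php
// Minimal server-side checks: "digits should be digits", plus a range test.

function is_valid_id(string $value): bool
{
    return ctype_digit($value); // only 0-9; rejects '', '-1', '1e3', '12abc'
}

function is_valid_age(string $value): bool
{
    return filter_var($value, FILTER_VALIDATE_INT,
        ['options' => ['min_range' => 0, 'max_range' => 130]]) !== false;
}

// Typical use in a request handler:
if (!is_valid_id($_POST['id'] ?? '') || !is_valid_age($_POST['age'] ?? '')) {
    http_response_code(400);
    // stop here instead of touching the database
}
```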
Everyone can send whatever they want to your application. HTTPS just means that they can't see and manipulate what others send to your application. But you always have to work under the assumption that what is sent to your application as POST, GET, COOKIE or whatever is evil.
In HTTPS, the TLS channel is established before any HTTP data is transferred, so from that point of view there is no difference between GET and POST requests.
It is encrypted, but that only protects against man-in-the-middle (MITM) attacks.
Your PHP backend has no idea where the data it receives comes from, which is why you have to assume any data it receives comes straight from an attacker.
Since you can't protect against unsavoury data being sent, you have to ensure that you handle all data received safely. Some steps to take: ensure that any uploaded files can't be executed (e.g. if someone uploads a PHP file instead of an image), ensure that received data never interacts with the database directly (see https://xkcd.com/327/), and don't trust someone just because they say they are logged in as a user.
To protect yourself further, research whatever you are doing with the received POST data and look up the best practices for it.

Preventing duplicate form submissions in a stateless system

When working in PHP, to avoid duplicate form submissions, I used to generate a unique id of some sort, store it into a session variable, and have the id in the form, so on submission I can compare the values, regenerating the session value at that point. I never considered it a great solution, but I was never able to think of/find a better solution.
Now, I'm doing an Angular front end with a PHP backend (Lumen), and I'm struggling to think of a solution that doesn't involve writing to a database. Unless I'm misunderstanding something, I can't share sessions between Angular and PHP, right? So that solution won't work. The only other thing I can think of is a key/value pair in a DB, but I never quite understood how that prevents duplicates on something like an accidental double click, wherein the session/database may not update its key before the second submission starts processing. And as I learn more about stateless systems, it feels like a session isn't the best place for this sort of thing anyway.
Overall, I'm having trouble with creating a secure, backend system to avoid duplicate forms. With angular, I can always prevent duplicate submissions through preventing the button from being clicked, the API call from firing, etc, but I'd like to add backend protection too, and I'd love to hear how the experts do it.
In most cases this should be prevented on the front end. The backend protection depends on your definition of a unique request. You can create a composite key from the attributes that together make the request unique, e.g. email address, request id, or any set of request parameters.
The DB will not accept duplicate keys, and you can catch the exception and handle it gracefully on the front end.
E.g. for a newsletter subscription application, the uniqueness of a request is determined by the email address and newsletter type.
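A sketch of that pattern, using SQLite in place of a real DSN and invented table/column names:

```php
<?php
// Let the database enforce uniqueness (email + newsletter type)
// and catch the duplicate-key error.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE subscriptions (
    email TEXT NOT NULL,
    newsletter TEXT NOT NULL,
    UNIQUE (email, newsletter)
)');

function subscribe(PDO $pdo, string $email, string $newsletter): bool
{
    try {
        $stmt = $pdo->prepare('INSERT INTO subscriptions (email, newsletter) VALUES (?, ?)');
        $stmt->execute([$email, $newsletter]);
        return true;   // first submission wins
    } catch (PDOException $e) {
        // In real code, inspect $e->getCode() (SQLSTATE 23000) to be sure
        // it is a constraint violation and not some other failure.
        return false;  // duplicate: report gracefully to the front end
    }
}

var_dump(subscribe($pdo, 'a@example.com', 'weekly')); // true
var_dump(subscribe($pdo, 'a@example.com', 'weekly')); // false (duplicate)
```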
I'm sure there are ways to hack it, but I think the short answer is that you don't want to hack it. When you transition to a split back/front end, one thing you are doing is specifically making your API calls stateless. This is a good thing! The statelessness, lack of sessions, etc, can dramatically simplify your back-end application. In short, statelessness is half the reason why you do something like this.
Preventing double submits as you are used to doing is decidedly something that you need a stateful application to do. As a result, it is now the job of the front-end application exclusively.
Your best bet is to think about your application in a whole new way. Your PHP backend handles stateless REST requests, and as such it is not PHP's problem if it gets duplicate submissions. In practice Angular should have no problem making sure duplicates don't get submitted (it is really easy to prevent on the front end). Your PHP backend does need to make sure it always returns appropriate responses. For instance, on a registration page, back-to-back duplicate registration requests would result in a successful registration followed by a failed one with a message like "That email already exists". Otherwise, your PHP backend doesn't care: it can still do its job. It's the client's job to make sure the double submit doesn't happen, or to make sense of the conflicting answers if it does.
Statelessness is a desirable quality in API calls: you'll make your life much more difficult if you muck that up.
The logic is simple: disable the submit button and show a loader when the form is submitted, then re-enable the button and hide the loader when you get a response from the first API call; you can also empty the form fields at that point. If the user then submits the form again with the same details, you can access the previously saved data, compare the two, and warn the user that these fields have already been submitted.
If you are using Angular 4 or 2:
[disabled], *ngIf or [hidden], and Observables will help;
to reset the form: formRef.reset()
https://codecraft.tv/courses/angular/forms/submitting-and-resetting/
And in Angular 1:
ng-disabled, ng-if and HTTP promises will help;
and for resetting the form:
angularjs form reset error

What validation is better to practice - Server-side or Client side validation?

I am making a new system and wanted to know what kind of validations to use for a more convenient coding and a secure system.
Should I use Server-side or Client side validation?
You absolutely need server-side validation; with it in place, the client can't force bad data in.
Client side is optional, as without it bad data still gets caught via post back. With it, you can warn the user faster that there is an issue.
There's a pessimistic theme I meant to mention: never trust the user. Either they're going to make a mistake, or they're out to break your app.
I would go with validation on both sides, as each has its own significance.
If you put validation only on the client side, someone can make your life miserable. And if you put validation only on the server side, then for every error the client has to submit the complete form to the server just to learn about it. If you show the error right away on the client, it is better for both of you, as you don't have to handle erroneous data every time.
I second 'The1nk's points. But one additional point: we have to support the user with fast responses to both mistakes and purposeful attempts, and for that client-side validation is effective.
You must definitely go with server-side validation. For the client side, unlike in the past, you are not required to hard-code JavaScript validations (though you can for complex data validations). If you want some simple validation, there are many features available in HTML5, such as the following:
Required text fields: add the HTML input attribute 'required' (note: this attribute takes no value)
Type-specific text fields: HTML5 adds lots of 'type' attribute values such as email, number, and date; use them to validate those fields (additional attributes are also available for those input types, for example min and max for number)
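Put together, those attributes might look like this (an illustrative form, not taken from the question):

```html
<form method="post" action="signup.php">
  <!-- 'required' alone makes the field mandatory -->
  <input type="text" name="username" required>
  <!-- type-specific validation, plus min/max for numbers -->
  <input type="email" name="email" required>
  <input type="number" name="age" min="0" max="130">
  <input type="date" name="birthday">
  <button type="submit">Sign up</button>
</form>
```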
Some useful links
HTML form input types
HTML form attributes
Since this is pure HTML, the long-term concern with JS (what if the user disables JS in the browser?) can be somewhat addressed. But keep in mind Fred Brooks: there is "No Silver Bullet" in software engineering.

Blocking remote access through PHP

Getting this out of the way; I don't frequently do much in php or database programming, so I'm just trying to be extra careful in this current project.
So basically, I have a site powered by a database, and I have some jQuery code that uses the "post" method to insert rows into the database. From the start I had my doubts about security, seeing as people can't see my PHP but can see my JavaScript. As soon as I finished setting up the system, I did a test, and the jQuery inserts information even when run from a remote source... which can obviously lead to a major security problem. So, being as unrefined as I am, I just want to know how I should prevent this and secure the system better.
And... Please don't insult me or be rude... I understand that I may be stupid or whatever, but I'm just trying to learn!
This is what Cross Site Request Forgery (CSRF) and the same origin policy sandbox are designed to prevent.
From another point of view, how do you stop someone from writing a form on their site and submitting it as a POST request to your script to add junk records?
Typically CSRF protection is provided by your framework. If you want to implement it directly in PHP, it is quite simple.
Generate a unique token in your server's code, and add it to the session.
Embed it as a hidden field in the form.
On the script that accepts the form, check for the presence of this hidden field.
If the field exists, match it against what is in the session, if they match - then it is a legitimate request.
If they don't match (or the field is missing), then it's a forged or remote request and you can reject it.
This page provides some PHP code to implement the above.
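As a rough sketch of those steps (the function and field names are made up for the example), using random_bytes() for the token and hash_equals() for a timing-safe comparison:

```php
<?php
session_start();

// Steps 1-2: generate a token and store it in the session.
// Embed its value in a hidden input named csrf_token in the form.
function csrf_token(): string
{
    if (empty($_SESSION['csrf_token'])) {
        $_SESSION['csrf_token'] = bin2hex(random_bytes(32));
    }
    return $_SESSION['csrf_token'];
}

// Steps 3-5: on submission, check the field and compare in constant time.
function csrf_is_valid(?string $submitted): bool
{
    return is_string($submitted)
        && isset($_SESSION['csrf_token'])
        && hash_equals($_SESSION['csrf_token'], $submitted);
}

// Usage in the script that accepts the POST:
// if (!csrf_is_valid($_POST['csrf_token'] ?? null)) { http_response_code(403); exit; }
```

For the jQuery side, the same hidden field just needs to be included in the serialized form data (or sent as a request header) so the server can run the identical check.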
Once you have done that, you need to make sure that the jquery script can be authenticated the same way. This snippet details steps to implement the same technique in jquery.
