As the needs of web apps have grown, I have found myself writing more and more API-driven web applications. I use frameworks like AngularJS to build rich web clients that communicate with these APIs. Currently I am using PHP (Lumen or Laravel) for the server side/API.
The problem is, I find myself repeating business logic between the client and the server side often.
When I say business logic I mean rules like the following for an order form:
You can buy X if you buy Y.
You cannot buy Y if you have Z.
If you buy 10 of these you get 10% off.
Height x Width x Depth x Cost = Final Cost.
Height must be between 10 and 20 if your width is greater than 5.
Etc etc.
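Concretely, rules like these boil down to small pure functions; a rough JavaScript sketch (function names and thresholds are illustrative only, not from a real codebase):

```javascript
// Height x Width x Depth x Cost = Final Cost
function finalCost(height, width, depth, unitCost) {
  return height * width * depth * unitCost;
}

// If you buy 10 of these you get 10% off.
function discountedTotal(quantity, unitPrice) {
  const total = quantity * unitPrice;
  return quantity >= 10 ? total * 0.9 : total;
}

// Height must be between 10 and 20 if your width is greater than 5.
function heightIsValid(height, width) {
  return width > 5 ? height >= 10 && height <= 20 : true;
}
```

Even these few functions have to exist in both PHP and JavaScript, which is exactly where the maintenance risk creeps in.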
To make this app both responsive and fast, the logic for calculations (along with other business logic) is being done on the client side. Since we shouldn't trust the client, I then re-verify those numbers on the server side. This logic can get pretty complex and writing this complex logic in both places feels dangerous.
I have three solutions in mind:
Make everything that requires business logic go through an AJAX call to the API. All the business logic would live in one place and could be tested once. This could be slow, since the client would have to wait for each and every change they make to the order form to get updated values and results. Having a very fast API would help. The main downside is that this may not work well when users are on poor connections (mobile devices).
Write the business logic on the client side AND on the server side. The client gets instant feedback as they make changes on the form, and we validate all data once they submit on the server. The downside here is that we have to duplicate all the business logic, and test both sides. This is certainly more work and would make future work fragile.
Trust the client!?! Write all the business logic on the client side and assume they didn't tamper with the data. In my current scenario I am working on a quote builder which would always get reviewed by a human, so maybe this is actually OK.
Honestly, I am not happy about any of the solutions which is why I am reaching out to the community for advice. I would love to hear your opinions or approaches to this problem!
You can do one more thing.
Write your validation and business-logic code in JavaScript only, and make it as loosely coupled as possible. If possible, take only JSON as input and return only JSON as output.
Then set up a separate NodeJS server alongside the existing PHP server to serve that logic to the client, so that on the client side it can be used without an AJAX call.
Then, from the PHP server, when you need to validate and run all those business-logic rules, use cURL to call the NodeJS business logic and validate the data. That means an HTTP call from the PHP server to the NodeJS server. The NodeJS server will have additional code which takes the data, validates it with the same code, and returns the result.
This way you get:
Faster development - one place to unit test your logic.
Faster client code execution - no need for AJAX, since the same validation JavaScript code is being served by NodeJS to your client.
All business logic lives in the NodeJS server - when business logic changes, you only need to touch this part; so that in the near future, if you need to create some other additional interfaces, then you can use this server to validate your data. It will work just like your Business Rule Server.
The only thing you need to do is set up a NodeJS server alongside your PHP server. You do not need to change all of your code to run on the NodeJS server.
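For example, the shared module might look something like this (a sketch; the rule details and field names are invented, not a real API):

```javascript
// validateOrder.js — shared business-logic module, JSON in / JSON out.
// The same file is served to the browser and loaded by the NodeJS
// validation endpoint that PHP calls over HTTP.
function validateOrder(order) {
  const errors = [];
  // Illustrative rule: height must be 10-20 when width exceeds 5.
  if (order.width > 5 && (order.height < 10 || order.height > 20)) {
    errors.push('height must be between 10 and 20 when width > 5');
  }
  const finalCost = order.height * order.width * order.depth * order.unitCost;
  return { valid: errors.length === 0, errors: errors, finalCost: finalCost };
}

// Export for Node; in the browser the function is simply global
// once the script is served to the client.
if (typeof module !== 'undefined') {
  module.exports = { validateOrder };
}
```

The PHP side would then POST the order JSON to the NodeJS endpoint and trust only that response.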
I had the same issue when I decided to create an application using Laravel for the back end and Angular 2 for the front end. And it seems to me there is no way to avoid duplicating the business logic so far, because:
At the moment PHP and JavaScript cannot be converted from one to the other. It would be nice if we could write the business logic in one language and then embed it into both the back end and the front end. That leads me to my next point:
To achieve that goal, we should write the business logic in one language only, and so far JavaScript is the best candidate. As you know, TypeScript/ECMAScript lets us write the code in an OOP style, and frameworks like Meteor, built on the NodeJS infrastructure, let us write JavaScript that runs on both the back end and the front end.
So from my point of view, we can use TypeScript/ECMAScript to write packages for the business logic. For example, a validation class written in JavaScript can be used on both sides: you write it only once, but it is called from both the front end and the back end.
That's my point. Hope to see some other solutions for this very interesting topic.
One possible solution is to declare your validation rules in a declarative abstract language like XML or JSON Schema.
Then on the client side, say in AngularJS, you can transform these rules with an off-the-shelf form renderer, so that you end up with forms that validate against the declared rules.
Then on your server side API you need to create a reusable validation engine that will validate based on the defined rules.
What you end up with is a single place (your JSON Schema, or wherever you declaratively define your rules) where your form and validation rules are defined.
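A minimal sketch of the idea (this rule format is invented for illustration; in practice you would likely use JSON Schema plus an existing validator on each side):

```javascript
// Declarative rules as plain data, shareable between client and server.
const rules = [
  { field: 'height', min: 10, max: 20, when: { field: 'width', gt: 5 } },
  { field: 'quantity', min: 1, max: 100 }
];

// Tiny engine that both sides implement against the same rule document.
function applyRules(rules, data) {
  const errors = [];
  for (const rule of rules) {
    // Skip conditional rules whose precondition doesn't hold.
    if (rule.when && !(data[rule.when.field] > rule.when.gt)) continue;
    const value = data[rule.field];
    if (rule.min !== undefined && value < rule.min) {
      errors.push(rule.field + ' below ' + rule.min);
    }
    if (rule.max !== undefined && value > rule.max) {
      errors.push(rule.field + ' above ' + rule.max);
    }
  }
  return errors;
}
```

Only the engine is implemented twice; the rules themselves live in one document.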
I was also in this position when I worked on some of my own projects. It is always tempting to use the power of the client's device to do the heavy lifting and then just validate the results on the server side, which results in the business logic appearing twice, on both the front end and the back end.
I think option 1 is the best option, it makes the most sense and seems most logical as well. If you want to expand your web app to native mobile apps in the future you will be able to re-use all of the business logic through calling those APIs. To me, this is a massive win.
If the worry is that making too many API requests could hurt mobile performance, then maybe try to group some of the requests together and perform a single check at the end. So instead of doing a check for each field in a form, do one check when the user submits the entire form. Also, most internet connections will be sufficient if you keep the request and response payloads to a minimum, so I wouldn't worry about this.
A bigger problem I normally come across is that your web app will be broken down into sections, with each section calling the relevant APIs, so the state of the app becomes much more complex to reason about, since the user can jump between these states. You will need to think very carefully about the user journey and ensure the process is not buggy.
Here are some of the common issues I had to deal with:
Does the front-end display an error if the API returns one?
If the user made a mistake and submitted the form, he/she should see an error. But once the user fixes the mistake and submits again, the error should hide and a success message should show.
What if the API is buggy or the internet connection is unstable and nothing is returned? Will the front-end hang?
What if there are multiple error messages? Can/does the front-end display them all?
I would recommend having a lot of unit tests on the front-end to ensure it is stable, even if the business logic lives only on the back-end.
First of all: Never trust the client.
That being said, I deal with this all the time, and sadly I haven't found an easy solution. You need to do validation on both sides, BUT you don't need to do the whole validation on both.
What I do is try to balance it out. On the client side you do most of the simple (but valuable) validation: numbers must be numbers, dates must be dates, data within range, and so on. When the form is submitted it goes to the server to be fully validated, but you have made sure, on the client side, that most of the information is at the very least in its proper format, and some (or most) of it is already validated. The real business logic runs server side, but since most of the data is already correct, the server-side validation will most likely approve the request, so you avoid a lot of resubmits.
Now, how do you avoid having to change things on both sides? Sometimes you won't be able to avoid it, when major changes are required. BUT business-logic parameters can be shared, and as you suggested, this can be done through AJAX. You make one PHP file containing all your business-logic parameters, and with an AJAX request you load it on the client side, only once (when the script is loaded). Everything else is already there on the client side, so if some parameter value in the business logic changes, you only change it in your parameter file. (If a parameter changes after the script was loaded, validation will fail on the server side; you then have to decide whether to force a reload of the script so the parameters are refreshed. I make them reload.)
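A sketch of that parameters-as-data idea (parameter names and values are invented; in practice the params object would be fetched once, via AJAX, from the PHP parameter file):

```javascript
// Business-logic parameters are data, loaded once from the server's
// parameter file at script load. Inlined here for illustration.
const params = { bulkThreshold: 10, bulkDiscount: 0.10 };

// The validation code itself ships with the client; only the numbers
// it works with come from the server, so tuning a threshold never
// requires touching client code.
function priceWithDiscount(quantity, unitPrice, params) {
  const total = quantity * unitPrice;
  return quantity >= params.bulkThreshold
    ? total * (1 - params.bulkDiscount)
    : total;
}
```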
I think you get the idea. This is what I do, and it works pretty ok for me, saves me a lot of recoding.
I hope you find this helpful.
I feel option 1 is the best going forward. API-first development keeps all business logic in one place where it can be tested properly, and any interface can access it. You should NEVER ever ever trust the user!
The power of API-first development is unlimited compared to coding the same logic again and again for each interface that needs it.
Here's a similar thread about whether to put logic client-side or server-side. At the end of the day, each situation is unique and warrants a different plan, but there are some good, guiding tips in this thread.
Client-side vs. Server-side
Today the solution is clearly the one from #ParthaSarathiGhosh, but the near future will certainly give us another solution...
WebAssembly is a low-level assembly-like language that can be shipped with your application and run in the browser. It allows JavaScript to call into the compiled code. It is recommended for heavy scripts that run client side, but it will at the same time allow you to reuse your back-end code in the front end. That way, you can write your logic once for the back end and reuse it in the front end.
Today this technology is already supported in most modern browsers, but it's only usable from C/C++. So you can already use it if you have those skills.
It is surely planned to expand it to other languages as well (there is already some research for C#, e.g. Blazor, and other languages). But the maturity level doesn't seem stable enough for production (even the Blazor development team doesn't recommend it for production yet).
It's only my own opinion, but: logic in NodeJS is one way to reuse JavaScript code, yet I still feel the need for a strongly typed language when it comes to big, maintainable logic code. (Yes, I know TypeScript, and it's really good, but I miss something.) WebAssembly is still a bit young, but it will surely bring a big improvement in respecting the DRY principle.
Very interesting problem. Another caveat can be that we want to support offline mode, i.e. the app must run offline as well.
A further complication arises if, say, your nice server side is all in one technology like Java or .NET, while on the client side you are choosing something like native tools or Xamarin, which is unfortunately not the same as the server.
So Partha's approach seems most promising, but as stated, it will not work in completely offline mode. A slightly modified approach is to treat validation rules as data. But not simple data; rather, say that "the whole damn code is data". You can choose any interpreted language you like (Groovy, JavaScript, CScript, etc.), but the one rule you follow 100% is that ALL BUSINESS LOGIC IS IN THAT CODE!
If you can achieve this, then in offline mode, when you are syncing data, you will also sync this very special type of data, i.e. the code! (So there is no risk of "trusting" the client.)
The offline API and the online API are then 100% the same code, just written in our interpreted language. I think this approach will not only solve the problem but also make business-logic maintenance much simpler. We often create highly complex data models to support rules, when in fact, in 2019, you could simply write the rule with ifs/elses and it would be much simpler. We could train end users in a very simple scripting tool and achieve more with less code.
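In JavaScript, "the code is data" can be as literal as syncing the rule source as a string and compiling it on the fly. A sketch (the rule body is invented; and since new Function executes arbitrary code, you would only ever run rules delivered and signed by your own server):

```javascript
// A rule synced to the client as data (a string), like any other row.
const ruleSource =
  'return order.height * order.width * order.depth * order.unitCost;';

// Compile once; afterwards the offline client and the online API
// evaluate the very same code.
const finalCostRule = new Function('order', ruleSource);

finalCostRule({ height: 2, width: 3, depth: 4, unitCost: 5 }); // 120
```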
I have put together a blog post with these ideas: https://medium.com/#thesaadahmad/business-logic-conundrum-offline-mobile-apps-a06ecc134aee
The thing is this: I have several projects where the customer has a horrific backend definition, returning data in several formats and with lots of stuff I don't need. Since I'm doing the mobile web apps, I'm creating a middle layer in PHP using Slim Framework (www.slimframework.com), which basically gives me a RESTful syntax while removing all the data I don't need and returning it in the format I want (JSON). Of course this middle layer will be deployed on the customer's backend, so even though it makes the frontend implementation much easier for me, I'm a little worried about performance, and also about adding another breaking point to the 'chain'. For performance, I cache every call that reaches my Slim layer as a single JSON file, and I have a text file where I can easily configure the maximum age in seconds for each request.
More technically, I read the real web service with cURL, convert it to a PHP object, remove and change the data as needed, and then json_encode it. I've also thought of other ideas, like creating a batch process in cron that pulls all the customer's web services and generates local JSON files... I don't worry about not getting the latest data, since it's a video catch-up application, so I'm caching every web service call, but the final URL is not cached.
Is there any simpler solution for my workflow?
Sounds good to me.
Sure, you're adding a new potential point of failure, but you're also adding a new place at which problems can be caught and handled resiliently — it sounds like the existing back-end cannot be trusted to do that itself. Unit/stress test the heck out of your intercept layer and gain all confidence that you're not adding undue new risk.
As for performance? Well, as with anything, you need to benchmark it and then balance the results with the other benefits. I love a good abstraction layer† and as long as you're not seeing service-denying levels of performance drop (and I don't see why you should) it's almost certainly well worth it.
If nothing else, you're abstracting away this data backend that you appear to have no control over, which will effectively give you complete flexibility to switch it out for something else someday.
And if the backend changes spontaneously? Well, at least you only need to adjust some isolated portion of your intercept layer, and not every piece of your customer-facing front-end that relies on pieces of that third-party data.
In conclusion, it seems to me like a perfectly robust solution and I think you should absolutely go ahead with it.
† of course, you don't want too many of them. It's up to us to decide how many is appropriate. I usually find zero to be an unacceptable answer. :)
I am trying to build a very user-friendly user interface for my site. The standard right now is to use client side as well as server side validation for forms. Right? I was wondering if I could just forgo client side validation, and rely simply on server side. The validation would be triggered on blur, and will use ajax.
To go one step ahead, I was also planning to save a particular field in the database if it has been validated as correct. Something like a real-time form update.
You see, I am totally new to programming, so I don't know if this approach can work practically. I mean, will there be speed or connection problems? Will it take a toll on the server in case of high traffic? Will the site slow down on HTTPS?
Are there any sites out there that have implemented this?
Also, the way I see it, I would need a separate PHP script for every field! Is there a shorter way?
What you want to do is very doable. In fact, this is the out-of-the-box functionality you would get if you were using JSF with a rich component framework like ICEfaces or PrimeFaces.
Like all web technology, being able to do it in one language means you can do it in others. I have manually written forms like you describe in PHP. It's a substantial amount of work, and when you're first getting started it will definitely be easiest with one script per field backing the form. As you get better, you will discover how you can include the field name in the request and get down to one Ajax script per form. You can of course reduce the burden even further.
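The dispatch idea is language-agnostic; here is the shape of it in JavaScript (your stack is PHP, and the field names and rules here are invented): a whole form backed by one handler keyed on the field name.

```javascript
// One validator per field, keyed by name, instead of one script per field.
const validators = {
  email: v => /^[^@\s]+@[^@\s]+$/.test(v) || 'invalid email',
  age:   v => Number(v) >= 18 || 'must be 18 or older'
};

// A single endpoint reads the field name from the request and dispatches.
function validateField(name, value) {
  const check = validators[name];
  if (!check) return { ok: false, error: 'unknown field' };
  const result = check(value);
  return result === true ? { ok: true } : { ok: false, error: result };
}
```

The client's on-blur Ajax call then always targets the same URL, sending the field name and value as parameters.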
PHP frameworks may be able to make this process less onerous, but I haven't used them and would recommend you avoid them initially until you get your bearings. The magic that a system like Cake or Rails provides is very helpful but you have to understand the tradeoffs and the underlying technology or it will be very hard to build robust systems atop their abstractions.
Calculating the server toll is not intuitive. On the one hand, handling large submissions is more work than handling smaller ones. It may be that you are replacing one big request with several tiny ones for a net gain. It's going to depend on the kind of work you have to do with each form field. For example, auto completion is much more expensive than checking for a username already being taken, which is more expensive than (say) verifying that some string is actually a number or some other obvious validation.
Since you don't want to repeat yourself it's very tempting to put all your validation on one side or the other, but there are tradeoffs either way, and it is true that server-side validation is going to be slower than client-side. But the speed of client-side validation is no substitute for the fact that it will introduce security problems if you count on it. So my general approach is to do validation on the server-side, and if I have time, I will add it to the client side as well so as to improve responsiveness. (In point of fact, I actually start with validation in the database as much as possible, then in the server-side code, then client-side, because this way even if my app blows up I don't have invalid data sticking around to worry about).
It used to be that you could expect your site to run about 1/3 as fast under SSL. I don't have up-to-date numbers but it will always be more expensive than unencrypted. It's just plain more work. SSL setup is also not a great deal of fun. Most sites I've worked on either put the whole thing under SSL, or broke the site into some kind of shopping cart which was encrypted and left the rest alone. I would not spend undue energy trying to optimize this. If you need encryption, use it and get on with your day.
At your stage of the game I would not lose too much sleep over performance. Since you're totally new, focus on the learning process, try to implement the features that you think will be gratifying, and aim for improvement. It's easy to obsess about performance, but you're not going to have the kind of traffic that will squash you for a long time, unless half the planet wants to buy your product while your site is extremely heavy and your host extremely weak.
When heavy traffic does come, you should profile your code, find where you are doing too much work, and fix that; you will get much further that way than by trying to design a performant system up front. You just don't have enough data yet to do that. And most servers these days are well equipped to handle fairly heavy load; you're probably not going to have hundreds of visitors per second sustained in the near future, and it will take a lot more than that to bring down a $20 VPS running a fairly simple PHP site. Consider that one visitor per second works out to about 86,000 hits a day, so you'd need over 8 million hits a day to reach 100/second. You're not going to need a whole second to render a page unless you've done something stupid. Which we all do, a few times, when we're learning. :)
Good luck on your journey!
I'm developing a web site which would make use of PHP and Javascript (jQuery), with AJAX to connect the two. My question is: how should the coding process go?
I know that Javascript is supposed to be used as an extra kick, and should not be relied upon because it can be turned off. So, should I code the entire site in PHP, and then after all of that is done, add the JQuery code, or should I do both side by side?
If you decide to use AJAX as a core part of the site then you are basically excluding people without JavaScript, which, depending on your application, can be a legitimate design decision. If you choose to do that, then you should check whether the user has JavaScript and warn them if they do not.
If you are requiring JavaScript, you can develop it simultaneously with your server-side PHP code. If not, and JavaScript is just a UI enhancement, it should be added in later.
Either way validating user input should always also be done on the server side in addition to the client side. All security related code should be only on the server side.
If you are creating a Rich Internet Application, then JavaScript/Flash/Silverlight becomes a requirement for the user to use your website. In that case you should perform a check to ensure the user has the correct plugin or JavaScript enabled; otherwise display a page stating that your site requires it.
If you're just trying to use JavaScript to enhance your site without it adding a ton of development, then the backend should be developed first. Or, if you need to support a large client base, different versions of your site could be made to support JavaScript and non-JavaScript users.
If you want your site to work even if Javascript is disabled, I would develop the site with PHP first and then add your Javascript enhancements. For example, on one site I developed, there is a calendar that lists upcoming events. When the user clicks on that event, a colorbox will appear and the details of that event will be loaded into that colorbox using AJAX. If Javascript is disabled, clicking on the event will just take the user to the event's page.
I think it depends on the approach you take as a programmer.
If you take a top-down approach and start from the user interface and its features, then start from the Javascripts and HTML markup. In the process you can find out how your server API should respond.
If you take a "server capabilities" approach and implement what you can do in the server, then obviously you start implementing that part first. Then you'll continue with the markup and client javascript code, and adapt it to the available APIs that you built. (And probably, in the process, adapt them too).
In both cases, a bit of a design on paper wouldn't hurt.
Of course, as other people have answered, it also greatly depends on how extensive your javascript interface is, how much burden it takes away from PHP, and if you intend to provide an HTML-only interface where the PHP would need to do much of the work.
For instance, let's say you have a table in your code and you want to allow the user to sort it by different columns. This can be done in JavaScript, in PHP, or in both. It's up to you and your decisions.
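For the JavaScript route, that sort is just a comparator over the row data; a sketch (the column name is made up):

```javascript
// Client-side sort of table row data by an arbitrary column.
// Copies the array so the original row order is preserved.
function sortRows(rows, column, ascending = true) {
  const sorted = [...rows].sort((a, b) =>
    a[column] < b[column] ? -1 : a[column] > b[column] ? 1 : 0);
  return ascending ? sorted : sorted.reverse();
}
```

With JavaScript disabled, PHP could produce the same ordering server-side (e.g. via ORDER BY) on a full page reload.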
If you're planning with jQuery and Ajax in mind then JavaScript off would be almost a completely separate project. However if that is the case, I'd recommend taking the following steps.
Develop the data access and business logic layers.
Build the PHP UI layer w/ full page reloads, etc.
Build an API over the DAL and BL that can be called from JS.
Build the jQuery / Ajax UI from the ground up.
I'm skeptical about the pure server-side UI implementation though, and I'd go about it only if your user base will likely prefer that over the now mainstream JQuery based one.
So, it's been about 3 years since I wrote and went live with my company's main internet facing website. Originally written in php, I've since just been making minor changes here and there to progress the site as we've needed to.
I've wanted to rewrite it from the ground up in the last year or so and now, we want to add some major features so this is a perfect time.
The website in question is as close to a banking website as you'd get (without being a bank; sorry for the obscurity, but the less info I can give out, the better).
For the rewrite, I want to separate the presentation layer from the processing layer as much as I can. I want the end user to be stuck in a box and not be able to get out, so to speak (this is all because of PCI compliance, being PEN tested every 3 months, etc.).
Being probed every 3 months has increasingly made me nervous. We haven't failed yet and there hasn't been a breach yet, but I want to make sure I continue to pass (as much as I can, anyway).
So, I'm considering rewriting the presentation layer in Adobe Flex and doing all the processing in PHP (effectively, IMO, separating presentation from processing). I would do all my normal form validation in Flex (as opposed to JavaScript or PHP) and do my reads and writes to the DB via PHP.
My questions are:
I know Flash has something like 99% market penetration; do people find this to be true? Has anyone seen their own Flash site be inaccessible to someone?
Flash in general has come under a lot of attack over security and the like; I know this. I would use a SWF encryptor, disable debugging (which I got snagged on once in a different application), continue to use HTTPS, and use any other means I can think of.
At the end of the day, everyone knows that if someone wants the data badly enough, they're going to find a way in; I just want to make it as difficult for them as I can.
Any thoughts are appreciated.
-Mario
There are always people who, for one reason or another, don't install the Flash plugin. Bear in mind that these are distinctly in the minority. Realize also that some people still refuse to enable Javascript. The question you have to ask yourself is whether this small group is enough to get you to move off of some newer technologies.
If the answer to that is yes, you will have to resort to vanilla HTML form processing, sending everything to the server for validation, etc.
If the answer is no, don't be afraid to use Flex. It works fine with https protocol, and is as secure as you want. That said, I wouldn't use it for username/password validation on the client; that information should always be encrypted and sent to a secure server. But validation of other types of field (phone number, etc.) shouldn't be a problem.
There are definitely people who don't have Flash installed and yes, there are people who have JavaScript disabled. But no matter whether you develop for the common denominator which is plain HTML forms or if you go high end, e.g. Flex or AJAX, never ever rely on the client to validate the inputs. It's a good first step, but everything that comes from the client, be it Flash or Ajax or Silverlight or whatever, could be forged.
I am relatively new to PHP, but an experienced Java programmer in complex enterprise environments with SOA architecture and multi-tier applications. There, we would normally implement business applications with the business logic in the middle tier.
I am programming an alternative currency system, which should be easy deployable and customizable by individuals and communities; it will be open source. That's why php/mysql seems the best choice for me.
Users have accounts, and they get a balance. Also, the system calculates prices depending on total services delivered and total available assets.
This means that on a purchase a series of calculations happens; the balance and the totals get updated. These are derived figures, something normally not put into a database.
Nevertheless, I resorted to putting triggers and stored procedures into the db, so that in the php code none of these updates are made.
What do people think? Is that a good approach? My experience suggests this is not the best solution and prompts me to implement a middle tier, but I would not even know how to do that. On the other hand, what I have so far with stored procs seems the most appropriate to me.
I hope I made my question clear. All comments appreciated. There might not be a "perfect" solution.
As is the tendency these days, getting away from the DB is generally a good thing. You get easier version control and you get to work in just one language. More than that, I feel that stored procedures are a hard way to go. On the other hand, if you like that stuff and you feel comfortable with SPs in MySql, they're not bad, but my feeling has always been that they're harder to debug and harder to handle.
On the triggers issue, I'm not sure they're necessary for your app. Since the events that trigger the calculations are invoked by the user, those things can happen in PHP, even if the user is redirected to a "waiting" page or another page in the meantime. Obviously, true triggers can only be done at the DB level. You could use a daemon that runs a PHP script every X seconds, but avoid that at all costs and try to trigger the event from the user side.
All of this said, I wanted to plug my favorite solution for the data access layer on PHP: Doctrine. It's not perfect, but PHP being what it is, it's good enough. Does most of what you want, and keeps you working with objects instead of database procedures and so forth.
Regarding your title: multiple tiers are totally doable in PHP, but you have to build them and respect them. PHP code can call other PHP code, and it is now (5.2+) nicely OO and all that. Do make sure to ignore the fact that a lot of PHP code you'll see around is total crap and does not even use methods, let alone tiers and decent OO modelling. It's all possible if you want to do it, including writing your own (or using an existing) MVC solution.
One issue with pushing lots of features to the DB level, instead of a data abstraction layer, is that you get locked into the DBMS's feature set. Open source software is often written so that it can be used with different DBs (certainly not always). It's possible that down the road you will want to make it easy to port to postgres or some other DBMS. Using lots of MySQL specific features now will make that harder.
There is absolutely nothing wrong with using triggers and stored procedures and other features that are provided by your DB server. It works and works well, you are using the full potential of the DB, instead of simply relegating it to being a simplistic data store.
However, I'm sure that for every developer on here who agrees with you (and me), there are at least as many who think the exact opposite and have had good experiences with doing that.
Thanks guys.
I was using DB triggers because I thought it might be easier to control transaction integrity that way. As you may have realized, I am a developer who is also trying to get a grip on DB knowledge.
Now I see there is the option to spread the PHP code over multiple tiers, not only logically but also physically, by deploying on different servers.
However, at this stage of development, I think I'll stick with my triggers/SP solution, as it doesn't feel all that wrong. Distributing over multiple layers would require me to redesign my app substantially.
Also, thinking open source: if someone likes the alternative money system, it might be easier for people to just change the layout for their requirements, and I would not need to worry about calculations going wrong if people touch the PHP code.
On the other hand, of course, I agree that db stuff might get very hard to debug.
The DB init scripts are in source control, as are the php files :)
Thanks again