Here's the context: we're currently using a basic web stack, and our website builds HTML templates with data it gets directly from the database.
For many reasons, we're splitting this into two projects: one will be responsible for talking to the database directly, the other for displaying the data.
To put it simply, one is the API, the other is the client.
Now we're wondering how we should ask our API for data. As we see it, there are two very different options (a rough sketch of both follows):
One request, one route, per page. We would get back one big object containing everything needed to build the corresponding page.
One request per small chunk of data. For example, on a listing page we'd make one request to get data about the currently logged-in user and display their name and avatar, another request to get all the articles, another request to get data about the current page's category, and so on.
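To make the two options concrete, here's roughly how the client would consume each one (all endpoint names are made up for illustration):

    // Option 1: a single page-shaped request returning everything the view needs.
    const page = await fetch("/api/pages/article-list").then(r => r.json());
    // page = { user: {...}, articles: [...], category: {...} }

    // Option 2: several small resource requests, composed on the client.
    const [user, articles, category] = await Promise.all([
      fetch("/api/users/me").then(r => r.json()),
      fetch("/api/articles?page=1").then(r => r.json()),
      fetch("/api/categories/42").then(r => r.json()),
    ]);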
Some of us like the first option; I don't like it at all. I feel like we're going to end up with a lot of redundancy. I'm also not sure one huge request is that much faster than X tiny requests. I also don't like binding data to a specific page, as I feel the API should be (somewhat) independent of our front-end website.
Others don't like the second option: they fear we'll overload the server by making too many calls, and I can understand that fear. It also looks like it'll be hard to properly define what to send and what not to send without any redundancy. If we only send what's needed to display a page, isn't that the first option in the end? But isn't sending unneeded information a waste?
What do you guys think?
The first approach will work well if getting all the data is fast enough. The fewer requests, the faster the app. By redundancy I think you mean code redundancy, because sending the same amount of data in one request will definitely be faster than in ten small non-parallel ones (network overhead). If you send a few parallel requests from the UI you can of course get a performance gain, but you should take into account that browsers limit the number of parallel requests.
Another case: if some data is fast to fetch but other data is slow, you can return the fast data first, show a loading indicator in the UI, and fill in the slow data when it arrives. That improves the user experience by showing the page as fast as possible.
The second approach is more flexible, as you can reuse some of the requests on other pages. But it comes at a price: the logic for making these requests (gathering the information) moves into the UI code, making it more complex. And if you need the same data in another app, such as a mobile client, you have to copy this logic. As a rule, writing such code on the backend side is easier.
You can also take a look at a pattern that lets you keep the business/domain logic in one service and the "frontend-friendly" logic in another (an orchestration service).
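A minimal sketch of that orchestration idea (NodeJS/express and the endpoint names are my assumptions, not part of the answer above): a thin service exposes one page-shaped route and fans out to the domain API in parallel.

    import express from "express";

    const app = express();
    const API = "http://domain-api.internal";   // hypothetical business/domain service

    // "Frontend-friendly" route: one call for the UI, several calls behind the scenes.
    app.get("/pages/article-list", async (_req, res) => {
      try {
        const [user, articles, category] = await Promise.all([
          fetch(`${API}/users/me`).then(r => r.json()),
          fetch(`${API}/articles?page=1`).then(r => r.json()),
          fetch(`${API}/categories/current`).then(r => r.json()),
        ]);
        res.json({ user, articles, category });
      } catch {
        res.status(502).json({ error: "domain API unavailable" });
      }
    });

    app.listen(3000);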
As the needs of web apps have grown, I have found myself writing more and more API-driven web applications. I use frameworks like AngularJS to build rich web clients that communicate with these APIs. Currently I am using PHP (Lumen or Laravel) for the server side/API.
The problem is, I often find myself repeating business logic between the client and the server side.
When I say business logic, I mean rules like the following for an order form (a couple of these are sketched as code after the list):
You can buy X if you buy Y.
You cannot buy Y if you have Z.
If you buy 10 of these you get 10% off.
Height x Width x Depth x Cost = Final Cost.
Height must be between 10 and 20 if your width is greater than 5.
Etc etc.
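For instance, two of those rules might look like this as plain functions (a hedged sketch in TypeScript; the names and numbers are illustrative only), and this is exactly the kind of code that ends up duplicated:

    // "If you buy 10 of these you get 10% off."
    function unitPriceFor(quantity: number, basePrice: number): number {
      return quantity >= 10 ? basePrice * 0.9 : basePrice;
    }

    // "Height x Width x Depth x Cost = Final Cost", with the height/width constraint.
    function finalCost(height: number, width: number, depth: number, cost: number): number {
      if (width > 5 && (height < 10 || height > 20)) {
        throw new Error("Height must be between 10 and 20 when width is greater than 5.");
      }
      return height * width * depth * cost;
    }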
To make this app both responsive and fast, the logic for calculations (along with other business logic) is being done on the client side. Since we shouldn't trust the client, I then re-verify those numbers on the server side. This logic can get pretty complex and writing this complex logic in both places feels dangerous.
I have three solutions in mind:
Make everything that requires business logic go through an AJAX call to the API. All the business logic would live in one place and could be tested once. This could be slow, since the client would have to wait for every change they make to the order form before getting updated values and results. Having a very fast API would help with this. The main downside is that this may not work well when users are on poor connections (mobile devices).
Write the business logic on the client side AND on the server side. The client gets instant feedback as they make changes to the form, and we validate all the data on the server once they submit. The downside here is that we have to duplicate all the business logic and test both sides. This is certainly more work and would make future changes fragile.
Trust the client!?! Write all the business logic on the client side and assume they didn't tamper with the data. In my current scenario I am working on a quote builder which will always get reviewed by a human, so maybe this is actually OK.
Honestly, I am not happy about any of the solutions which is why I am reaching out to the community for advice. I would love to hear your opinions or approaches to this problem!
You can do one more thing.
Create your validation and business-logic code in JavaScript only, and make it as loosely coupled as possible. Ideally, it should take only JSON as input and produce JSON as output.
Then set up a separate NodeJS server alongside the existing PHP server to serve that logic to the client, so that on the client side it can be used without an AJAX call.
Then, from the PHP server, when you need to validate and run all those business-logic rules, use cURL to call the NodeJS server and validate the data there. That means an HTTP call from the PHP server to the NodeJS server. The NodeJS server will have a small additional endpoint that takes the data, validates it with the same code, and returns the result.
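A rough sketch of that layout (express and the route name are assumptions; the point is that one JSON-in/JSON-out function is both bundled for the browser and exposed over HTTP for PHP's cURL call):

    // rules.ts - shared, framework-free: JSON in, JSON out.
    export function validateOrder(order: { quantity: number; price: number }) {
      const errors: string[] = [];
      if (!Number.isInteger(order.quantity) || order.quantity < 1) errors.push("quantity must be a positive integer");
      if (order.price < 0) errors.push("price cannot be negative");
      return { valid: errors.length === 0, errors };
    }

    // server.ts - tiny NodeJS endpoint that the PHP server calls with cURL.
    import express from "express";
    import { validateOrder } from "./rules";

    const app = express();
    app.use(express.json());
    app.post("/validate/order", (req, res) => res.json(validateOrder(req.body)));
    app.listen(3001);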
This way you get:
Faster development - one place to unit test your logic.
Faster client code execution - no need for AJAX, since the same validation JavaScript code is being served by NodeJS to your client.
All business logic lives in the NodeJS server - when the business logic changes, you only need to touch this part, and if you need to add other interfaces in the near future, you can use this same server to validate their data. It works just like a business-rule server.
The only thing you need to do is set up a NodeJS server alongside your PHP server; you do not need to move all of your code to the NodeJS server.
I had the same issue when I decided to create an application using Laravel for the back end and Angular 2 for the front end. It seems to me there is no way to avoid duplicating the business logic so far, because:
At the moment, PHP and JavaScript cannot be converted from one to the other. It would be nice if we could write the business logic in one language and then embed it in both the back end and the front end. That leads me to the next point:
To achieve that, we should write the business logic in one language only, and so far JavaScript is the best candidate. As you know, TypeScript/ECMAScript lets us write the code in an OOP style, and frameworks like Meteor on NodeJS infrastructure let us write JavaScript that runs on both sides, back end and front end.
So from my point of view, we can use TypeScript/ECMAScript to write packages for the business logic. For example, a validation class written in JavaScript can be used on both sides, so you write it only once, even though it gets called from both the front end and the back end.
That's my point. Hope to see some other solutions for this very interesting topic.
One possible solution is to declare your validation rules in a declarative abstract language like XML or JSON Schema.
Then on the client side, say with AngularJS, you can feed these rules into an off-the-shelf form renderer. So on the client side you end up with forms that validate against the declared rules.
Then in your server-side API you create a reusable validation engine that validates based on the same defined rules.
What you end up with is a single place (your JSON Schema, or wherever you declaratively define your rules) where your form and validation rules are defined.
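A small sketch of that approach, assuming the Ajv library as the validation engine (the answer doesn't name one) and a made-up schema; the same schema object can be handed to a client-side form renderer and compiled on the server:

    import Ajv from "ajv";

    // One declarative definition of the rules, shareable as plain JSON.
    const orderSchema = {
      type: "object",
      properties: {
        width:  { type: "number", minimum: 0 },
        height: { type: "number", minimum: 10, maximum: 20 },
        depth:  { type: "number", minimum: 0 },
      },
      required: ["width", "height", "depth"],
    };

    const ajv = new Ajv();
    const validate = ajv.compile(orderSchema);

    // The exact same call runs in the browser bundle and in the NodeJS API.
    const ok = validate({ width: 6, height: 25, depth: 3 });
    if (!ok) console.log(validate.errors);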
I was also in this position when I worked on some of my own projects. It is always tempting to use the power of the client's device to do the heavy lifting and then just validate the results on the server side, which results in the business logic appearing twice, on both the front end and the back end.
I think option 1 is the best: it makes the most sense and seems the most logical as well. If you want to expand your web app to native mobile apps in the future, you will be able to reuse all of the business logic by calling those APIs. To me, this is a massive win.
If the worry is making too many API requests, which could hurt mobile performance, then maybe try to group some of the requests together and perform a single check at the end. So instead of doing a check for each field in a form, do one check when the user submits the entire form. Also, most internet connections will be sufficient if you keep the request and response data to a minimum, so I wouldn't worry about this.
A bigger problem I normally come across is that the web app gets broken down into sections, with each section calling the relevant APIs. The state of the app becomes much harder to reason about, since the user can jump between these states. You will need to think very carefully about the user journey and ensure that the process is not buggy.
Here are some of the common issues I had to deal with:
Does the front end display an error if the API returns one?
If the user made a mistake and submitted the form, they should see an error. But once the user fixes the mistake and submits again, the error should disappear and a success message should show.
What if the API is buggy or the internet connection is unstable and nothing is returned? Will the front end hang?
What if there are multiple error messages? Can/does the front end display them all?
I would recommend having a lot of unit tests on the front end to ensure it is stable, even if the business logic is only on the back end.
First of all: Never trust the client.
That being said, I deal with this all the time, and sadly I haven't found an easy solution. You need to do validation on both sides, BUT you don't need to do the whole validation on both.
What I do is try to balance it out. On the client side, you do most of the simple (but valuable) validation: the normal stuff, numbers must be numbers, dates must be dates, data within range, etc. When the user submits, the data goes to the server to be fully validated, but you've made sure on the client side that most of the information is at least in its proper format and some (or most) of it is already valid. The real business logic runs server side, but since most of the data is already correct, the server-side validation will most likely approve the request, so you avoid a lot of resubmits.
Now, how do you avoid having to change things on both sides? Well, sometimes you won't be able to avoid it, when major changes are required, BUT business-logic parameters can be shared, and as you suggested, this can be done through AJAX. You make a PHP file with all your business-logic parameters, and with an AJAX request you load it on the client side, only once (when the script is loaded). Optimize this so you fetch only the parameter values; everything else should already be on the client side. Then, if some parameter value in the business logic changes, you only change it in your parameter file. (If a parameter changes after the script was loaded, validation will fail on the server side; you then have to decide whether to force a reload of the script so the parameters are refreshed. I make them reload.)
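A minimal sketch of that parameter-sharing idea (the endpoint and parameter names are hypothetical): the rules themselves stay in the client code, only the values are fetched once from the server.

    // Hypothetical endpoint serving the business-logic parameters as JSON,
    // e.g. { "maxQuantity": 100, "bulkDiscountThreshold": 10, "bulkDiscountRate": 0.1 }
    interface BusinessParams {
      maxQuantity: number;
      bulkDiscountThreshold: number;
      bulkDiscountRate: number;
    }

    let params: BusinessParams;

    // Loaded once when the script starts; if a value changes server-side,
    // only the parameter file changes, not the client rules.
    async function loadParams(): Promise<void> {
      const res = await fetch("/api/business-params");
      params = await res.json();
    }

    function quantityIsValid(qty: number): boolean {
      return qty > 0 && qty <= params.maxQuantity;
    }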
I think you get the idea. This is what I do, and it works pretty well for me; it saves me a lot of recoding.
I hope you find this helpful.
I feel option 1 is the best going forward. API-first development allows all the business logic to be tested and verified in one place, and lets any interface access it. You should NEVER, ever trust the user!
The power of API-first development is enormous compared to coding the same logic again and again for each interface you need.
Here's a similar thread about whether to put logic client-side or server-side. At the end of the day, each situation is unique and warrants a different plan, but there are some good, guiding tips in this thread.
Client-side vs. Server-side
Today the solution is clearly the one from @ParthaSarathiGhosh, but the near future will certainly give us another solution...
WebAssembly is a low-level, assembly-like language that can be shipped with your application and run in the browser. It allows JavaScript to call into the compiled code. It's intended for heavy scripts that run client side, but it also lets you reuse your backend code in the front end. That way, you can write your logic once for the backend and reuse it in the front.
Today this technology is already supported in most modern browsers, but you can only compile to it from C/C++. So you can already use it if you have those skills.
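Just to show the calling side (the module path and exported function are hypothetical; the .wasm itself would be compiled from your C/C++ rules):

    // Load the compiled rules module and call an exported validation function.
    // The same module can also be instantiated in NodeJS for the server-side check.
    async function loadValidator() {
      const { instance } = await WebAssembly.instantiateStreaming(
        fetch("/assets/business-rules.wasm")
      );
      // Hypothetical export: returns 0 if the order is valid, an error code otherwise.
      return instance.exports.validate_order as (height: number, width: number, depth: number) => number;
    }

    loadValidator().then(validate => console.log(validate(15, 6, 3)));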
Support for other languages is certainly planned (there is already work on C#, e.g. Blazor, and on other languages), but the maturity level doesn't seem stable enough for production yet (even the Blazor development team doesn't recommend it for production).
This is only my own opinion, but: logic in NodeJS is one way to reuse the JavaScript code, yet I still feel the need for a strongly typed language when it comes to large, maintainable logic code (yes, I know TypeScript and it's really good, but I still miss something). WebAssembly is still a bit young, but it will surely bring a big improvement in respecting the DRY principle.
Very interesting problem. Another caveat can be that we want to support offline mode, i.e. the app must run offline as well.
A further complication arises if, say, your server side is all in one technology like Java or .NET, and on the client side you are choosing between something like native tools or Xamarin, but unfortunately not the same stack as the server.
So Partha's approach seems the most promising, but as stated, it will not work in a completely offline mode. A slightly modified approach is to treat the validation rules as data. Not simple data, though; rather, say that "the whole damn code is data". You can choose any interpreted language you like (Groovy, JavaScript, CScript, etc.), but one rule you follow 100% is that ALL BUSINESS LOGIC IS IN THAT CODE!
If you can achieve this, then in offline mode, when you are syncing data, you will also sync this very special type of data, i.e. the code! (So there is no risk of "trusting" the client.)
And then the offline API and the online API are 100% the same code, just written in our interpreted language. I think this approach will not only solve this problem but also make business-logic maintenance much simpler. We often create highly complex data models to support rules, when in fact, in 2019, you could simply express the rule with ifs/elses and it would be much simpler. We could train end users on a very simple scripting tool and achieve less code that does better things.
I have put together a blog post with these ideas: https://medium.com/#thesaadahmad/business-logic-conundrum-offline-mobile-apps-a06ecc134aee
I'm just getting into using REST and have started building my first app following this design model. From what I can gather, the idea is to build your service as an API that your website itself is a consumer of.
This makes sense to me since my web app does a lot of AJAX calls; however, it seems a little wasteful to authenticate each request just to avoid using sessions. Is this something I have to accept as part of the REST design process?
Also, making AJAX calls works fine, but say I just need to show a view of the user's profile: does this mean I now need to make a cURL call to my API to pull that data? At this point I know I'm working internally, so is authentication even required?
Some remarks:
While you could set up your whole application to have a REST interface, you should set it up so that you can still call it internally. Calling it over HTTP and getting results back over HTTP is only input processing and output rendering. So, if you separate those concerns you get a flow: input processing -> method call -> data return -> data rendering. Shave off the first and last bits, and what do you have left? A function call that returns data, which you can just use in your code. Separating out the functionality that translates an 'outside' call into an 'internal' one, and that renders 'internal' data into an 'external' format (XML, JSON, HTML, whatever you desire), keeps your app efficient while staying fully REST capable.
Authentication is needed if you allow outside calls: even if you don't 'tell' other users that data can be retrieved a certain way, it is still easily discoverable. I don't see why you wouldn't want to use sessions for this authentication (which most likely happens in the aforementioned translation from an 'outside' call to an internal one). I would not make 'not using sessions' a requirement, but there is no reason you couldn't allow several methods of authentication (sessions, re-authentication on every request, tokens, etc.).
Typically I prefer to produce an interface that can be called using standard PHP, and then add a layer on top of it that provides authentication and RESTful access. So you can access, for example:
    http://example/api/fetchAllFromUsers?auth-key=XXXXX
Which translates to:
    $internalInterface = new Api();
    $internalInterface->fetchAllFromUsers();
Instead of authenticating each time, save a chunk of state (in, e.g., a cookie) that identifies your session, and use that. It then becomes either a parameter to a GET (using the ?name=value syntax) or part of the URI itself, e.g.
http://example.com/application/account/ACCTNO/TOKEN
where ACCTNO and TOKEN identify the account and the authenticated session respectively.
This may seem a little flaky at first, but it means that your application, as it grows larger, never needs complicated load balancing with session state and so on; a simple proxy scheme works fine. This reduces the architectural complexity by a staggering amount.
Need some advice on the best approach.
We are about to start a new CodeIgniter (CI) web project where we need to pull data heavily from external web services or APIs.
Is it better to manipulate the data programmatically (in objects or arrays) when I need to sort it, or to store it in a database and query it with ORDER BY, GROUP BY, etc.?
Is there a known architecture or framework for this?
What's the best approach used nowadays, for example how aggregator websites pull data from various vendor APIs?
I would suggest getting the data using cURL or similar, manipulating it as arrays, and then storing it.
Make sure you build in some kind of caching as well, so you don't end up making unnecessary requests.
The reason behind this method is to process the data once rather than every time your site is requested.
After a while, I've come up with a plan and it's working great!
Consume the web services
Deserialize the XML into arrays/objects
Store them in a cache (APC/file cache; I'm using CodeIgniter by the way), expiring every 4 hours
The first request takes 3-4 seconds to complete (the first call to the web service grabs the data and stores it in the cache), while subsequent requests from users take 0.002 seconds thanks to the cached data. Four hours later the cycle repeats, so the data is refreshed from the web service every four hours.
If you are the first user to access the site after each refresh, you are the unlucky chap, but your sacrifice benefits all the other chaps.
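The pattern described is essentially cache-aside with a TTL; here's a rough sketch of the same idea (the poster's real version is CodeIgniter/APC, so the names below are illustrative):

    // Simple in-memory cache-aside with a 4-hour TTL.
    const TTL_MS = 4 * 60 * 60 * 1000;
    const cache = new Map<string, { expires: number; data: unknown }>();

    async function getCached(url: string): Promise<unknown> {
      const hit = cache.get(url);
      if (hit && hit.expires > Date.now()) return hit.data;   // fast path: cached data

      const data = await fetch(url).then(r => r.json());      // slow path: the 3-4 s web-service call
      cache.set(url, { expires: Date.now() + TTL_MS, data });
      return data;
    }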
Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products in various ways, including:
1) Pulling data from AJAX calls that return data in cool, JavaScripty ways;
2) Creating iPhone applications that use that data;
3) Having other web applications use that data for their own ends.
Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off.
My next logical step would then be to create a developer key to restrict access, which would work fine for web apps, but not so much for AJAX calls. (As I see it, the key would have to be provided in the JavaScript, which is plain text and easily seen, and hence there's actually no security at all, particularly if we'd be using our own developer key on our own site to make these AJAX calls.)
So my question: after looking at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practice" for developer keys, can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?
I think that 2-legged OAuth is what you want to satisfy #2 and #3. For #1, I would suggest that instead of the customer making JS requests directly against your application, they proxy those requests through their own web application.
A midway solution is to require an API key, and then demand that whoever uses it doesn't call your API directly from AJAX, but instead wraps their calls in a server-side request, e.g.:
AJAX -> customer server -> your server -> customer server -> user
Creating a simple PHP API for interested parties shouldn't be too tricky, and your own iPhone applications would obviously cut out the middle man, shipping with their own API key.
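A rough sketch of that proxying step (the route, upstream URL, and X-Api-Key header are assumptions; the answer suggests doing this in PHP, this just shows the shape): the customer's AJAX talks to their own server, which forwards the call with the secret key attached.

    import express from "express";

    const app = express();
    const API_KEY = process.env.PRODUCT_API_KEY ?? "";   // kept server-side, never sent to the browser

    // The customer's AJAX hits this route; the route forwards the call with the key attached.
    app.get("/proxy/products", async (_req, res) => {
      const upstream = await fetch("https://api.example.com/products", {
        headers: { "X-Api-Key": API_KEY },
      });
      res.status(upstream.status).json(await upstream.json());
    });

    app.listen(8080);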
OAuth and OpenID are unlikely to have much to do with the AJAX calls directly. Most likely, you'll have some sort of authorization filter in front of your AJAX handler that checks a cookie, and maybe that cookie is set as a result of an OpenID authentication.
It seems like this is coming down to a question of "how do I prevent screen scraping." If only logged-in customers get to see the prices, that's one thing, but assuming you're like most retail sites and your barrier to customer sign-up is as low as possible, that doesn't really help.
And, hey, if your prices aren't available, you don't get to show up in search engines like Froogle or Nextag or PriceGrabber. But that's more of a business strategy decision, not a programming one.