I've always wondered how to decide between using server-side code and client-side code to build HTML pages. I'll use a very simple PHP vs JavaScript/jQuery example to explain my question further. Your advice and comments are very much appreciated.
Say I'm about to present a web page where the user selects a type of report. Which approach makes more sense?
For server-side creation, I'd do this:
<div id="reportChoices">
<?php
// filename: reportScreen.php
// just for the sake of simplicity, say a database returns the following rows
// that indicates the type of reports that are available:
$results = array(
array("htmlID"=>"battingaverage", "htmlLabel"=>"Batting AVG report"),
array("htmlID"=>"homeruntotals", "htmlLabel"=>"Home Run Totals report"),
);
foreach ($results AS $data)
echo "<input type='radio' name='reportType' value='{$data['htmlID']}'/>{$data['htmlLabel']}";
?>
</div>
Using client-side code, I'd have JavaScript build the page like the following:
<!-- filename: reportScreen.html -->
<div id="reportChoices">
</div>
<!-- I could put this in the document.ready handler, of course -->
<script type="text/javascript">
$.getJSON("rt.php", {}, function(data) {
var mainDiv = $("#reportChoices");
$.each(data, function(idx, jsonData) {
var newInput = $(document.createElement('input'));
newInput
.attr("type", "radio")
.attr("name", "reportType")
.attr("value", jsonData["htmlID"])
mainDiv.append(newInput).append(jsonData["htmlLabel"]);
});
}); // close the $.getJSON call
</script>
All I would need on the server is a data-dump PHP script such as:
<?php
// filename: rt.php
// again, let's assume something like this was returned from the db regarding available report types
$results = array(
array("htmlID"=>"battingaverage", "htmlLabel"=>"Batting AVG report"),
array("htmlID"=>"homeruntotals", "htmlLabel"=>"Home Run Totals report"),
);
// hint to the client that the response is JSON rather than HTML
header('Content-Type: application/json');
echo json_encode($results);
?>
This is a very simple example, but from it I see pros and cons in different areas.
1 - The server-side solution has the advantage of hiding most of the actual programming logic behind how everything is built. When the user looks at the page source, all they see is the already-built web page. In other words, the client-side solution gives away all your source code and programming logic for how certain things are built, although you could use a minifier to make your source look more cryptic.
2 - The client-side solution transfers the "resource load" onto the client system (i.e. the browser needs to use the client's computer resources to build most of the page) whereas the server side solution bogs down, well, the server.
3 - The client-side solution is probably more elegant when it comes to maintainability and readability. But then again, I could have used php libraries that modularize HTML controls and make it a lot more readable.
Any comments? Thanks in advance.
Con (client solution): The client-side solution relies on the client to execute your code properly. As you have no control over what client system will execute your code, it's much harder to ensure it will consistently give the same results as the server-side solution.
This particular problem doesn't really seem to need a client-side solution, does it? I'd stick with the server-side solution. The only extra work there is a foreach loop with one echo, and that's not really so resource-heavy, is it (unless you've profiled it and know that it IS)? And the resulting code is all in one place and simpler.
I'm sceptical that moving the report generation on to the client side really saves any resources - remember that it's still doing an HTTP request back to your (?) server, so the database processing still gets done.
Also, giving away your database schema on the client side could be a recipe for database attacks.
Perhaps you should use a model-view-controller pattern to separate the business logic from the presentation on the server? At least this keeps all the code in one place but still lets you logically separate the components. Look at something like Zend Framework if this sounds useful to you.
Typically, it's best not to depend on Javascript being enabled on the client. In addition, your page will not be crawled by most search engines. You also expose information about your server/server-side code (unless you explicitly abstract it).
If you want to transform data into the view, you might want to take a look at XSLT. Another thing to read up on if you have not already, is progressive enhancement.
http://alistapart.com/articles/understandingprogressiveenhancement/
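As a rough sketch of progressive enhancement applied to the report example above (assuming the server-rendered radio buttons sit inside an ordinary form that posts back fine without JS; the form wrapper is an assumption on my part):

<script type="text/javascript">
// sketch: the server-rendered form works on its own; this script only
// layers behavior on top when JS is available, e.g. submitting the
// form as soon as a report type is picked
$(function() {
$("#reportChoices input[name='reportType']").change(function() {
$(this).closest("form").submit();
});
});
</script>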
The client-side solution you presented is actually less efficient, because there's an extra HTTP request, and it's possibly not very efficient on the server side either, in that all the data must be processed with json_encode.
However, if what you're working on is a rich web application that depends on Javascript, I see no problem with doing everything with Javascript if you want to.
You can maintain a better separation of concerns by building it on the client side, but that can come at a cost of user experience if there is a lot to load (plus you have to consider what FrustratedWithForms mentioned). To me it's easier to build it on the server side, which means that becomes a more desirable option if you are on a strict timeline, but decide based on your skill set.
** ANGULAR 1.X **
Hello everyone! I need help with making this $http.get function asynchronous. As you can see from the code, my current temporary solution is to call the displayData scope function on a setInterval, which obviously is not an efficient solution, because it takes up too much CPU, too much of the user's data, and can cause some flickering in the UI. I want the array to be updated when the database is updated.
Please do not recommend I switch to other frameworks.
Thank you.
$scope.displayData = function() {
$http.get("read.php").success(function(data) {
$scope.links = data;
});
}
setInterval(function(){$scope.displayData();}, 500);
This is my PHP ("read.php")
<?php
include("../php/connect.php");
session_start();
$output = array();
$team_id = $_SESSION['team_id'];
// use a prepared statement so the session value can't break the query
$sql = "SELECT record_id, user_id, link, note, timestamp FROM
link_bank WHERE team_id = ? AND status = 'valid'";
$stmt = mysqli_prepare($connect, $sql);
mysqli_stmt_bind_param($stmt, "s", $team_id);
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);
// fetch_assoc avoids duplicated numeric keys in the JSON output
while ($row = mysqli_fetch_assoc($result)) {
$output[] = $row;
}
// always emit JSON, even with zero rows, so the client never
// receives an empty (unparseable) response body
echo json_encode($output);
?>
$http.get is already asynchronous! An asynchronous function is just any function that finishes running at some unknown time in the future.
What you are really doing is called polling. This is where you periodically send a request to your server to get the latest data, and there are several reasons why it's not a good idea (including the flickering and high CPU usage you spoke of).
I know you said you don't want anyone to suggest other frameworks, but trying to write your own framework that will notify the client when the database is updated is a monumental task. There is no short snippet of code we can give you that will give you that functionality from just PHP and Javascript.
However, you could try to roll your own code using WebSockets. That is the most straightforward, non-framework way to have server-to-client communication in the way you are suggesting.
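For instance, the client side of that could look like this minimal sketch (the endpoint URL is hypothetical, and you would still need a WebSocket server that pushes the updated rows as JSON whenever the database changes):

// sketch: let the server push changes instead of polling for them
var socket = new WebSocket("ws://example.com/links"); // hypothetical endpoint
socket.onmessage = function(event) {
// $apply makes Angular notice a change made outside its digest cycle
$scope.$apply(function() {
$scope.links = JSON.parse(event.data);
});
};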
Some details from checking the debug tools: there are tons of network requests that take a long time and return no new data.
Use the timeline recording to get some details on the client-side processing.
The client side isn't suffering that much in the view I'm seeing, but without data it's hard to really assess.
By taking a timeline and then zooming in on a section of the recorded data, you can see which functions were actually called and how long they took. There are also nice extensions for checking $watchers in Angular:
https://chrome.google.com/webstore/detail/angular-watchers/nlmjblobloedpmkmmckeehnbfalnjnjk?hl=en
You can use the {{::value}} one-time binding syntax to reduce your watchers if bindings are only ever updated once (useful in ng-repeats many times). This helps to reduce digest time, since fewer watchers need to be checked for changes. If you have a very long list of elements (thousands), then using some sort of virtual scroller or paging UI component is helpful to avoid creating thousands of rows' worth of elements in the DOM.
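For example, a hypothetical bind-once version of the links list (this requires Angular 1.3+; each binding is evaluated once and its watcher is then deregistered):

<!-- sketch: '::' marks one-time bindings, so no watchers remain after render -->
<ul>
<li ng-repeat="link in ::links">{{::link.note}}</li>
</ul>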
On the server side, you can use the Xdebug plugin/module to collect profiling data and KCachegrind to evaluate that data and look for where the server is spending the most time. You could also utilize some sort of server-side caching and smarter logic there to avoid hitting the database constantly if nothing has changed (perhaps look into Redis or memcached for speeding up those server-side things, or see if it's just network latency).
Changing languages or frameworks without actually profiling to get data on what exactly is slow isn't a great move, IMO; you'll just be jumping around between whatever is the new hotness without an understanding of why, or of whether it matters.
As a baseline, a PHP script that does basically nothing but spit out a hard-coded JSON response can be quite fast; with Redis or memcached there wouldn't be much extra overhead to get a response back, especially an empty one.
I have a form which I'm sending with jQuery's post function to a PHP script.
Then in the PHP script I do some checks and send back a response with a format like so:
$output = json_encode(array('type'=>'error', 'text' => 'your id is wrong!'));
die($output);
In the page where I have the form, I can use a simple way to fire some functions based on the response. For example:
if(response.type == 'error'){
output = '<div class="clienConError">'+response.text+'</div>';
$(".results").hide().html(output).slideDown();
}
which means: if the response is set as the error type, do this and that...
My question is:
Is it possible to send back a jQuery function? So instead of saying "if it's a response set as the error type, do this", I'd say "never mind what the response is, just do what the response tells you" (for example hide some element, inject some HTML somewhere, and so on... all kinds of jQuery functions).
If it is possible, it would give me a few advantages. One of them is the ability to actually hide some jQuery functions (in the PHP script).
Although this is generally not recommended, it is possible to return JavaScript code from a PHP script, preferably with the appropriate Content-Type: text/javascript header.
On the client side, you may execute the generated code using eval or by injecting it into the page via a newly created <script> tag.
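The injection variant looks roughly like this (a sketch with a hypothetical endpoint, shown only to illustrate the mechanism the warnings below apply to):

// sketch of the discouraged approach: fetch generated JS source,
// then inject it as a <script> element so the browser executes it
$.get("commands.php", function(code) { // hypothetical endpoint
var s = document.createElement("script");
s.text = code;
document.body.appendChild(s); // runs the code on insertion
});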
Dynamic scripts are discouraged for several reasons:
Harder to maintain: the generated code is, by essence, not static, and thus you can never see and edit the whole code as you can with a static file.
At best sloppy from the security point of view: allowing execution of arbitrary code is never a good idea, and attackers will more than certainly try to leverage this to perform client side attacks such as XSS.
Not friendly towards optimizers: contrary to a whole static script which can be parsed and optimized as soon as the file has finished loading, multiple fragmented bits of script cannot benefit from optimization.
Moreover, attempting to hide client code is a battle already lost. If the code is executed client side, then it is visible client side, period. Any user, and I insist, any user, can open the F12 debugger of their browser and place breakpoints or use step-by-step mode. More savvy users might overwrite the eval function or hook the beforescriptexecute event to inspect what’s going on. And there are even more ways.
The best you can do is obfuscate your code, with a possible loss in performance and added complexity in your workflow.
The only way you could really do this is by returning a JavaScript expression as text (wrapped in double quotes) in a JSON object. You would then need to eval() the response, which isn't great for a variety of reasons: injection, performance, debugging.
I'd suggest against this approach anyway, as you are blurring the boundaries of what a client and a server should be doing and tightly coupling both layers.
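If the underlying goal is "just do what the response says", a safer middle ground is to keep the functions on the client and have the server return only the name of an action plus its data. A sketch (the action names and form selector are hypothetical):

// the server returns e.g. {"action": "showError", "text": "your id is wrong!"}
// and the client looks the action up in a whitelist of known functions
var actions = {
showError: function(response) {
var output = '<div class="clienConError">' + response.text + '</div>';
$(".results").hide().html(output).slideDown();
},
hideResults: function() {
$(".results").slideUp();
}
};
$.post("check.php", $("#myForm").serialize(), function(response) {
if (actions[response.action]) {
actions[response.action](response); // no eval needed
}
}, "json");

This gives the server full control over which behavior runs, without ever shipping executable code over the wire.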
It is possible using eval(), but it is not recommended for performance and security reasons. eval() executes the argument passed to it, so you can send the jQuery function as a string and pass it to eval() to execute it on the client side.
Sample Code:
var command = 'var output = \'<div class="clienConError">Here is the response</div>\'; $(".results").hide().html(output).slideDown();';
eval(command);
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class='results'></div>
Please see the following examples first: which one is better?
Could you compare aspects such as performance, programming, design, load, user experience, maintenance, security, or anything else I have not considered?
Can these aspects help me decide which one is better?
<script>
$(document).ready
(
function()
{
var message={..something..};
$.ajax
(
{
url:'get_content.php',
data:message,
type:'post',
cache:false,
success:function(data)
{
// note: JavaScript has no foreach statement; use $.each instead
$.each(data.array, function(key, value)
{
$('#content').append($('<tr><td>'+value+'</td></tr>'));
});
},
error:function(){alert('error');}
}
);
}
);
</script>
<table id="content">
</table>
<table id="content">
<?php
// assuming $connect is an open mysqli connection and $sql is the query
$result = mysqli_query($connect, $sql);
while ($row = mysqli_fetch_assoc($result))
{
// PHP concatenates strings with '.', not '+'
echo '<tr><td>' . $row['field'] . '</td></tr>';
}
?>
</table>
You're comparing apples to oranges.
PHP -- a server-side language
AJAX -- a client-side technique (JavaScript)
AJAX is about your only (reasonable) means to grab data from the server on behalf of the client without a full page refresh. However, you can use an assortment of different languages server-side to render the data necessary for AJAX to proceed. (ASP, PHP, ...).
It's up to you and what the rest of your site is developed in, really. If you want an absolute (no-failure) method of producing content, a PHP dump is about the best way. It's guaranteed to be visible and won't break depending on client support (maybe they have the NoScript plugin?).
However, if there is a lot of data, it's sometimes better to spread the load over multiple calls so that the client has at least a semi-complete portion of the page visible, and the data comes later. (This is generally how Facebook proceeds: they give you the general layout in the first fraction of a second, then the rest of the content comes once the framing is complete.)
One thing to note, though, is that it's not an either/or kind of decision. You can use AJAX with a PHP fallback; you just have to have code in place to test the client and relay that information back to the server for an informed decision.
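The test itself can be as small as a script that sets a cookie, which the server checks on subsequent requests to decide between the AJAX version and the full-page PHP version (a sketch; the cookie name is made up):

// sketch: if this line runs at all, the client has working JS;
// server-side code can then check for the cookie and pick the AJAX path
document.cookie = "js_enabled=1; path=/";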
Personally, I'd take PHP. It takes so little time that the user will never notice a delay, whereas with AJAX there's a whole separate request to make.
AJAX is good for fetching data after the page is loaded. I don't think it should be used for fetching data that should be there in the first place.
For example, you are building a dictionary app where the entries are objects, and all the values are stored in a server-side database. The entries are visible client-side, so they should eventually be JavaScript objects. However, since the data is server-side, there are two ways to construct the object:
Construct the entries objects via PHP, then pass the result to a .js script, which makes JavaScript objects from it.
Construct the entries via JavaScript, calling AJAX methods on the object to request the specific information about the entry (e.g. definition, synonyms, antonyms, etc.) from the server.
The first way ends up constructing each entry twice, once via PHP and once via JavaScript. The second way ends up calling several AJAX methods for every construction, and opening and closing the database connection each time.
Is one preferable to the other, or is there a better way to do this?
I use a rule of thumb: the less AJAX on the initial page load, the better.
If you can push all information on the page load to the user, do it. Then use AJAX on subsequent calls. Otherwise the user-experience will suffer from AJAX (rather than benefit) as the page will take longer to load.
Another option, if you're not tied to PHP, would be to have a JS-based back end like Node.js. This way you can transmit everything in one format. In some cases you can even store the JS object directly in the database. An example of this kind of back end would be Node.js + MongoDB, if a document database suits your needs.
If you're tied to PHP/JS, I'd go for minimizing AJAX calls. Any asynchronous transfers (duplicating objects) should be chosen with improved user experience as the goal. Too many HTTP requests usually end up making the site slow to react, which is one of the things we usually try to get rid of by using AJAX.
One approach that's sometimes useful is to render the JS object via PHP; this fits data that scripts will need but that should not be directly (or at all) shown to the user.
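For example (a sketch; the variable names are hypothetical), the PHP template can emit the object straight into a script block, so no extra request is needed and nothing is rendered visibly:

<script type="text/javascript">
// sketch: PHP writes the object directly into the page, e.g. via
// var entries = <?php echo json_encode($entries); ?>;
// which the browser then sees as a plain JS literal:
var entries = {"ephemeral": {"definition": "lasting a very short time"}};
</script>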
Totally depends on the project. There are just too many variables to say 'you should do it this way'.
Instead, test it. Do it one way, push it to a high number of requests, and profile it. Then switch and try the other way. Keep it easy to switch the output from the PHP by following the MVC pattern.
The only general rule is 'minimise the number of HTTP requests', as HTTP is by far the biggest bottleneck when a page is loading.
After doing a lot of reading on the subject, I realized that many developers mix JavaScript and PHP in the same file (by adding the .php extension or using other ways).
On the other hand, if I choose to separate the JavaScript from the PHP and store it in an external cacheable static file, I gain some performance advantage, but I also need to find creative ways to pass server-side data to the JavaScript.
For example, since I can't use a PHP foreach loop in the .js file, I need to convert PHP arrays to JSON objects using json_encode. In other cases, I need to declare global JavaScript variables in the original PHP file so I can use them in the external JS file.
Since server-side processing is considered faster than JavaScript, converting to JS arrays and using global vars may also be a bad idea...
The bottom line is I'm trying to understand the trade-off here. Which has more impact on performance: enabling caching of JS files, or keeping cleaner code by avoiding global JS variables and multidimensional JS arrays?
Are you talking about performance of the server or the browser?
My personal opinion is that given the choice between making a server slower or making a browser slower, you should always choose to let the browser be slower.
Usually, "slow" means something like "takes 100ms" or so, which is not noticeable in an individual browser; but if you have a few hundred requests to a server and they're all delayed by that, the effect is cumulative and the response becomes sluggish. Very noticeable.
Let the browser take the hit.
I think it depends on what you're trying to do. My personal opinion is that it's a little bit of a pain to prevent your dynamic JavaScript from being cached.
Your static JS files need to contain your functions and no dynamic data. Your HTML page can contain your dynamic data, either within a SCRIPT block (where you will be able to use a PHP foreach) or by putting your data into the DOM where the JavaScript can read it; it can be visible (in a table) or invisible (e.g. in a comment), depending on whether your data is presentable or not.
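One concrete way to put data into the DOM invisibly is a data-* attribute that the static, cacheable script reads back. A sketch (the id and fields are made up):

<!-- sketch: PHP renders the dynamic data into a data-* attribute -->
<div id="report-data" data-reports='[{"id":"battingaverage","label":"Batting AVG report"}]'></div>
<script type="text/javascript">
// jQuery's .data() automatically parses valid JSON in data-* attributes
var reports = $("#report-data").data("reports");
alert(reports[0].label); // "Batting AVG report"
</script>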
You could also use AJAX to fetch your dynamic data, but this will be an additional request, just like an external JS file containing the data would.
As Kae says, adding additional load onto the client would benefit your server in terms of scalability (how many users you can serve at any one time).
Data:
If the amount of dynamic data isn't too big and it's constantly changing (so it must not be cached by the browser), I would suggest adding it to the head of the HTML. To prevent it from polluting the global namespace, you can use either a closure or a namespace (object) to contain all related variables. Performance-wise, I don't think that in this case there would be much difference between looping the data into a JS-friendly format and handling it to the finest detail on the server (JS has become amazingly fast).
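For instance (a sketch; the namespace name is made up), the head of the page would then carry a single global object:

<script type="text/javascript">
// sketch: one namespace object holds all server-provided values,
// so only a single global name (MYAPP) is introduced
var MYAPP = MYAPP || {};
MYAPP.data = {
// hypothetical values that a PHP template would fill in
userId: 42,
reportTypes: ["battingaverage", "homeruntotals"]
};
</script>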
Things are a bit more complicated when the amount of data is huge (100+ KB to megabytes). In case the data is pretty much constant and cacheable, you should generate an external data file (not an actual new file, but a unique URL) which you can then include. Using a timestamp in the name, or correctly-set cache headers, then enables you to save time on both the server side (generating the JS-friendly output) and the client side (downloading data) and still offer up-to-date data.
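The include then looks something like this (a sketch; the script name and version variable are made up):

<!-- sketch: the version parameter changes only when the data changes,
so each distinct URL stays cached until it is genuinely stale -->
<script src="data.js.php?v=<?php echo $dataVersion; ?>"></script>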
If you have a lot of data but it's constantly changing, I'd still use external JS files generated by PHP, but you have to be extra careful to disable browser caching, which would otherwise make your constantly changing data pretty much constant. You could also do dynamic loading, where you pull different parts of the data in parallel and on demand via JS requests.
Code:
The functional part of your code should follow the same split as the data handling described above.
Now to the question of whether JS should be inlined in the HTML or separated. This depends highly on the code, mostly on its length and reusability. If it's just 20 lines of JS, 10 of which are arrays etc. generated by PHP, it makes more sense to leave the code inside the HTML, because HTTP requests (the way all resources are delivered to the client) are expensive, and requesting a small file isn't necessarily a great idea.
However, if you have a somewhat bigger file with lots of functionality etc. (tens of KB), it would be sensible to include it as a separate .js file in order to make it cacheable and save it from being downloaded every time.
And there's no difference in PHP or JS performance, whether you include the JS inside templates/PHP or separately. It's just a matter of making a project manageable. Whatever you do, you should seriously look into using templates.
After doing a lot of reading
That's what you are probably doing wrong.
There are many people fond of writing articles (and answers on Stack Overflow as well) who have very little experience and whose knowledge is based... on the other articles they read!
Don't follow their bad example.
Instead of "a lot of reading" you have to do a lot of profiling!.
First of all you have to spot the bottlenecks and see if any of them are caching related.
The next thing you have to decide is what kind of caching your system requires.
And only then you can start looking for the solution.
Hope it helps.
A quick and dirty fix for your problem is to send the data in a hidden HTML table.
HTML tables are easy to generate in PHP and easy to read in JavaScript.
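A sketch of that round trip (the table id and columns are made up):

<!-- sketch: PHP echoes the rows into a hidden table -->
<table id="report-data" style="display:none">
<tr><td>battingaverage</td><td>Batting AVG report</td></tr>
<tr><td>homeruntotals</td><td>Home Run Totals report</td></tr>
</table>
<script type="text/javascript">
// JS walks the hidden rows to recover the values
$("#report-data tr").each(function() {
var cells = $(this).children("td");
console.log(cells.eq(0).text(), cells.eq(1).text());
});
</script>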
I have a solution for when you need to pass info from PHP to JS while keeping most of the JS outside the main PHP file.
Use JS objects or JS functions.
You write some code that needs data from PHP. When the page loads, some small JS code is generated from PHP, like:
<script type="text/javascript">
a(param1, param2, param3)
</script>
and it's done. The server fills in param1, param2 and param3 directly in the code.
The function itself lives inside a .js file that is cached. With this you reduce what the server has to send and the time it takes for the page's JS to start. The client's code runs a bit slower, but you win on download time and the server becomes faster.
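The cached file then only has to define the function; a sketch (the file contents are hypothetical):

// static, cacheable app.js: this never changes, so only the tiny
// PHP-generated call with fresh parameters varies per page load
function a(param1, param2, param3) {
$("#content").text(param1 + " / " + param2 + " / " + param3);
}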