** ANGULAR 1.X **
Hello everyone! I need help making this $http.get call update automatically. As you can see from the code, my current temporary solution is to call the displayData function on a setInterval. That is obviously not an efficient solution: it wastes CPU, burns through the user's data, and can cause flickering in the UI. I want the array to be updated whenever the database is updated.
Please do not recommend I switch to other frameworks.
Thank you!
$scope.displayData = function() {
    $http.get("read.php").success(function(data) {
        $scope.links = data;
    });
};

setInterval(function() { $scope.displayData(); }, 500);
This is my PHP ("read.php")
<?php
include("../php/connect.php");
session_start();
$output = array();
$team_id = $_SESSION['team_id'];
// prepared statement, so team_id is never interpolated into the SQL string
$stmt = mysqli_prepare($connect, "SELECT record_id, user_id, link, note, timestamp
    FROM link_bank WHERE team_id = ? AND status = 'valid'");
mysqli_stmt_bind_param($stmt, "s", $team_id);
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt); // requires mysqlnd
// fetch_assoc (not fetch_array) so json_encode doesn't duplicate every
// column under both its name and its numeric index
while ($row = mysqli_fetch_assoc($result)) {
    $output[] = $row;
}
// always emit JSON, even an empty array, so the client never receives
// an empty body it cannot parse
echo json_encode($output);
?>
$http.get is already asynchronous! An asynchronous function is just any function that finishes running at some unknown time in the future.
What you are really doing is called polling: periodically sending a request to the server to get the latest data. (Long polling is a variant where the server holds each request open until it has new data.) There are several reasons plain polling is not a good idea, including the flickering and high CPU usage you mentioned.
I know you said you don't want anyone to suggest other frameworks, but trying to write your own framework that notifies the client whenever the database is updated is a monumental task. There is no short snippet of code we can give you that provides that functionality from just PHP and JavaScript.
However, you could try to roll your own code using WebSockets. That is the most straightforward, non-framework way to have server-to-client communication in the way you are suggesting.
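If you do go that route, here is roughly what the server side could look like with the Ratchet library (cboden/ratchet). This is only a sketch: the class name LinkPusher and port 8080 are made up, and the part where your PHP code notifies the socket server whenever link_bank changes is left out entirely.

<?php
// rough sketch, not production code; assumes: composer require cboden/ratchet
require __DIR__ . '/vendor/autoload.php';

use Ratchet\MessageComponentInterface;
use Ratchet\ConnectionInterface;
use Ratchet\Server\IoServer;
use Ratchet\Http\HttpServer;
use Ratchet\WebSocket\WsServer;

class LinkPusher implements MessageComponentInterface {
    protected $clients;

    public function __construct() {
        $this->clients = new \SplObjectStorage;
    }

    public function onOpen(ConnectionInterface $conn) {
        $this->clients->attach($conn);
    }

    // Whatever code inserts into link_bank would also send a message here;
    // we simply rebroadcast it to every other connected browser.
    public function onMessage(ConnectionInterface $from, $msg) {
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn) {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e) {
        $conn->close();
    }
}

IoServer::factory(new HttpServer(new WsServer(new LinkPusher())), 8080)->run();

Your page would then open a WebSocket to this server and update $scope.links when a message arrives, instead of polling read.php.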
Some details from checking the debug tools: there are tons of network requests that take a long time and return no new data.
Use the timeline recording to get some details on the client-side processing.
The client side isn't suffering that much in the view I'm seeing, but without data it's hard to really assess.
By taking a timeline recording and zooming in on a section of it, you can see which functions were actually called and how long they took. There are also nice extensions for checking $watchers in Angular:
https://chrome.google.com/webstore/detail/angular-watchers/nlmjblobloedpmkmmckeehnbfalnjnjk?hl=en
You can use the one-time binding syntax, {{::expression}}, to reduce your watcher count when a binding is only ever set once (often useful inside ng-repeat). This helps reduce digest time, since fewer watchers need to be checked for changes. If you have a very long list of elements (thousands), then some sort of virtual scroller or paging UI component helps you avoid creating thousands of rows' worth of DOM elements.
On the server side you can use the xdebug plugin/module to collect profiling data, and kcachegrind to evaluate that data and find where the server is spending the most time. You could also use some server-side caching and smarter logic to avoid hitting the database constantly when nothing has changed (look into Redis or memcached for speeding up the server side, or check whether it's just network latency).
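For example, here is a rough sketch of caching the read.php result with the Memcached extension. The key name and the 2-second TTL are arbitrary, and $connect/$team_id are the variables from the question's script:

<?php
// sketch only: cache the team's links briefly so 500ms polling doesn't
// hit MySQL on every single request
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key = 'links_' . $team_id;   // one cache entry per team (name is made up)
$output = $cache->get($key);

if ($output === false) {      // cache miss: query the database once
    $output = array();
    $result = mysqli_query($connect,
        "SELECT record_id, user_id, link, note, timestamp
         FROM link_bank
         WHERE team_id = '" . mysqli_real_escape_string($connect, $team_id) . "'
         AND status = 'valid'");
    while ($row = mysqli_fetch_assoc($result)) {
        $output[] = $row;
    }
    $cache->set($key, $output, 2);  // keep it for 2 seconds
}
echo json_encode($output);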
Changing languages or frameworks without actually profiling to get data on what exactly is slow isn't a great move, IMO; you'll just be jumping between whatever is the new hotness without understanding why, or whether it even matters.
Below is an example of a relatively fast response from a PHP script that does basically nothing but spit out a hard-coded JSON response. With Redis or memcached in front, there wouldn't be much extra overhead to return a response, especially an empty one.
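Something along these lines (the file name and payload are just for illustration):

<?php
// fast.php: does basically nothing but emit a hard-coded JSON response
header('Content-Type: application/json');
echo json_encode(array('links' => array()));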
Related
I'd very much like to get second opinions on this approach I'm implementing to handle very long processes in a web application.
The problem
I have a web application, written entirely in JavaScript, which communicates with the server via an API. The application has some "bulk actions" that take a long time to execute. I want to execute them in a safe way, making sure the server won't time out, and with rich feedback so the user knows what is going on.
The usual approach
From what I can see in my research, the recommended method is to fire a background process on the server and have it write its progress somewhere, so you can make requests to check on it and give feedback to the user. Since I'm using PHP on the back end, the approach would be more or less what is described here: http://humblecontributions.blogspot.com.br/2012/12/how-to-run-php-process-in-background.html
Adding a few requirements
Since I'm developing an open source project (a WordPress plugin), I want it to work in a variety of situations and environments. I did not want to add server-side requirements, and as far as I know the background-process approach may not work on several shared hosting setups.
I want it to work out of the box on (almost) any server with typical WordPress support, even if it ends up being a bit slower.
My approach
The idea is to break this process up so it runs incrementally across many small requests.
So the first time the browser sends a request, the server runs only a small step of the process and returns useful information to give the user some feedback. Then the browser sends another request, and repeats this until the server reports that the process is done.
In order to do this, I would store the object in the session: the first request returns an id, and the following requests send this id back to the server so it manipulates the same object.
Here is a conceptual example:
class LongProcess {
    public $id;
    public $step;
    public $total;

    function __construct() {
        $this->id = uniqid();
        $_SESSION[$this->id] = $this;
        $this->step = 1;
        $this->total = 100;
    }

    function run() {
        // do stuff based on the step you are in
        $this->step = $this->step + 10;
        if ($this->step >= $this->total) {
            return -1;
        }
        return $this->step;
    }
}

function ajax_callback() {
    session_start();
    if (!isset($_POST['id']) || empty($_POST['id'])) {
        $object = new LongProcess();
    } else {
        $object = $_SESSION[$_POST['id']];
    }
    $step = $object->run();
    echo json_encode([
        'id' => $object->id,
        'step' => $step, // was $return, an undefined variable
        'total' => $object->total
    ]);
}
With this I can have the client send requests recursively and update the feedback as each response is received.
function recursively_ajax(session_id)
{
    $.ajax({
        type: "POST",
        // no need for async:false here; the next request is only sent
        // from the success callback, after the previous response arrives
        url: "xxx-ajax.php",
        dataType: "json",
        data: {
            action: 'bulk_edit',
            id: session_id
        },
        success: function(data)
        {
            updateFeedback(data);
            if (data.step != -1) {
                recursively_ajax(data.id);
            } else {
                updateFeedback('finish');
            }
        }
    });
}

$('#button').click(function() {
    recursively_ajax();
});
Of course this is just a proof of concept; I'm not even using jQuery in the actual code. It's just to express the idea.
Note that the object stored in the session should be very lightweight. Any actual data being processed should live in the database or filesystem, with the object holding only a reference so it knows where to look for things.
One typical case would be processing a large CSV file. The file would be stored in the filesystem, and the object would store a pointer to the last processed line, so it knows where to resume on the next request.
The object may also return a more verbose log describing everything that was done and reporting errors, so the user has complete knowledge of what happened.
The interface I have in mind is a progress bar with a "see details" button that opens a textarea showing this detailed log.
Does it make sense?
So now I ask: how does this look? Is it a viable approach?
Is there a better way to do this and assure it will work in very limited servers?
Your approach has several disadvantages:
Your heavy requests may block other requests. Usually you have a limit on concurrent PHP processes for handling web requests. If the limit is 10 and all slots are taken by your heavy requests, your website will not work until some of them complete and release slots for other, lightweight requests.
You (probably) will not be able to estimate how long one step will take. Depending on server load it could take 5 seconds or 50, and 50 seconds will probably exceed the execution time limit on most shared hostings.
The task is controlled by the client: any interruption on the client side (network problems, closing the browser tab) interrupts the task.
Depending on the session backend, using the session to store current state may result in race-condition bugs: a concurrent request from the same client may overwrite changes the task wrote to the session. By default PHP locks the session, so this should not be the case, but if someone uses an alternative session backend (DB, Redis) without locking, it will cause serious and hard-to-debug bugs.
There is an obvious trade-off here. For small websites where simple installation and configuration are a priority, your approach is fine. In any other case I would stick to a simple cron-based queue for running tasks in the background, and use AJAX requests only to retrieve the task's current status. So far I have not seen hosting without cron support, and adding a task to cron should not be hard for the end user (with proper documentation).
In both cases I would not use the session as storage. Save the task and its status in the database and use some locking system to ensure that only one process can modify a given task's data; see the sketch below. This will be much more robust and flexible than using the session.
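For illustration, a hedged sketch of one step of such a task backed by a database row instead of the session. The table and column names (bulk_tasks, step, total, status) are invented, and connection details will differ:

<?php
// sketch only: advance one step of a task stored in the database,
// using a row lock so concurrent requests cannot race each other
$taskId = $_POST['id'];
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->beginTransaction();

// SELECT ... FOR UPDATE (InnoDB) locks the row, so a concurrent request
// for the same task blocks here until we commit, instead of racing us
$stmt = $pdo->prepare('SELECT step, total FROM bulk_tasks WHERE id = ? FOR UPDATE');
$stmt->execute(array($taskId));
$task = $stmt->fetch(PDO::FETCH_ASSOC);

// ...do one small chunk of the real work here...
$newStep = min($task['step'] + 10, $task['total']);
$status  = ($newStep >= $task['total']) ? 'done' : 'running';

$upd = $pdo->prepare('UPDATE bulk_tasks SET step = ?, status = ? WHERE id = ?');
$upd->execute(array($newStep, $status, $taskId));
$pdo->commit();

echo json_encode(array('id' => $taskId, 'step' => $newStep, 'total' => $task['total']));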
Thanks for all the input. I just want to document here some very good answers I got.
Some WordPress plugins, notably WooCommerce, have incorporated code from the "WP Background Processing" library, which is no longer maintained but implements the cron approach with some important improvements. See this blog post:
https://deliciousbrains.com/background-processing-wordpress/
The actual library lives here: https://github.com/A5hleyRich/wp-background-processing
Although this is a WordPress specific library, I think the approach is valid for any situation.
There is also, for WordPress, a library called Action Scheduler, which not only runs processes in the background but also lets you schedule them. It's worth a look:
https://github.com/Prospress/action-scheduler
I have a query that fetches a list of users from a table, sorted by the time they were created. I got the following timing diagram from the Chrome developer tools.
You can see that TTFB (time to first byte) is too high.
I am not sure whether it is because of the SQL sort. If that is the reason, how can I reduce this time?
I saw blogs saying that TTFB should be low (under 1 second), but mine shows more than 1 second. Is it because of my query or something else?
I am not sure how I can reduce this time.
I am using Angular. Should I sort the table in Angular instead of SQL? (Many posts say that shouldn't be the issue.)
What I want to know is how to reduce TTFB. I am actually new to this; it is a task given to me by my team members. I saw many posts but was not able to understand them properly. What is TTFB exactly? Is it the time taken by the server?
The TTFB is not the time to first byte of the body of the response (i.e., the useful data, such as JSON or XML), but rather the time to first byte of the response received from the server. That byte is the start of the response headers.
For example, if the server sends the headers before doing the hard work (like a heavy SQL query), you will get a very low TTFB, but it won't reflect the real processing time.
In your case, TTFB represents the time you spend processing data on the server.
To reduce the TTFB, you need to do the server-side work faster.
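If you suspect the SQL sort, something like this can tell you whether an index would help. Purely a sketch: it assumes a mysqli connection in $db, and the table and column names (users, created_at) are assumptions to adjust to your schema:

<?php
// run EXPLAIN on the sorted query to see how MySQL executes it
$result = mysqli_query($db, "EXPLAIN SELECT * FROM users ORDER BY created_at DESC");
while ($row = mysqli_fetch_assoc($result)) {
    print_r($row); // "Using filesort" in the Extra column means no index is helping the sort
}
// if the sort is the bottleneck, an index on the sort column usually fixes it:
// CREATE INDEX idx_users_created ON users (created_at);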
I ran into the same problem. My project runs on a local server, and I checked my PHP code:
$db = mysqli_connect('localhost', 'root', 'root', 'smart');
I used localhost to connect to my local database; resolving that hostname can be slow (the lookup may try IPv6 first and stall), and that may be the cause of the problem you're describing. You can modify your HOSTS file and add the line
127.0.0.1 localhost
TTFB reflects what happens behind the scenes, and your browser can't tell you what the server is doing back there.
You need to look into which queries are being run and how the website connects to the server.
This article might help you understand TTFB, but otherwise you need to dig deeper into your application.
If you are using PHP, try calling <?php flush(); ?> after </head> and before </body>, or after whatever section you want to output quickly (like the header or content). It outputs the markup generated so far without waiting for PHP to finish. Don't use this everywhere, or the speed increase won't be noticeable.
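A minimal sketch of the idea:

<?php /* page.php: flush the <head> early so the browser can start
   downloading CSS while PHP builds the rest of the page */ ?>
<html>
<head>
    <link rel="stylesheet" href="style.css">
</head>
<?php
if (ob_get_level() > 0) {
    ob_flush();   // flush PHP's output buffer if one is active
}
flush();          // push everything buffered so far to the browser
?>
<body>
    <?php /* slow page generation continues here */ ?>
</body>
</html>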
I would suggest you read this article and focus on optimizing the overall response to the user's request (whether a page, a search result, etc.).
A good argument for this is the example they give about using gzip to compress the page. Even though TTFB is lower when you don't compress, the overall experience is worse because it takes longer to download the unzipped content.
What is the best way to break up a recursive function that is using a ton of resources?
For example:
function do_a_lot() {
    // a lot of code and processing is done here
    // it takes a lot of execution time
    if ($more_work_needed) { // placeholder condition (the original used $true)
        // if true we have to do all of that processing again
        do_a_lot();
    }
}
Is there any way to make the server take the brunt of only the first execution and then break the recursion up into separate processes? Or am I dreaming?
Honestly, if your function is using up that much of your system's resources, I'd most likely refactor the code. That said, while PHP doesn't do true multithreading, you could perhaps look at using popen to fork off a separate process.
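For example, a hedged fire-and-forget sketch (worker.php is a made-up script name):

<?php
// launch a separate PHP process for the heavy work and return immediately;
// redirecting output and backgrounding with & means we don't wait for it
$handle = popen('php worker.php > /dev/null 2>&1 &', 'r');
pclose($handle);
// the web request can now respond while worker.php keeps running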
One of the rules of PHP is "share nothing": every PHP process is independent and shares nothing with the others. So if you want to spread your execution across several PHP processes, you'll have to store the data somewhere: memcached storage, a database, or the session, as you prefer.
Then you'll need to 'fork' your PHP process. There are solutions available to do this on the server side, but IMHO they are all hacks: dangerous, and not in the spirit of PHP and the web, with the exception of proper work-queue tools.
I think the nicest way is to break up your task with AJAX. This gives you a clean user interface and avoids long response timeouts in the web process: show a 'working' indicator to your user, then request the first step of the job via AJAX, get the response (storing intermediate state on the server side), request the next step, store the new response, and so on. You can even add a 'stop that stuff' button on the client side.
You can also search for 'php work queue' to find tools for this.
If it's a long-running task, divide and conquer with Gearman.
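A rough sketch with PHP's gearman extension. This assumes a gearmand server running on localhost, and the function name do_chunk is made up:

<?php
// client side: queue one background job per chunk of the work
$client = new GearmanClient();
$client->addServer(); // defaults to 127.0.0.1:4730
foreach (range(0, 9) as $chunk) {
    $client->doBackground('do_chunk', json_encode(array('chunk' => $chunk)));
}

// worker side (a separate, long-running process):
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('do_chunk', function (GearmanJob $job) {
    $payload = json_decode($job->workload(), true);
    // ...process one chunk of the recursion here...
});
while ($worker->work());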
I've always wondered how to choose between server-side and client-side code for building HTML pages. I'll use a very simple PHP vs. JavaScript/jQuery example to explain my question. Your advice and comments are very much appreciated.
Say I'm about to present a web page where the user selects a type of report. Which makes more sense?
For server-side creation, I'd do this:
<div id="reportChoices">
<?php
// filename: reportScreen.php
// just for the sake of simplicity, say a database returns the following rows
// that indicates the type of reports that are available:
$results = array(
    array("htmlID" => "battingaverage", "htmlLabel" => "Batting AVG report"),
    array("htmlID" => "homeruntotals", "htmlLabel" => "Home Run Totals report"),
);
foreach ($results as $data) {
    echo "<input type='radio' name='reportType' value='{$data['htmlID']}'/>{$data['htmlLabel']}";
}
?>
</div>
Using client-side code, I'd get the javascript to build the page like the following:
<!-- filename: reportScreen.html -->
<div id="reportChoices">
</div>
<!-- I could put this in the document.ready handler, of course -->
<script type="text/javascript">
$.getJSON("rt.php", {}, function(data) {
var mainDiv = $("#reportChoices");
$.each(data, function(idx, jsonData) {
var newInput = $(document.createElement('input'));
newInput
.attr("type", "radio")
.attr("name", "reportType")
.attr("value", jsonData["htmlID"])
mainDiv.append(newInput).append(jsonData["htmlLabel"]);
});
};
</script>
All I would need on the server is a data dump php script such as:
<?php
// filename: rt.php
// again, let's assume something like this was returned from the db regarding available report types
$results = array(
    array("htmlID" => "battingaverage", "htmlLabel" => "Batting AVG report"),
    array("htmlID" => "homeruntotals", "htmlLabel" => "Home Run Totals report"),
);
echo json_encode($results);
?>
This is a very simple example, but from this, I see pros and cons in different area.
1 - The server-side solution has the advantage of being able to hide most of the actual programming logic behind how everything is built. When the user looks at the page source, all they see is the already-built web page. In other words, the client-side solution gives away all your source code and programming logic on how certain things are built. But you could use a minifier to make your source look more cryptic.
2 - The client-side solution transfers the "resource load" onto the client system (i.e. the browser needs to use the client's computer resources to build most of the page) whereas the server side solution bogs down, well, the server.
3 - The client-side solution is probably more elegant when it comes to maintainability and readability. But then again, I could have used php libraries that modularize HTML controls and make it a lot more readable.
Any comments? Thanks in advance.
Con (client solution): The client-side solution relies on the client to execute your code properly. As you have no control over what client system will execute your code, it's much harder to ensure it will consistently give the same results as the server-side solution.
This particular problem doesn't really seem to need a client-side solution, does it? I'd stick with the server-side solution. The only extra work there is a foreach loop with one echo and that's not really so resource heavy is it (unless you've profiled it and know that it IS)? And the resulting code is all in one place and simpler.
I'm sceptical that moving the report generation on to the client side really saves any resources - remember that it's still doing an HTTP request back to your (?) server, so the database processing still gets done.
Also, giving away your database schema on the client side could be a recipe for database attacks.
Perhaps you should use a model-view-controller pattern to separate the business logic from the presentation on the server? At least this keeps all the code in one place but still lets you logically separate the components. Look at something like Zend Framework if this sounds useful to you.
Typically, it's best not to depend on Javascript being enabled on the client. In addition, your page will not be crawled by most search engines. You also expose information about your server/server-side code (unless you explicitly abstract it).
If you want to transform data into the view, you might want to take a look at XSLT. Another thing to read up on if you have not already, is progressive enhancement.
http://alistapart.com/articles/understandingprogressiveenhancement/
The client-side solution you presented is actually less efficient in one way, because there's an extra HTTP request. And it may not be very efficient in another way either, in that all the data must be processed with json_encode.
However, if what you're working on is a rich web application that depends on Javascript, I see no problem with doing everything with Javascript if you want to.
You can maintain a better separation of concerns by building it on the client side, but that can come at a cost of user experience if there is a lot to load (plus you have to consider what FrustratedWithForms mentioned). To me it's easier to build it on the server side, which means that becomes a more desirable option if you are on a strict timeline, but decide based on your skill set.
There is a family of methods (birddog, shadow, and follow) in the Twitter API that opens a (mostly) permanent connection and allows you to follow many users. I've run the sample connection code with cURL in bash, and it works nicely: when a user I specify writes a tweet, I get a stream of XML in my console.
My question is: how can I access data with PHP that isn't returned as a direct function call, but is streamed? This data arrives sporadically and unpredictably, and it's not something I've ever dealt with nor do I know where to begin looking for answers. Any advice and descriptions of libraries or pitfalls would be appreciated.
fopen and fgets
<?php
$sock = fopen('http://domain.tld/path/to/file', 'r');
if ($sock === false) {
    die('could not open the stream');
}
// fgets() blocks until a line arrives, so this keeps looping as data streams in;
// comparing against false (not TRUE) means a line of "0" won't end the loop
while (($data = fgets($sock)) !== false) {
    echo $data;
}
fclose($sock);
This is by no means great (or even good) code, but it should provide the functionality you need. You will need to add error handling and data parsing, among other things.
I'm pretty sure your script will time out after ~30 seconds of listening for data on the stream. Even if it doesn't, once you get significant server load, the sheer number of open, listening connections will bring the server to its knees.
I would suggest you take a look at an AJAX solution that calls a script which just reads from a queue of messages. I'm not sure exactly how the Twitter API works, though, so I'm not sure whether you can run a script on demand to fetch all the tweets, or whether you need some sort of daemon appending tweets to a queue that PHP can read and pass back via your AJAX call.
There are libraries for this these days that make things much easier (and handle the tricky bits like reconnections, socket handling, TCP backoff, etc.), e.g.:
http://code.google.com/p/phirehose/
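A rough sketch of what a Phirehose consumer looks like, going by the library's basic-auth-era examples; the class name, credentials, and tracked keyword are placeholders:

<?php
// sketch only: assumes phirehose.php from the Phirehose library is available
require_once 'phirehose.php';

class TweetConsumer extends Phirehose {
    // Phirehose calls this once per received status (tweet)
    public function enqueueStatus($status) {
        $data = json_decode($status, true);
        if (is_array($data) && isset($data['text'])) {
            // in real code, push this onto a queue or into a database
            // instead of echoing it
            echo $data['user']['screen_name'] . ': ' . $data['text'] . "\n";
        }
    }
}

// follows the keyword 'php'; consume() blocks and handles reconnection for you
$consumer = new TweetConsumer('username', 'password', Phirehose::METHOD_FILTER);
$consumer->setTrack(array('php'));
$consumer->consume();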
I would suggest looking into using AJAX. I'm not a PHP developer, but I would think you could wire up an AJAX call to the API and update your web page.
Phirehose is definitely the way to go:
http://code.google.com/p/phirehose/