Our site makes heavy use of AJAX and ends up calling several different PHP files in the background to fill individual tabs; specifically, jQuery and DataTables are used. Because each PHP request is stateless, I'm struggling to create an activity log that works across all requests for a single session (e.g., all SQL queries performed for this page view): each PHP file executes its own queries in its own state, so they are unaware of each other.
Any tips on how to handle this? I fear I'm overcomplicating matters or missing an obvious solution.
In the end, ideally the footer of our application can say something like: Your application performed 6 SQL queries, here they are: ....
I don't require specific code, but hopefully the above makes sense and a Eureka moment can be discovered.
One solution using jQuery would be to pass a callback function to each $.post AJAX call. When the result comes back from the server, the callback appends it to a DIV or pushes it into a JS array.
Ajax post could be:
$.post(server_url, params, my_callback, "json")
and callback:
function my_callback(response) {
$("#thediv").append(response)
}
This would asynchronously log each response in the div with id "thediv".
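On the PHP side of that approach, each background script has to return something worth logging. Here's a minimal sketch, assuming $db is an existing mysqli connection; the run_logged_query() helper and the example query are hypothetical, purely for illustration:

<?php
// hypothetical helper: run a query and remember its SQL text
function run_logged_query(mysqli $db, $sql, array &$queryLog)
{
    $queryLog[] = $sql;                 // remember what was executed
    return $db->query($sql);
}

// in each background script ($db is your existing mysqli connection)
$queryLog = array();
$rows = array();
$result = run_logged_query($db, "SELECT id, name FROM widgets", $queryLog);
while ($row = $result->fetch_assoc()) {
    $rows[] = $row;
}

// return the data AND the queries that produced it, so the
// callback above has something to append to the footer div
header('Content-Type: application/json');
echo json_encode(array('data' => $rows, 'queries' => $queryLog));

If every tab's PHP file responds in that shape, the footer script only has to concatenate the queries arrays it collects from each callback.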
My problem is, for now, not about specific code but more about basic understanding, I think.
I want to create a form and use the submitted data without refreshing the page, and that brings me to AJAX.
Now, do I always have to create a separate file that works with the data that AJAX sends? Can't I just "grab" the data and work with it on the same page?
I think I misunderstood some basic concepts.
I thought about something like this:
<form id="load_filters_form">
..
</form>
<?php
var_dump($_GET); // values from <form>
?>
<!-- AJAX, jQuery -->
<script>
$("#load_filters_form").submit(function(event){
event.preventDefault();
$.ajax({
type: 'get',
data: $(this).serialize(),
success: function() {
$("#load_filters_form")[0].reset();
}
});
});
</script>
What you're proposing is certainly possible, it's exactly how AJAX works. You make an AJAX request from JavaScript code, sending any data the server-side code will need, and handle the response from the server in your JavaScript code.
The problem with what you're proposing is that you're making it unnecessarily complex for yourself. Consider what your code in the question would return to the JavaScript code in the AJAX response. It returns an entire HTML page, most of which is already on the client.
Why re-transmit all of that data that the client already has? Why have code on the client to parse out the data it's looking for from all of the unnecessary markup around that data?
Keep your operations simple. If you need a server-side operation which receives data, performs logic, and returns a result then create an operation which does exactly that. Call that operation in AJAX and use the resulting data.
Now maybe that response is structured JSON data, which your client-side code can read and update the UI accordingly. Or maybe that response is raw HTML (not an entire page but perhaps a single <div> or any kind of container which presents an updated version of a section of the page), which your client-side code can swap into the UI directly.
The AJAX interactions with the server should generally be light. If you're intentionally re-loading the entire page in an AJAX operation then, well, why use AJAX in the first place? The point is to send to the server only the data it needs, and receive back from the server only the data you need. For example, if all you need is to update a list of records displayed on the page then you don't need the whole page or even the HTML table of records, you just need the records. JSON is useful for exactly that, returning structured data and only structured data to the client. Then the client-side code can render that data into the structure of the page.
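As a rough sketch of such an operation (the file name, parameter handling, and find_filtered_rows() stub are hypothetical, not taken from your code), the endpoint receives only the form data and returns only JSON:

<?php
// load_filters.php -- hypothetical endpoint called by the $.ajax above
header('Content-Type: application/json');

// read only what this operation needs from the submitted form
$filters = isset($_GET['filters']) ? (array) $_GET['filters'] : array();

// stand-in for the real server-side logic (query, filtering, etc.)
function find_filtered_rows(array $filters)
{
    return array(array('id' => 1, 'name' => 'example row'));
}

// return only structured data, never the whole page
echo json_encode(array('status' => 'ok', 'rows' => find_filtered_rows($filters)));

Your success callback then decides how that data updates the page, whether that means building table rows or swapping out a container.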
Now do I always have to create a separate file that works with the data that AJAX sends?
Yes and no. You may choose not to have a specific file which your AJAX is pulling, but you do need some sort of routing/controller relationship, the way most frameworks build it.
You could in theory make a request to self (the same page), but that's bad design. You would be mixing backend logic with frontend code, and it will get messy very quickly. You really need to separate all three concerns:
PHP takes the data and processes it.
JavaScript takes the data and displays it.
Your HTML should be logic-free; just a clean, finalized product.
The best design pattern is to separate those files in their proper environment.
Can't I just "grab" the data and work with it on the same page?
Not really, at least not reliably. You also have to keep in mind the potential issues in attempting to serve two different kinds of content from the same route/file:
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH'])) {
    // looks like an AJAX request: do something
} else {
    // a normal page load: do the other thing
}
AJAX does not want fully rendered HTML pages; that takes too long. It's best to serve JSON objects/arrays that get rendered in your frontend via JavaScript (the same JavaScript that made the request), in the user's browser, without the latency caused by their network or your server.
There's no sure way of knowing which request is which, since no data from the client is trustworthy, including HTTP headers; they are easy to fake and could lead to security problems or unwanted results.
Thus, the best solution is to have a separate file that you make the requests to, instead of making them to the page itself.
For example, you are building a dictionary app where the entries are objects, and all the values are stored in a server-side database. The entries are visible client-side, so they should eventually be JavaScript objects. However, since the data is server-side, there are two ways to construct the object:
Construct the entry objects via PHP, then pass the result to a .js script, which makes JavaScript objects from it.
Construct the entries via JavaScript, calling AJAX methods on the object to request the specific information about the entry (e.g. definition, synonyms, antonyms, etc.) from the server.
The first way ends up constructing each entry twice, once via PHP and once via JavaScript. The second way ends up calling several AJAX methods for every construction, and opening and closing the database connection each time.
Is one preferable to the other, or is there a better way to do this?
I use a rule of thumb: the less AJAX on the initial page load, the better.
If you can push all the information to the user on page load, do it, and use AJAX only for subsequent calls. Otherwise the user experience will suffer from AJAX (rather than benefit) because the page will take longer to load.
Another option, if you're not tied to PHP, would be to have a JS-based back end like Node.js. That way you can transmit everything in one format. In some cases you can even store the JS objects directly in the database. An example of this kind of back end would be Node.js + MongoDB, if a document database suits your needs.
If you're tied to PHP/JS, I'd go for minimizing AJAX calls. Asynchronous transfers (and duplicating objects) should only be used where they actually improve the user experience, and your choices should aim for that. Too many HTTP requests usually end up making the site slow to react, which is one of the things we usually try to get rid of by using AJAX.
One technique that's sometimes useful is to render a JS object from PHP; that works well when the data will be needed by your scripts but should not be directly (or at all) shown to the user.
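A minimal sketch of that idea, assuming the entries already sit in a PHP array (the $entries variable and its contents here are invented for illustration): PHP builds the data once and hands it to JavaScript as a ready-made object, so nothing is constructed twice and no extra AJAX round trips are needed.

<?php
// hypothetical data that would normally come from the database
$entries = array(
    'ubiquitous' => array(
        'definition' => 'present everywhere',
        'synonyms'   => array('omnipresent', 'pervasive'),
    ),
);
?>
<script>
// json_encode output is valid JavaScript, so the PHP array becomes a JS object directly
var entries = <?php echo json_encode($entries, JSON_HEX_TAG | JSON_HEX_AMP); ?>;
console.log(entries.ubiquitous.definition);
</script>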
Totally depends on the project. There are just too many variables to say 'you should do it this way'.
Instead, test it. Do it one way, push it to a high number of requests, and profile it. Then switch and try the other way. Keep it easy to switch the output from the PHP by following the MVC pattern.
The only general rule is 'minimise the number of HTTP requests', as HTTP is by far the biggest bottleneck when a page is loading.
Facebook has introduced a ticker which shows live news scrolling down. How can I have this same type of functionality on my site? I don't want to use an iframe and have it refresh, because it will (a) flicker and (b) make the page-loading icon come up (depending on the browser). How can this be done?
For this you'd want to fetch the data you're looking for with AJAX every X seconds, also known as polling.
Here's the breakdown: every X seconds, we want to query our database for new data, so we send an asynchronous POST to a PHP page, which then returns a data set with the results. We also declare a callback function (native to jQuery) that will be passed the data echoed from our PHP.
Your PHP:
if (isset($_POST['action'])){
    if ($_POST['action'] == 'pollNewData'){
        pollNewData();
    }
}

function pollNewData(){
    $term = $_POST['term'];
    // note: interpolating user input into SQL like this is open to SQL
    // injection; use a prepared statement in real code
    $sql = "select * from TABLE where TERM = '$term'";
    $results = get_rows($sql);
    echo json_encode(array('status' => 200, 'results' => $results));
}
Your front end javascript:
setInterval(pollForNewData, 10000); // repeat the poll every 10 seconds
function pollForNewData(){
$.post('url/ajax.php',{
action: 'pollNewData',
term: 'your_term'
}, function(response){
if (response.status == 200){
$.each(response.results, function(index, value){
$("#container").append(value.nodeName);
});
}
}, 'json');
}
What essentially is going on here is that you post asynchronously with jQuery's AJAX method. The way you trigger a function in your PHP is by including a key-value item in your POST that indicates which function you want to call. I called this item "action", and its value is the name of the function that will be called for this specific event.
You then return the data fetched by your back end by echoing a json_encoded data set.
In the JavaScript, you are posting to this PHP function every 10 seconds. The callback run after the post completes is the function(response) part, with the echoed data passed in as response. You can then treat this response as a JSON object (since, after the callback, we declared the return type to be json).
Pretty much the only way you can do it is with some sort of asynchronous JavaScript. The easiest way is to have JavaScript periodically poll another HTTP resource with that information and replace the current content in the DOM with the new content. jQuery and other JavaScript frameworks provide AJAX wrappers to make this process reasonably simple. You aren't limited to using XML in the request.
It's a good idea to make sure that some content is available even without JavaScript enabled, and just use the JavaScript to update it without having to refresh the page.
You can do this kind of ticker using AJAX: poll a URL that returns JSON/XML containing the new updates, and once you get the data, update the DOM.
You can refer to this page for an introduction to AJAX.
AJAX is the best method, but everyone else has already covered that. I wanted to add that, although I agree AJAX is the best method, there are other means too, such as Flash.
There are two approaches possible for obtaining updates like this. The first is called push and the second is called pull.
With push updates, you rely on the server to tell the client when new information is available. In your example, a push update would come in the form of Facebook telling your site that something new happened. In general, push schemes will tend to be more bandwidth friendly because you only transmit information when something needs to be said (Facebook doesn't contact your site if nothing is going on). Unfortunately, push updating requires the server to be specially configured to support it. In other words, unless the service you are asking for updates from (ex. Facebook) has a push update service available to you, you cannot implement one yourself. That's where pull techniques come in.
With pull updating, the client requests new information from the server when it deems necessary. It is typically implemented using polling such that the client will regularly and periodically query the server to see what the latest information is. Depending on how the service is oriented, you may have to do client-side parsing to determine if what you receive as a response is actually new. The downside here is of course that you likely will consume unnecessary amounts of bandwidth for your polling requests where no useful information is obtained. Similarly, you may not be as up-to-date with new information as you would like to be, depending on your polling interval.
In the end, since your site is web-based and is interfacing with Facebook, you will likely want to use some sort of AJAX system.
There are a few hybrid-ish approaches, such as long polling via Comet, which are outlined rather well on Wikipedia:
Push technology: http://en.wikipedia.org/wiki/Push_technology
Pull technology: http://en.wikipedia.org/wiki/Pull_technology
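As a very rough illustration of the long-polling idea mentioned above (the has_new_items_since() check, the timings, and the response shape are all made up for this sketch), the server holds the request open until something new exists or a timeout passes, then answers, and the client immediately reconnects:

<?php
// hypothetical long-polling endpoint
set_time_limit(40);                       // allow the request to linger
$since   = isset($_GET['since']) ? (int) $_GET['since'] : 0;
$started = time();

// stand-in for a real check against the database or a cache
function has_new_items_since($timestamp)
{
    return false; // replace with a real query
}

while (time() - $started < 30) {          // hold the connection up to 30 seconds
    if (has_new_items_since($since)) {
        echo json_encode(array('status' => 'new-data', 'since' => time()));
        exit;
    }
    sleep(1);                             // don't hammer the database
}

// nothing happened: tell the client to reconnect
echo json_encode(array('status' => 'timeout', 'since' => $since));

Bear in mind that each waiting client ties up a server process for the duration, so plain short polling is often the simpler choice for PHP.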
I'm having to interact with the Facebook API for this project, which I find to be actually a bit slower than I expected. Because of this, I'm having to do something which I find rather unorthodox: I need to load the content Facebook provides back into my PHP script AS IT LOADS from Facebook. Traditionally I've loaded content into a div tag on success of the script; however, I need to load the content as it appears. It would be absolutely unacceptable to have a client wait nearly a minute for Facebook to load an album and all the respective comments before displaying anything. Hopefully I'm not being too vague; I'm not here to ask for code, but I've tried just about everything I can think of. Is this a simple concept I'm missing?... I feel as though this is easier than I'm making it.
I'm using jQuery AJAX as I find this easiest to work with. Any comments and/or help would be greatly appreciated.
The root of your problem is that jQuery's AJAX methods hook into the onreadystatechange event and readyState variable. readyState is only set to 4 when the file is completely transferred, and therefore your events will only fire after the download is complete.
Accessing the data as it is being sent is not consistent across different browser families. Doing it this way is going to be incredibly complex and time-consuming. I would recommend first doing this a little differently, perhaps by preloading the relevant facebook data on your own server predictively. This can be compiled to a static page, and that can in turn be served to your users very quickly.
To get the data to your users faster, you'll need to work outside the box as well. There's a jQuery plugin discussed here (Does PHP flush work with jQuerys ajax?) that makes jQuery's ajax methods compatible with streamed output. Good luck.
The problem seems to stem from the fact that you're getting too much data at once. I suppose you are talking about receiving content from ajax as it is printed out immediately, but it is possible this content is built and sent at once and you won't have access to the data until the entire parse is complete. If this is untrue, look into COMET. If it is true, the solution is to put a limit on how much data you retrieve at once in an effort to reduce the parse time. For example, retrieve 5 photos in each request. Add those 5 photos to the DOM while you retrieve the next 5.
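A sketch of that chunked approach on the server side, with hypothetical names (fetch_photos_from_facebook() stands in for whatever API call you actually make): the client asks for a small offset/limit window, renders those photos, then requests the next window.

<?php
// hypothetical chunked endpoint: photos.php?offset=0&limit=5
header('Content-Type: application/json');

$offset = isset($_GET['offset']) ? max(0, (int) $_GET['offset']) : 0;
$limit  = isset($_GET['limit'])  ? min(25, max(1, (int) $_GET['limit'])) : 5;

// stand-in for the slow Facebook call; only ask for this slice
function fetch_photos_from_facebook($offset, $limit)
{
    return array_fill(0, $limit, array('url' => 'http://example.com/photo.jpg'));
}

echo json_encode(array(
    'offset' => $offset,
    'photos' => fetch_photos_from_facebook($offset, $limit),
));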
Instead of putting whatever code you want inside of the callback, just put it after the callback. For example:
$("#div").load("facebook...", function() {
//do stuff
});
//put the stuff you want to load at the same time, here
I am hitting a lot of different sites to get a list of information and I want to display this information as I get it. Right now I am using a Smarty Template and what I would like to do is:
Pseudo code:
{foreach $page}
$smarty_var = use AJAX to call a PHP function
Render out a new table row on the fly w/ the newly assigned var
<tr><td>{$smarty_var}</td></tr>
{/foreach}
I don't know much about AJAX, I used it a long time ago, and it was similar to this, but not quite, there was user action taken. No I don't have a JS Framework in place. Am I way off here on how this should go? Basically I want to display a table row as data comes available, each table row will be a request to get the data from another site.
Sure, I will tell you about what I am trying to do: http://bookscouter.com/prices.php?isbn=0132184745+&x=19&y=6
If you click on the 'Click to view prices from all 43 links' link at the bottom of that page, you will see what I mean. I am using cURL to get all the pages I want a price from. Then for each page I want to get the price, so each page is going to fire off a function that runs some fun code like this:
function parseTBDOTpageNew($page, $isbn)
{
$first_cut = preg_split('/<table[^>]*>/', $page);
$second_cut = preg_split('/<td[^>]*>/', $first_cut[2]);
if(strstr($second_cut[4], "not currently buying this book") !== false)
{
return "\$0.00";
}
$third_cut = preg_split('/<b[^>]*>/', $second_cut[9]);
$last_cut = preg_split('/</', $third_cut[3]);
return $last_cut[0];
}
This function is called from another function, which puts the price returned from the function above, the name of the company, and a link into an array; that array gets added to a bigger array that is sent to Smarty. Instead of doing that, I just want to take the first array that is returned, with the individual information, and add the values into a table row on the fly.
I will take your advice on jQuery. What I have started is an onload function that receives the $pages to be parsed, and I was just in the middle of writing: for each page, get the info and spit out some HTML with the info on the page.
Also, the function that calls the function to get the price lives in a PHP file, so I need the request to hit a function within a PHP file and NOT just call file.php?param1=foo; I need it to actually hit the function in the file. I have jQuery in place, and now I'm just trying to figure it out and get it to do what I need, ugh. I am searching; any help would be appreciated.
No I don't have a JS Framework in place
Fix that first. You don't want to juggle XMLHTTPRequests yourself. jQuery is SO's canonical JS library, and is pretty nifty.
Basically I want to display a table row as data comes available, each table row will be a request to get the data from another site.
How many rows will you be dealing with? Do they all have to be loaded asynchronously?
Let's tackle this in a braindead, straightforward way. Create a script that does nothing more than:
Take a site ID and fetch data from the corresponding URL
Render that data to some data transport format, either HTML or JSON.
Then it's a simple matter of making the page that the user gets, which will contain JavaScript code that makes the AJAX calls to the data fetcher, then either shoves the HTML into the page directly, or transforms the data into HTML and then shoves that into the page.
You'll note that at no point is Smarty really involved. ;)
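A rough sketch of that fetcher script (the file name, site list, and extract_price() stub are hypothetical; in practice you'd drop in your existing cURL and parsing code): it takes a site ID, fetches that one vendor's page, and returns just the extracted price as JSON.

<?php
// fetch_price.php?site=2 -- hypothetical single-site fetcher
header('Content-Type: application/json');

$sites  = array(1 => 'http://example.com/buyback', 2 => 'http://example.org/quote');
$siteId = isset($_GET['site']) ? (int) $_GET['site'] : 0;

if (!isset($sites[$siteId])) {
    echo json_encode(array('error' => 'unknown site'));
    exit;
}

// stand-in for your price-extraction logic
function extract_price($html)
{
    return '$0.00'; // plug your real parsing in here
}

// fetch just this one vendor's page and return only the price
$html = file_get_contents($sites[$siteId]);
echo json_encode(array('site' => $siteId, 'price' => extract_price($html)));

The JavaScript on the page then fires one small AJAX call per site and appends a table row as each response comes back.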
This solution is highly impractical for anything more than a trivial number of sites to be polled asynchronously. If you need rows for dozens or hundreds of sites, that means each client is going to need to make dozens or hundreds of requests to your site for every single normal pageview. This is going to slaughter your server if more than one or two people load the page at once.
Can you tell us more about what you're doing, and what you're trying to accomplish? There are lots of ways to mitigate this problem, but they all depend on what you're doing.
Update for your question edit.
First, please consider using an actual HTML parser instead of regular expressions. The DOM is very powerful and you can target specific elements using XPath.
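For instance, here's a small sketch using PHP's built-in DOM extension; the sample HTML and the XPath expression are only illustrative, since the real query depends on the structure of the page you're scraping:

<?php
$page = '<table><tr><td><b>$4.20</b></td></tr></table>';   // your fetched HTML would go here

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well formed
$doc->loadHTML($page);
libxml_clear_errors();

$xpath = new DOMXPath($doc);

// illustrative XPath: the first <b> inside a <td> of the table
$nodes = $xpath->query('//table//td/b');
if ($nodes->length > 0) {
    echo trim($nodes->item(0)->textContent);   // prints "$4.20"
}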
Instead of doing that, I just want to get the first array that is returned with individual information and add the values into a table row on the fly.
So, here's the ultimate problem. You want to do something asynchronously. PHP does not have a built-in generalized way to perform asynchronous tasks. There are a few ways to deal with this problem.
The first is as I've described above. Instead of doing any of the curl requests on page load, you farm the work out to the end user, and have the end user's browser make requests to your scraping script one by one, filling in the results.
The second is to use an asynchronous work queue, like Gearman. It has excellent PHP support via a PECL extension. You'd write one or more workers that can accept requests, and keep a pool of them running at all times. The larger the pool, the more things you can do at once. Once all of the data has returned, you can throw the complete set of data at your template engine, and call it good.
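A bare-bones sketch of that setup with the Gearman PECL extension (the fetch_price job name, URLs, and payload shape are invented for illustration). The worker side, which you keep running in a pool:

<?php
// worker.php -- hypothetical worker; keep one or more of these running
$worker = new GearmanWorker();
$worker->addServer();                       // defaults to 127.0.0.1:4730
$worker->addFunction('fetch_price', function (GearmanJob $job) {
    $url = $job->workload();
    // fetch and parse the vendor page here; stubbed for this sketch
    return json_encode(array('url' => $url, 'price' => '$0.00'));
});
while ($worker->work());

And the submitting side, which farms out one job per vendor and collects the results:

<?php
// in your page-building script
$client = new GearmanClient();
$client->addServer();

// collect each result as its job finishes
$client->setCompleteCallback(function (GearmanTask $task) {
    $result = json_decode($task->data(), true);
    // stash $result somewhere for the template
});

$client->addTask('fetch_price', 'http://example.com/buyback');
$client->addTask('fetch_price', 'http://example.org/quote');
$client->runTasks();                        // runs the jobs in parallel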
You can even combine the two, having the user make only one or two or three extra requests via ajax to fetch part of the returned data. Heck, you can even kick off the jobs in the background and return the page immediately, then request the results of the background jobs later via ajax.
Regardless of which way you handle it, you have a giant, huge problem. You're scraping someone's site. You may well be scraping someone's site very often. Not everyone is OK with this. You should seriously consider caching results aggressively, or even checking with each of the vendors to see if they have an API or data export that you can query against instead.
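Even a crude file-based cache goes a long way here. A sketch, with an arbitrary one-hour lifetime and a hypothetical fetch_all_prices() standing in for the expensive scraping step:

<?php
function get_prices_cached($isbn)
{
    $cacheFile = sys_get_temp_dir() . '/prices_' . md5($isbn) . '.json';

    // reuse a recent result instead of re-scraping every vendor
    if (is_file($cacheFile) && time() - filemtime($cacheFile) < 3600) {
        return json_decode(file_get_contents($cacheFile), true);
    }

    $prices = fetch_all_prices($isbn);   // hypothetical: the expensive scraping step
    file_put_contents($cacheFile, json_encode($prices));
    return $prices;
}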