I need a script that, when the user closes the page, waits a few seconds (without any popups) and then lets the page close.
I remember seeing somewhere here a way to do this, using Ajax (if I remember correctly), by running a PHP file and waiting for the answer before closing, but I can't find it anymore. The PHP file contained a sleep() function.
Any help is greatly appreciated.
(This is used mainly to fade out text. When a user comes to the site, the text fades in via a CSS3 transition, and when they leave the page, the text fades out. I just need time for the fade-out. Yes, I know this is not user-friendly, but I was specifically asked to do it this way.)
You're probably thinking of a synchronous Ajax request (which blocks the UI):
window.addEventListener('unload', function()
{
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'script.php?when=unload', false); //<-- false makes the request synchronous
    xhr.send();
}, false);
But there are other ways; check this answer.
On the whole, I'd not do things like this. If a site attempted to deny me the option of closing the window when I feel like it, I'd never use/visit it again. That, and the fact that your JS code is still subject to how the browser implements it, and the browser can be controlled by the client's OS. If I close the browser application, a JS event has nothing to say in that matter, especially if I terminate the browser process (using kill -9, or ctrl+alt+del).
The very least you can do is offer the client a choice, to either force-quit, or wait, explaining why you'd rather the client waited a while:
window.addEventListener('beforeunload', function u(e)
{
    var forceQuit = confirm('\tDo you wish to leave Now?\nIf you do, some changes you made won\'t be saved');
    if (forceQuit)
    {
        return e;
    }
    //synchronous ajax result here, or:
    e.returnValue = false;
    e.cancelBubble = true;
    if (e.preventDefault)
    {
        e.preventDefault();
        e.stopPropagation();
    }
    setTimeout(function()
    {//first, remove handler, so the beforeunload's behaviour is back to default
        window.removeEventListener('beforeunload', u, false);
        //dispatch new beforeunload event:
        window.dispatchEvent(new Event('beforeunload'));
    }, 5000);
}, false);
Have a look at jQuery's unload. You can bind a delay function to the unload event.
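A minimal sketch of that idea, assuming jQuery is available (the fade-out class is hypothetical, and note that a plain setTimeout would not survive the unload; only blocking work such as a synchronous request reliably buys time):
$(window).on('unload', function () {
    $('body').addClass('fade-out');                           // hypothetical CSS class that triggers the fade
    $.ajax({ url: 'script.php?when=unload', async: false });  // script.php sleeps a few seconds server-side
});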
Related
I am using Ajax and hash for navigation.
Is there a way to check if the window.location.hash changed like this?
http://example.com/blah#123 to http://example.com/blah#456
It works if I check it when the document loads.
But if I have #hash based navigation it doesn't work when I press the back button on the browser (so I jump from blah#456 to blah#123).
It shows inside the address box, but I can't catch it with JavaScript.
The only way to really do this (and it is how 'reallysimplehistory' does it) is by setting an interval that keeps checking the current hash and comparing it against what it was before. We do this and let subscribers subscribe to a changed event that we fire if the hash changes. It's not perfect, but browsers really don't support this event natively.
Update to keep this answer fresh:
If you are using jQuery (which today should be somewhat foundational for most) then a nice solution is to use the abstraction that jQuery gives you by using its events system to listen to hashchange events on the window object.
$(window).on('hashchange', function() {
//.. work ..
});
The nice thing here is you can write code that doesn't even need to worry about hashchange support; however, you DO need to do some magic, in the form of a somewhat lesser-known jQuery feature: jQuery special events.
With this feature you essentially get to run some setup code for any event, the first time somebody attempts to use the event in any way (such as binding to the event).
In this setup code you can check for native browser support and if the browser doesn't natively implement this, you can setup a single timer to poll for changes, and trigger the jQuery event.
This completely unbinds your code from needing to understand this support problem; the implementation of a special event of this kind is trivial (to get a simple 98%-working version), but why do that when somebody else already has?
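For illustration, a rough sketch of such a special event, assuming jQuery (this is a simplified version, not the full plugin alluded to above):
// Simplified special-event sketch: let jQuery bind hashchange natively when the
// browser supports it, otherwise poll the hash on a timer and trigger the event.
(function ($) {
    var timer, lastHash;
    $.event.special.hashchange = {
        setup: function () {
            if ('onhashchange' in window) { return false; } // return false so jQuery binds the native event
            lastHash = location.hash;
            timer = setInterval(function () {
                if (location.hash !== lastHash) {
                    lastHash = location.hash;
                    $(window).trigger('hashchange');
                }
            }, 100);
        },
        teardown: function () {
            if ('onhashchange' in window) { return false; }
            clearInterval(timer);
        }
    };
})(jQuery);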
HTML5 specifies a hashchange event. This event is now supported by all modern browsers. Support was added in the following browser versions:
Internet Explorer 8
Firefox 3.6
Chrome 5
Safari 5
Opera 10.6
Note that in the case of Internet Explorer 7 and Internet Explorer 9, the if statement will give true (for "onhashchange" in window), but window.onhashchange will never fire, so it's better to store the hash and check it every 100 milliseconds to see whether it has changed, for all versions of Internet Explorer.
if (("onhashchange" in window) && !($.browser.msie)) {
window.onhashchange = function () {
alert(window.location.hash);
}
// Or $(window).bind( 'hashchange',function(e) {
// alert(window.location.hash);
// });
}
else {
var prevHash = window.location.hash;
window.setInterval(function () {
if (window.location.hash != prevHash) {
prevHash = window.location.hash;
alert(window.location.hash);
}
}, 100);
}
EDIT -
Since jQuery 1.9, $.browser.msie is not supported. Source: http://api.jquery.com/jquery.browser/
There are a lot of tricks to deal with History and window.location.hash in IE browsers:
As original question said, if you go from page a.html#b to a.html#c, and then hit the back button, the browser doesn't know that page has changed. Let me say it with an example: window.location.href will be 'a.html#c', no matter if you are in a.html#b or a.html#c.
Actually, a.html#b and a.html#c are stored in history only if the elements '<a name="#b">' and '<a name="#c">' exist previously in the page.
However, if you put an iframe inside a page, navigate from a.html#b to a.html#c in that iframe and then hit the back button, iframe.contentWindow.document.location.href changes as expected.
If you use 'document.domain=something' in your code, then you can't access iframe.contentWindow.document.open() (and many History Managers do that).
I know this isn't a real response, but maybe IE-History notes are useful to somebody.
Firefox has had an onhashchange event since 3.6. See window.onhashchange.
I was using this in a React application to make the URL display different parameters depending on which view the user was on.
I watched the hash parameter using
window.addEventListener('hashchange', doSomethingWithChangeFunction);
Then
function doSomethingWithChangeFunction () {
    let urlParam = window.location.hash; // Get new hash value
    // ... Do something with new hash value
}
Worked a treat, works with forward and back browser buttons and also in browser history.
You could easily implement an observer (the "watch" method) on the "hash" property of the window.location object.
Firefox has its own implementation for watching changes of an object, but if you use some other implementation (such as Watch for object properties changes in JavaScript) for other browsers, that will do the trick.
The code will look like this:
window.location.watch(
    'hash',
    function(id, oldVal, newVal){
        console.log("the window's hash value has changed from " + oldVal + " to " + newVal);
    }
);
Then you can test it:
var myHashLink = "home";
window.location = window.location + "#" + myHashLink;
And of course that will trigger your observer function.
Another great implementation is jQuery History, which will use the native onhashchange event if it is supported by the browser; if not, it will use an iframe or interval as appropriate for the browser to ensure all the expected functionality is successfully emulated. It also provides a nice interface to bind to certain states.
Another project worth noting is jQuery Ajaxy, which is pretty much an extension for jQuery History that adds Ajax to the mix. As when you start using Ajax with hashes, it gets quite complicated!
var page_url = 'http://www.yoursite.com/'; // full path leading up to the hash
var current_url_w_hash = page_url + window.location.hash; // now you might have something like: http://www.yoursite.com/#123
function TrackHash() {
    if (document.location != current_url_w_hash) {
        window.location = document.location;
    }
    return false;
}
var RunTabs = setInterval(TrackHash, 200);
That's it... now, anytime you hit your back or forward buttons, the page will reload as per the new hash value.
I've been using path.js for my client side routing. I've found it to be quite succinct and lightweight (it's also been published to NPM too), and makes use of hash based navigation.
path.js NPM
path.js GitHub
SHORT and SIMPLE example
Click on buttons to change hash
window.onhashchange = () => console.log(`Hash changed -> ${window.location.hash}`)
<button onclick="window.location.hash=Math.random()">hash to Math.Random</button>
<button onclick="window.location.hash='ABC'">Hash to ABC</button>
<button onclick="window.location.hash='XYZ'">Hash to XYZ</button>
Is there a way to run a final JavaScript code when a user closes a browser window or refreshes the page?
I'm thinking of something similar to onload but more like onclose? Thanks.
I don't like the onbeforeunload method, which always results in a confirmation box popping up ("Leave page"/"Stay" on Mozilla, or "Reload"/"Don't reload" on Chrome). Is there a way to execute the code quietly?
There is both window.onbeforeunload and window.onunload, which are used differently depending on the browser. You can assign them either by setting the window properties to functions, or using the .addEventListener:
window.onbeforeunload = function(){
// Do something
}
// OR
window.addEventListener("beforeunload", function(e){
// Do something
}, false);
Usually, onbeforeunload is used if you need to stop the user from leaving the page (ex. the user is working on some unsaved data, so he/she should save before leaving). onunload isn't supported by Opera, as far as I know, but you could always set both.
OK, I found a working solution for this. It consists of using the beforeunload event and then making the handler return null. This executes the wanted code without a confirmation box popping up. It goes something like this:
window.onbeforeunload = closingCode;
function closingCode(){
// do something...
return null;
}
Sometimes you may want to let the server know that the user is leaving the page. This is useful, for example, to clean up unsaved images stored temporarily on the server, to mark that user as "offline", or to log when their session ends.
Historically, you would send an AJAX request in the beforeunload function; however, this has two problems. If you send an asynchronous request, there is no guarantee that the request will be executed correctly. If you send a synchronous request, it is more reliable, but the browser will hang until the request has finished. If this is a slow request, this would be a huge inconvenience to the user.
Later came navigator.sendBeacon(). By using the sendBeacon() method, the data is transmitted asynchronously to the web server when the User Agent has an opportunity to do so, without delaying the unload or affecting the performance of the next navigation. This solves all of the problems with submission of analytics data: the data is sent reliably, it's sent asynchronously, and it doesn't impact the loading of the next page.
Unless you are targeting only desktop users, sendBeacon() should not be used with unload or beforeunload since these do not reliably fire on mobile devices. Instead you can listen to the visibilitychange event. This event will fire every time your page is visible and the user switches tabs, switches apps, goes to the home screen, answers a phone call, navigates away from the page, closes the tab, refreshes, etc.
Here is an example of its usage:
document.addEventListener('visibilitychange', function() {
    if (document.visibilityState == 'hidden') {
        navigator.sendBeacon("/log.php", analyticsData);
    }
});
When the user returns to the page, document.visibilityState will change to 'visible', so you can also handle that event as well.
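For example, a minimal sketch of reacting when the page becomes visible again (handleReturn() is a hypothetical callback for your own logic):
document.addEventListener('visibilitychange', function() {
    if (document.visibilityState == 'visible') {
        handleReturn(); // hypothetical: e.g. refresh data or mark the user as online again
    }
});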
sendBeacon() is supported in:
Edge 14
Firefox 31
Chrome 39
Safari 11.1
Opera 26
iOS Safari 11.4
It is NOT currently supported in:
Internet Explorer
Opera Mini
Here is a polyfill for sendBeacon() in case you need to add support for unsupported browsers. If the method is not available in the browser, it will send a synchronous AJAX request instead.
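For reference, a minimal sketch of that kind of fallback (not the linked polyfill itself), assuming the payload is a plain string:
function sendBeaconCompat(url, data) {
    if (navigator.sendBeacon) {
        return navigator.sendBeacon(url, data);
    }
    // Fallback: a synchronous XHR so the request completes before the unload finishes
    var xhr = new XMLHttpRequest();
    xhr.open('POST', url, false); // false = synchronous
    xhr.setRequestHeader('Content-Type', 'text/plain;charset=UTF-8');
    xhr.send(data);
    return true;
}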
Update:
It might be worth mentioning that sendBeacon() only sends POST requests. If you need to send a request using any other method, an alternative would be to use the fetch API with the keepalive flag set to true, which causes it to behave the same way as sendBeacon(). Browser support for the fetch API is about the same.
fetch(url, {
method: ...,
body: ...,
headers: ...,
credentials: 'include',
mode: 'no-cors',
keepalive: true,
})
jQuery version:
$(window).unload(function(){
// Do Something
});
Update: jQuery 3:
$(window).on("unload", function(e) {
// Do Something
});
Thanks Garrett
The documentation here encourages listening to the onbeforeunload event and/or adding an event listener on window.
window.addEventListener('beforeunload', function(event) {
//do something here
}, false);
You can also just populate the .onunload or .onbeforeunload properties of window with a function or a function reference.
Though behaviour is not standardized across browsers, the function may return a value that the browser will display when confirming whether to leave the page.
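For example, a minimal sketch of that return-value behaviour (note that most modern browsers ignore the custom text and show a generic prompt instead):
window.onbeforeunload = function () {
    return "You have unsaved changes. Leave anyway?";
};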
You can use window.onbeforeunload.
window.onbeforeunload = confirmExit;
function confirmExit(){
    alert("confirm exit is being called");
    return false;
}
The event is called beforeunload, so you can assign a function to window.onbeforeunload.
Is there a way to execute the code quietly? (no popup)
I have used this successfully where other methods (e.g. returning null or false) had issues. Tested on IE, Edge, Chrome, and Opera.
window.addEventListener('beforeunload', function (e) {
    // the absence of a returnValue property on the event will guarantee the browser unload happens
    delete e['returnValue'];
    // my code that silently runs goes here
});
The above code is pasted directly from Mozilla.org's onbeforeunload doc
Update: This doesn't appear to work on iOS Safari :( So it is not a total solution, but maybe it still helps someone.
I have read many similar questions concerning cancelling a POST request with jQuery, but none seem to be close to mine.
I have your everyday form that has a PHP-page as an action:
<form action="results.php">
<input name="my-input" type="text">
<input type="submit" value="submit">
</form>
Processing results.php on the server-side, based on the post information given in the form, takes a long time (30 seconds or even more and we expect an increase because our search space will increase as well in the coming weeks). We are accessing a Basex server (version 7.9, not upgradable) that contains all the data. A user-generated XPath code is submitted in a form, and the action url then sends the XPath code to the Basex server which returns the results. From a usability perspective, I already show a "loading" screen so users at least know that the results are being generated:
$("form").submit(function() {
$("#overlay").show();
});
<div id="overlay"><p>Results are being generated</p></div>
However, I would also want to give users the option to press a button to cancel the request and cancel the request when a user closes the page. Note that in the former case (on button click) this also means that the user should stay on the same page, can edit their input, and immediately re-submit their request. It is paramount that when they cancel the request, they can also immediately resend it: the server should really abort, and not finish the query before being able to process a new query.
I figured something like this:
$("form").submit(function() {
$("#overlay").show();
});
$("#overlay button").click(abortRequest);
$(window).unload(abortRequest);
function abortRequest() {
// abort correct request
}
<div id="overlay">
<p>Results are being generated</p>
<button>Cancel</button>
</div>
But as you can see, I am not entirely sure how to fill in abortRequest to make sure the post request is aborted, and terminated, so that a new query can be sent. Please fill in the blanks! Or would I need to .preventDefault() the form submission and instead do an ajax() call from jQuery?
As I said I also want to stop the process server-side, and from what I read I need exit() for this. But how can I exit another PHP function? For example, let's say that in results.php I have a processing script and I need to exit that script, would I do something like this?
<?php
if (isset($_POST['my-input'])) {
$input = $_POST['my-input'];
function processData() {
// A lot of processing
}
processData();
}
if (isset($_POST['terminate'])) {
function terminateProcess() {
// exit processData()
}
}
and then do a new ajax request when I need to terminate the process?
$("#overlay button").click(abortRequest);
$(window).unload(abortRequest);
function abortRequest() {
    $.ajax({
        url: 'results.php',
        data: {terminate: true},
        type: 'post',
        success: function() { alert("terminated"); }
    });
}
I did some more research and I found this answer. It mentions connection_aborted() and also session_write_close() and I'm not entirely sure which is useful for me. I do use SESSION variables, but I don't need to write away values when the process is cancelled (though I would like to keep the SESSION variables active).
Would this be the way? And if so, how do I make one PHP function terminate the other?
I have also read into Websockets and it seems something that could work, but I don't like the hassle of setting up a Websocket server as this would require me to contact our IT guy who requires extensive testing on new packages. I'd rather keep it to PHP and JS, without third party libraries other than jQuery.
Considering most comments and answers suggest that what I want is not possible, I am also interested to hear alternatives. The first thing that comes to mind is paged Ajax calls (similar to many web pages that serve search results, images, what-have-you in an infinite scroll). A user is served a page with the X first results (e.g. 20), and when they click a button "show next 20 results" those are shown are appended. This process can continue until all results are shown. Because it is useful for users to get all results, I will also provide a "download all results" option. This will then take very long as well, but at least users should be able to go through the first results on the page itself. (The download button should thus not disrupt the Ajax paged loads.) It's just an idea, but I hope it gives some of you some inspiration.
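A rough sketch of what that paging idea could look like on the client side (results.php accepting start and limit parameters, and the element IDs, are assumptions for illustration):
var offset = 0, pageSize = 20;
$('#show-more').on('click', function () {                 // hypothetical "show next 20 results" button
    $.ajax({
        url: 'results.php',
        type: 'post',
        data: { 'my-input': $('[name="my-input"]').val(), start: offset, limit: pageSize },
        success: function (html) {
            $('#results').append(html);                   // hypothetical container for the result rows
            offset += pageSize;
        }
    });
});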
To my understanding, the key points are:
You cannot cancel a specific request once a form is submitted. The reason is that on the client side you don't have anything with which to identify the state of the form request (whether it has been posted, whether it is processing, etc.). So the only way to cancel it is to reset the $_POST variables and/or refresh the page; the connection will then be broken and the previous request will not be completed.
With your alternative solution, when you send another Ajax call with {terminate: true}, results.php can stop processing with a simple die(). But as it will be an async call, you cannot map it to the previous form submit, so in practice this will not work.
Probable solution: submit the form with Ajax. With jQuery ajax you will have an xhr object which you can abort() upon window unload.
UPDATE (upon the comment):
A synchronous request is one where your page blocks (all user actions) until the result is ready. Pressing a submit button in the form does a synchronous call to the server by submitting the form, by definition [https://www.w3.org/TR/html-markup/button.submit.html].
Now, once the user has pressed the submit button, the connection from browser to server is synchronous, so it will not be interrupted until the result is there. So when other calls are made to the server while the submit is in progress, no reference to this operation is available to them, as it is not finished. That is why sending a termination call with Ajax will not work.
Thirdly: for your case you can consider the following code example:
HTML:
<form action="results.php">
<input name="my-input" type="text">
<input id="resultMaker" type="button" value="submit">
</form>
<div id="overlay">
<p>Results are being generated</p>
<button>Cancel</button>
</div>
JQUERY:
<script type="text/javascript">
var jqXhr = '';
$('#resultMaker').on('click', function(){
    $("#overlay").show();
    jqXhr = $.ajax({
        url: 'results.php',
        data: $('form').serialize(),
        type: 'post',
        success: function() {
            $("#overlay").hide();
        }
    });
});
var abortRequest = function(){
    if (jqXhr != '') {
        jqXhr.abort();
    }
};
$("#overlay button").on('click', abortRequest);
window.addEventListener('unload', abortRequest);
</script>
This is example code; I just used your code examples and changed a few things here and there.
Himel Nag Rana demonstrated how to cancel a pending Ajax request.
Several factors may interfere and delay subsequent requests, as I have discussed earlier in another post.
TL;DR: 1. it is very inconvenient to try to detect that the request was cancelled from within the long-running task itself, and 2. as a workaround you should close the session (session_write_close()) as early as possible in your long-running task so as not to block subsequent requests.
connection_aborted() cannot be used. This function is supposed to be called periodically during a long task (typically, inside a loop). Unfortunately there is just one single significant, atomic operation in your case: the query to the data back end.
If you applied the procedures advised by Himel Nag Rana and myself, you should now be able to cancel the Ajax request and immediately allow a new requests to proceed. The only concern that remains is that the previous (cancelled) request may keep running in the background for a while (not blocking the user, just wasting resources on the server).
The problem could be rephrased to "how to abort a specific process from the outside".
As Christian Bonato rightfully advised, here is a possible implementation. For the sake of the demonstration I will rely on Symfony's Process component, but you can devise a simpler custom solution if you prefer.
The basic approach is:
Spawn a new process to run the query, save the PID in session. Wait for it to complete, then return the result to the client
If the client aborts, it signals the server to just kill the process.
<?php // query.php
use Symfony\Component\Process\PhpProcess;

session_start();

if(isset($_SESSION['queryPID'])) {
    // A query is already running for this session
    // As this should never happen, you may want to raise an error instead
    // of just silently killing the previous query.
    posix_kill($_SESSION['queryPID'], SIGKILL);
    unset($_SESSION['queryPID']);
}

$queryString = parseRequest($_POST);

$process = new PhpProcess(sprintf(
    '<?php $result = runQuery(%s); echo fetchResult($result);',
    $queryString
));
$process->start();

$_SESSION['queryPID'] = $process->getPid();
session_write_close();

$process->wait();

$result = $process->getOutput();
echo formatResponse($result);
?>
<?php // abort.php
session_start();
if(isset($_SESSION['queryPID'])) {
    $pid = $_SESSION['queryPID'];
    posix_kill($pid, SIGKILL);
    unset($_SESSION['queryPID']);
    echo "Query $pid has been aborted";
} else {
    // there is nothing to abort, send a HTTP error code
    header($_SERVER['SERVER_PROTOCOL'] . ' 599 No pending query', true, 599);
}
?>
// javascript
function abortRequest(pendingXHRRequest) {
    pendingXHRRequest.abort();
    $.ajax({
        url: 'abort.php',
        success: function() { alert("terminated"); }
    });
}
Spawning a process and keeping track of it is genuinely tricky, which is why I advised using existing modules. Integrating just one Symfony component should be relatively easy via Composer: first install Composer, then the Process component (composer require symfony/process).
A manual implementation could look like this (beware, this is untested, incomplete and possibly unstable, but I trust you will get the idea):
<?php // query.php
session_start();

$queryString = parseRequest($_POST); // $queryString should be escaped via escapeshellarg()
$processHandler = popen("/path/to/php-cli/php asyncQuery.php $queryString", 'r');

// fetch the first line of output, PID expected
$pid = fgets($processHandler);
$_SESSION['queryPID'] = $pid;
session_write_close();

// fetch the rest of the output
while($line = fgets($processHandler)) {
    echo $line; // or save this line for further processing, e.g. through json_encode()
}
fclose($processHandler);
?>
<?php // asyncQuery.php
// echo the current PID
echo getmypid() . PHP_EOL;
// then execute the query and echo the result
$result = runQuery($argv[1]);
echo fetchResult($result);
?>
With BaseX 8.4, a new RESTXQ annotation %rest:single was introduced, which allows you to cancel a running server-side request: http://docs.basex.org/wiki/RESTXQ#Query_Execution. It should solve at least some of the challenges you described.
The current way to return only chunks of the result is to pass on the indexes of the first and last result you want, and to do the filtering in XQuery:
$results[position() = $start to $end]
If you return one more result than requested, the client will know that there are more results. This may be helpful, because computing the total result size is often much more expensive than returning only the first results.
I hope I understood this correctly.
Instead of letting the browser "natively" submit the FORM, don't: write JS code that does this instead. In other words (I didn't test this; so interpret as pseudo-code):
<form action="results.php" onsubmit="return false;">
<input name="my-input" type="text">
<input type="submit" value="submit">
</form>
So now, when that "submit" button is clicked, nothing will happen.
Obviously, you want your form POSTed, so write JS to attach a click handler to that submit button, collect the values from all input fields in the form (it is really not as scary as it sounds; check out the first link below), and send them to the server while saving a reference to the request (check the second link below), so that you can abort it (and maybe signal the server to quit as well) when the cancel button is clicked. Alternatively, you can simply abandon it by not caring about the results.
Submit a form using jQuery
Abort Ajax requests using jQuery
Alternatively, to make that HTML markup "clearer" relative to its functionality, consider not using the FORM tag at all: otherwise, what I suggested makes its usage confusing (why is it there if it's not used, you know what I mean?). But don't get distracted by this suggestion until you make it work the way you want; it's optional and a topic for another day (it might even relate to changing the architecture of the whole site).
HOWEVER, a thing to think about: what to do if the form-post already reached the server and server already started processing it and some "world" changes have already been made? Maybe your get-results routine doesn't change data, so then that's fine. But, this approach probably cannot be used with change-data POSTs with the expectation that "world" won't change if cancel-button is clicked.
I hope that helps :)
The user doesn't have to experience this synchronously.
Client posts a request
The server receives the client request and assigns an ID to it
The server "kicks off" the search and responds with a zero-data page and search ID
The client receives the "placeholder" page and starts checking if the results are ready based on the ID (with something like polling or websockets)
Once the search has completed, the server responds with the results next time it's polled (or notifies the client directly when using websockets)
This is fine when performance isn't quite the bottleneck and the nature of processing makes longer wait times acceptable. Think flight search aggregators that routinely run for 30-90 seconds, or report generators that have to be scheduled and run for even longer!
You can make the experience less frustrating if you don't block user interactions, keep them updated of search progress and start showing results as they come in if possible.
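A minimal client-side sketch of that flow, assuming jQuery (the /search and /search-status endpoints and the JSON shapes are hypothetical, purely for illustration):
// 1. Kick off the search and get back an ID; 2. poll until the results are ready.
$.post('/search', $('form').serialize(), function (res) {
    var poll = setInterval(function () {
        $.getJSON('/search-status', { id: res.id }, function (status) {
            if (status.done) {
                clearInterval(poll);
                $('#results').html(status.html);   // hypothetical results container
            }
        });
    }, 2000); // check every couple of seconds
}, 'json');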
You must solve this conceptually first before writing any code. Here are some things that come to mind offhand:
What does it mean to free up resources on the server?
What constitutes a graceful abort that will free up resources?
Is it enough to kill the PHP process waiting for the query result(s)? If so, the route suggested by RandomSeed could be interesting. Just keep in mind that it will only work on a single server. If you have multiple load balanced servers you won't have a way to kill a process on another server (not as easily at least).
Or do you need to cancel the database request from the database itself? In that case the answer suggested by Christian Grün is of more interest.
Or is it that there is no graceful shutdown and you have to force everything to die? If so, this seems awfully hacky.
Not all clients are going to explicitly abort
Some clients are going to close the browser, but their last request won't come through; some clients will lose internet connection and leave the service hanging, etc. You are not guaranteed to get an "abort" request when a client disconnects or has gone away.
You have to decide whether to live with potentially unwanted behavior, or implement an additional active state tracking, e.g. client pinging server for keepalive.
Side notes
A query time of 30 seconds or more is potentially long; is there a better tool for the job, so that you don't have to solve this with a hack like this?
You are looking for features of a concurrent system, but you're not using a concurrent system; if you want concurrency, use a better tool/environment for it, e.g. Erlang.
I am trying to make a chat room on my website, I am using php and mysql to store the messages and all the info. How could I automatically refresh the page every time someone updates the database? example:
If I am on my site, the messages show up on my screen but I can only see more recent messages after I refresh the page. Is there a way to make it real-time?
Also I do not know much javascript/ajax/jquery or any of that. Any help is appreciated!
There will be low amount of traffic on my site. Probably around 10-15 people at a time, if that even.
Your best bet is to make an AJAX request every second or so and see if there are new messages.
You probably do not want to reload the page every time. My recommendation, and there are many ways to do this, is to make an AJAX call every so often and check for/pull the new information from the database.
I would research AJAX and do a tutorial.
This would be accomplished through AJAX by calling a function and updating the div. I would not suggest making people refresh the page every time they send a message; it would get ugly. Another option would be using HTML5 web workers.
http://msdn.microsoft.com/en-us/hh549259.aspx
You are going to need to learn AJAX in order to make this work well, and jQuery is probably the easiest way to do it. If we can assume that the DIV you want to update has the ID PonyRides, you would want to do:
$("#PonyRides").ajax({url: "/chat.php?getupdates=true"});
This will get the contents of chat.php and stick it into the #PonyRides DIV. This assumes that chat.php will get the contents of the database and format them into HTML.
The remaining challenge is to make it update whenever your database does, but the simplest way is just to reload the whole chat regardless of whether an update has been made or not.
That will impact performance, but if you have fewer than a hundred chatters you'll probably be fine. If you have more than that, you'd do well to sense inactivity and check less often, or to send only the updates instead of the whole chat (see the sketch after the code below). Those are more complicated topics, though, and you can build them in as needed once you get these basic concepts down.
To do this, simply wrap the ajax() call in an interval like so:
setInterval(function(){ //the following code runs repeatedly
    $("#PonyRides").load("/chat.php?getupdates=true"); //update our chat div
}, 5000); //repeat every five seconds
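If you later want to send only the new messages rather than the whole chat, a rough sketch of that refinement could pass along the ID of the last message you already have (chat.php understanding a since parameter, and the JSON response shape, are assumptions):
var lastId = 0;
setInterval(function () {
    $.get('/chat.php', { getupdates: true, since: lastId }, function (data) {
        // assumed response shape: { lastId: 123, html: "<p>new messages...</p>" }
        if (data.html) {
            $('#PonyRides').append(data.html);
            lastId = data.lastId;
        }
    }, 'json');
}, 5000);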
The other, awful method would be to load chat in an iFrame, set to reload periodically using the meta refresh technique. This would be dreadful, and can only be recommended if you are trying for some reason to support incredibly old browsers.
You can use an AJAX request to update the values:
<script type='text/javascript'>
// function for making an object for making AJAX request
function getXMLHTTPRequest() {
    try {
        req = new XMLHttpRequest();
    } catch(err1) {
        try {
            req = new ActiveXObject("Msxml2.XMLHTTP");
        } catch (err2) {
            try {
                req = new ActiveXObject("Microsoft.XMLHTTP");
            } catch (err3) {
                req = false;
            }
        }
    }
    return req;
}

var http899 = getXMLHTTPRequest();

function searchFabIndia() {
    var myurl = "http://my2nddomain.com/yebhi.php";
    myRand = parseInt(Math.random()*999999999999999);
    var modurl = myurl + "?rand=" + myRand;
    http899.open("GET", modurl, true);
    http899.onreadystatechange = useHttpResponse899;
    http899.send(null);
}

function useHttpResponse899() {
    if (http899.readyState == 4) {
        if (http899.status == 200) {
            // do all processing with the obtained values / response here
            // after doing the stuff, call the fn again after 30 s, say
            setTimeout("searchFabIndia()", 30000);
        }
    }
}
</script>
<body onload='searchFabIndia();'>
I would suggest making an AJAX request to a file on your server which will update the database. If the update to the database is successful, then return the message which was updated. Back on the client side, you wait for the response, and if you get one, you append the message to the end of the content. This way you're not loading all the messages every time (which would be expensive); you're only loading new messages.
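A minimal sketch of that send-and-append flow, assuming jQuery (send_message.php and the element IDs are hypothetical):
$('#send').on('click', function () {
    var msg = $('#message').val();
    $.post('send_message.php', { message: msg }, function (savedMessage) {
        // the server returns the stored message only if the database insert succeeded
        if (savedMessage) {
            $('#chat').append('<p>' + savedMessage + '</p>');
            $('#message').val('');
        }
    });
});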
There must be something similar to SignalR (.NET) for PHP. It lets you run code when an event occurs, which I think is what you are looking for.
I'm writing some PHP that does a fair amount of processing and then generates reports of the results. Previously it would do a periodic flush(), but we're moving to Zend Framework and can't do that anymore. Instead, I would like to have some kind of status display that updates while the report is generated. So I made a progress bar that loads in an iframe, added shared memory to the progress bar update action and the report generation action, and caused the output to load via XMLHttpRequest. This all works fine.
My issue is that the browser wants to do the two requests serially instead of in parallel, so it will request the progress bar and then BLOCK until the progress bar completes BEFORE it requests the actual output. This means that the process will never end, since the real work never starts.
I've searched all morning for some way around this and came up empty-handed.
Is there some way to cause two connections, or am I just screwed?
My next action will be to break the processing apart some more and make the status updating action do the actual work, save the result, and then use the other action to dump it. This will be really painful and I'd like to avoid it.
Edit: Here is the javascript, as requested:
function startProgress()
{
    var iFrame = document.createElement('iframe');
    document.getElementsByTagName('body')[0].appendChild(iFrame);
    iFrame.id = 'progressframe';
    iFrame.src = '/report/progress';
}

function Zend_ProgressBar_Update(data)
{
    document.getElementById('pg-percent').style.width = data.percent + '%';
    document.getElementById('pg-text-1').innerHTML = data.text;
    document.getElementById('pg-text-2').innerHTML = data.text;
}

function Zend_ProgressBar_Finish()
{
    document.getElementById('pg-percent').style.width = '100%';
    document.getElementById('pg-text-1').innerHTML = 'Report Completed';
    document.getElementById('pg-text-2').innerHTML = 'Report Completed';
    document.getElementById('progressbar').style.display = 'none'; // Hide it
}

function ajaxTimeout(){
    xmlhttp.abort();
    alert('Request timed out');
}

var xmlhttp;
var xmlhttpTimeout;

function loadResults(){
    if (window.XMLHttpRequest){
        // code for IE7+, Firefox, Chrome, Opera, Safari
        xmlhttp = new XMLHttpRequest();
    } else {
        // code for IE6, IE5
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.open("POST", "/report/output", true);
    xmlhttp.onreadystatechange = function(){
        if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
            clearTimeout(xmlhttpTimeout);
            document.getElementById('report-output').innerHTML = xmlhttp.responseText;
        }
    };
    xmlhttpTimeout = setTimeout(ajaxTimeout, 600000); // Ten minutes
    xmlhttp.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xmlhttp.send('".file_get_contents("php://input")."');
}
This gets called from the following onload script:
onload="startProgress(); setTimeout(loadResults,1000);"
The issue is not in the JavaScript. If you put an alert() in there, the alert will be triggered at the right time, but the browser delays the second HTTP transaction until the first completes.
Thank you everyone for your input.
I didn't come up with a satisfactory answer for this within the timeframe permitted by our development schedule. It appears that every common browser wants to reuse an existing connection to a site when doing multiple transactions with that site. Nothing I could come up with would cause the browser to initiate a parallel connection on demand. Any time there are two requests to the same server, the client wants to do them in a serial fashion.
I ended up breaking the processing into parts and moving it into the status bar update action, saving the report output into a temporary file on the server, then causing the status bar finish function to initiate the xmlhttprequest to load the results. The output action simply spits out the contents of the temporary file and then deletes it.
Using two async Ajax calls could do the trick. With the first Ajax request you start the process by calling the PHP CLI to do the actual work deep in the background (so it doesn't expire or get cancelled) and return the ID of the process (task). Now that you have the process ID, you can start a periodic Ajax call to display the progress made.
Making a DB table containing process_id, state, and user would not be a bad thing. In that case, even if the user closes the browser while the process is running, the process would continue until done. The user could revisit the page and see the percentage done, because the process running in the CLI would save its progress into the DB table.
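A rough sketch of the client side of that approach, assuming jQuery (start_report.php and report_progress.php are hypothetical endpoints and the JSON shapes are assumptions; Zend_ProgressBar_Update/Finish are the functions from the question):
// First call: launch the CLI worker and get a task id back.
$.post('start_report.php', function (task) {
    // Second call, repeated: ask how far the worker has got.
    var poll = setInterval(function () {
        $.getJSON('report_progress.php', { id: task.id }, function (p) {
            Zend_ProgressBar_Update({ percent: p.percent, text: p.text });
            if (p.percent >= 100) {
                clearInterval(poll);
                Zend_ProgressBar_Finish();
            }
        });
    }, 1000);
}, 'json');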
Make a system call to the php file and detach it?
ex:
exec('nohup php test.php > test.out 2> test.err < /dev/null &');
echo 'I am totally printing here';
test.php contains a sleep for 2 seconds and prints, but echo returns immediately.
Have it store the results in a file/database/whatever. It will act like a very dirty fork.
You could also do something similar with a cURL call, I bet, if you have issues executing commands.
Credit here for the code example from bmellink (mine was way worse than his).
If you are able to load the report in the iFrame, you can kind of reverse your logic (I have done this to track file uploads to PHP).
Load Report in iFrame (can be hidden or whatever you like).
Make ajax call to get progress (step 1 will have to log progress as others have mentioned).
When the progress reports that loading is complete, you can show the iframe or do whatever is needed to finish.
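A rough sketch of that reversed flow (the element handling and a hypothetical /report/progress.json endpoint returning {percent, text} are assumptions; Zend_ProgressBar_Update/Finish come from the question):
// Load the report itself in a hidden iframe, then poll progress with Ajax.
var frame = document.createElement('iframe');
frame.id = 'reportframe';
frame.style.display = 'none';
frame.src = '/report/output';
document.body.appendChild(frame);

var poll = setInterval(function () {
    $.getJSON('/report/progress.json', function (p) {   // hypothetical JSON progress endpoint
        Zend_ProgressBar_Update(p);
        if (p.percent >= 100) {
            clearInterval(poll);
            frame.style.display = 'block';               // reveal the finished report
            Zend_ProgressBar_Finish();
        }
    });
}, 1000);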
Hope that helps. I just did a whole lot with iframes, CORS, and Ajax calls to APIs.