I have a Zend-based application that uses long polling. Basically it makes an HTTP POST request, which blocks the application until the request either returns or times out after 20 seconds.
I need to make a second request (currently issued sequentially), and unfortunately, if the first request hangs, it takes the full 20 seconds (the timeout) before the second request executes.
What is the best way to make my application asynchronous, or at the very least to perform non-blocking HTTP request I/O?
If both of your requests use the session (i.e. call session_start()) and you don't close the session in the long-polling script, then the session stays locked for every other script using the same session for as long as the long poll runs. Those scripts must therefore wait (I believe they block inside session_start(), but I'm not sure) until the session is closed, which by default happens automatically at the end of the script.
So if you don't need the session in long polling, either don't start it, or close it (call session_write_close()) before the code that runs for 20 seconds in your case (i.e. before the main iteration of the long poll).
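For example, the long-polling script could read what it needs and release the lock immediately (a minimal sketch; waitForEvent() is a hypothetical stand-in for your 20-second wait):

session_start();
$userId = $_SESSION['user_id']; // grab whatever you need from the session first
session_write_close();          // release the lock before the long-running part

// From here on, other requests using the same session are no longer blocked.
$result = waitForEvent($userId, 20); // hypothetical: blocks up to 20 s for data
echo json_encode($result);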
Hope this helps.
Mmmh, maybe you should add some more information to your question.
If the two requests aren't related (i.e. the second one doesn't need the first one to finish), you can fire several requests without waiting for the first one to complete. But of course you cannot do it without some Javascript.
For example, you could use jQuery's ajax function in asynchronous mode (which is the default). You can chain several ajax calls in jQuery and the second one will not wait for the first one to finish (but be careful with the ajax timeout settings).
I've got a name-lookup box that operates via your typical ajax requests. Here's the general flow of the JavaScript that fires every time a letter is pressed:
1. If an ajax request is already open, abort it.
2. If a timeout has already been created, destroy it.
3. Set a new timeout to run the following in half a second:
   - Send the string to 'nameLookup.php' via ajax
   - Wait for the response
   - Display the results
The issue is that nameLookup.php is very resource-heavy. In some cases up to 10,000 names are pulled from an SQL database, decrypted, and compared against the string. Under normal circumstances requests can take anywhere from 5 to 60 seconds to return.
I completely understand that when you abort a request on the client side, the server still keeps working on it and sends back a result; it's just the client side that knows to ignore the response. But the server is getting hung up working on all of these requests.
So if you do:
Request 1
Abort Request 1
Request 2
Abort Request 2
Request 3
Wait for response from Request 3
The server is either not even working on Request 3 until it's finished with 1 and 2... or it's so hung up working on Requests 1 and 2 that Request 3 takes an extra-long time.
I need to know how to tell the server to stop working on Request 1 and 2 so I can free up resources for it to work on Request 3.
I'm using Javascript & jQuery on the client side. PHP/Apache and SQL on the server side.
Store a boolean flag in a DB table, or in the session.
Have your resource-intensive script periodically check that value to see whether it should continue. If the DB says to stop, the script cancels itself (by calling return; in the current function, for example).
When you want to cancel, instead of calling abort(), make an AJAX request that sets the value to false.
The next time the resource-intensive script checks the value, it will see that it has to stop.
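As a rough sketch of the check (the job_flags table, its columns, and $jobId are assumptions for illustration):

// Inside the resource-intensive script: re-check the flag every 100 rows.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$check = $pdo->prepare('SELECT keep_running FROM job_flags WHERE job_id = ?');

foreach ($rows as $i => $row) {
    if ($i % 100 === 0) {            // limit the extra queries
        $check->execute([$jobId]);
        if (!$check->fetchColumn()) {
            return;                  // the client asked us to stop
        }
    }
    // ... decrypt $row and compare it against the search string ...
}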
Potential limitations:
1. Your script may not have a natural place to periodically check the DB.
2. Depending on how often the script checks the DB, it might take a few seconds before the script is effectively killed.
I think there is something missing from this question. What are the triggers for doing the requests? You might be trying to solve the wrong problem.
Let me elaborate on that. If your lookup box is doing some kind of autocompletion and fires a new search every time the user presses a key, then you are going to have the issue you describe.
The solution in that case is not killing all the processes; it lies in not starting them. So you might make some decisions, like not searching at all until there are at least a few characters to search with (let's say three). You might also wait until you can be reasonably sure the user has finished typing before sending off the request; let's say we wait 1 second.
Now, someone looking for all the Pauls in your list of names will send off one search when they type 'pau' and pause for 1 second, instead of three searches for 'p', then 'pa', then 'pau'... so there is no need to kill anything.
I've come up with an awesome solution that I've tested and it's working beautifully. It's just a few lines of PHP code to put in whatever files are being resource intensive.
This solution utilizes the server's process identifier (PID). We can use two PHP functions: posix_getpid() to get the current PID and posix_kill() to send a signal to another PID. This also assumes that you have already called session_start() somewhere else.
Here's the code:
// If there are any existing PIDs to kill, go through each.
if (!empty($_SESSION['pid'])) foreach ($_SESSION['pid'] as $i => $pid) {
    // Send SIGTERM to actually terminate the process (signal 0 only checks
    // that the process exists). If the signal was delivered, unset this PID
    // from the session so we don't waste time killing it again.
    if (posix_kill($pid, SIGTERM)) unset($_SESSION['pid'][$i]);
}
// Now that all the others are killed, store the current PID in the session.
$_SESSION['pid'][] = posix_getpid();
// Close the session now; otherwise the PID we just added won't actually be
// saved to the session until the process ends.
session_write_close();
A couple of things to note:
posix_kill() takes two arguments. The first is the PID, and the second is supposed to be one of the signal constants from this list. Note that signal 0 doesn't actually send a signal at all; it only checks whether the process exists, which is why it returns true. To actually terminate the process, use SIGTERM (or SIGKILL as a last resort).
Calling session_write_close() before the resource-intensive work starts is crucial. Otherwise the new PID that has been added to the session won't ACTUALLY be saved until all of the page's processing is done, which means the next process won't know to cancel the one that's still going on and taking forever.
I am making a notification system for my website. I want logged-in users to be notified immediately when a new notification is made. As many people say, there are only a few ways of doing so.
One is writing some JavaScript code that asks the server "Are there any new notifications?" at a given interval. This is called "polling" (if I've got it right).
Another is "long polling", or "Comet". As Wikipedia says, long polling is similar to polling, but instead of the client asking every time for new notifications, the server holds the request open and sends new notifications directly to the client as soon as they are available.
So how can I use long polling with PHP? (No need for full source code, just a way of doing it.)
What's its architecture/design, really?
The basic idea of long polling is that you send a request which is then NOT responded to or terminated by the server until some desired condition is met, i.e. the server doesn't "finish" serving the request by sending the response right away. You can achieve this by keeping the execution in a loop on the server side.
Imagine that in each iteration you run a database query, or whatever is necessary to find out whether the condition you need is now true. Only when it IS do you break the loop and send the response to the client. When the client receives the response, it immediately re-sends the "long-polling" request so that it doesn't miss the next "notification".
A simplified example of the server-side PHP code for this could be:
// Set the loop to run 28 times, sleeping 2 seconds between each loop.
for ($i = 1; $i < 29; $i++) {
    // Find out if the condition is satisfied, e.g. by querying the database
    // (checkForNewNotifications() is a hypothetical helper).
    if ($data = checkForNewNotifications()) {
        // The condition holds: break out and send the response immediately.
        echo json_encode($data);
        exit;
    }
    sleep(2);
}
// If nothing happened (the condition wasn't satisfied) during the 28 loops,
// respond with a special response indicating no results. This helps avoid
// hitting 'max_execution_time'. Still, the client should re-send the
// long-polling request even in this case.
echo json_encode(['status' => 'no-results']);
You can use (or study) some existing implementations, like Ratchet. There are a few others.
Essentially, you need to avoid having Apache or another web server handle the request. Just like you would with a node.js server, you can start PHP from the command line and use the server socket functions to create a server, with socket_select() handling communications.
It could technically work through the web server by keeping a loop alive, but the memory overhead of keeping one PHP process active per HTTP connection is typically too high. Creating your own server allows you to share memory between connections.
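A bare-bones sketch of that approach (just the socket plumbing; HTTP parsing, security, and error handling are left out):

// Run from the CLI: php server.php
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '0.0.0.0', 8080);
socket_listen($server);

$clients = [];
while (true) {
    // Block until the listening socket or any client socket is readable.
    $read = array_merge([$server], $clients);
    $write = $except = null;
    if (socket_select($read, $write, $except, null) < 1) {
        continue;
    }
    if (in_array($server, $read, true)) {
        $clients[] = socket_accept($server); // new connection: keep it open
    }
    foreach ($clients as $i => $client) {
        if (!in_array($client, $read, true)) {
            continue;
        }
        $data = socket_read($client, 2048);
        if ($data === false || $data === '') {
            socket_close($client);           // client disconnected
            unset($clients[$i]);
            continue;
        }
        // ...parse the request here and respond whenever data becomes
        // available, pushing to any of the open connections in $clients...
    }
}

Because a single memory-resident process holds every open connection, the per-connection overhead stays low, which is the point of the comparison with node.js.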
I used long polling for a chat application recently. After doing some research and playing with it for a while, here are some things I would recommend.
1) Don't long-poll for more than about 20 seconds. Some browsers will time out. I normally set my long poll to run for about 20 seconds and send back an empty response at that point; JavaScript can then restart the long poll.
2) Every once in a while a browser will hang up. To add a second level of error checking, I have a JavaScript timer run for 30 seconds, and if no response has come within 30 seconds I abandon the ajax call and start a new one.
3) If you are using PHP, make sure you call session_write_close() to release the session lock.
4) If you are using ajax with jQuery, you may need to use abort().
You can find your answer here, with more detail here. And remember to use $.ajaxSetup({ cache: false }); when working with jQuery.
Problem
I have a long-running import job which I start with an ajax request; it can take several minutes until the request finishes. While this first ajax request is running, I want to poll the server to see how far the import has progressed; this second request would be made every 2 seconds or so.
When I use the Ext.Ajax method, the requests seem to be chained: the first ajax request (the import) runs until it is finished, and only then is the second one (the import progress update) fired.
I saw that Ext.Ajax is a singleton, so maybe that's the reason. I tried creating my own Connection objects with Ext.create('Ext.data.Connection'), but it doesn't work either.
My current request chain is:
first request - start
first request - end
second request - start
second request - end
But it should be:
first request - start
second request - start
second request - end
...maybe more second requests
first request - end
Question
The browser should be able to handle multiple requests, so there must be a limitation inside ExtJS, but I didn't find it?
Update 2011-10-16
Answer
The problem wasn't ExtJS - sorry! It was PHP: my first script works with the session, and the second script tried to access the session as well. Because PHP sessions are file-based, the session file was locked by the first request's script, and the second request's script had to wait until the first released the session lock.
I solved this with this little piece of code, which I added to my import process (the first script) after every x rows:
$id = session_id();
session_write_close(); // release the lock so the second script can read the session
sleep(1);
session_id($id);       // session_start() takes no ID argument; restore the ID first
session_start();
So it pauses, releases the session, and reopens it, allowing the other script to hook in and read the session information.
Singleton or non-singleton doesn't change the way Ext.Ajax works. I think this could be due to the coding (did you wait for the calls to finish?).
AFAIK, I have never had this problem when making multiple calls. The only thing that can hog the calls is the server (PHP) processing them serially rather than in parallel, which causes delays and generates a pattern like this:
Call 1 - start
Call 2 - start
Call 1 get processed in the server and Call 2 get queued up
Call 1 - finished
Call 2 get processed in server
Call 2 - finished
It could be disastrous if Call 1 requires more time to process than Call 2.
EDIT:
I have written this little demo just for you, to give a feel for how it works. Check it out :) It took me half an hour, lol!
Morning,
I have some doubts about the way PHP works. I can't find the answer anywhere in books, so I thought I'd hit the stack ;)
So here it goes:
Let's assume we have one single server with PHP + Apache installed. Here are my beliefs:
1 - PHP can handle only one request at a time. It doesn't matter whether Apache can handle more than one thread at a time, because ultimately the invoked PHP interpreter is single-threaded.
2 - From belief 1 it follows that if the server receives 4 calls at the very same time, these calls are queued up and executed one at a time. Whoever makes the request last gets the response last.
3 - From 1 and 2 it follows that if I cron-call a URL corresponding to a script that does some heavy-lifting, time-consuming work, I slow down the server up to the moment the script returns.
What's true? What's false?
Cheers
My crystal ball suggests that you are using PHP sessions and have simultaneous requests (either iframes or AJAX) getting queued. The problem is that the default session handler uses files, and session_start() locks the data file. You should read your session data quickly and then call session_write_close() to release the file.
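In practice that means something like this (doSlowWork() is a hypothetical stand-in for whatever takes long):

session_start();                 // read the session data quickly...
$user = $_SESSION['user'];
session_write_close();           // ...then release the file lock right away

// Anything slow happens after the lock is released, so simultaneous
// iframe/AJAX requests from the same browser are no longer queued.
doSlowWork($user);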
I see no reason why PHP would not be able to handle multiple requests at the same time. That said, it may be semi-true for requests from a single client, depending on the type of script.
Many scripts use sessions. When session_start() is called, the session is opened and locked. When execution of the script ends, the session is closed and unlocked (this can also be done manually). When there are multiple requests for the same session, the first request opens and locks the session, and the second request has to wait until the session is unlocked. This might give the impression that multiple PHP scripts cannot execute at the same time, but that's true (partly) only for requests that use the same session (in other words, requests from the same browser). Requests from two clients (browsers) may be processed in parallel as long as they don't use resources (files, DB tables, etc.) that are locked/unlocked by other requests.
Do all Comet-style applications require a loop somewhere on the server side to detect updates/changes? If not, could you please explain how the logic behind a loopless Comet-style application would work?
This kind of application will always require a loop; you need to periodically check for new data. Of course you can make the "loop" non-blocking by using an event-loop based approach, but in the end there's still a loop somewhere.
Just think about it for a moment: how would you make it work without a loop? I certainly can't imagine a way that doesn't use a loop somewhere.
The short answer is no, not all of them require a loop on the server side.
Instead you can use long-polling AJAX calls from the browser to request data, whereupon the server simply responds with the data, and the browser waits until the response arrives before sending a new request.
One solution could be stream_set_blocking(): use any available blocking resource so that the process is suspended by the OS and waits for the appropriate interruption.
Client side:
Make an Ajax call to the endpoint script (with an ajax timeout of e.g. 30 seconds; after 30 seconds initiate another one, since by then you will get a response from the server anyway once the script execution time is reached)
If you get a response within the 30 seconds, handle it (asynchronously) and open a new connection (as is done in Comet; I saw it in the CometD client)
Server setup:
Set up the Apache timeout (between the request and data being sent) to 30-31 seconds, so that Apache will allow you to wait that long
Set Apache to allow a lot of child instances (concurrent users * 1.5), but make sure you have enough memory for that many Apache instances (plus the memory used by the PHP children)
Script one:
set the script execution time to 28 seconds (just under the Ajax timeout)
register a shutdown function (register_shutdown_function()) in order to send a response on timeout, formatted so it is understandable for the ajax caller if you need it
open a file, an empty one
enable blocking mode on the file stream using stream_set_blocking()
try to read from the file; you will be suspended until another process writes to the file or the timeout is reached
As soon as the script sees content written to the file by another process, it wakes up and sends the response (this triggers another ajax call and another sleeping process on the server). A sketch is given below.
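A rough sketch of the reader script. Note that a read from a plain file would return EOF immediately rather than block, so this sketch assumes a named pipe (FIFO) instead, which does give the blocking behavior described above:

$bus = '/tmp/notification-bus';   // hypothetical path for the "bus"
if (!file_exists($bus)) {
    posix_mkfifo($bus, 0600);     // a FIFO suspends readers until data arrives
}

set_time_limit(28);               // per the 28-second budget above

// fopen() on a FIFO blocks until a writer connects, and fgets() blocks
// until a full line is written; the OS does the waiting for us.
$fh = fopen($bus, 'r');
$message = fgets($fh) ?: '';
fclose($fh);

header('Content-Type: application/json');
echo json_encode(['message' => trim($message)]);

// A notifier process wakes the reader simply by writing a line to the bus:
// file_put_contents('/tmp/notification-bus', "something happened\n");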
The worst part is that you need to think about how multiple reader scripts can read from the same bus (file) without disturbing each other.
It could also happen that the timeout hits at exactly the moment a message is written to the bus.
(hope that this solution is not as bad as my English)