Multiple simultaneously running AJAX requests in ExtJS 4 - PHP

Problem
I have a long-running import job which I start with an AJAX request; it could take some minutes until the request finishes. While this first AJAX request is running, I want to check on the server to see how far the import has progressed; this second request would be made every 2 seconds or so.
When I use the Ext.Ajax method, the requests seem to be chained: the first AJAX request (import) runs until it is finished, and only then is the second (import update) fired.
I saw that Ext.Ajax is a singleton, so maybe that's the reason. I tried creating my own Connection objects with Ext.create('Ext.data.Connection'), but it didn't work either.
My current request chain is:
first request - start
first request - end
second request - start
second request - end
But it should be:
first request - start
second request - start
second request - end
...maybe more second requests
first request - end
Question
The browser should be able to handle multiple requests, so there must be a limitation inside ExtJS - but I couldn't find it. What is it?
Update 2011-10-16
Answer
The problem wasn't ExtJS - sorry! It was PHP: my first script works with the session, and the second script tried to access the session as well. Because PHP sessions are file based by default, the session file was locked by the first request's script, and the second request's script had to wait until the first released the session lock.
I solved this with a little piece of code, which I added to my import process (the first script) after every x rows:
$id = session_id();      // remember the current session id
session_write_close();   // release the session file lock
sleep(1);                // give the status request a chance to run
session_id($id);         // restore the id before reopening
session_start();         // reacquire the session
So it pauses, releases and reopens the session, and the other script is able to hook in and read the session information.

Singleton or non-singleton doesn't change the way Ext.Ajax works. I think this could be due to your code (did you wait for the calls to finish?).
AFAIK, I have never had this problem before when making multiple calls. The only thing that can hog the calls is the server (PHP), if it doesn't process the requests in parallel and causes delays, generating a pattern like this:
Call 1 - start
Call 2 - start
Call 1 gets processed on the server and Call 2 gets queued up
Call 1 - finished
Call 2 gets processed on the server
Call 2 - finished
It could be disastrous if Call 1 requires more time to process than Call 2.
EDIT:
I have written this little demo just so you can get a feel for how it works. Check it out :) It took me half an hour lol!

Related

PHP requests one by one or simultaneously

I have got some questions about PHP and how requests work under the hood.
1) Let's say I wrote my PHP application and uploaded it to the server. There's a function I wrote, and when a user hits the route that executes that function, something happens.
The question is: if one user makes the request and another user also makes the request, does the second user have to wait until the first user's request is done? (By "request is done" I mean until the function I wrote has executed all the way through.) Is my guess correct that, no matter which function gets executed, the second request never starts until the first request is done?
2) I have my PHP application. Imagine two people make a request at the same time which writes data to the database (not inserting, but updating). Let's say I use load balancers. If one user's request goes to balancer1 and another user's request goes to balancer2, what I want is: if the first user's call updates the database, the second user's request has to stop immediately (it shouldn't update).
The scenario is that I have a JWT token in my database which is used to make requests to a third-party tool. It expires after 1 hour. Let's say 1 hour has passed. If one user makes the call to update the token, and along the way a second user also makes the call to update the token, what will happen is that the second user will update the token and the first user's token will be invalid, which is bad.
PHP can handle multiple requests at the same time, but requests from the same user will be processed one by one if the user's PHP session is locked by the first request. The second request will proceed once the session is closed.
For example, if you run a PHP script with sleep(30) in one browser tab:
<?php
session_start();
sleep(30);
And another script in another tab:
<?php
session_start();
echo 'hello';
The second script won't be executed until the first one is finished.
It's important to note this behavior because sessions are used in almost every app.
If you have a route which is served by a controller function, there is a separate instantiation of the controller for each request. For example: if user A and user B request the same route, laravel.com/stackoverflow, the controller is ready to respond to each request, independent of how many users are requesting it at the same time. You can think of this as the same principle as processes in any service: PHP spins up a process each time it needs to run a script, and similarly Laravel instantiates the controller for each request.
For the same user sending multiple requests, they will still be processed as in point 1.
If you want to process particular requests one by one, you can queue the jobs. For example, let's say you want to process a payment and you have 5 requests happening. The controller will accept all the requests simultaneously, but the controller function can dispatch a queued job, and those jobs are processed one by one.
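A minimal sketch of that dispatch pattern, assuming a standard Laravel queue setup (the job class, its contents, and $paymentId are made up for illustration):

<?php
// app/Jobs/ProcessPayment.php - hypothetical job class

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

class ProcessPayment implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(private int $paymentId) {}

    public function handle(): void
    {
        // ... charge the payment; with a single queue worker these
        // jobs run strictly one at a time ...
    }
}

// In the controller: respond to the user right away, defer the work:
// ProcessPayment::dispatch($paymentId);

With one queue worker the jobs are guaranteed to run sequentially; adding more workers trades that ordering for throughput.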
As for two people trying to request the same route, which runs a DB update, you can read a nice article here about optimistic and pessimistic locking.
I should be voting to close this - it's way too broad... but I'll give it a go.
If the requests depend on a resource which can only perform one task at a time, then they cannot "run" concurrently. It is quite possible you may have a single CPU core, or a single disk - however at the level of the HTTP request (in the absence of code to apply mutex locking) they will appear to run at the same time - that is what multi-tasking is all about. The execution thread will often be delayed waiting for something else to happen and at that point the OS task scheduler will check to see if there are any other tasks waiting to be run. You can easily test this yourself:
<?php
$started = time();
sleep(20);
print "Ran for " . (time() - $started) . " seconds";
(try accessing this in different browser windows around the same time - or in 2 iframes on the same window)
Compare this with:
<?php
$started = time();
$fh = fopen("/tmp/concurency_test", "w");
flock($fh, LOCK_EX);   // exclusive lock - a second request blocks here
sleep(20);
flock($fh, LOCK_UN);
print "Ran for " . (time() - $started) . " seconds";
This also demonstrates just one of the reasons why you should not use flat files for storing data on your server. Note that the default session handler in PHP uses file-based locking for as long as the session data is held open by the script.
Databases employ a variety of strategies to avoid reverting to single-operation queuing - most commonly versioning. That does not address the problem you describe: 2 clients should never be using the same session token - that is why the session token is separate from the credentials in a well-designed system.
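To make the versioning idea concrete for the token-refresh race described earlier, here is a rough sketch of optimistic locking (the table, columns, connection details, and helper function are all made up for illustration):

<?php
// Read the current token together with its version number.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$row = $pdo->query('SELECT token, version FROM api_tokens WHERE id = 1')->fetch();

$newToken = fetchFreshTokenFromThirdParty();   // hypothetical helper

// Only write if nobody else bumped the version in the meantime.
$stmt = $pdo->prepare(
    'UPDATE api_tokens SET token = ?, version = version + 1
     WHERE id = 1 AND version = ?'
);
$stmt->execute([$newToken, $row['version']]);

if ($stmt->rowCount() === 0) {
    // Another request refreshed the token first - re-read and use theirs.
}

The losing request detects the conflict instead of clobbering the winner's token, which is exactly the guarantee the second question above asks for.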

Stopping requests server-side

I've got a name-lookup box that operates via your typical AJAX requests. Here's the general flow of the JavaScript that fires every time a letter is pressed:
If ajax request already open, then abort it.
If timeout already created, destroy it.
Set new timeout to run the following in half a second:
Send string to 'nameLookup.php' via ajax
Wait for response
Display results
The issue is that nameLookup.php is very resource heavy. In some cases up to 10,000 names are being pulled from an SQL database, decrypted, and compared against the string. Under normal circumstances requests can take anywhere from 5 to 60 seconds to return.
I completely understand that when you abort a request on the client side, the server keeps working on it and sends back the result; it's just the client side that knows to ignore the response. But the server is getting hung up working on all of these requests.
So if you do:
Request 1
Abort Request 1
Request 2
Abort Request 2
Request 3
Wait for response from Request 3
The server is either not even working on Request 3 until it's finished with 1 and 2... or it's just so hung up on working on Request 1 and 2 that Request 3 is taking an extra long amount of time.
I need to know how to tell the server to stop working on Request 1 and 2 so I can free up resources for it to work on Request 3.
I'm using Javascript & jQuery on the client side. PHP/Apache and SQL on the server side.
Store a boolean value in the DB in a table, or in the session.
Have your resource-intensive script periodically check that value to see whether it should continue. If the DB says to stop, the script cancels itself (by calling return; from the current function, for example).
When you want to cancel, instead of calling abort();, make an AJAX request to set that value to false.
Next time the resource checks that value it will see that it has to stop.
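Here is a minimal sketch of that idea, using the session as the flag store (a DB column works the same way; all names are made up for illustration):

<?php
// nameLookup.php - the resource-intensive script
session_start();
$_SESSION['keepRunning'] = true;
session_write_close();              // release the lock so the cancel request can run

foreach ($names as $i => $name) {   // $names: the candidate list from your DB
    if ($i % 100 === 0) {           // re-check the flag every 100 rows
        session_start();
        $keepRunning = !empty($_SESSION['keepRunning']);
        session_write_close();
        if (!$keepRunning) {
            exit;                   // cancelled - stop burning resources
        }
    }
    // ... decrypt and compare $name against the search string ...
}

// cancel.php - called by the client instead of abort():
//   session_start();
//   $_SESSION['keepRunning'] = false;

Note that session_write_close() is essential here; holding the session lock for the whole search would block the cancel request, which is the very problem discussed at the top of this page.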
Potential limitations:
1. Your script might not have a convenient way to check the DB periodically.
2. Depending on how often the script checks the DB, it might take a few seconds to effectively kill the script.
I think there is something missing from this question. What are the triggers for doing the requests? You might be trying to solve the wrong problem.
Let me elaborate on that. If your lookup box is actually doing autocompletion of some kind and runs a new search every time the user presses a key, then you are going to have the issue you describe.
The solution in that case is not killing all the processes; the solution lies in not starting them. So you might make some decisions, like not searching if there is only one character to search with - let's say we require three. We might then say we want to wait until we can be reasonably sure the user has finished typing before sending off the request. Let's say we wait 1 second.
Now, someone looking for all the Pauls in your list of names will send off one search when they type 'pau' and pause for 1 second, instead of three searches for 'p', then 'pa', then 'pau'... so there's no need to kill anything.
I've come up with an awesome solution that I've tested, and it's working beautifully. It's just a few lines of PHP code to put in whatever files are resource intensive.
This solution utilizes the process identifier (PID) on the server. We can use two PHP functions: posix_getpid() to get the current PID and posix_kill() to kill another PID. This also assumes that you have already called session_start() somewhere else.
Here's the code:
// if there are any existing PIDs to kill, go through each
if (!empty($_SESSION['pid'])) foreach ($_SESSION['pid'] as $i => $pid) {
    // SIGTERM asks the process to terminate; if posix_kill succeeds,
    // unset this PID from the session so we don't try to kill it again
    if (posix_kill($pid, SIGTERM)) unset($_SESSION['pid'][$i]);
}
// now that all the others are killed, we can store the current PID in the session
$_SESSION['pid'][] = posix_getpid();
// close the session now, otherwise the PID we just added won't actually be
// saved to the session until the process ends
session_write_close();
A couple of things to note:
posix_kill takes two arguments. The first is the PID, and the second is one of the signal constants from this list. Be careful: signal 0 doesn't kill anything - it only tests whether the process exists (and returns true if it does). SIGTERM (15) is the signal that actually asks the process to terminate, which is what the code above uses.
Calling session_write_close() before the resource-intensive work starts is crucial. Otherwise the new PID that has been saved to the session won't ACTUALLY be written until all of the page's processing is done, which means the next process won't know to cancel the one that's still going on and taking forever.

How should I make a long PHP request via AJAX, periodically check for status updates, and close the script if the request cancels?

Part of the PHP web app I'm developing needs to do the following:
Make an AJAX request to a PHP script, which could potentially take from one second to one hour, and display the output on the page when finished.
Periodically update a loading bar on the web page, defined by a status variable in the long running PHP script.
Allow the long running PHP script to detect if the AJAX request is cancelled, so it can shut down properly and in a timely fashion.
My current solution:
client.php: Creates an AJAX request to request.php, followed by one request per second to status.php until the initial request is complete. Generates and passes along a unique identifier (uid) in case multiple instances of the app are running.
request.php: Each time progress is made, saves the current progress percentage to $_SESSION["progressBar"][uid]. (It must run session_start() and session_write_close() each time.) When finished, returns the data that client.php needs.
status.php: Runs session_start(), returns $_SESSION["progressBar"][uid], and runs session_write_close().
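For reference, a minimal sketch of what that status.php might look like (the JSON shape and parameter name are made up for illustration):

<?php
// status.php - return the progress for one uid, holding the session lock briefly
session_start();
$uid = $_GET['uid'] ?? '';
$progress = $_SESSION['progressBar'][$uid] ?? 0;
session_write_close();     // release the lock so request.php can keep writing

header('Content-Type: application/json');
echo json_encode(['progress' => $progress]);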
Where it falls short:
My solution fulfills my first two requirements. For the third, I would like to use connection_aborted() in request.php to know if the request is cancelled. BUT, the docs say:
PHP will not detect that the user has aborted the connection until an attempt is made to send information to the client. Simply using an echo statement does not guarantee that information is sent, see flush().
I could simply give meaningless output, but PHP must send a cookie every time I call session_start(). I want to use the same session, BUT the docs say:
When using session cookies, specifying an id for session_id() will always send a new cookie when session_start() is called, regardless of if the current session id is identical to the one being set.
My ideas for solutions, none of which I'm happy with:
A status database, or writing to temp files, or a task management system. This just seems more complicated than what I need!
A custom session handler. This is basically the same as the above solution.
Stream both progress data and result data in one request. This solves everything, but I would essentially be re-implementing AJAX. That can't be right.
Please tell me I'm missing something! Why doesn't PHP know immediately when a connection terminates? Why must PHP resend the cookie, even when it is exactly the same? An answer to any of these questions will be a big help!
My sincere thanks.
Why not set a second session variable from status.php, consisting of the unique request identifier and an access timestamp?
If the client is closed, it stops polling status.php, so the session variable stops being updated; request.php can then shut down cleanly when the variable hasn't been updated for a certain amount of time.
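A sketch of that heartbeat idea (the key name lastSeen and the 5-second threshold are arbitrary choices for illustration):

<?php
// In status.php, alongside returning the progress:
session_start();
$_SESSION['lastSeen'][$uid] = time();   // record the client's last poll
session_write_close();

// In request.php, between units of work ($uid as generated by client.php):
session_start();
$lastSeen = $_SESSION['lastSeen'][$uid] ?? time();
session_write_close();
if (time() - $lastSeen > 5) {
    // no poll for 5 seconds - assume the client is gone and shut down cleanly
    exit;
}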

php, multithreading and other doubts

morning
I have some doubts about the way PHP works. I can't find the answer in any books, so I thought I'd hit the Stack ;)
So here it goes:
Let's assume we have a single server with PHP + Apache installed. Here are my beliefs:
1 - PHP can handle only one request at a time. It doesn't matter if Apache can handle more than one thread at a time, because ultimately the invoked PHP interpreter is single-threaded.
2 - from belief 1 it follows that if the server receives 4 calls at the very same time, these calls are queued up and executed one at a time. Whoever makes the request last gets the response last.
3 - from 1 and 2 it follows that if I cron-call a URL corresponding to a script that does some heavy-lifting/time-consuming work, I slow down the server until the moment the script returns.
What's true? What's false?
cheers
My crystal ball suggests that you are using PHP sessions and have simultaneous requests (either iframes or AJAX) getting queued. The problem is that the default session handler uses files, and session_start() locks the data file. You should read your session data quickly and then call session_write_close() to release the file.
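A minimal sketch of that pattern (the session key is made up for illustration):

<?php
session_start();                          // acquires the session file lock
$userId = $_SESSION['user_id'] ?? null;   // copy out what you need
session_write_close();                    // release the lock immediately

// ... long-running work here no longer blocks this user's other requests ...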
I see no reason why PHP would not be able to handle multiple requests at the same time. That said, it may be semi-true for handling requests from a single client, depending on the type of script.
Many scripts use sessions. When session_start() is called, the session is opened and locked. When execution of the script ends, the session is closed and unlocked (this can also be done manually). When there are multiple requests for the same session, the first request opens and locks the session, and the second request has to wait until the session is unlocked. This can give the impression that multiple PHP scripts cannot be executed at the same time, but that's (partly) true only for requests that use the same session (in other words, requests from the same browser). Requests from two clients (browsers) may be processed in parallel as long as they don't use resources (files, DB tables, etc.) that are being locked/unlocked by other requests.

AJAX (Prototype/PHP): running 2 AJAX processes hangs until the first one is finished

This question is a follow-up to my previous one: Previous Questions.
So I set up my page to initiate an AJAX call that starts processing some records. After each record, it updates a row in another table to keep track of the status of this process. After that first AJAX call is made, I have another one start up. It's an Ajax.PeriodicalUpdater, and it's set to hit a file which simply queries that row in the DB and returns the status of the original process.
So this works perfectly fine... as long as the file that provides the status updates is outside my current app. If I put the file inside my app, then it doesn't work right. If I watch Firebug, the PeriodicalUpdater call doesn't get anything back until the original AJAX call finishes; it just hangs, as if the file is hung and not returning anything.
This whole app is running inside a basic framework we're using. Nothing crazy; it just handles routing, basic templating, etc. So all of these functions/files are inside this app, and all these AJAX calls are being routed through it.
What could be causing something like this?
Can this be due to the limit of concurrent connections supported by a browser to a particular domain?
This is caused by PHP session serialization. The session data is locked until the PHP process for each request has finished writing to it, so further requests in the same session will queue until the lock is released.
If your AJAX requests need access to session state, read out the information you need and then use session_write_close() as early in your code as possible to release those locks.
