Processing large data sets via AJAX brings no speed benefits - php

I have several time consuming database queries to run. Each has been built to be triggered from an option chosen on a web page. I thought I was being quite cunning by firing off the work via several AJAX requests.
I presumed that multiple requests would be split over multiple processes/threads meaning the work would be completed relatively quickly for the user.
However, the requests seem to be processed in serial, meaning that no speed benefit is felt by the user.
Worse still, AJAX requests to update the page also wait in line, meaning they fail to respond until the previous requests have all completed.
I have read that this may be caused by the PHP sessions being locked.
What is the usual approach for this kind of issue?
Is there a way to force AJAX requests to work asynchronously?
Can I stop PHP from locking the sessions?
Should I use a separate process via cron to fire off background workers?
Thanks!
NB This project has been built using the symfony framework.
AJAX uses jQuery
// Get the content
$.get('/ajax/itemInformation/slug/'+slug, function(data) {
    $('#modal-more-information').html(data);
});

If you are using sessions at all during any of the given AJAX requests, they will effectively execute serially, in order of request. This is due to locking of the session data file at the operating system level. The key to getting those requests to be asynchronous is to close (or never start) the session as quickly as possible.
You can use session_write_close (docs) to close the session as soon as possible. I like to use a couple of helper functions for this: the set_session_var function below opens the session, writes the var, then closes the session - in and out as quickly as possible. When the page loads, you can call session_start to get the $_SESSION variable populated, then immediately call session_write_close. From then on, only use the set function below to write.
The get function is completely optional, since you could simply refer to the $_SESSION global, but I like to use this because it provides for a default value and I can have one less ternary in the main body of the code.
function get_session_var($key = false, $default = null) {
    // Reject missing or empty keys
    if ($key === false || strlen($key) === 0) {
        return false;
    }
    if (isset($_SESSION[$key])) {
        $ret = $_SESSION[$key];
    } else {
        $ret = $default;
    }
    return $ret;
}
function set_session_var($key = false, $value = null) {
    // Reject missing or empty keys
    if ($key === false || strlen($key) === 0) {
        return false;
    }
    session_start();
    if ($value === null) {
        unset($_SESSION[$key]);
    } else {
        $_SESSION[$key] = $value;
    }
    // Release the session lock immediately
    session_write_close();
}
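For example, a request might bootstrap its session access like this (a minimal sketch using the helpers above; the variable names are illustrative):
// At the top of the request: populate $_SESSION, then release the lock at once
session_start();
session_write_close();

// Reads come straight from the local $_SESSION copy
$userId = get_session_var('user_id', 0);

// Writes reopen and close the session for the shortest possible window
set_session_var('last_activity', time());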
Be aware that there is a whole new set of considerations once the AJAX requests are truly asynchronous. Now you have to watch out for race conditions: one request can set a variable that impacts another request, because with the sessions closed, one request's changes to $_SESSION will not be visible to another request until it rebuilds the values. You can help avoid this by "rebuilding" the $_SESSION variable immediately before a critical use:
function rebuild_session() {
    session_start();
    session_write_close();
}
... but this is still susceptible to a race condition.

Related

Is there a simple solution for concurrent requests in PHP?

My script sometimes receives 2 identical requests at the same time (difference in milliseconds) from an external system.
The script, upon incoming request, makes a request to the external system, checks for the existence of an entry there, and if not, creates it.
The problem is that with simultaneous requests, the check for uniqueness fails and as a result 2 records are created.
I tried to do a random sleep but it didn't work.
$sleep = rand(1,5); sleep($sleep);
I would suggest using a fast caching system, like memcached or redis.
Check whether the system is busy.
If not busy, mark the system as busy by setting a flag in the cache.
Process the request.
Clear the busy flag.
While processing, if another request comes in, it checks the busy flag in memcache/redis. If the system is busy, it simply does nothing.
I'm going to try some pseudo code here:
function processData($requestData)
{
    // Bail out if another request is already being processed
    $isSystemBusy = Cache::get('systemBusy');
    if ($isSystemBusy === true) {
        exit();
    }
    // Mark the system busy, do the work, then clear the flag
    Cache::set('systemBusy', true);
    // do your logic here
    Cache::set('systemBusy', false);
}
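Note that the get-then-set above is itself not atomic: two requests arriving in the same millisecond can both pass the busy check. If the cache layer offers an atomic add (Laravel's Cache::add, for instance, only writes when the key is absent and returns false otherwise), a sketch like this closes that gap:
function processData($requestData)
{
    // Cache::add returns false if the key already exists, so only one of
    // two simultaneous requests wins the flag. The TTL guards against a
    // crashed request leaving the system flagged busy forever.
    if (! Cache::add('systemBusy', true, 30)) {
        exit();
    }

    try {
        // do your logic here
    } finally {
        // Always clear the flag, even if the logic above throws
        Cache::forget('systemBusy');
    }
}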
The solution was to write a lock file named after the ID:
$tmp_file = __DIR__.'/tmp/'.$origDealId.'.lock';
if (file_exists($tmp_file)) {
    // duplicate request
    return null;
} else {
    // claim the ID first, then do the work
    touch($tmp_file);
    // do something
}
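Note that file_exists followed by touch still leaves a small window between the check and the creation. A variant using fopen's 'x' mode makes the check and the creation a single atomic step (a sketch, reusing the same lock path):
$tmp_file = __DIR__.'/tmp/'.$origDealId.'.lock';

// 'x' mode creates the file and fails if it already exists - one atomic operation
$fh = @fopen($tmp_file, 'x');
if ($fh === false) {
    // duplicate request
    return null;
}
fclose($fh);

// do something, then remove the lock once the work is done
unlink($tmp_file);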

Stop PHP script execution through externally updated variable in Laravel 9

I'm trying to stop the execution of a PHP script at runtime, triggered by the user.
My planned approach is to make an AJAX call to the server, setting a session variable which the first, already running script looks up every now and then to determine whether the process should stop or continue running. Hope that makes sense :)
I already found some examples of people doing the same here or with files instead of sessions here.
Now my problem is that I would like to use the session for my approach instead of the solution doing it with temporary files and the mentioned approach with sessions doesn't seem to work for me in Laravel 9.
The result I'm looking for:
start the first PHP script (runs maybe 30 seconds)
make an AJAX call to the server & set the session variable stop_execution = true
the first PHP script, which is still running, detects that stop_execution === true & stops execution
The behaviour I get:
start the first PHP script (runs maybe 30 seconds)
make an AJAX call to the server & set the session variable stop_execution = true
the first PHP script, which is still running, doesn't detect that stop_execution === true & runs until it finishes by itself
the next time I run the first script, it immediately detects that stop_execution === true & stops execution
My thought on why this is happening is that the session variables don't get refreshed inside the first script after being checked for the first time. Maybe there is a way to force-pull all new changes from the session variables while the first script is running? Has anybody had the same issue with Laravel? It seems like this works with session variables when not using Laravel, so I think it has something to do with Laravel and how its sessions are handled.
I would appreciate any advice 😊
Thanks a lot!
Code:
First script executed at the beginning
private function longRunningFunction()
{
    // check session variable every iteration and exit while loop if === true
    while ($condition === true && ! session('cancel_computation')) {
        // do stuff ...
    }
    // reset session variable to false after finishing
    session(['cancel_computation' => false]);
    return $result;
}
Second script executed on ajax call
public function cancelComputationFunction()
{
    // set session variable to be true
    session(['cancel_computation' => true]);
}
I would not advise you to use sessions for this.
They are initialized on script start, and I have never seen somebody re-fetch them in the same script.
Nor am I able to find such functionality when researching online.
What you could do instead is utilize the Cache facade.
It is very well suited for what you want, and it is very lightweight no matter which driver you choose to use under the hood in Laravel.
So instead of:
public function cancelComputationFunction()
{
    // set session variable to be true
    session(['cancel_computation' => true]);
}
You could do something like this:
public function cancelComputationFunction()
{
    // set cache variable to be true
    Cache::put('cancel_computation_' . session()->getId(), true);
}
And likewise inside the long-running part of the script.
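For example, the long-running loop from the question could read the flag from the cache instead (a sketch, keeping the question's own variable names and assuming the Cache facade is imported):
private function longRunningFunction()
{
    $flag = 'cancel_computation_' . session()->getId();

    // check the cache flag every iteration and exit the loop once it is set
    while ($condition === true && ! Cache::get($flag, false)) {
        // do stuff ...
    }

    // clear the flag after finishing
    Cache::forget($flag);

    return $result;
}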

Using PHP Sessions for High Traffic with AJAX and CodeIgniter

I was reading the CodeIgniter 3 documentation on using sessions and high traffic with AJAX, and it recommends using session_write_close().
My application has functions that write and read directly from $_SESSION, as in the example below.
function setSession($index, $value) {
    $_SESSION[$index] = $value;
}
function getSession($index) {
    if (isset($_SESSION[$index])) {
        return $_SESSION[$index];
    } else {
        return FALSE;
    }
}
From what I understand from the documentation, these two functions should always have session_start() at the beginning and session_write_close() at the end, correct?
That way they avoid blocking the session, and other requests complete faster, since the session is not locked.
Another thing I found strange: CodeIgniter already calls session_start() automatically at the beginning of execution, without intervention from my code.
What will happen to this default session_start() if my functions call session_start() every time something is fetched from the session?
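A minimal sketch of what the documentation's advice would look like applied to those helpers (assuming file-based sessions; once a session has been closed with session_write_close(), calling session_start() again simply reopens it):
function setSession($index, $value) {
    session_start();           // reopen the session and reacquire the lock
    $_SESSION[$index] = $value;
    session_write_close();     // write and release the lock immediately
}

function getSession($index) {
    session_start();           // refresh $_SESSION from the session store
    session_write_close();     // release the lock before returning
    return isset($_SESSION[$index]) ? $_SESSION[$index] : FALSE;
}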

PHP: How to check if session_start will block or make it time out

In a certain instance I want to cancel calls of users that already have an open session.
I use session_start to make sure a logged-in user can only execute one request at a time, and that works fine. But all subsequent calls simply block indefinitely until all previous calls have gone through, which is unsatisfying in certain circumstances, like misbehaving users.
Normally, all blocking calls I know of have a timeout parameter you can give them. Is there something like this for session_start?
Or is there a call in the spirit of session_opened_by_other_script that I can make before calling session_start?
For now my solution is to check if there is already a lock on the session file using exec and shell scripting. I don't recommend using it to anyone who does not fully understand it.
Basically it tries to get a lock on the session file for the specified timeout value using flock. If it fails to do so, it exits with 408 Request Timeout (or 429 Too Many Requests, if available).
For this to work you need to...
know your session ID at that point in time
have file-based sessions
Note that this is not atomic. It can still happen that multiple requests end up waiting in session_start, but that should be a rare event. Most calls should be canceled correctly, which was my goal.
class Session {
    public static function openWhenClosed() {
        if (session_status() == PHP_SESSION_NONE) {
            $sessionId = session_id();
            if ($sessionId == null && isset($_COOKIE[session_name()])) {
                $sessionId = $_COOKIE[session_name()];
            }
            if ($sessionId != null) {
                $sessFile = session_save_path()."/sess_".$sessionId;
                if (file_exists($sessFile)) {
                    $timeout = 30; // How long to try to get hold of the session
                    $fd = 9;       // File descriptor to use to try locking the session file
                    /*
                     * This 'trick' is not atomic!!
                     * After exec returned and session_start() is called there is a time window
                     * where it can happen that other waiting calls get a successful lock and also
                     * proceed and get then blocked by session_start(). The longer the sleep value
                     * the less likely this is to happen. But also the longer the extra delay
                     * for the call.
                     */
                    $sleep = "0.01"; // 10ms
                    // Check if the session file is already locked by trying to get a lock on it.
                    // If it is, try again for $timeout seconds every $sleep seconds.
                    exec("
                        exec $fd>>$sessFile;
                        while [ \$SECONDS -lt $timeout ]; do
                            flock -n $fd;
                            if [ \$? -eq 0 ]; then exit 0; fi;
                            sleep $sleep;
                        done;
                        exit 1;
                    ", $null, $timedOut);
                    if ($timedOut) {
                        http_response_code(408); // 408: Request Timeout. Or even better 429 if your apache supports it
                        die("Request canceled because another request is still running");
                    }
                }
            }
            session_start();
        }
    }
}
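Usage is then a drop-in replacement for session_start() at the top of each endpoint (a sketch):
// At the top of an API endpoint, instead of calling session_start() directly:
Session::openWhenClosed();

// If another request for this session held the lock past the timeout,
// openWhenClosed() has already sent the 408 response and terminated the script.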
Additional thoughts:
1. It is tempting to use flock -w <timeout>, but that way far more waiting calls will manage to use the time between exec and session_start to obtain a lock and end up blocking in session_start.
2. If you use the browser for testing this, be aware that most browsers queue requests and reuse a limited number of connections, so they do not start sending your request before others finish. This can lead to seemingly strange results if you are not aware of it. You can test more reliably using several parallel wget commands.
3. I do not recommend activating this for normal browser requests. As mentioned in 2), this is already handled by the browser anyway in most cases. I only use it to protect my API against rogue implementations that do not wait for an answer before sending the next request.
4. The performance hit was negligible in my tests for my overall load, but I would advise testing in your environment yourself using microtime() calls.

cakephp comet usleep blocks everything

Below is the code that I ended up with for a working comet implementation.
$lastmodif = isset($this->params['form']['timestamp']) ? $this->params['form']['timestamp'] : 0;
$currentmodif = $already_updated[0]['Update']['lastmodified'];
while ($currentmodif <= $lastmodif)
{
    usleep(5000000); // wait 5 seconds between polls
    clearstatcache();
    $already_updated_new = $this->Update->find('all', array
    (
        'conditions' => array
        (
            'Update.receiver_id' => $this->Auth->user('id'),
            'Update.table_name' => "request_responses"
        )
    ));
    $currentmodif = $already_updated_new[0]['Update']['lastmodified'];
}
$already_updated[0]['Update']['lastmodified'] is the query result used to get the last-updated timestamp of the table.
In the above code, $lastmodif and $currentmodif are the timestamps that are passed along after every successful comet response.
But now the problem is that when I click on other links on the same page, nothing happens; only after a long wait does the redirect go through.
I think usleep is blocking the other HTTP requests.
I am using MySQL and CakePHP. Please guide me on what I should do to solve this issue.
I have tried to flush when the page is called, but it shows a "cannot modify header" error as output has already been sent.
Thanks.
I've met a similar situation several times. It looks like the session is blocked by your sleeping script.
How to solve it in CakePHP:
Call session_write_close(); at the start of your script.
There is no way to do that via Cake's Session Component or Helper.
Note: if something inside the script uses the session, Cake will reopen it and hang all requests that use the same session again. In that case you will need to close the session before the sleep, or before any operations that take a long time to finish.
If your script uses sessions then you could notice such behavior. PHP locks the session file until the script completes.
This means that once a script starts a session, any other script that attempts to start a session using the same session ID is blocked until the previous script releases the lock (or terminates).
The workaround for this is to unlock the session before any lengthy process (see the sketch after this list):
call session_start()
read/write any session variables
call session_write_close()
do lengthy processing
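In plain PHP the pattern looks like this (a minimal sketch; the $userId read stands in for whatever session data the job needs):
session_start();                       // 1. open the session
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null; // 2. read/write session variables
session_write_close();                 // 3. release the session lock

// 4. lengthy processing - other requests for this session are no longer blocked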
Yes, the usleep is blocking further requests. Depending on your hosting environment, you probably have a limited number of processes available. I assume you have multiple users in your chat; they all hold blocking processes until none is available, which is why your other "links" time out.
I would suggest implementing the wait on the client-browser side, e.g.
setTimeout(function() {
    fetchAndPrintTheNewChats();
}, 5000); // 5 seconds, matching the usleep(5000000) above
Any approach to do this within your PHP code will result in the same problem.
Can you share which version of CakePHP you are using, in case someone else who comes along has a solution?
Cake has a session component: http://book.cakephp.org/2.0/en/core-libraries/components/sessions.html
and a session helper: http://book.cakephp.org/2.0/en/core-libraries/helpers/session.html
