cakephp comet usleep blocks everything - php

Below is the code I ended up with for a working comet implementation:
$lastmodif = isset($this->params['form']['timestamp']) ? $this->params['form']['timestamp'] : 0;
$currentmodif = $already_updated[0]['Update']['lastmodified'];
while ($currentmodif <= $lastmodif)
{
    usleep(5000000); // wait 5 seconds between polls
    clearstatcache();
    $already_updated_new = $this->Update->find('all', array(
        'conditions' => array(
            'Update.receiver_id' => $this->Auth->user('id'),
            'Update.table_name'  => "request_responses"
        )
    ));
    $currentmodif = $already_updated_new[0]['Update']['lastmodified'];
}
$already_updated[0]['Update']['lastmodified'] is the query result that fetches the last-modified timestamp of the table.
In the above code, $lastmodif and $currentmodif are the timestamps passed back and forth after every successful comet response.
The problem is that when I click other links on the same page, nothing happens; only after a very long wait does the redirect go through.
I think usleep is blocking the other HTTP requests.
I am using MySQL and CakePHP. Please guide me on what I should do to solve this issue.
I have tried to flush when the page is called, but it shows a "cannot modify header" error because output has already been sent.
Thanks.

I've run into a similar situation several times. It looks like the session is locked by your sleeping script.
How to solve it in CakePHP:
Call session_write_close(); at the start of your script. There is no way to do that via Cake's Session Component or Helper.
Note: if anything inside the script uses the session, Cake will reopen it and hang all requests sharing that session again. In that case you will need to close the session again before the sleep, or before any other operation that takes a long time to finish.
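A minimal sketch of how that could look in a CakePHP controller action (the action name and the polling body are placeholders, not the OP's exact code):

public function pullAction() {
    // Read whatever you need from the session first...
    $userId = $this->Auth->user('id');

    // ...then release the session lock so other requests from the same
    // user are no longer blocked while we poll.
    session_write_close();

    $lastmodif = isset($this->params['form']['timestamp']) ? $this->params['form']['timestamp'] : 0;
    $currentmodif = $lastmodif;
    while ($currentmodif <= $lastmodif) {
        usleep(5000000); // 5 seconds between polls
        // Do not touch $_SESSION in this loop, or Cake may reopen the
        // session and take the lock again.
        $rows = $this->Update->find('all', array(
            'conditions' => array('Update.receiver_id' => $userId)
        ));
        $currentmodif = $rows[0]['Update']['lastmodified'];
    }
}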

If your script uses sessions, you will notice this behavior: PHP locks the session file until the script completes.
This means that once a script starts a session, any other script that attempts to start a session with the same session ID is blocked until the previous script releases the lock (or terminates).
The workaround for this is to unlock the session before any lengthy processing (see the sketch after this list):
call session_start()
read/write any session variables
call session_write_close()
do lengthy processing
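A minimal, framework-free sketch of that sequence (the variable names are illustrative):

<?php
session_start();                 // acquire the session (and its file lock)

$userId = $_SESSION['user_id'];  // read whatever you need up front
$_SESSION['last_poll'] = time(); // ...and do any writes now

session_write_close();           // release the lock for other requests

// Lengthy processing: from here on, other requests with the same session
// ID are no longer blocked. $_SESSION stays readable in memory, but any
// further writes will not be persisted.
sleep(30);
echo "done for user $userId\n";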

Yes, the usleep is blocking further requests. Depending on your hosting environment, you probably have a limited number of processes available. I assume you have multiple users in your chat: they all tie up processes until none are left, which is why your other "links" time out.
I would suggest implementing the wait on the client (browser) side, e.g.:
setTimeout(function() {
    fetchAndPrintTheNewChats();
}, 5000); // 5 seconds, matching the usleep(5000000) above
Any approach to do this within your PHP code will result in the same problem.

Can you share which version of CakePHP you are using, in case someone else who comes along has a solution?
Cake has a session component: http://book.cakephp.org/2.0/en/core-libraries/components/sessions.html
and a session helper: http://book.cakephp.org/2.0/en/core-libraries/helpers/session.html

Related

Best way to guarantee a job is being executed

I have a script that runs continuously on the server, in this case a PHP script, like:
php path/to/my/index.php
It gets executed, and when it's done, it's executed again, and again, forever.
I'm looking for the best way to be notified if that script stops being executed.
There are many reasons why it could stop being called: server memory, a new deployment, human error... etc.
I just want to be notified (email, SMS, Slack...) if the script has not been executed for a certain amount of time (like 1 hour, 1 day, etc.).
My server is Ubuntu running on AWS.
An idea:
I was thinking of having a key in Redis/Memcached/etc. with a TTL. Every time the script runs, it renews the TTL on that key.
If the script stops working for longer than the TTL, the key expires. I just need a way to trigger a notification when that expiration happens, but it looks like Redis/Memcached are not built for that.
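The heartbeat half of that idea could look as follows with the phpredis extension (the key name and TTL are illustrative):

<?php
// At the end of each successful run of index.php:
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// (Re)create the heartbeat key with a 1-hour TTL. As long as the script
// keeps getting executed, the key never gets a chance to expire.
$redis->setex('job:index:heartbeat', 3600, (string) time());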
register_shutdown_function might help, but might not... https://www.php.net/manual/en/function.register-shutdown-function.php
I can't say I've ever seen a script that needs to run indefinitely in PHP. Perhaps there is another way to solve the problem you are after?
Update: following your Redis idea, I'd look at keyspace notifications: https://redis.io/topics/notifications
I've not tested the idea since I'm not actually a redis user. But it may be possible to subscribe to capture the expiration event (perhaps from another server?) and generate your notification.
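An untested sketch along those lines, again assuming phpredis; Redis must have keyspace notifications for expired events enabled (the "Ex" flags), either in redis.conf or as below:

<?php
// watcher.php - run as a separate long-lived process; psubscribe blocks.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_READ_TIMEOUT, -1); // never time out while waiting

// Enable expiration events (can also be configured in redis.conf).
$redis->config('SET', 'notify-keyspace-events', 'Ex');

// Fires whenever any key in DB 0 expires; filter for our heartbeat key.
$redis->psubscribe(array('__keyevent@0__:expired'), function ($redis, $pattern, $channel, $key) {
    if ($key === 'job:index:heartbeat') {
        // The job missed its deadline: send email/SMS/Slack here.
        mail('ops@example.com', 'Job stopped', 'No heartbeat for over an hour.');
    }
});

Note that expiration events are fire-and-forget: if the watcher itself is down when the key expires, the notification is lost.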
There's no 'best' way to do this. Ultimately, what works best will boil down to the specific workflow you're supporting.
tl;dr version: Find what constitutes success and record the most recent time it happened. Use that for your notification trigger in another script.
Long version:
That said, persistent storage with a separate watcher is probably the most straightforward way to do this. Record the last successful run, and then check it with a cron job every so often.
For what it's worth, for scripts like this I generally monitor exit codes or logs produced by the script in question. This isolates the error notification process from the script itself so a flaw in the script (hopefully) doesn't hamper the notification.
For a barebones example, say we have a script to invoke the actual script... (This is very much untested pseudo-code)
<?php
// Run and record.
exec("php path/to/my/index.php", $output, $return_code);

// $return_code will be 255 on fatal errors. You can use other return codes
// with exit in your called script to report other fail states.
if ($return_code == 0) {
    file_put_contents('/path/to/folder/last_success.txt', time());
} else {
    file_put_contents('/path/to/folder/error_report.json', json_encode([
        'return_code' => $return_code,
        'time'        => time(),
        'output'      => implode("\n", $output),
        // assuming here that error output isn't silently logged somewhere already
    ], JSON_PRETTY_PRINT));
}
And then a watcher.php that monitors these files on a cron job.
<?php
// Notify us immediately on failure, maybe?
// If you have a lot of transient failures it may make more sense to
// aggregate them into a single report at a specific time instead.
if (is_file('/path/to/folder/error_report.json')) {
    // Mail the details stored in the JSON here.
    // Rename the file so it's recorded, but we don't report it again.
    rename('/path/to/folder/error_report.json', '/path/to/folder/error_report.json'.'-sent-'.date('Y-m-d-H-i-s'));
} else {
    if (is_file('/path/to/folder/last_success.txt')) {
        $last_success = intval(file_get_contents('/path/to/folder/last_success.txt'));
        if (strtotime('-24 hours') > $last_success) {
            // Our script hasn't run in 24 hours, let someone know.
        }
    } else {
        // No successful run recorded. Might want to put code here if that's unexpected.
    }
}
Notes: there are some caveats to the specific approach shown above. A script can fail in a non-fatal way, and if you're not checking for that, this example could record the run as successful. For example, permissions errors may cause warnings while the script still runs its full course and exits normally, without ever hitting an exit call with a specific return code. Our example invoker would log that as a successful run, even though it isn't.
Another option is to log success from your script and only check for error exits from the invoker.
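In that variant, the called script records its own success at the very end; a hypothetical tail for index.php:

<?php
// ... all the real work of index.php above ...

// Last statement of the script: only reached if nothing fatal happened
// and no error-path exit() was hit earlier.
file_put_contents('/path/to/folder/last_success.txt', time());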

PHP: How to check if session_start will block or make it time out

In a certain instance I want to cancel calls from users that already have an open session.
I use session_start to make sure a logged-in user can only execute one request at a time, and that works fine. But all subsequent calls simply block indefinitely until all previous calls have gone through, which is unsatisfactory in certain circumstances, e.g. with misbehaving users.
Normally, every blocking call I know of has a timeout parameter you can pass. Is there something like this for session_start?
Or is there a call in the spirit of session_opened_by_other_script that I can make before calling session_start?
For now my solution is to check whether there is already a lock on the session file, using exec and shell scripting. I don't recommend it to anyone who does not fully understand it.
Basically it tries to get a lock on the session file for the specified timeout value using flock. If it fails to do so, it exits with 408 Request Timeout (or 429 Too Many Requests, if available).
For this to work you need to...
know your session ID at that point in time
have file-based sessions
Note that this is not atomic. It can still happen that multiple requests end up waiting in session_start, but that should be a rare event. Most calls get canceled correctly, which was my goal.
class Session {
    public static function openWhenClosed() {
        if (session_status() == PHP_SESSION_NONE) {
            $sessionId = session_id();
            if ($sessionId == null)
                $sessionId = $_COOKIE[session_name()];
            if ($sessionId != null) {
                $sessFile = session_save_path() . "/sess_" . $sessionId;
                if (file_exists($sessFile)) {
                    $timeout = 30; // how long to try to get hold of the session
                    $fd = 9;       // file descriptor used to try locking the session file
                    /*
                     * This 'trick' is not atomic!!
                     * After exec returns and session_start() is called, there is a time
                     * window in which other waiting calls can get a successful lock and
                     * also proceed, and then get blocked by session_start(). The longer
                     * the sleep value, the less likely this is to happen, but also the
                     * longer the extra delay for the call.
                     */
                    $sleep = "0.01"; // 10 ms
                    // Check if the session file is already locked by trying to get a
                    // lock on it. If it is, retry for $timeout seconds, every $sleep seconds.
                    exec("
                        exec $fd>>$sessFile;
                        while [ \$SECONDS -lt $timeout ]; do
                            flock -n $fd;
                            if [ \$? -eq 0 ]; then exit 0; fi;
                            sleep $sleep;
                        done;
                        exit 1;
                    ", $null, $timedOut);
                    if ($timedOut) {
                        http_response_code(408); // 408 Request Timeout, or even better 429 if your Apache supports it
                        die("Request canceled because another request is still running");
                    }
                }
            }
            session_start();
        }
    }
}
Additional thoughts:
1) It is tempting to use flock -w <timeout>, but that way far more waiting calls will manage to use the window between exec and session_start() to obtain a lock, and will end up blocking in session_start() anyway.
2) If you use a browser for testing this, be aware that most browsers queue requests and reuse a limited number of connections, so they do not start sending your request before others finish. This can lead to seemingly strange results if you are not aware of it. You can test more reliably using several parallel wget commands.
3) I do not recommend activating this for normal browser requests. As mentioned in 2), this is already handled by the browser anyway in most cases. I only use it to protect my API against rogue implementations that do not wait for an answer before sending the next request.
4) The performance hit was negligible in my tests for my overall load, but I would advise testing it yourself in your environment using microtime() calls.
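A trivial way to take that measurement, assuming the Session class above (purely illustrative):

<?php
$t0 = microtime(true);
Session::openWhenClosed(); // the guarded session start from above
$elapsedMs = (microtime(true) - $t0) * 1000;

// Log the cost of the lock check so it can be compared across environments.
error_log(sprintf("openWhenClosed took %.2f ms", $elapsedMs));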

PHP ajax multiple calls

I have been looking at several answers around the web and here, but I could not find one that solved my problem.
I am making several jQuery Ajax calls to the same PHP script. At first, each call was being executed only after the previous one was done. I changed this by adding session_write_close() to the beginning of the script, to prevent PHP from locking the session against the other Ajax calls. I am not editing the $_SESSION variable in the script, only reading from it.
Now the behaviour is better, but instead of all my requests starting simultaneously, they go in blocks, as you can see in the image:
What should I do to get all my requests to start at the same moment and to be executed independently of one another?
For better clarity, here is my JS code:
var promises = [];
listMenu.forEach(function(menu) {
    var res = sendMenu(menu); // AJAX call
    promises.push(res);
});
$.when.apply(null, promises).done(function() {
    $('#ajaxSpinner').hide();
    listMenu = null;
});
My PHP script just inserts/updates data, and starts with:
<?php
session_start();
session_write_close();
//execution
I guess I am doing things the wrong way. Thank you in advance for your precious help!
Thomas
This is probably a browser limitation: there is a maximum number of concurrent connections to a single server per browser instance. In Chrome this has been 6, which matches the size of the blocks shown in your screenshot. Though this report is from 2009, I believe it's still relevant: https://bugs.chromium.org/p/chromium/issues/detail?id=12066

Using server-sent events and php sessions

I'm using server-sent events in my project, where the JS calls a PHP page, say eventserver.php, which basically consists of an infinite loop that checks for the existence of an event in a $_SESSION variable.
In my first implementation this made my website hang, because the eventserver took the lock on the session and did not release it until the timeout expired; however, I managed to resolve this issue by unlocking/locking the session with session_write_close() and session_start() continuously in the loop.
This is actually causing a lot of PHP warnings (in Apache's error.log) saying "cannot send session cache limiter - headers already sent", "cannot send session cookies", and so on.
Posting some code here:
session_start();
header('Cache-Control: no-cache');
header('Content-Type: text/event-stream');

class EventServer
{
    public function WaitForEvents( $eventType )
    {
        // ... do stuff
        while (true)
        {
            // lock the session to this instance
            session_start();
            // ...check/output the event
            ob_flush();
            flush();
            // unlock the session
            session_write_close();
            sleep(1);
        }
    }
}
Why is this happening?
I am doing the same thing as the OP and ran into the same issue. Some of these answers don't understand how EventSource is supposed to work. My code is identical to yours and uses a session variable to know what view the user is on, which drives what data to return when a server-side trigger fires. It's part of a realtime collaboration app.
I simply prepended an @ to the session_start() calls to suppress the warnings in the log. Not really a fix, but it keeps the log from filling up.
Alternatively (not sure how well it would work for your application), you could use Ajax to write the session variable you are monitoring to the database; then your EventSource script can monitor for a change in the DB instead of having to restart sessions.
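A rough sketch of that alternative (the table and column names are made up; assumes PDO and that the client identifies itself via the query string):

<?php
header('Cache-Control: no-cache');
header('Content-Type: text/event-stream');

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$userId = (int) $_GET['user_id'];
$lastSeen = 0;

while (true) {
    $stmt = $pdo->prepare('SELECT view, updated_at FROM user_state WHERE user_id = ?');
    $stmt->execute(array($userId));
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    // Only emit an event when the DB row has actually changed.
    if ($row && strtotime($row['updated_at']) > $lastSeen) {
        $lastSeen = strtotime($row['updated_at']);
        echo 'data: ' . json_encode($row) . "\n\n";
        ob_flush();
        flush();
    }
    sleep(1);
}

No session is started at all here, so nothing can hold the session lock.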
This is not a good idea. HTTP is a request-response protocol, so if you want server-client communication to be bidirectional you will need to look into WebSockets or something similar. There are also things like "long polling" and "heartbeating".
If you want an event loop, try something like servlets in Apache Tomcat.
You will grapple with issues for hours because of this design.
Also check out Ajax if you just want to shoot messages from JavaScript to PHP.
Make sure you have an overview of the tech stack you are working with :)
You don't need an infinite loop with SSE. The EventSource keeps an open connection to the server, and any update of the server-side data will be read by the client.
Check out basic usage of SSE here
It's probably because you start the session twice in your code. Don't restart the session at the beginning of the loop, but after the sleep().
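In terms of the loop above, that reordering would look something like this (a sketch only; the event check is elided as in the original, and the @ ties in with the suppression suggestion above):

while (true)
{
    // ...check/output the event (the session was opened before the loop
    // on the first pass, and right after sleep() on later ones)
    ob_flush();
    flush();

    session_write_close(); // unlock the session while we wait
    sleep(1);

    @session_start();      // reacquire it; @ suppresses the header warnings
}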

Zend strange behavior: cannot process 2nd request while 1st request is still running

I'm building a chat function using Zend Framework.
In JavaScript, I use Ajax to send a request to http://mydomain.com/chat/pull, which is handled by pullAction like this:
public function pullAction() {
    while (true) {
        try {
            $chat = Eezy_Chat::getNewMessage();
            if ($chat) {
                $chat->printMessage();
                break;
            }
            sleep(1); // sleep 1 second between each loop
        } catch (Zend_Db_Adapter_Exception $ex) {
            if ($ex->getCode() == 2006) { // reconnect db if timeout
                $dbAdapter = Zend_Db_Table::getDefaultAdapter();
                $dbAdapter->closeConnection();
                $dbAdapter->getConnection();
            }
        }
    }
}
This action keeps running until another user sends a message.
But while this request is running, I cannot go to any other page on my site. All of them wait for http://mydomain.com/chat/pull to finish its execution.
I've searched for a solution all over Google but still haven't found one.
Thanks for your help.
This sounds like Session locking.
When you use Sessions stored on the file system, PHP will lock the session file on each request and only give it free when that request is through. While the file is locked, any other requests wanting to access that file will hang and wait.
Since your chat script will loop forever, checking for new messages, the session file will be locked forever, too, preventing the same user from accessing different sections of the site requiring session access as well.
A solution is to load all the session data required to fulfill the request into memory, and then use Zend_Session::writeClose as soon as possible to release the lock.
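Applied to pullAction, that could look roughly like this (a sketch; the Zend_Auth call stands in for whatever session data your action actually needs):

public function pullAction() {
    // Read anything you need from the session first...
    $identity = Zend_Auth::getInstance()->getIdentity();

    // ...then release the session lock so other requests can proceed.
    Zend_Session::writeClose();

    while (true) {
        $chat = Eezy_Chat::getNewMessage();
        if ($chat) {
            $chat->printMessage();
            break;
        }
        sleep(1);
    }
}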
