I started using HTML5 push via the JavaScript EventSource object, and I was totally happy with my working PHP solution:
$time = 0;
while (true) {
    if (connection_status() != CONNECTION_NORMAL) {
        mysql_close();
        break;
    }
    $result = mysql_query("SELECT `id` FROM `table` WHERE UNIX_TIMESTAMP(`lastUpdate`) > '".$time."'");
    while ($row = mysql_fetch_array($result)) {
        echo "data:".$row["id"].PHP_EOL;
        echo PHP_EOL;
        ob_flush();
        flush();
    }
    $time = time();
    sleep(1);
}
But suddenly my web app wasn't reachable any more, failing with the MySQL error "too many connections".
It turned out that the MySQL connection isn't closed after the EventSource is closed in JavaScript:
window.onload = function() {
    sse = new EventSource("push.php");
    sse.onmessage = function(event) {
        console.log(event.data.split(":")[1]);
    };
};
window.onbeforeunload = function() {
    sse.close();
};
So I guess the PHP script never stops executing. Is there any way to call a function (like die();) when the client's connection closes? Why doesn't my script terminate after I call .close() on the EventSource?!
Thanks for the help!
First off: when you wrap your code in a huge while(true) loop, your script will never terminate. The connection to the DB would have been closed when your script "ran out of code to execute", but since you've written an endless loop, that's not going to happen... ever. It's not an EventSource issue; EventSource merely does what it's supposed to do. It honestly, truly, and sadly is your fault.
The thing is: a user connects to your site, and the EventSource object is instantiated. A connection to the server is established and the output of push.php is requested. Your server does as requested and runs the script, which, again, is nothing but an endless loop. There are no errors, so it can keep on running for as long as php.ini allows it to. The .close() method does cancel the stream of output (or rather, it should), but your script is too busy either performing its infinite loop or sleeping. Assuming the script will be stopped because a client disconnects is like assuming that any client could interfere with what the server does; security-wise, that would be the worst thing that could happen.
Now, to answer your actual question: what causes the issue, and how do you fix it? The answer: HEADERS.
Look at this little PHP example: the script isn't kept alive server-side (by infinite loops), but by setting the correct headers.
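Roughly, such a header-driven endpoint might look like the sketch below (the header values and the retry delay are illustrative assumptions, not the linked author's exact code): the script sends the event-stream headers, emits whatever it has, and then simply ends, and the browser's EventSource reconnects on its own.

<?php
// Sketch of a short-lived SSE endpoint driven by headers instead of a loop.
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');

echo "retry: 2000\n\n";            // ask EventSource to reconnect after 2 s

// ... fetch whatever changed since the last request ...
echo "data: hello\n\n";            // placeholder payload

ob_flush();
flush();
// The script ends here; the client reconnects and a fresh, short-lived
// instance runs, so no connection or DB handle is ever left hanging.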
Just as a side note: please ditch mysql_* ASAP, it's (finally) being deprecated; use PDO or mysqli_* instead. It's not that hard, honestly.
I had exactly the same issue, and as far as I understood it, the reason was that Apache didn't terminate the PHP script until I tried to write data through the closed socket connection again.
I didn't have any changes in my database, so there was no output.
To fix this I simply echo an event-source comment in my while loop, like:
echo ": heartbeat\n\n";
ob_flush();
flush();
This causes the script to be terminated if the socket connection is closed.
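For context, a rough sketch of how that heartbeat can sit inside the polling loop (the actual loop body is assumed):

// The SSE comment line below is ignored by EventSource, but writing it to a
// closed socket makes PHP abort the script (with the default ignore_user_abort
// setting), even when the database has nothing new to report.
while (true) {
    // ... check the database and emit "data:" events here ...

    echo ": heartbeat\n\n";
    ob_flush();
    flush();

    sleep(1);
}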
You might want to try adding this to the loop:
if (connection_status() != CONNECTION_NORMAL)
{
    break;
}
but PHP should normally stop once the client has disconnected.
1.
window.onbeforeunload = function() {
    sse.close();
}
This is not required; the EventSource will be closed automatically when the page unloads.
2.
Even if you cannot find a way to detect a disconnected client in PHP, you can just stop execution after N seconds; the EventSource will reconnect automatically.
http://www.php.net/manual/ru/function.connection-aborted.php
The comments on that page say that you should call flush() before connection_status().
In any case, you should drop the connection after N seconds, because you can't detect "bad" disconnects.
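A rough sketch of that time-limited idea (the 30-second cutoff is an arbitrary example):

// Serve events for at most 30 seconds, then let the script end; the browser's
// EventSource reconnects on its own after the "retry" delay.
$start = time();
while (time() - $start < 30) {
    if (connection_status() != CONNECTION_NORMAL) {
        break;                    // client went away early
    }

    // ... query and emit "data:" events here ...
    echo ": heartbeat\n\n";
    ob_flush();
    flush();

    sleep(1);
}
// script ends here and the connection is closed cleanly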
3.
echo "retry:1000\n"; // use this to tell EventSource the reconnection delay in ms
$time = 0;
while (true) {
    if (connection_status() != CONNECTION_NORMAL) {
        mysql_close();
        break;
    }
    $result = mysql_query("SELECT id FROM table WHERE UNIX_TIMESTAMP(lastUpdate) > '".$time."'");
    $ids = array();
    while ($row = mysql_fetch_array($result)) {
        $ids[] = $row["id"];
    }
    if (count($ids) > 0) {
        echo "data:".implode(',', $ids)."\n\n";
        ob_flush();
        flush();
    }
    $time = time();
    sleep(1);
}
As Elias Van Ootegem pointed out, your script never terminates, and hence you have a bunch of MySQL connections active. More worryingly, you have a bunch of resources being consumed in the form of PHP loops running with no end in sight!
Unfortunately, the solution to keeping this in check is not "headers". In the example at the link referenced by Elias, the PHP script terminates, and the client JavaScript actually has to re-open a connection. You can test this by using the following code:
source.addEventListener('open', function(e) {
    // Connection was opened.
    console.log("Opening new connection");
}, false);
If you follow his example on GitHub, the author's actual implementation employs a loop that terminates after X seconds; see the do loop and the comments at the bottom of the script.
The solution is to give your loop a lifespan.
By defining a limit for the loop, i.e. ending the loop (or calling die();) after X iterations or X amount of time, you ensure that if the client disconnects there is a limit to how long the script will remain active. If the client is still there after that time, it will simply reconnect.
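A minimal shape for such a lifespan-limited loop might look like this (the iteration count and sleep interval are arbitrary example values):

// Sketch: a polling loop with a lifespan. After $maxLoops iterations the
// script dies, the connection ends, and the client's EventSource reconnects.
$maxLoops = 25;                    // arbitrary example value
$lastCheck = 0;

for ($i = 0; $i < $maxLoops; $i++) {
    if (connection_status() != CONNECTION_NORMAL || connection_aborted()) {
        break;                     // client disconnected early
    }

    // ... query for records newer than $lastCheck and echo "data:" events ...
    $lastCheck = time();

    ob_flush();
    flush();
    sleep(1);
}

die();                             // the client simply reconnects afterwards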
Related
When trying to write to the client, the message is getting buffered, and in some cases, it's not being written at all.
CURRENT STATUS:
When I telnet into the server, the Server Ready: message is readily printed as expected.
When I send random data (other than "close"), the server's terminal nicely shows progress every second, but the client's output waits until all the sleeping is done and then prints everything at once.
Most importantly, when I send "close", it just waits the obligatory second and then closes without ANY output reaching the client.
GOAL:
My main goal is for a quick message to be written to the client prior to closing the connection.
CODE:
// server.php
require 'vendor/autoload.php'; // Composer autoloader for ReactPHP

$loop = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);

$socket->on('connection', function ($conn) {
    $conn->write("Server ready:\n");
    $conn->on('data', function ($data) use ($conn) {
        $data = trim($data);
        if ($data == 'close') {
            $conn->write("Bye\n");
            sleep(1);
            $conn->close();
        }
        for ($i = 1; $i < 5; $i++) {
            $conn->write(". ");
            echo '. ';
            sleep(1);
        }
        $conn->write(".\n");
        echo ".\n";
        $conn->write("You said \"".$data."\"\n");
    });
});

$socket->listen(1337, '127.0.0.1');
$loop->run();
SUMMARY:
Why can't I get anything written to the client before closing?
The problem you are encountering arises because you are forgetting about the event loop that drives ReactPHP. I ran into this issue recently while building a server, and after following the code around I found out two things that should help you solve your problem.
If you close the connection right after writing to it, the connection is simply closed before the write can happen. The correct call for writing something to the client and THEN closing the connection is $conn->end('msg');. If you follow that chain of code, the behaviour becomes clear: first it essentially writes to the connection just as if you had run $conn->write('msg');, but it then registers a new event handler for the full-drain event, and this handler simply calls $conn->close();. The full-drain event is only dispatched when the write buffer has been completely emptied, so end() waits for the write to finish before it closes the connection.
The drain and full-drain events are only dispatched after writing to a stream: full-drain occurs when the write buffer is completely empty, while drain is dispatched once the write buffer has been emptied below its softLimit, which by default is 2048 bytes.
The reason your writes are not making it through is that $conn->write('msg') only adds the string to the write buffer; it does not actually write. The event loop needs to run before it is given time to write. Your use of sleep() is wrong with React, because it blocks execution at that line, and in React you don't want to block any code from executing. If you are done with a task, let your function return, and execution returns to the main React event loop. If you want things to run at certain intervals, or simply on the next iteration of the loop, you can schedule callbacks on the main event loop with $loop->addTimer($seconds, $callback), $loop->addPeriodicTimer($seconds, $callback), $loop->nextTick($callback) or $loop->futureTick($callback).
Ultimately, the issue is that you are programming without acknowledging that you are still in a blocking thread. Anything in your code that blocks, blocks the entire React event loop, and in turn blocks everything React does for you. Give processing time back to the loop so it can do the reads and writes you have queued up. You only need one iteration of the loop to occur for the write buffer to begin emptying (depending on the size of the buffer, it may or may not write it all out in one go).
If your goal here is just to fix the connection-closing bit, switch to $conn->end('msg') instead of the write -> close. However, I believe the other loop you have printing the dots also does not behave the way you expect or want it to. As its purpose is less clear, if you can tell me what your goal was for it, I can possibly help you restructure that step as well.
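To make the non-blocking idea concrete, here is a rough sketch that restructures the dot-printing step with a periodic timer instead of sleep(), reusing the calls already shown above ($conn->end(), $loop->addPeriodicTimer(), $loop->cancelTimer()); the four-dots-then-reply behaviour is only my reading of the original intent.

// server.php (sketch): same server, but without blocking sleep() calls.
require 'vendor/autoload.php';

$loop   = React\EventLoop\Factory::create();
$socket = new React\Socket\Server($loop);

$socket->on('connection', function ($conn) use ($loop) {
    $conn->write("Server ready:\n");

    $conn->on('data', function ($data) use ($conn, $loop) {
        $data = trim($data);

        if ($data == 'close') {
            // end() flushes the write buffer and closes on full-drain,
            // so "Bye" actually reaches the client before the close.
            $conn->end("Bye\n");
            return;
        }

        // Emit one dot per second; after four dots, finish the reply.
        $count = 0;
        $timer = null;
        $timer = $loop->addPeriodicTimer(1, function () use ($conn, $loop, $data, &$count, &$timer) {
            $conn->write(". ");
            if (++$count >= 4) {
                $loop->cancelTimer($timer);
                $conn->write(".\nYou said \"" . $data . "\"\n");
            }
        });
    });
});

$socket->listen(1337, '127.0.0.1');
$loop->run();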
I have a very expensive query that gets executed from PHP, and it can take a while to run. Is there a way, in PHP, to detect whether a user disconnects before the query is done, and cancel it?
A possible solution is to use pg_send_query(): that function sends a query to the database and returns immediately without blocking. Then you can poll to see whether the user disconnected before the query finished. See this:
ignore_user_abort(false);
$db = pg_connect(DATABASE_DSN);
pg_send_query($db, "SELECT pg_sleep(10000)"); // long running query
while (pg_connection_busy($db)) {
    /* You need to output something to the browser so PHP can know when the
       browser disconnected. The 0 character will be ignored. */
    echo chr(0);
    /* Need to do both flushes to make sure the chr is sent to the browser */
    ob_flush();
    flush();

    usleep(100000); // 0.1s to avoid starving the CPU

    if (connection_status() != CONNECTION_NORMAL || connection_aborted()) {
        // Browser disconnected, clean up and die
        pg_cancel_query($db);
        pg_query($db, "ROLLBACK");
        pg_close($db);
        die();
    }
}
// At this point the query finished and you can continue fetching the rows
This approach works but has a big problem: you really need to send something to the browser to detect the browser disconnection. If you don't, connection_status() and connection_aborted() will not work. This seems to be an old PHP bug, see here: https://bugs.php.net/bug.php?id=30301
So this method doesn't work when, for example, you query Postgres in the middle of a PDF generation routine. In that case the needed chr(0) will break the generated binary file.
You would want to use connection_aborted() to detect whether the user has disconnected: it returns 1 if the client has disconnected and 0 otherwise. There is some documentation here; however, its usage is self-documenting and you should have no problem using it.
I have the following code:
ignore_user_abort(true);
while(!connection_aborted()) {
// do stuff
}
and according to the PHP documentation, this should run until the connection is closed, but for some reason it doesn't; instead, it keeps running until the script times out. I've looked around online, and some people recommended adding
echo chr(0);
flush();
into the loop, but that doesn't seem to do anything either. Even worse, if I just leave it as
while(true) {
// do stuff
}
PHP still continues to run the script after the client disconnects. Does anyone know how to get this working? Is there a php.ini setting that I'm missing somewhere?
If it matters, I'm running PHP 5.3.5. Thanks in advance!
I'm a bit late to this party, but I just had this problem and got to the bottom of it. There are multiple things going on here -- a few of them mentioned here:
PHP doesn't detect connection abort at all
The gist of it: In order for connection_aborted() to work, PHP needs to attempt to send data to the client.
Output Buffers
As noted, PHP will not detect the connection is dead until it tries to actually send data to the client. This is not as simple as doing an echo, because echo sends the data to any output buffers that may exist, and PHP will not attempt a real send until those buffers are full enough. I will not go into the details of output buffering, but it's worth mentioning that there can be multiple nested buffers.
At any rate, if you'd like to test connection_aborted(), you must first end all buffers:
while (ob_get_level()){ ob_end_clean(); }
Now, anytime you want to test if the connection is aborted, you must attempt to send data to the client:
echo "Something.";
flush();
// returns expected value...
// ... but only if ignore_user_abort is false!
connection_aborted();
Ignore User Abort
This is a very important setting that determines what PHP will do when the above flush() is called, and the user has aborted the connection (eg: hit the STOP button in their browser).
If true, the script will run along merrily. flush() will do essentially nothing.
If false, as is the default setting, execution will immediately stop in the following manner:
If PHP is not already shutting down, it will begin its shutdown process.
If PHP is already shutting down, it will exit whatever shutdown function it is in and move on to the next.
Destructors
If you'd like to do stuff when the user aborts the connection, you need to do three things:
Detect that the user aborted the connection. This means you have to attempt to flush to the user periodically, as described further above: clear all output buffers, echo, flush.
a. If ignore_user_abort is true, you need to manually test connection_aborted() after each flush.
b. If ignore_user_abort is false, a call to flush will cause the shutdown process to begin. You must then be especially careful not to cause a flush from within your shutdown functions, or else PHP will immediately cease execution of that function and move on to the next shutdown function.
Putting it all together
Putting this all together, let's make an example that detects the user hitting "STOP" and does stuff.
class DestructTester {

    private $fileHandle;

    public function __construct($fileHandle) {
        // fileHandle that we log to
        $this->fileHandle = $fileHandle;

        // call $this->onShutdown() when PHP is shutting down.
        register_shutdown_function(array($this, "onShutdown"));
    }

    public function onShutdown() {
        $isAborted = connection_aborted();
        fwrite($this->fileHandle, "PHP is shutting down. isAborted: $isAborted\n");

        // NOTE
        // If connection_aborted() AND ignore_user_abort = false, PHP will immediately terminate
        // this function when it encounters flush. This means your shutdown functions can end
        // prematurely if: connection is aborted, ignore_user_abort=false, and you try to flush().
        echo "Test.";
        flush();

        fwrite($this->fileHandle, "This was written after a flush.\n");
    }

    public function __destruct() {
        $isAborted = connection_aborted();
        fwrite($this->fileHandle, "DestructTester is getting destructed. isAborted: $isAborted\n");
    }
}
// Create a DestructTester
// It'll log to our file on PHP shutdown and __destruct().
$fileHandle = fopen("/path/to/destruct-tester-log.txt", "a+");
fwrite($fileHandle, "---BEGINNING TEST---\n");
$dt = new DestructTester($fileHandle);
// Set this value to see how the logs end up changing
// ignore_user_abort(true);
// Remove any buffers so that PHP attempts to send data on flush();
while (ob_get_level()) {
    ob_get_contents();
    ob_end_clean();
}
// Let's loop for 10 seconds
// If ignore_user_abort=true:
// This will continue to run regardless.
// If ignore_user_abort=false:
// This will immediately terminate when the user disconnects and PHP tries to flush();
// PHP will begin its shutdown process.
// In either case, connection_aborted() should subsequently return "true" after the user
// has disconnected (hit STOP button in browser), AND after PHP has attempted to flush().
$numSleeps = 0;
while ($numSleeps++ < 10) {
    $connAbortedStr = connection_aborted() ? "YES" : "NO";
    $str = "Slept $numSleeps times. Connection aborted: $connAbortedStr";
    echo "$str<br>";

    // If ignore_user_abort = false, script will terminate right here.
    // Shutdown functions will begin.
    // Otherwise, script will continue for all 10 loops and then shut down.
    flush();

    $connAbortedStr = connection_aborted() ? "YES" : "NO";
    fwrite($fileHandle, "flush()'d $numSleeps times. Connection aborted is now: $connAbortedStr\n");
    sleep(1);
}
echo "DONE SLEEPING!<br>";
die;
The comments explain everything. You can fiddle with ignore_user_abort and look at the logs to see how this changes things.
I hope this helps anyone having trouble with connection_aborted(), register_shutdown_function(), and __destruct().
Try using ob_flush(); just before flush();. Also, some browsers just won't update the page until a certain amount of data has arrived.
Try doing something like
<?php
// preceding scripts
ignore_user_abort(true);
$i = 0;
while (!connection_aborted()) {
    $i++;
    echo $i;
    echo str_pad('', 4096); // yes, I know this increases the overhead, but that can be reduced afterwards
    ob_flush();
    flush();
    usleep(30000); // see what happens when you run this; on my WAMP setup it runs perfectly
}
// Ending scripts
?>
Google Chrome has issues with this code, actually; it doesn't support streaming very nicely.
Try:
ignore_user_abort(true);
echo "Testing connection handling";
while (1) {
    if (connection_status() != CONNECTION_NORMAL)
        break;
    sleep(1);
    echo "test";
    flush();
}
Buffering seems to cause issues depending on your server settings.
I tried disabling the buffer with ob_end_clean, but that wasn't enough; I had to send some data to cause the buffer to fully flush out. Here is the final code that ended up working for me.
set_time_limit(0);        // run the delay as long as the user stays connected
ignore_user_abort(false);
ob_end_clean();
echo "\n";

// $delay is assumed to already hold the number of seconds to wait
while ($delay-- > 0 && !connection_aborted())
{
    echo str_repeat("\r", 1000) . "<!--sleep-->\n";
    flush();
    sleep(1);
}

ob_start();
I am new to this site, so I really hope I will provide all the necessary information regarding my question.
I've been trying to create a "new message arrived" notification using long polling. Currently I am initiating the polling request from the window.onload event of each page on my site.
On the server side I have an infinite loop:
while (1) {
    if (NewMessageArrived($current_user)) break;
    sleep(10);
}
echo $newMessageCount;
On the client side I have the following (simplified) ajax functions:
function poll_new_messages() {
    xmlhttp = GetXmlHttpObject();
    //...
    xmlhttp.onreadystatechange = got_new_message_count;
    //...
    xmlhttp.send();
}

function got_new_message_count() {
    if (xmlhttp.readyState == 4) {
        updateMessageCount(xmlhttp.responseText);
        //...
        poll_new_messages();
    }
}
The problem is that with each page load, the above loop starts again. The result is multiple infinite loops for each user that eventually make my server hang.
*The NewMessageArrived() function queries the MySQL DB for new unread messages.
*At the beginning of the PHP script I run session_start() in order to obtain the $current_user value.
I am currently the only user of this site, so it is easy for me to debug this behavior by writing time() to a file inside the loop. What I see is that the file is written to more often than once every 10 seconds, but this only starts when I go from page to page.
Please let me know if any additional information might help.
Thank you.
I think I found a solution to my problem. I would appreciate it if anyone could tell me whether this is the technique used in COMET and how scalable this solution is.
I used a user-based semaphore, like this:
$sem_id = sem_get($current_user);
sem_acquire($sem_id);

while (1) {
    if (NewMessageArrived($current_user)) break;
    sleep(10);
}

sem_release($sem_id);
echo $newMessageCount;
It seems common for long-polling requests to time out after 30 seconds. So in your while loop you could echo 'CLOSE' after 30 seconds.
$new_message = false;
$timer = 0;

while (!$new_message && $timer < 30) {
    $new_message = NewMessageArrived($current_user);
    if (!$new_message) {
        sleep(10);
        $timer += 10;
    }
}

if ($new_message) {
    echo $new_message; // the new-message count
} else {
    echo 'CLOSE';
}
In the Javascript, you can listen for the CLOSE.
function poll_new_messages() {
    xmlhttp = GetXmlHttpObject();
    //...
    xmlhttp.onreadystatechange = got_new_message_count;
    //...
    xmlhttp.send();
}

function got_new_message_count() {
    if (xmlhttp.readyState == 4) {
        if (xmlhttp.responseText != 'CLOSE') {
            updateMessageCount(xmlhttp.responseText);
        }
        //...
        poll_new_messages();
    }
}
Now the PHP will return a response within 30 seconds, no matter what. If your user stays on the page and you receive a CLOSE, you just don't update the count on the page and ask again.
If the user moves to a new page, your PHP instance will stop the loop regardless within 30 seconds, and return a response. Being on a new page though, the XHR that cared about that connection no longer exists, so it won't start up another loop.
You might try checking connection_aborted() periodically. Note that connection_aborted() might not pick up on the fact that the connection has in fact been aborted until you've written some output and done a flush().
In fact, just producing some output periodically may be sufficient for PHP to notice the closed connection on its own and automatically kill your script.
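For example, applied to the polling loop from the question, it might look roughly like this (the padding output ends up in responseText, so the client would need to tolerate or trim it):

// Sketch: the long-poll loop, plus periodic output so PHP can notice a
// closed connection and kill the script (default ignore_user_abort = false).
while (1) {
    if (NewMessageArrived($current_user)) break;

    echo " ";                   // something has to actually reach the socket
    ob_flush();
    flush();

    if (connection_aborted()) { // explicit check as a fallback
        exit;
    }
    sleep(10);
}
echo $newMessageCount;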
I know this is a bit generic, but I'm sure you'll understand my explanation. Here is the situation:
The following code is executed every 10 minutes. The variable "var_x" is always read from / written to an external text file whenever it's referred to.
if ( var_x != 1 )
{
var_x = 1;
//
// here is where the main body of the script is.
// it can take hours to completely execute.
//
var_x = 0;
}
else
{
// exit script as it's already running.
}
The problem is: if I simulate a hardware failure (do a hard reset when the script is executing) then the main script logic will never execute again because "var_x" will always be "1". (I already have logic to work out the restore point).
Thanks.
You should lock and unlock files with flock:
$fp = fopen($your_file, 'c'); // a mode is required; 'c' opens or creates the file without truncating it
if (flock($fp, LOCK_EX | LOCK_NB)) // non-blocking, so a second instance fails straight away
{
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    flock($fp, LOCK_UN);
}
else
{
    // exit script as it's already running.
}
Edit:
As flock seems not to work correctly on Windows machines, you have to resort to other solutions. Off the top of my head, here is an idea for a possible solution:
Instead of writing 1 to var_x, write the process ID retrieved via getmypid(). When a new instance of the script reads the file, it should look for a running process with this ID and check whether that process is a PHP script. Of course, this can still go wrong, as another PHP script could obtain the same PID after a hardware failure, so the solution is far from optimal.
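A rough sketch of that PID-file idea (the file path is an example, it relies on the posix extension, and as noted it cannot really confirm that the process is a PHP script):

// Sketch of the PID-file approach described above (POSIX systems only).
$lockFile = '/tmp/my_script.pid'; // example path

if (file_exists($lockFile)) {
    $pid = (int) trim(file_get_contents($lockFile));
    // Signal 0 sends nothing; it only checks whether the process exists.
    if ($pid > 0 && posix_kill($pid, 0)) {
        exit; // another instance appears to be running
    }
}

file_put_contents($lockFile, getmypid());

//
// here is where the main body of the script is.
// it can take hours to completely execute.
//

unlink($lockFile);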
Don't you think this would be better solved using file locks? (When the reset occurs file locks are reset as well)
http://php.net/flock
It sounds like you're doing some kind of manual semaphore for process management.
Rather than writing to a file, perhaps you should use an environment variable instead. That way, in the event of failure, your script will not have a closed semaphore when you restore.