When trying to write to the client, the message is getting buffered, and in some cases, it's not being written at all.
CURRENT STATUS:
When I telnet into the server, the "Server ready:" message is printed immediately, as expected.
When I send random data (other than "close"), the server's terminal nicely shows progress every second, but the client's output waits until all the sleeping is done and then prints everything at once.
Most importantly, when sending "close", it just waits the obligatory second and then closes without ANY output on the client.
GOAL:
My main goal is for a quick message to be written to the client prior to closing a connection.
CODE:
// server.php
$loop = React\EventLoop\Factory::create();

$socket = new React\Socket\Server($loop);
$socket->on('connection', function ($conn) {
    $conn->write("Server ready:\n");

    $conn->on('data', function ($data) use ($conn) {
        $data = trim($data);
        if ($data == 'close') {
            $conn->write("Bye\n");
            sleep(1);
            $conn->close();
        }
        for ($i = 1; $i < 5; $i++) {
            $conn->write(". ");
            echo '. ';
            sleep(1);
        }
        $conn->write(".\n");
        echo ".\n";
        $conn->write("You said \"".$data."\"\n");
    });
});

$socket->listen(1337, '127.0.0.1');
$loop->run();
SUMMARY:
Why can't I get anything written to the client before closing?
The problem you are encountering is that you are forgetting about the event loop that drives ReactPHP. I ran into this issue recently when building a server, and after following the code around, I found out two things that should help you solve your problem.
If you close the connection right after writing to it, the connection is simply closed before the write can happen. Solving this issue will help you fix the next one. The correct call for writing something to the client and THEN closing the connection is $conn->end('msg');. If you follow this chain of code, the behaviour becomes clear: first it writes to the connection just as if you had run $conn->write('msg');, but then it registers a new handler for the full-drain event, and this handler simply calls $conn->close();. The full-drain event is only dispatched when the write buffer has been completely emptied. So you can see that end() simply waits for the write to finish before it closes the connection.
The drain and full-drain events are only dispatched after writing to a stream. full-drain occurs when the write buffer is completely empty. drain is dispatched after the write buffer has emptied past its softLimit, which by default is 2048 bytes.
The reason your writes are not making it through is that $conn->write('msg') only adds the string to the write buffer; it does not actually write. The event loop needs to run before the stream is given time to write. Your use of sleep() is incorrect with React, because it blocks execution at that line. In React you don't want to block any code from executing. When you are done with a task, let your function return, and execution returns to the React main event loop. If you want things to run on certain intervals, or simply on the next iteration of the loop, you can schedule callbacks on the main event loop with $loop->addTimer($seconds, $callback), $loop->addPeriodicTimer($seconds, $callback), $loop->nextTick($callback) or $loop->futureTick($callback).
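For example, here is a rough, untested sketch of the dot-printing loop rewritten with a periodic timer (assuming the same older ReactPHP API as the question, where the timer object is passed to its callback; the surrounding closures would also need use ($loop) to get at the loop):

$i = 0;
$loop->addPeriodicTimer(1, function ($timer) use (&$i, $conn) {
    $conn->write(". ");
    echo '. ';
    if (++$i >= 4) {
        $timer->cancel(); // newer event-loop versions use $loop->cancelTimer($timer) instead
        $conn->write(".\n");
        echo ".\n";
    }
});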
Ultimately it is because you are programming without acknowledging that you are still in a blocking thread. Anything in your code that blocks, blocks the entire React event loop, in turn blocking everything that React does for you. Give processing time back to the loop to ensure it can do the reads/writes that you have queued up. You only need one iteration of the loop to occur for the write buffer to begin emptying (depending on the size of the buffer, it may or may not write it all out).
If your goal here is just to fix the connection-closing bit, switch to $conn->end('msg') instead of the write -> close. However, I believe that the other loop you have printing the dots also does not behave in the way you expect/desire it to. As its purpose is not clear, if you can tell me what your goal was for it, I can possibly help you restructure that step as well.
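Concretely, the 'close' branch becomes (same assumptions as above):

if ($data == 'close') {
    $conn->end("Bye\n"); // queues "Bye", then closes once the buffer fully drains
    return; // don't fall through to the dot loop on a connection we just ended
}

The return also fixes a latent bug in the original: after $conn->close(), the for loop still ran and kept writing to a closed connection.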
Related
I have a script that is running continuously on the server, in this case a PHP script, like:
php path/to/my/index.php.
It's been executed, and when it's done, it's executed again, and again, forever.
I'm looking for the best way to be notified if that script stops running (being executed).
There are many reasons why it might stop being called: server memory, a new deployment, human error... etc.
I just want to be notified (email, SMS, Slack...) if the script has not been executed for a certain amount of time (like 1 hour, 1 day, etc...).
My server is Ubuntu living in AWS.
An idea:
I was thinking of having a key in REDIS/MEMCACHED/ETC with a TTL. Every time the script runs, it renews the TTL on that key.
If the script stops working for that TTL time, the key will expire. I just need a way to trigger a notification when that expiration happens, but it looks like REDIS/MEMCACHED are not prepared for that.
register_shutdown_function might help, but might not... https://www.php.net/manual/en/function.register-shutdown-function.php
I can't say I've ever seen a script that needs to run indefinitely in PHP. Perhaps there is another way to solve the problem you are after?
Update - Following your redis idea, I'd look at keyspace notifications. https://redis.io/topics/notifications
I've not tested the idea since I'm not actually a redis user. But it may be possible to subscribe to capture the expiration event (perhaps from another server?) and generate your notification.
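To illustrate, a rough sketch using the phpredis extension (untested, since as I said I'm not a Redis user; it assumes notify-keyspace-events includes "Ex" in redis.conf, database 0, and a hypothetical heartbeat key):

<?php
// expiry-watcher.php - blocks forever, reacting to expired keys in db 0.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_READ_TIMEOUT, -1); // don't time out while waiting for events
$redis->psubscribe(['__keyevent@0__:expired'], function ($redis, $pattern, $channel, $key) {
    if ($key === 'my-script-heartbeat') { // hypothetical key the monitored script keeps renewing
        mail('you@example.com', 'Script stopped', "Heartbeat key '$key' expired.");
    }
});

The monitored script would then renew the key on every run, e.g. $redis->setex('my-script-heartbeat', 3600, time()); for a one-hour TTL.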
There's no 'best' way to do this. Ultimately, what works best will boil down to the specific workflow you're supporting.
tl;dr version: Find what constitutes success and record the most recent time it happened. Use that for your notification trigger in another script.
Long version:
That said, persistent storage with a separate watcher is probably the most straightforward way to do this. Record the last successful run, and then check it with a cron job every so often.
For what it's worth, for scripts like this I generally monitor exit codes or logs produced by the script in question. This isolates the error notification process from the script itself so a flaw in the script (hopefully) doesn't hamper the notification.
For a barebones example, say we have a script to invoke the actual script... (This is very much untested pseudo-code)
<?php
// Run and record.
exec("php path/to/my/index.php", $output, $return_code);

// $return_code will be 255 on fatal errors. You can use other return codes
// with exit in your called script to report other fail states.
if ($return_code == 0) {
    file_put_contents('/path/to/folder/last_success.txt', time());
} else {
    file_put_contents('/path/to/folder/error_report.json', json_encode([
        'return_code' => $return_code,
        'time' => time(),
        'output' => implode("\n", $output),
        // assuming here that error output isn't silently logged somewhere already.
    ], JSON_PRETTY_PRINT));
}
And then a watcher.php that monitors these files on a cron job.
<?php
// Notify us immediately on failure, maybe?
// If you have a lot of transient failures it may make more sense to
// aggregate them into a single report at a specific time instead.
if (is_file('/path/to/folder/error_report.json')) {
    // Mail details stored in JSON here.
    // Rename the file so it's recorded, but we don't receive it again.
    rename('/path/to/folder/error_report.json', '/path/to/folder/error_report.json'.'-sent-'.date('Y-m-d-H-i-s'));
} else {
    if (is_file('/path/to/folder/last_success.txt')) {
        $last_success = intval(file_get_contents('/path/to/folder/last_success.txt'));
        if (strtotime('-24 hours') > $last_success) {
            // Our script hasn't run in 24 hours, let someone know.
        }
    } else {
        // No successful run recorded. Might want to put code here if that's unexpected.
    }
}
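Assuming the two scripts above are saved as invoker.php and watcher.php (names hypothetical), crontab entries along these lines would wire them together:

# Run the invoker every 10 minutes, the watcher once an hour.
*/10 * * * * php /path/to/folder/invoker.php
0 * * * * php /path/to/folder/watcher.php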
Notes: There are some caveats to the specific approach shown above. A script can fail in a non-fatal way, and if you're not checking for it, this example could record that as a successful run. For example: permission errors cause warnings, but the script still runs its full course and exits normally without hitting an exit call with a specific return code. Our example invoker here would log that as a successful run, even though it isn't.
Another option is to log success from your script and only check for error exits from the invoker.
I'm using popen with fgets to read the output of tcpdump asynchronously.
The below code should be run in the command line, not with apache and viewing it in your browser.
$handle = popen('tcpdump -nnX', 'r');
while (true) {
    $output = fgets($handle);
    print $output . "\n";
}
The problem arises when I try to output this information via websockets.
The websocket server also uses an infinite loop (for managing its sockets, ticks, and messages).
It looks something like:
while (true) {
    @socket_select($read, $write, $except, 1);
    foreach ($read as $socket) {
        if ($socket == $this->master) {
            $client = socket_accept($socket);
            ...
I send data through the websocket with $websocket->sendToAll($message);.
I can't put the while loops one after the other, because only whichever loop comes first will ever run: in while (true) { A(); } while (true) { B(); }, B() will never be called.
I can't merge the while loops either, because the websocket work slows down the reading of popen, and vice versa: in while (true) { A(); B(); }, if B() takes a long time to finish, A() will be slow to run.
What can I do in this situation? I'm open to the idea of threads, communication between forked scripts, or anything else.
This is the classic Producer-Consumer problem; it's just that you've got two of them. You can break the problem down to make it easier to understand.
WebSocket Consumer: This code will send data through the WebSocket. You can consider it a separate thread in which data is dequeued from Q1 (just a name) and sent.
WebSocket Producer: Once some data arrives at the WebSocket gate, it is enqueued into a buffer. It's just that this is not the same queue as above; let's name it Q2. This needs to be a separate thread as well, and the thread goes to sleep once it has enqueued the data and signaled the appropriate consumer.
HDD Consumer: This code will do the same as the WebSocket Consumer, the only difference being that it will store the data on the hard disk instead of the WebSocket. It will have its own thread and works with Q2.
HDD Producer: I'm sure you can guess what this does. This code will read data off the hard disk and put it in the Q1 queue. Like all producers, it needs to signal its consumers, informing them of a new item in the queue.
Now getting back to your code: PHP is not well suited to multi-threaded programming, even though it's completely possible. That's why you cannot find many examples of it. But if you insist, here is what you'll need:
PHP's Thread class
PHP's Mutex class. This class will help you prevent multiple threads from accessing the same data at the same time.
Something called signaling, which I cannot find in PHP! It is used to tell other threads that some data in a queue is ready to be consumed; in other words, it wakes up the consumer thread when it has something to do.
A final word: in properly multi-threaded software you won't be using the sleep function to lower the system's load or prevent a crash. Multi-threaded programming is all about signaling and communication between threads.
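For what it's worth, the (now-unmaintained) pthreads extension did provide that signaling: Threaded::wait() and Threaded::notify(). A rough, untested sketch of one of the queues (assumes a thread-safe (ZTS) PHP build with pthreads installed):

<?php
// A shared queue with a blocking pop; usable as Q1 or Q2.
class SharedQueue extends Threaded
{
    public function push($item)
    {
        $this->synchronized(function () use ($item) {
            $this[] = $item; // Threaded objects behave like arrays
            $this->notify(); // wake a consumer waiting in pop()
        });
    }

    public function pop()
    {
        return $this->synchronized(function () {
            while (count($this) === 0) {
                $this->wait(); // sleep until a producer signals
            }
            return $this->shift();
        });
    }
}

Each producer/consumer above would then be a class extending Thread whose run() method loops over push()/pop().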
How about wscat? The following command line:
$ printf "hello\nbye\n^C" | wscat -c ws://echo.websocket.org
sends the two lines below to ws://echo.websocket.org.
hello
bye
Note that ^C in the command line is a Control-C (not a two-letter combination of ^ and C).
The problem is that for a long process, the PHP script keeps on executing whether or not the client browser is still connected. Is there any possibility that if the client has terminated the Ajax call to a script, then the script also terminates on the server?
As pointed out by metadings, PHP does have a function to check for a connection abort, named connection_aborted(). It will return 1 if the connection is terminated, otherwise 0.
In a long server-side process, the user may need to know whether the client has disconnected from the server or has closed the browser, so that the server can safely shut down the process.
This matters especially where the application uses PHP sessions: if we leave the long process running even after the client has disconnected, the server will become unresponsive for this session. Any other request from the same client will wait until the earlier process executes completely. The reason for this is that the session file is locked while the process is running. You can, however, intentionally call session_write_close() to unlock it. But this is not feasible in all scenarios; maybe one needs to write something to the session at the end of the process.
Now, if we only call connection_aborted() in a loop, it will always return 0, whether the connection is closed or not. 0 means that the connection is not aborted, which is misleading. However, after research and experiments, I have discovered that the output buffer in PHP is the reason.
First of all, in order to check for aborts, the developer must send some output to the client from inside the loop by echoing some text. For example:
print " ";
As the process is still running, the output will not actually be sent to the client. To send the output, we then need to flush the output buffer:
flush();
ob_flush();
And then, if we check for aborts, it will give correct results:
if (connection_aborted() != 0) {
    die();
}
The following is a working example; this will work even if you are using a PHP session:
session_start();
ignore_user_abort(TRUE);

file_put_contents("con-status.txt", "Process started..\n\n");

for ($i = 1; $i <= 15; $i++) {
    print " ";
    file_put_contents("con-status.txt", "Running process unit $i \n", FILE_APPEND);
    sleep(1);

    // Send output to client
    flush();
    ob_flush();

    // Check for connection abort
    if (connection_aborted() != 0) {
        file_put_contents("con-status.txt", "\nUser terminated the process", FILE_APPEND);
        die();
    }
}

file_put_contents("con-status.txt", "\nAll units completed.", FILE_APPEND);
EDIT 07-APR-2017
If someone is using FastCGI on Windows, they can actually terminate the CGI thread from memory when the connection is aborted, using the following code:
if (connection_aborted() != 0) {
    apache_child_terminate();
    exit;
}
Check out PHP's connection_aborted() function. While doing your processing, you can periodically check for an aborted connection to gracefully cancel the progress, as one would do in an interactive threading model.
The easiest way I've found to handle this time-out issue is as follows:
1: Set a value on the server to 'processing'. Start an independent thread to do the processing.
2: The initial ajax call returns a success.
3: The javascript on the page goes into 'waiting mode', sending a new ajax request every 10, 30, or 60 seconds, or every five or ten minutes, or whatever (depending on your situation), to find out whether the value on the server is still set to 'processing' (see the sketch after this list).
4: The independent thread completes. It sets the value on the server to 'done'.
5: The javascript on the page makes its next waiting-mode query, gets back 'done' and the appropriate data.
4b: If an obscene amount of time goes by without a 'done', it registers as a failure. How much time is obscene depends on your situation. Send an ajax call updating the value from 'processing' to 'cancel'.
5b: The independent thread periodically checks the status to make sure it's still set to 'processing'. If it sees a mode-shift to 'cancel', it cancels itself.
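A minimal sketch of the server side of step 3 (untested; file-based for brevity, and every name here is hypothetical, a database row or cache key works just as well):

<?php
// status.php - polled by the page's waiting-mode javascript.
$file = '/path/to/folder/job-status.txt'; // the independent thread writes 'processing', 'done' or 'cancel' here
$status = is_file($file) ? trim(file_get_contents($file)) : 'unknown';
header('Content-Type: application/json');
echo json_encode(['status' => $status]);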
This is what you're looking for:
http://php.net/manual/en/function.ignore-user-abort.php
Stopping a script in the middle of execution can lead to unexpected results, so caveat emptor.
I started using push in HTML5 using the JavaScript EventSource object. I was totally happy with a working solution in PHP:
$time = 0;
while (true) {
    if (connection_status() != CONNECTION_NORMAL) {
        mysql_close();
        break;
    }
    $result = mysql_query("SELECT `id` FROM `table` WHERE UNIX_TIMESTAMP(`lastUpdate`) > '".$time."'");
    while ($row = mysql_fetch_array($result)) {
        echo "data:".$row["id"].PHP_EOL;
        echo PHP_EOL;
        ob_flush();
        flush();
    }
    $time = time();
    sleep(1);
}
But suddenly my web app wasn't reachable anymore, with the MySQL error "too many connections".
It turned out that the MySQL connection doesn't close after closing the event source in JavaScript:
window.onload = function() {
    sse = new EventSource("push.php");
    sse.onmessage = function(event) {
        console.log(event.data.split(":")[1]);
    }
}

window.onbeforeunload = function() {
    sse.close();
}
So I guess the PHP script does not stop executing. Is there any way to call a function (like die();) before the client's connection disconnects? Why doesn't my script terminate after calling .close() on the EventSource?!
Thanks for the help!
First off: when you wrap your code in a huge while(true) loop, your script will never terminate. The connection with the DB would have been closed when your script "ran out of code to execute", but since you've written an endless loop, that's not going to happen... ever. It's not an EventSource issue; that merely does what it's supposed to do. It honestly, truly, and sadly is your fault.
The thing is: a user connects to your site, and the EventSource object is instantiated. A connection to the server is established, and the return value of push.php is requested. Your server does as requested and runs the script, which, again, is nothing but an endless loop. There are no errors, so it can just keep on running for as long as the php.ini allows it to run. The .close() method does cancel the stream of output (or rather, it should), but your script is too busy either performing its infinite loop or sleeping. Assuming the script will be stopped because a client disconnects is like assuming that any client could interfere with what the server does; security-wise, that would be the worst thing that could happen. Now, to answer your actual question: what causes the issue, and how do you fix it? The answer: HEADERS.
Look at this little PHP example: the script isn't kept alive server-side (by infinite loops), but by setting the correct headers.
Just as a side note: please ditch mysql_* ASAP, it's (finally) being deprecated. Use PDO or mysqli_* instead. It's not that hard, honestly.
I had exactly the same issue, and as far as I understood it, the reason was that Apache didn't terminate the PHP script until I tried to write data through the closed socket connection again.
I didn't have any changes in my database, so there was no output.
To fix this, I simply echo an event-source comment in my while loop, like:
echo ": heartbeat\n\n";
ob_flush();
flush();
This causes the script to be terminated if the socket connection is closed.
You might want to try adding this to the loop:
if (connection_status() != CONNECTION_NORMAL) {
    break;
}
But PHP should stop when the client disconnects.
1.
window.onbeforeunload = function() {
    sse.close();
}
This is not required; the EventSource will be closed at page-unload time.
2.
Even if you cannot find a way to detect a disconnected client in PHP, you can just stop execution after N seconds; the EventSource will reconnect automatically.
http://www.php.net/manual/ru/function.connection-aborted.php
The comments on this page say that you should flush before checking connection_aborted.
In any case, you should drop the connection after N seconds, because you can't detect "bad" disconnects.
3.
echo "retry:1000\n"; // use this to tell EventSource the reconnection delay in ms
$time = 0;
while(true) {
if(connection_status() != CONNECTION_NORMAL) {
mysql_close()
break;
}
$result = mysql_query("SELECT id FROM table WHERE UNIX_TIMESTAMP(lastUpdate) > '".$time."'");
while($row = mysql_fetch_array($result)) {
$rect[] = $row;
}
for($i-0;$i echo "data:".implode('',$disp)."\n\n"; $time = time(); sleep(1);
}
As Elias Van Ootegem pointed out, your script never terminates, and hence you have a bunch of MySQL connections active. More concerningly, you have a bunch of resources being used in the form of PHP loops running with no end in sight!
Unfortunately, the solution to keeping the connection open is not "headers". In the example at the link referenced by Elias, the PHP script terminates, and the client JavaScript actually has to re-open a connection. You can test this by using the following code:
source.addEventListener('open', function(e) {
    // Connection was opened.
    console.log("Opening new connection");
}, false);
If you follow his example on GitHub, you'll see that in the implementation the author actually employs a loop that terminates after X seconds. See the do loop and the comments at the bottom of the script.
The solution is to give your loop a lifespan.
By defining a limit for the loop, i.e. X iterations or X amount of time before triggering the end of the loop or a die();, you ensure that if the client disconnects there is a limit to how long the script will remain active. If the client is still there after X amount of time, it will simply reconnect.
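Putting that advice together, here is an untested sketch of a lifespan-limited push.php (the 30-second lifespan is an arbitrary placeholder):

<?php
header('Content-Type: text/event-stream');
header('Cache-Control: no-cache');
echo "retry:1000\n\n"; // tell EventSource to reconnect ~1s after the script ends

$start = time();
while (time() - $start < 30) { // give the loop a lifespan
    if (connection_aborted()) {
        break; // client went away, stop early
    }
    echo ": heartbeat\n\n"; // SSE comment line keeps the stream alive
    @ob_flush(); // @ in case no output buffer is active
    flush();
    sleep(1);
}
// The script ends here; a still-connected EventSource reconnects automatically.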
I know this is a bit generic, but I'm sure you'll understand my explanation. Here is the situation:
The following code is executed every 10 minutes. The variable "var_x" is always read from / written to an external text file whenever it is referred to.
if ( var_x != 1 )
{
    var_x = 1;
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    var_x = 0;
}
else
{
    // exit script as it's already running.
}
The problem is: if I simulate a hardware failure (do a hard reset while the script is executing), then the main script logic will never execute again, because "var_x" will always be "1". (I already have logic to work out the restore point.)
Thanks.
You should lock and unlock files with flock:
$fp = fopen($your_file, 'c'); // assuming 'c' mode, so the file is created if missing and not truncated
// LOCK_NB makes flock() return false immediately if another instance already
// holds the lock, instead of blocking; otherwise the else branch could never run.
if (flock($fp, LOCK_EX | LOCK_NB))
{
    //
    // here is where the main body of the script is.
    // it can take hours to completely execute.
    //
    flock($fp, LOCK_UN);
}
else
{
    // exit script as it's already running.
}
Edit:
As flock seems not to work correctly on Windows machines, you have to resort to other solutions. Off the top of my head, an idea for a possible solution:
Instead of writing 1 to var_x, write the process ID retrieved via getmypid(). When a new instance of the script reads the file, it should then look for a running process with this ID and check whether that process is a PHP script. Of course, this can still go wrong, as there is the possibility of another PHP script obtaining the same PID after a hardware failure, so the solution is far from optimal.
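A rough, untested sketch of that idea (Linux-specific: it relies on the posix extension and /proc, and the pidfile path is hypothetical):

<?php
$pidfile = '/tmp/my-script.pid';

if (is_file($pidfile)) {
    $pid = (int) file_get_contents($pidfile);
    // Signal 0 doesn't send anything; it only tests whether the process exists.
    if ($pid > 0 && posix_kill($pid, 0)) {
        // Guard against PID reuse: is the process actually a PHP script?
        $cmdline = @file_get_contents("/proc/$pid/cmdline");
        if ($cmdline !== false && strpos($cmdline, 'php') !== false) {
            exit; // another instance appears to be running
        }
    }
}

file_put_contents($pidfile, getmypid());
// ... main body of the script, which can take hours ...
unlink($pidfile);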
Don't you think this would be better solved using file locks? (When the reset occurs file locks are reset as well)
http://php.net/flock
It sounds like you're doing some kind of manual semaphore for process management.
Rather than writing to a file, perhaps you should use an environment variable instead. That way, in the event of failure, your script will not have a closed semaphore when you restore.