PHP sockets: stdin appending to data received

I am using sockets in PHP to create a simple command-line chat. It works OK, but there is one main issue that makes it almost unusable: when there are multiple people in the chat and one person is typing a message while another sends one, the person typing gets the received message appended to what they are typing. Is there any way around this? I'm using stdin and stream_select. Here is a piece from the client:
$uin = fopen("php://stdin", "r");
while (true) {
    $r = array($socket, $uin);
    $w = NULL;
    $e = NULL;
    if (0 < stream_select($r, $w, $e, 0)) {
        foreach ($r as $i => $fd) {
            if ($fd == $uin) {
                $text = fgets($uin);
                fwrite($socket, $text);
            } else {
                $text = fgets($socket);
                print $text;
            }
        }
    }
}
All help is appreciated! Thanks!

The code prints a message to stdout every time a full line is waiting in $socket.
The only way to get around that is to put the text into a variable ($outtext) instead of printing it immediately. Then you can display it whenever you are ready, such as just before writing to the outgoing socket:
$uin = fopen("php://stdin", "r");
$outtext = ''; // initialise outside the loop so buffered text survives iterations
while (true) {
    $r = array($socket, $uin);
    $w = NULL;
    $e = NULL;
    if (0 < stream_select($r, $w, $e, 0)) {
        foreach ($r as $i => $fd) {
            if ($fd == $uin) {
                $text = fgets($uin);
                print $outtext;
                $outtext = '';
                fwrite($socket, $text);
            } else {
                $text = fgets($socket);
                $outtext .= $text;
            }
        }
    }
}
The downside is that it will only display incoming text when you press Enter. The only way around that would be to use something other than fgets(), as sketched below.
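A minimal sketch of that idea (untested, and terminal echo handling is left out; it reuses the $socket from the question): put stdin into non-blocking mode and read whatever bytes are available instead of waiting for a full line:
$uin = fopen("php://stdin", "r");
stream_set_blocking($uin, false); // don't wait for a full line
$typed = ''; // what the user has typed so far
while (true) {
    $r = array($socket, $uin);
    $w = NULL;
    $e = NULL;
    if (0 < stream_select($r, $w, $e, 0, 200000)) {
        foreach ($r as $fd) {
            if ($fd == $uin) {
                $typed .= fread($uin, 1024); // grab whatever is available
                // forward complete lines as soon as they appear
                while (($nl = strpos($typed, "\n")) !== false) {
                    fwrite($socket, substr($typed, 0, $nl + 1));
                    $typed = substr($typed, $nl + 1);
                }
            } else {
                print fgets($socket); // incoming text is shown immediately
            }
        }
    }
}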
I'm assuming this is just an experiment - event-driven programming with Node.js or similar would be much better suited to this type of thing.


What is a suitable memory limit for PHP to read a 1GB file?

I have a 1GB file (file1) that must be read. I chose PHP to filter and change some lines and then create another file (file2) with those changes. The code works: if I read a 50MB file, it generates file2 with all the changes, as expected. But when I try to run it on the 1GB file, file2 is not created and I get an error message from the browser like this:
The connection to localhost was interrupted.
Check your Internet connection
Check any cables and reboot any routers, modems, or other network devices you may be using.
Allow Chrome to access the network in your firewall or antivirus settings.
If it is already listed as a program allowed to access the network, try removing it from the list and adding it again.
If you use a proxy server...
Check your proxy settings or contact your network administrator to make sure the proxy server is working. If you don't believe you should be using a proxy server: Go to the Chrome menu > Settings > Show advanced settings... > Change proxy settings... > LAN Settings and deselect "Use a proxy server for your LAN".
If I go back and run the small file, it works fine again.
I already set the PHP memory limit to 2048M with ini_set('memory_limit', '2048M'), but I don't know if that is enough, or if it is even possible.
So, what would be a suitable memory limit for this?
NOTE: The server is Apache on Windows 7, i7 8-core 64-bit, 16GB RAM.
I think the code is not important, but someone asked to see it:
ini_set('memory_limit', '2048M'); # set the memory limit
$new_dbName = "au_site";
$pattern[0] = "/di_site/";
$replace[0] = $new_dbName;
$dirBase = dirname(__FILE__);
$dir2 = new DirectoryIterator(dirname(__FILE__));
foreach ($dir2 as $fileinfo) {
    if (!$fileinfo->isDot() && $fileinfo->isFile()) {
        $str = $fileinfo->getFilename();
        if (preg_match("/\.sql/i", $str)) {
            if (!($handle = fopen("$str", "r"))) {
                die("Cannot open the file");
            } else {
                $sql = ''; // accumulates the whole rewritten file in memory
                while (!feof($handle)) {
                    $line = trim(fgets($handle), "\t\n\r\0\x0B");
                    $firstChar = substr($line, 0, 1);
                    if (ord($firstChar) != 45) { // skip SQL comment lines starting with "-"
                        if (preg_match("/di_site/", $line)) {
                            $line = preg_replace($pattern, $replace, $line);
                        }
                        $sql .= $line . "\n";
                    }
                }
                $newDBsql = $dirBase . "/" . $new_dbName . ".sql";
                if (!($handle = fopen($newDBsql, "w"))) {
                    die("Cannot open the file");
                } else {
                    fwrite($handle, $sql);
                    fclose($handle);
                }
            }
        }
    }
}
Instead of building up the whole file contents you're going to write (which takes a lot of memory), consider using a stream filter.
A stream filter operates on a single buffered read from the underlying stream, typically around 8kB of data. The following example code defines such a filter; it splits each bucket into separate lines and calls your code to make the changes.
<?php
class myfilter extends \php_user_filter
{
    private $buffer; // internal buffer to create data buckets with
    private $pattern = ['/di_site/'];
    private $replace = ['au_site'];

    function filter($in, $out, &$consumed, $closing)
    {
        while ($bucket = stream_bucket_make_writeable($in)) {
            $parts = preg_split('/(\n|\r\n)/', $bucket->data, -1, PREG_SPLIT_DELIM_CAPTURE);
            $buffer = '';
            // each line spans two array elements: the text and its delimiter
            for ($i = 0, $n = count($parts); $i + 1 < $n; $i += 2) {
                $line = $parts[$i] . $parts[$i + 1];
                $buffer .= $this->treat_line($line);
                $consumed += strlen($line);
            }
            stream_bucket_append($out, stream_bucket_new($this->stream, $buffer));
        }
        return PSFS_PASS_ON;
    }

    /** THIS IS YOUR CODE **/
    function treat_line($line)
    {
        $line = trim($line, "\t\n\r\0\x0B");
        $firstChar = substr($line, 0, 1);
        if (ord($firstChar) != 45) {
            if (preg_match("/di_site/", $line)) {
                $line = preg_replace($this->pattern, $this->replace, $line);
            }
        }
        return $line . "\n";
    }

    function onCreate()
    {
        $this->buffer = fopen('php://memory', 'r+');
    }

    function onClose()
    {
        fclose($this->buffer);
    }
}

stream_filter_register("myfilter", "myfilter");

// open input and attach filter
$in = fopen(__FILE__, 'r');
stream_filter_prepend($in, 'myfilter');

// open output stream and start copying
$out = fopen('php://stdout', 'w');
stream_copy_to_stream($in, $out);
fclose($out);
fclose($in);
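To write the rewritten dump to the new file instead of stdout, the same copy loop can target the output path (a sketch; the input file name here is an assumption based on the question):
// open the original dump and attach the rewriting filter
$in = fopen('di_site.sql', 'r');
stream_filter_prepend($in, 'myfilter');
// stream straight into the new dump; only one bucket (~8kB) is in memory at a time
$out = fopen('au_site.sql', 'w');
stream_copy_to_stream($in, $out);
fclose($out);
fclose($in);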

Tailing a log file and writing the results to a new file

I'm not sure how to word this, so I'll type it out and then edit and answer any questions that come up.
Currently on my local network device (PHP4 based) I'm using this to tail a live system log file: http://commavee.com/2007/04/13/ajax-logfile-tailer-viewer/
This works well: every second it loads an external page (logfile.php) that does a tail -n 100 logfile.log. The script doesn't do any buffering, so the results it displays on screen are the last 100 lines from the log file.
The logfile.php contains:
<?
// logtail.php
$cmd = "tail -10 /path/to/your/logs/some.log";
exec("$cmd 2>&1", $output);
foreach ($output as $outputline) {
    echo "$outputline\n";
}
?>
This part is working well.
I have adapted the logfile.php page to write $outputline to a new text file, simply using fwrite($fp, $outputline."\n");
Whilst this works, I am having issues with duplication in the new file that is created.
Obviously, each run of tail -n 100 can return some of the same lines as the previous run, so as this repeats I end up with many duplicated lines in the new text file.
I can't directly compare the line I'm about to write to the previous lines, as there could be legitimate identical matches.
Is there any way I can compare the current block of 100 lines with the previous block and then write only the lines that don't match? Again, there is the possible issue that blocks A and B will contain identical lines that are needed...
Is it possible to update logfile.php to note the position it last looked at in my logfile and then only read the next 100 lines from there and write those to the new file?
The log file could be up to 500MB, so I don't want to read it all in each time.
Any advice or suggestions welcome..
Thanks
UPDATE # 16:30
I've sort of got this working using:
$file = "/logs/syst.log";
$handle = fopen($file, "r");
if (isset($_SESSION['ftell'])) {
    clearstatcache();
    fseek($handle, $_SESSION['ftell']);
    while ($buffer = fgets($handle)) {
        echo $buffer . "<br/>";
        #ob_flush(); #flush();
    }
    $_SESSION['ftell'] = ftell($handle); // save the position before closing
    fclose($handle);
} else {
    fseek($handle, -1024, SEEK_END);
    $_SESSION['ftell'] = ftell($handle);
    fclose($handle);
}
This seems to work, but it loads the entire file first and then just the updates.
How would I get it to start with the last 50 lines and then just the updates?
Thanks :)
UPDATE 04/06/2013
Whilst this works, it's very slow with large files.
I've tried this code, and it seems faster, but it doesn't just read from where it left off.
function last_lines($path, $line_count, $block_size = 512)
{
    $lines = array();
    // we will always have a fragment of a non-complete line;
    // keep it in here till we have our next entire line.
    $leftover = "";
    $fh = fopen($path, 'r');
    // go to the end of the file
    fseek($fh, 0, SEEK_END);
    do {
        // need to know whether we can actually go back $block_size bytes
        $can_read = $block_size;
        if (ftell($fh) < $block_size) {
            $can_read = ftell($fh);
        }
        // go back as many bytes as we can, read them into $data and then
        // move the file pointer back to where we were.
        fseek($fh, -$can_read, SEEK_CUR);
        $data = fread($fh, $can_read);
        $data .= $leftover;
        fseek($fh, -$can_read, SEEK_CUR);
        // split lines by \n, then reverse them. The last element is most
        // likely not a complete line, which is why we do not directly add
        // it but append it to the data read the next time.
        $split_data = array_reverse(explode("\n", $data));
        $new_lines = array_slice($split_data, 0, -1);
        $lines = array_merge($lines, $new_lines);
        $leftover = $split_data[count($split_data) - 1];
    } while (count($lines) < $line_count && ftell($fh) != 0);
    if (ftell($fh) == 0) {
        $lines[] = $leftover;
    }
    fclose($fh);
    // usually we will read too many lines; correct that here.
    return array_slice($lines, 0, $line_count);
}
Is there any way this can be amended so it will read from the last known position?
Thanks
Introduction
You can tail a file by tracking the last position:
Example
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
$data = $tail->tail(100);
// Save $data to new file
TailLog is a simple class I wrote for this task; here is a simple example to show it's actually tailing the file.
Simple Test
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
// Some Random Data
$data = array_chunk(range("a", "z"), 3);
// Write Log
file_put_contents($file, implode("\n", array_shift($data)));
// First Tail (2) Run
print_r($tail->tail(2));
// Run Tail (2) Again
print_r($tail->tail(2));
// Write Another data to Log
file_put_contents($file, "\n" . implode("\n", array_shift($data)), FILE_APPEND);
// Call Tail Again after writing Data
print_r($tail->tail(2));
// See the full content
print_r(file_get_contents($file));
Output
// First Tail (2) Run
Array
(
[0] => c
[1] => b
)
// Run Tail (2) Again
Array
(
)
// Call Tail Again after writing Data
Array
(
[0] => f
[1] => e
)
// See the full content
a
b
c
d
e
f
Real Time Tailing
while (true) {
    $data = $tail->tail(100);
    // write data to another file
    sleep(5);
}
Note: tailing 100 lines does not mean it will always return 100 lines; it returns the new lines that were added, and 100 is just the maximum number of lines to return. This might not be efficient where you have heavy logging of more than 100 lines per second.
Tail Class
class TailLog
{
    private $file;
    private $data;
    private $timeout = 5;
    private $lock;

    function __construct($file)
    {
        $this->file = $file;
        $this->lock = new TailLock($file);
    }

    public function tail($lines)
    {
        $pos = -2;
        $t = $lines;
        $fp = fopen($this->file, "r");
        $break = false;
        $line = "";
        $text = array();
        while ($t > 0) {
            $c = "";
            // search backwards for the end of a line
            while ($c != "\n" && $c != PHP_EOL) {
                if (fseek($fp, $pos, SEEK_END) == -1) {
                    $break = true;
                    break;
                }
                if (ftell($fp) < $this->lock->getPosition()) {
                    break;
                }
                $c = fgetc($fp);
                $pos--;
            }
            if (ftell($fp) < $this->lock->getPosition()) {
                break;
            }
            $t--;
            $break && rewind($fp);
            $text[$lines - $t - 1] = fgets($fp);
            if ($break) {
                break;
            }
        }
        // move to the end of the file
        fseek($fp, 0, SEEK_END);
        // save the position for the next run
        $this->lock->save(ftell($fp));
        fclose($fp);
        return array_map("trim", $text);
    }
}
Tail Lock
class TailLock
{
    private $file;
    private $lock;
    private $data;

    function __construct($file)
    {
        $this->file = $file;
        $this->lock = $file . ".tail";
        touch($this->lock);
        if (!is_file($this->lock))
            throw new Exception("Can't create lock file");
        $this->data = json_decode(file_get_contents($this->lock));
        // check that the lock file contains valid JSON and that data in the
        // original file has not been deleted - you expect it to grow, not shrink
        if (!$this->data || $this->data->size > filesize($this->file)) {
            $this->reset();
        }
    }

    function getPosition()
    {
        return $this->data->position;
    }

    function reset()
    {
        $this->data = new stdClass();
        $this->data->size = filesize($this->file);
        $this->data->modification = filemtime($this->file);
        $this->data->position = 0;
        $this->update();
    }

    function save($pos)
    {
        $this->data = new stdClass();
        $this->data->size = filesize($this->file);
        $this->data->modification = filemtime($this->file);
        $this->data->position = $pos;
        $this->update();
    }

    function update()
    {
        return file_put_contents($this->lock, json_encode($this->data, 128)); // 128 = JSON_PRETTY_PRINT
    }
}
Not really clear on how you want to use the output, but would something like this work?
$dat = (int) file_get_contents("tracker.dat");
$fp = fopen("/logs/syst.log", "r");
fseek($fp, $dat, SEEK_SET);
ob_start();
// alternatively you can do a while-fgets loop if you want to
// interpret the file or do something with each line
fpassthru($fp);
$pos = ftell($fp); // remember the position; ftell() is invalid after fclose()
fclose($fp);
echo nl2br(ob_get_clean());
file_put_contents("tracker.dat", $pos);
tracker.dat is just a text file that contains the read position from the previous run. I'm just seeking to that position and piping the rest to the output buffer.
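One caveat worth adding: on the very first run tracker.dat won't exist yet, so the read could be guarded, e.g.:
// start from the beginning of the log when no tracker file exists yet
$dat = is_file("tracker.dat") ? (int) file_get_contents("tracker.dat") : 0;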
Use tail -c <number of bytes> instead of a number of lines, and then check the file size. The rough idea is:
$old_file_size = 0;
$max_bytes = 512;

function last_lines($path)
{
    global $old_file_size, $max_bytes;
    $new_file_size = filesize($path);
    $pending_bytes = $new_file_size - $old_file_size;
    if ($pending_bytes > $max_bytes) {
        $pending_bytes = $max_bytes;
    }
    // note: PHP concatenates strings with ".", not "+"
    exec("tail -c " . $pending_bytes . " " . escapeshellarg($path), $output);
    $old_file_size = $new_file_size;
    return $output;
}
The advantage is that you can do away with all the special processing and get good performance. The disadvantage is that you have to split the output into lines yourself, and you could end up with an unfinished line at the end. But this isn't a big deal: you can easily work around it by omitting the last line from the output (and subtracting the byte length of that last line from $old_file_size).
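A sketch of that workaround, under the same assumptions as the function above (it swaps exec() for shell_exec() so the raw output, including any final newline, is preserved; exec() would strip the newlines we need to inspect):
// read the pending bytes with their newlines intact
$raw = shell_exec("tail -c " . $pending_bytes . " " . escapeshellarg($path));
$lines = explode("\n", $raw);
if (substr($raw, -1) !== "\n") {
    // the chunk ended mid-line: hold the fragment back and
    // re-read those bytes on the next call
    $fragment = array_pop($lines);
    $old_file_size -= strlen($fragment);
} else {
    array_pop($lines); // drop the empty element after the final "\n"
}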

How to read multiple lines from socket stream?

I want to retrieve email from Gmail's IMAP server, but the problem is that the responses from the server are multiple lines long (as demonstrated here) and fgets only retrieves one line.
I've tried using fgets, fread and socket_read, but none of them work, so either I'm using the wrong method or using the methods incorrectly. I also tried this tutorial, but it didn't work either. I would appreciate it if someone could help me with this.
Thanks, and I'm really sorry if this is an amateur question.
Code:
<?php
$stuff = fsockopen('ssl://imap.gmail.com', 993);
$reply = fgets($stuff, 4096);
echo 'connection: ' . $reply . '<br/>';
$request = fputs($stuff, "a1 LOGIN MyUserName Password\r\n");
$receive = socket_read($stuff, 4096); // socket_read() expects a sockets-extension
                                      // resource, not a stream from fsockopen()
echo 'login: ' . $receive . '<br/>';
$request = fputs($stuff, "a2 EXAMINE INBOX\r\n");
$reply = '';
while (!feof($stuff)) // blocks until the server closes the connection
    $reply .= fread($stuff, 4096);
echo $reply;
/*
$request = fputs($stuff, "a3 FETCH 1 BODY[]\r\n");
$reply = fgets($stuff);
echo $reply;
*/
?>
Max's answer below works. This is my implementation of it.
private function Response($instructionNumber)
{
    $end_of_response = false;
    $response = '';
    while (!$end_of_response) {
        $line = fgets($this->connection, self::responseSize);
        $response .= $line . '<br/>';
        if (preg_match("/$instructionNumber (OK|NO|BAD)/", $response, $responseCode))
            $end_of_response = true;
    }
    return array(
        'code' => $responseCode[1],
        'response' => $response
    );
}
Generally, you know to stop reading when you get the OK/BAD/NO response for the tag you sent. If you send a1 LOGIN ... you stop when you get a1 OK/BAD/NO ....
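A minimal sketch of that rule (the tag, credentials, and buffer size are placeholders):
// send a tagged command, then read until that tag's status line arrives
$tag = "a1";
fwrite($socket, "$tag LOGIN MyUserName Password\r\n");
$response = '';
while (($line = fgets($socket, 4096)) !== false) {
    $response .= $line;
    if (preg_match("/^$tag (OK|NO|BAD)/", $line)) {
        break; // tagged completion line: the response is complete
    }
}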
It's been a while since I wrote PHP, and I don't know that much about IMAP, but if it's anything like NNTP, your code would look a bit like this (written in the SO editor, so it might be bugged):
$buffer = '';
function read_line($socket) {
    global $buffer;
    while (strpos($buffer, "\n") === false)
        $buffer .= fread($socket, 1024);
    $lineEnd = strpos($buffer, "\n");
    $line = substr($buffer, 0, $lineEnd - 1); // drop the "\r" of the CRLF pair
    $buffer = substr($buffer, $lineEnd + 1);  // skip past the "\n"
    return $line;
}
function send_line($socket, $line) {
    fwrite($socket, $line);
}
$socket = fsockopen('ssl://imap.gmail.com', 993);
$welcome = read_line($socket);
send_line($socket, "a1 LOGIN MyUserName Password\r\n");
$reply = read_line($socket);
send_line($socket, "a2 EXAMINE INBOX\r\n");
while (($reply = trim(read_line($socket))) != '.') {
    echo $reply . PHP_EOL;
}
echo "Done";
The basic concepts are:
Always buffer all incoming data. PHP doesn't handle lines very well, so do the splitting yourself.
Don't randomly read everything; know what to expect. You expect one welcome line, LOGIN has one reply, and EXAMINE INBOX keeps outputting data until there's a single dot, so stop reading as soon as you see that.
You'll most likely want a simple function to take care of the reading. You could even write another function to make it easy:
function read_block($socket) {
    $block = '';
    while ('.' != trim($reply = read_line($socket))) {
        $block .= $reply;
    }
    return $block;
}

How to check whether stream has any data?

This what I'm trying to do:
$output = '';
$stream = popen("some-long-running-command 2>&1", 'r');
while (!feof($stream)) {
    $meta = stream_get_meta_data($stream);
    if ($meta['unread_bytes'] > 0) {
        $line = fgets($stream);
        $output .= $line;
    }
    echo ".";
}
$code = pclose($stream);
It looks like this code is not correct, since it gets stuck at the call to stream_get_meta_data(). What is the right way to check whether the stream has data to read? The whole point here is to avoid blocking in fgets().
The correct way to do this is with stream_select():
$stream = popen("some-long-running-command 2>&1", 'r');
while (!feof($stream)) {
    $r = array($stream);
    $w = $e = NULL;
    if (stream_select($r, $w, $e, 1)) {
        // there is data to be read
    }
}
$code = pclose($stream);
One thing to note, though (I'm not sure about this): it may be the feof() check that is "blocking" - the loop may never end because the child process does not close its STDOUT descriptor.
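Building on that, a sketch of what the read inside the select loop might look like (my addition, not part of the answer above; an empty fread() result after a successful select() signals that the pipe was closed):
$stream = popen("some-long-running-command 2>&1", 'r');
$output = '';
while (!feof($stream)) {
    $r = array($stream);
    $w = $e = NULL;
    if (stream_select($r, $w, $e, 1)) {
        $chunk = fread($stream, 4096); // won't block: select said data is ready
        if ($chunk === '' || $chunk === false) {
            break; // the child closed its end of the pipe
        }
        $output .= $chunk;
    } else {
        echo "."; // select timed out: the command is still running
    }
}
$code = pclose($stream);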

PHP: How to read a file live that is constantly being written to

I want to read a log file that is constantly being written to. It resides on the same server as the application. The catch is the file gets written to every few seconds, and I basically want to tail the file on the application in real-time.
Is this possible?
You need to loop with sleep:
$file = '/home/user/youfile.txt';
$lastpos = 0;
while (true) {
    usleep(300000); // 0.3 s
    clearstatcache(false, $file);
    $len = filesize($file);
    if ($len < $lastpos) {
        // file deleted or reset
        $lastpos = $len;
    } elseif ($len > $lastpos) {
        $f = fopen($file, "rb");
        if ($f === false)
            die();
        fseek($f, $lastpos);
        while (!feof($f)) {
            $buffer = fread($f, 4096);
            echo $buffer;
            flush();
        }
        $lastpos = ftell($f);
        fclose($f);
    }
}
(Tested - it works.)
Yes, you need to sleep some time in the loop, but you don't have to reopen the file. I was just looking at a similar problem: I wanted to read a file that might have been changed since the last read.
The problem is that once the stream has reached end of file (EOF), it stops reading. The solution is to clear the EOF state by re-seeking to the current position with fseek($fh, ftell($fh)).
A complete program that waits for input in a text file might look like this one:
<?php
$fh = fopen('/var/log/system', 'r');
while (true) {
    $line = fgets($fh);
    if ($line !== false) {
        // show the line, or send it via email or to a websocket...
    } else {
        // sleep for 0.1 seconds (or more?)
        usleep(0.1 * 1000000);
        fseek($fh, ftell($fh)); // clear the EOF state
    }
}
For example:
$log_file = '/tmp/test/log_file.log';
$f = fopen($log_file, 'a+');
$fr = fopen($log_file, 'r');
for ($i = 1; $i < 10; $i++) {
    fprintf($f, "Line: %u\n", $i);
    sleep(2);
    echo fread($fr, 1024) . "\n";
}
fclose($fr);
fclose($f);

// or, if you want to use tail:
$f = fopen($log_file, 'a+');
for ($i = 1; $i < 10; $i++) {
    fprintf($f, "Line: %u\n", $i);
    sleep(2);
    $result = array();
    exec('tail -n 1 ' . $log_file, $result);
    echo "\n" . $result[0];
}
fclose($f);
You can close the file handle when it is not being used (once a portion of data has been written), or you can use a buffer to store the data and write it to the file only when the buffer is full. That way you won't have the file open all the time.
If you want to get everything that is written to the file as soon as it is written there, you might need to extend the writing code so that it outputs to other places too (screen, some variable, another file...) - see the sketch below.
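A minimal sketch of the buffering idea (an illustration only; the class and method names are made up):
// collects writes in memory and only touches the log file once the
// buffer grows past $limit bytes, so the file isn't held open
class BufferedLogWriter
{
    private $path;
    private $limit;
    private $buffer = '';

    function __construct($path, $limit = 8192)
    {
        $this->path = $path;
        $this->limit = $limit;
    }

    function write($line)
    {
        $this->buffer .= $line;
        echo $line; // mirror the data to the screen too, per the note above
        if (strlen($this->buffer) >= $this->limit) {
            $this->flush();
        }
    }

    function flush()
    {
        if ($this->buffer !== '') {
            file_put_contents($this->path, $this->buffer, FILE_APPEND);
            $this->buffer = '';
        }
    }
}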
<?php
$fp = fopen('/var/log/syslog', 'r'); // read only
while (true) {
    // full line found? (searches for a line break)
    $line = stream_get_line($fp, 1024 * 1024, "\n");
    if ($line === false) {
        usleep(100000); // 100 ms
        continue;
    }
    echo 'line:' . $line . PHP_EOL;
}
// -- code impossible to reach --
// fclose($fp);
Just an idea: did you think of using the *nix tail command? Execute the command from PHP (with a parameter that will return a certain number of lines) and process the results in your PHP script.
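For instance, a small sketch of that approach (the path and line count are placeholders):
// fetch the last 50 lines of the log via the system tail command
$lines = array();
exec('tail -n 50 ' . escapeshellarg('/var/log/syslog'), $lines);
foreach ($lines as $line) {
    // process each line however the application needs
    echo $line, PHP_EOL;
}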
