I want to play video from a remote server, so I wrote this function:
$remoteFile = 'blabla.com/video_5GB.mp4';
play($remoteFile);
function play($url){
ini_set('memory_limit', '1024M');
set_time_limit(3600);
ob_start();
if (isset($_SERVER['HTTP_RANGE'])) $opts['http']['header'] = "Range: " . $_SERVER['HTTP_RANGE'];
$opts['http']['method'] = "HEAD";
$conh = stream_context_create($opts);
$opts['http']['method'] = "GET";
$cong = stream_context_create($opts);
$out[] = file_get_contents($url, false, $conh);
$out[] = $http_response_header;
ob_end_clean();
array_map("header", $http_response_header);
readfile($url, false, $cong);
}
The above function works very well for playing videos, but I don't want to burden the remote server.
My question is: how can I cache the video files on my server, refreshing them every 5 hours? If possible, the cache folder should contain small files (5MB/10MB each) split from the remote video.
As mentioned in my comment, the following code has been tested only on a small selection of MP4 files. It could probably do with some more work but it does fill your immediate needs as it is.
It uses exec() to spawn a separate process that generates the cache files when they are needed, i.e. on the first request or after 5 hours. Each video must have its own cache folder because the cached chunks are simply called 1, 2, 3, etc. Please see additional comments in the code.
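For orientation, a cache folder for one video ends up looking roughly like this (the byte counts assume the 10MB chunk size used in cache.php below):
<sha1-of-url>/
    1               first chunk of the video (bytes 0-10485759)
    2               second chunk (bytes 10485760-20971519)
    ...
    caching.log     present only while the cache script is running
    completed.log   JSON index of the chunk byte ranges, e.g. [[0,10485760],[10485760,20971520],...]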
play.php - This is the script that will be called by the users from the browser
<?php
ini_set('memory_limit', '1024M');
set_time_limit(3600);
$remoteFile = 'blabla.com/video_5GB.mp4';
play($remoteFile);
/**
* @param string $url
*
* This will serve the video from the remote url
*/
function playFromRemote($url)
{
ob_start();
$opts = array();
if(isset($_SERVER['HTTP_RANGE']))
{
$opts['http']['header'] = "Range: ".$_SERVER['HTTP_RANGE'];
}
$opts['http']['method'] = "HEAD";
$conh = stream_context_create($opts);
$opts['http']['method'] = "GET";
$cong = stream_context_create($opts);
$out[] = file_get_contents($url, false, $conh);
$out[] = $http_response_header;
ob_end_clean();
$fh = fopen('response.log', 'a');
if($fh !== false)
{
fwrite($fh, print_r($http_response_header, true)."\n\n\n\n");
fclose($fh);
}
array_map("header", $http_response_header);
readfile($url, false, $cong);
}
/**
* @param string $cacheFolder Directory in which to find the cached chunk files
* @param string $url
*
* This will serve the video from the cache. It uses a "completed.log" file which holds the byte ranges of each chunk;
* this makes it easier to locate the first chunk of a range request. The file is generated by the cache script.
*/
function playFromCache($cacheFolder, $url)
{
$bytesFrom = 0;
$bytesTo = 0;
if(isset($_SERVER['HTTP_RANGE']))
{
//the client asked for a specific range, extract those from the http_range server var
//can take the form "bytes=123-567" or just a from "bytes=123-"
$matches = array();
if(preg_match('/^bytes=(\d+)-(\d+)?$/', $_SERVER['HTTP_RANGE'], $matches))
{
$bytesFrom = intval($matches[1]);
if(!empty($matches[2]))
{
$bytesTo = intval($matches[2]);
}
}
}
//completed log is a json_encoded file containing an array of byte ranges that directly
//correspond with the chunk files generated by the cache script
$log = json_decode(file_get_contents($cacheFolder.DIRECTORY_SEPARATOR.'completed.log'));
$totalBytes = 0;
$chunk = 0;
foreach($log as $ind => $bytes)
{
//find the first chunk file we need to open
if($bytes[0] <= $bytesFrom && $bytes[1] > $bytesFrom)
{
$chunk = $ind + 1;
}
//and while we are at it save the last byte range "to" which is the total number of bytes of all the chunk files
$totalBytes = $bytes[1];
}
if($bytesTo === 0)
{
if($totalBytes === 0)
{
//if we get here then something is wrong with the cache, revert to serving from the remote
playFromRemote($url);
return;
}
$bytesTo = $totalBytes - 1;
}
//calculate how many bytes will be returned in this request
$contentLength = $bytesTo - $bytesFrom + 1;
//send some headers - I have hardcoded MP4 here because that is all I have developed with
//if you are using different video formats then testing and changes will no doubt be required
header('Content-Type: video/mp4');
header('Content-Length: '.$contentLength);
header('Accept-Ranges: bytes');
//Send a header so we can recognise that the content was indeed served by the cache
header('X-Cached-Date: '.(date('Y-m-d H:i:s', filemtime($cacheFolder.DIRECTORY_SEPARATOR.'completed.log'))));
if($bytesFrom > 0)
{
//We are sending back a range so it needs a header and the http response must be 206: Partial Content
header(sprintf('content-range: bytes %s-%s/%s', $bytesFrom, $bytesTo, $totalBytes));
http_response_code(206);
}
$bytesSent = 0;
while(is_file($cacheFolder.DIRECTORY_SEPARATOR.$chunk) && $bytesSent < $contentLength)
{
$cfh = fopen($cacheFolder.DIRECTORY_SEPARATOR.$chunk, 'rb');
if($cfh !== false)
{
//if we are fetching a range then we might need to seek the correct starting point in the first chunk we look at
//this check will be performed on all chunks but only the first one should need seeking so no harm done
if($log[$chunk - 1][0] < $bytesFrom)
{
fseek($cfh, $bytesFrom - $log[$chunk - 1][0]);
}
//read and send data until the end of the file or we have sent what was requested
while(!feof($cfh) && $bytesSent < $contentLength)
{
$data = fread($cfh, 1024);
//check we are not going to be sending too much back and if we are then truncate the data to the correct length
if($bytesSent + strlen($data) > $contentLength)
{
$data = substr($data, 0, $contentLength - $bytesSent);
}
$bytesSent += strlen($data);
echo $data;
}
fclose($cfh);
}
//move to the next chunk
$chunk ++;
}
}
function play($url)
{
//I have chosen a simple way to make a folder name, this can be improved any way you need
//IMPORTANT: Each video must have its own cache folder
$cacheFolder = sha1($url);
if(!is_dir($cacheFolder))
{
mkdir($cacheFolder, 0755, true);
}
//First check if we are currently in the process of generating the cache and so just play from remote
if(is_file($cacheFolder.DIRECTORY_SEPARATOR.'caching.log'))
{
playFromRemote($url);
}
//Otherwise check if we have never completed the cache or it was completed 5 hours ago and if so spawn a process to generate the cache
elseif(!is_file($cacheFolder.DIRECTORY_SEPARATOR.'completed.log') || filemtime($cacheFolder.DIRECTORY_SEPARATOR.'completed.log') + (5 * 60 * 60) < time())
{
//fork the caching to a separate process - the & echo $! at the end causes the process to run as a background task
//and print the process ID returning immediately
//The cache script can be anywhere, pass the location to sprintf in the first position
//A base64 encoded url is passed in as argument 1, sprintf second position
$cmd = sprintf('php %scache.php %s & echo $!', __DIR__.DIRECTORY_SEPARATOR, base64_encode($url));
$pid = exec($cmd);
//with that started we need to serve the request from the remote url
playFromRemote($url);
}
else
{
//if we got this far then we have a completed cache so serve from there
playFromCache($cacheFolder, $url);
}
}
cache.php - This script will be called by play.php via exec()
<?php
//This script expects as argument 1 a base64 encoded url
if(count($argv)!==2)
{
die('Invalid Request!');
}
$url = base64_decode($argv[1]);
//make sure to use the same method of obtaining the cache folder name as the main play script
//or change the code to pass it in as an argument
$cacheFolder = sha1($url);
if(!is_dir($cacheFolder))
{
die('Invalid Arguments!');
}
//double check it is not already running
if(is_file($cacheFolder.DIRECTORY_SEPARATOR.'caching.log'))
{
die('Already Running');
}
//create a file so we know this has started, the file will be removed at the end of the script
file_put_contents($cacheFolder.DIRECTORY_SEPARATOR.'caching.log', date('d/m/Y H:i:s'));
//get rid of the old completed log
if(is_file($cacheFolder.DIRECTORY_SEPARATOR.'completed.log'))
{
unlink($cacheFolder.DIRECTORY_SEPARATOR.'completed.log');
}
$bytesFrom = 0;
$bytesWritten = 0;
$totalBytes = 0;
//this is the size of the chunk files, currently 10MB
$maxSizeInBytes = 10 * 1024 * 1024;
$chunk = 1;
//open the url for binary reading and first chunk for binary writing
$fh = fopen($url, 'rb');
$cfh = fopen($cacheFolder.DIRECTORY_SEPARATOR.$chunk, 'wb');
if($fh !== false && $cfh!==false)
{
$log = array();
while(!feof($fh))
{
$data = fread($fh, 1024);
fwrite($cfh, $data);
$totalBytes += strlen($data); //use actual length here
$bytesWritten += strlen($data);
//if we are on or passed the chunk size then close the chunk and open a new one
//keeping a log of the byte range of the chunk
if($bytesWritten>=$maxSizeInBytes)
{
$log[$chunk-1] = array($bytesFrom,$totalBytes);
$bytesFrom = $totalBytes;
fclose($cfh);
$chunk++;
$bytesWritten = 0;
$cfh = fopen($cacheFolder.DIRECTORY_SEPARATOR.$chunk, 'wb');
}
}
fclose($fh);
$log[$chunk-1] = array($bytesFrom,$totalBytes);
fclose($cfh);
//write the completed log. This is a json encoded string of the chunk byte ranges and will be used
//by the play script to quickly locate the starting chunk of a range request
file_put_contents($cacheFolder.DIRECTORY_SEPARATOR.'completed.log', json_encode($log));
//finally remove the caching log so the play script doesn't think the process is still running
unlink($cacheFolder.DIRECTORY_SEPARATOR.'caching.log');
}
I use a PHP script to upload some daily videos to a Youtube channel (based on this code sample: https://developers.google.com/youtube/v3/code_samples/php#resumable_uploads)
The problem is after this loop:
// Read the media file and upload it chunk by chunk.
$status = false;
$handle = fopen($videoPath, "rb");
while (!$status && !feof($handle)) {
$chunk = fread($handle, $chunkSizeBytes);
$status = $media->nextChunk($chunk);
}
Normally the $status variable holds the video id ($status['id']) after the upload is complete, but since mid-January all the uploads have failed with one of these errors:
The $status variable value stays "false"
Inside the catch, the Google_Service_Exception message is like "A service error occurred: Error calling PUT https://www.googleapis.com/upload/youtube/v3/videos?part=status%2Csnippet&uploadType=resumable&upload_id=xxx: (400) Invalid request. The number of bytes uploaded is required to be equal or greater than 262144, except for the final request (it's recommended to be the exact multiple of 262144). The received request contained nnn bytes, which does not meet this requirement.", where nnn is less than 262144 and seems to be the last request.
When I access the Youtube channel I can see the new videos with a status "Preparing upload" or stuck with a fixed percentage.
My code has not changed for months but now I can't upload any new video.
Can anyone please help me figure out what's wrong? Thanks in advance!
The solution proposed by @pom (thanks by the way) doesn't really solve this issue if you need to implement a progress indicator.
I'm facing the same problem as @astor; after calling ->nextChunk for the final chunk I get:
The number of bytes uploaded is required to be equal or greater than
262144, except for the final request (it's recommended to be the exact
multiple of 262144). The received request contained 38099 bytes,
which does not meet this requirement.
See this log file:
The code is copy-paste from the google/google-api-php-client doc.
The log file shows the size of the first chunk being a bit larger than the others (except the last one), which I couldn't explain. Another "strange" thing is that its size changes test after test.
The size of the last chunk seems correct, and by the end all bytes should have been uploaded.
However, uploading the remaining bytes in the last chunk with ->nextChunk($chunk) throws this error.
One important detail is that my source file is on AWS S3. File operations (filesize, fread, fopen) are done with the Amazon S3 Stream Wrapper.
It may add some headers, or something else I'm not aware of, that causes the problem.
EDIT: I don't have such problem with local files
Has anyone run into the same problem?
...
$chunkSizeBytes = 5 * 1024 * 1024;
$client->setDefer(true);
$request = $service->files->create($file);
$media = new Google_Http_MediaFileUpload(
$client,
$request,
'text/plain',
null,
true,
$chunkSizeBytes
);
$media->setFileSize(filesize(TESTFILE));
// Upload the various chunks. $status will be false until the process is
// complete.
$status = false;
$handle = fopen(TESTFILE, "rb");
while (!$status && !feof($handle)) {
// read until you get $chunkSizeBytes from TESTFILE
// fread will never return more than 8192 bytes if the stream is read buffered and it does not represent a plain file
// An example of a read buffered file is when reading from a URL
$chunk = readVideoChunk($handle, $chunkSizeBytes);
$status = $media->nextChunk($chunk);
}
// The final value of $status will be the data from the API for the object
// that has been uploaded.
$result = false;
if ($status != false) {
$result = $status;
}
fclose($handle);
}
function readVideoChunk ($handle, $chunkSize)
{
$byteCount = 0;
$giantChunk = "";
while (!feof($handle)) {
// fread will never return more than 8192 bytes if the stream is read buffered and it does not represent a plain file
$chunk = fread($handle, 8192);
$byteCount += strlen($chunk);
$giantChunk .= $chunk;
if ($byteCount >= $chunkSize)
{
return $giantChunk;
}
}
return $giantChunk;
}
Maybe you can try it like this and let MediaFileUpload cut the chunks itself:
$media = new \Google_Http_MediaFileUpload(
$client,
$insertRequest,
'video/*',
file_get_contents($pathToFile), // <== pass the file content instead of null
true,
$chunkSizeBytes
);
$media->setFileSize($size);
$status = false;
while (!$status) {
$status = $media->nextChunk();
}
In the previous answer from eveyrat everything is okay except one thing: readVideoChunk() doesn't always return the exact number of bytes required by the chunked-upload docs. Moreover, Google accepts only as many bytes per request as were sent in the first chunk request. I fixed that issue by buffering the overlapping bytes myself:
(consider the following piece of code as a method of a class; the class has to have a $remaining property)
function readVideoChunk ($handle, $chunkSize)
{
$byteCount = 0;
$giantChunk = "";
while (!feof($handle)) {
// fread will never return more than 8192 bytes if the stream is read buffered and it does not represent a plain file
$chunk = fread($handle, 8192);
$byteCount += strlen($chunk);
$giantChunk .= $chunk;
if ($byteCount > $chunkSize)
{
break;
}
}
$chunks = str_split($this->remaining . $giantChunk, $chunkSize);
//return the first $chunkSize bytes and buffer everything beyond them for the next call
$giantChunk = array_shift($chunks);
$this->remaining = implode('', $chunks);
return $giantChunk;
}
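A usage sketch under those assumptions (ChunkedReader is a hypothetical name for the class holding the method and its $remaining property, initialised to ''; $media, $handle and $chunkSizeBytes are set up as in the earlier snippets — note that 5 * 1024 * 1024 is 20 × 262144, so the default chunk size satisfies the API's multiple-of-262144 recommendation):
$reader = new ChunkedReader(); // hypothetical wrapper class
$status = false;
while (!$status && !feof($handle)) {
    $chunk = $reader->readVideoChunk($handle, $chunkSizeBytes);
    $status = $media->nextChunk($chunk);
}
// Caveat: if bytes are still buffered in $remaining once EOF is reached,
// they must be flushed with one final nextChunk() call.
fclose($handle);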
I have a gz-compressed file and I am uncompressing it to a normal file, so I use fwrite(). It works fine when the uncompression is done in a single PHP request.
Because the compressed files are large, I have to mind PHP timeouts: I uncompress for up to 30 seconds, stop the process, and store the current offset of the gz file using gztell(), then continue from where I left off in the next PHP request.
I call gzseek() with the saved offset and continue the uncompression, but gzread() continuously gives an empty string:
function gz_uncompress_file($source, $offset = 0){
$dest = str_replace('.gz', '', $source);
$fp_in = gzopen($source, 'rb');
if (empty($fp_in)) {
return 'Cannot open gzfile to uncompress sql';
}
$fp_out = fopen($dest, 'ab');
if (empty($fp_out)) {
return 'Cannot open temp file to uncompress sql';
}
gzseek($fp_in, $offset);
$break = false;
while (!gzeof($fp_in)){
$chunk_data = gzread($fp_in, 1024 * 512);
if (empty($chunk_data)) {
return 'empty string so stop the process';
}
fwrite($fp_out, $chunk_data);
//Clearing to save memory
unset($chunk_data);
if(!is_time_out()){
continue;
}
$break = true;
$offset = gztell($fp_in);
break;
}
fclose($fp_out);
gzclose($fp_in);
if ($break) {
//Saving offset
$this->set_option('sql_gz_uncompression_offset', $offset);
return 'continue_from_call';
}
echo "Un compression done";
}
I'm not sure how to word this, so I'll type it out and then edit and answer any questions that come up.
Currently on my local network device (PHP4 based) I'm using this to tail a live system log file: http://commavee.com/2007/04/13/ajax-logfile-tailer-viewer/
This works well: every second it loads an external page (logfile.php) that does a tail -n 100 logfile.log. The script doesn't do any buffering, so the results it displays onscreen are the last 100 lines from the log file.
The logfile.php contains :
<? // logtail.php
$cmd = "tail -10 /path/to/your/logs/some.log";
exec("$cmd 2>&1", $output);
foreach($output as $outputline) {
echo ("$outputline\n");
}
?>
This part is working well.
I have adapted the logfile.php page to write the $outputline to a new text file, simply using fwrite($fp,$outputline."\n");
Whilst this works I am having issues with duplication in the new file that is created.
Obviously each run of tail -n 100 produces results, and the next run could produce some of the same lines; as this repeats I can end up with multiple duplicated lines in the new text file.
I can't directly compare the line I'm about to write to previous lines as there could be identical matches.
Is there any way I can compare the current block of 100 lines with the previous block and then write only the lines that don't match? Again, there's the possible issue that blocks A & B will contain identical lines that are genuinely needed...
Is it possible to update logfile.php to note the position it last looked at in my logfile, then only read the next 100 lines from there and write those to the new file?
The log file could be up to 500MB, so I don't want to read it all in each time.
Any advice or suggestions welcome..
Thanks
UPDATE # 16:30
I've sort of got this working using:
$file = "/logs/syst.log";
$handle = fopen($file, "r");
if(isset($_SESSION['ftell'])) {
clearstatcache();
fseek($handle, $_SESSION['ftell']);
while ($buffer = fgets($handle)) {
echo $buffer."<br/>";
#ob_flush(); #flush();
}
$_SESSION['ftell'] = ftell($handle);
fclose($handle);
} else {
fseek($handle, -1024, SEEK_END);
$_SESSION['ftell'] = ftell($handle);
fclose($handle);
}
This seems to work, but it loads the entire file first and then just the updates.
How would I get it to start with the last 50 lines and then just the updates?
Thanks :)
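(For what it's worth, a minimal sketch of that idea based on the session snippet above: seed the first request from near the end of the file, then resume from the saved offset. The 4096-byte seed is an assumption standing in for "roughly the last 50 lines"; tune it to your line lengths.)
session_start();
$file = "/logs/syst.log";
$handle = fopen($file, "r");
if (isset($_SESSION['ftell'])) {
    // subsequent polls: resume where the previous request left off
    fseek($handle, $_SESSION['ftell']);
} else {
    // first poll: start a fixed distance from the end instead of at byte 0
    // (if the file is shorter than 4096 bytes the seek fails and we read from the start)
    fseek($handle, -4096, SEEK_END);
    fgets($handle); // discard the probably-partial first line
}
while (($buffer = fgets($handle)) !== false) {
    echo $buffer . "<br/>";
}
$_SESSION['ftell'] = ftell($handle); // save the position before closing
fclose($handle);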
UPDATE 04/06/2013
Whilst this works it's very slow with large files.
I've tried this code and it seems faster, but it doesn't just read from where it left off.
function last_lines($path, $line_count, $block_size = 512){
$lines = array();
// we will always have a fragment of a non-complete line
// keep this in here till we have our next entire line.
$leftover = "";
$fh = fopen($path, 'r');
// go to the end of the file
fseek($fh, 0, SEEK_END);
do{
// need to know whether we can actually go back
// $block_size bytes
$can_read = $block_size;
if(ftell($fh) < $block_size){
$can_read = ftell($fh);
}
// go back as many bytes as we can
// read them to $data and then move the file pointer
// back to where we were.
fseek($fh, -$can_read, SEEK_CUR);
$data = fread($fh, $can_read);
$data .= $leftover;
fseek($fh, -$can_read, SEEK_CUR);
// split lines by \n. Then reverse them,
// now the last line is most likely not a complete
// line which is why we do not directly add it, but
// append it to the data read the next time.
$split_data = array_reverse(explode("\n", $data));
$new_lines = array_slice($split_data, 0, -1);
$lines = array_merge($lines, $new_lines);
$leftover = $split_data[count($split_data) - 1];
}
while(count($lines) < $line_count && ftell($fh) != 0);
if(ftell($fh) == 0){
$lines[] = $leftover;
}
fclose($fh);
// Usually, we will read too many lines, correct that here.
return array_slice($lines, 0, $line_count);
}
Is there any way this can be amended so it will read from the last known position?
Thanks
Introduction
You can tail a file by tracking the last read position:
Example
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
$data = $tail->tail(100);
// Save $data to new file
TailLog is a simple class I wrote for this task. Here is a simple example to show it's actually tailing the file.
Simple Test
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
// Some Random Data
$data = array_chunk(range("a", "z"), 3);
// Write Log
file_put_contents($file, implode("\n", array_shift($data)));
// First Tail (2) Run
print_r($tail->tail(2));
// Run Tail (2) Again
print_r($tail->tail(2));
// Write Another data to Log
file_put_contents($file, "\n" . implode("\n", array_shift($data)), FILE_APPEND);
// Call Tail Again after writing Data
print_r($tail->tail(2));
// See the full content
print_r(file_get_contents($file));
Output
// First Tail (2) Run
Array
(
[0] => c
[1] => b
)
// Run Tail (2) Again
Array
(
)
// Call Tail Again after writing Data
Array
(
[0] => f
[1] => e
)
// See the full content
a
b
c
d
e
f
Real Time Tailing
while(true) {
$data = $tail->tail(100);
// write data to another file
sleep(5);
}
Note: tailing 100 lines does not mean it will always return 100 lines. It returns only the new lines that were added; 100 is just the maximum number of lines to return. This might not be efficient where you have heavy logging of more than 100 lines per second, if there is any such case.
Tail Class
class TailLog {
private $file;
private $data;
private $timeout = 5;
private $lock;
function __construct($file) {
$this->file = $file;
$this->lock = new TailLock($file);
}
public function tail($lines) {
$pos = - 2;
$t = $lines;
$fp = fopen($this->file, "r");
$break = false;
$line = "";
$text = array();
while($t > 0) {
$c = "";
// Search for end of line
while($c != "\n" && $c != PHP_EOL) {
if (fseek($fp, $pos, SEEK_END) == - 1) {
$break = true;
break;
}
if (ftell($fp) < $this->lock->getPosition()) {
break;
}
$c = fgetc($fp);
$pos --;
}
if (ftell($fp) < $this->lock->getPosition()) {
break;
}
$t --;
$break && rewind($fp);
$text[$lines - $t - 1] = fgets($fp);
if ($break) {
break;
}
}
// Move to end
fseek($fp, 0, SEEK_END);
// Save Position
$this->lock->save(ftell($fp));
// Close File
fclose($fp);
return array_map("trim", $text);
}
}
Tail Lock
class TailLock {
private $file;
private $lock;
private $data;
function __construct($file) {
$this->file = $file;
$this->lock = $file . ".tail";
touch($this->lock);
if (! is_file($this->lock))
throw new Exception("can't Create Lock File");
$this->data = json_decode(file_get_contents($this->lock));
// Check that the lock file contains valid json
// and that data in the original file has not been deleted;
// you expect the file to grow, not shrink
if (! $this->data || $this->data->size > filesize($this->file)) {
$this->reset();
}
}
function getPosition() {
return $this->data->position;
}
function reset() {
$this->data = new stdClass();
$this->data->size = filesize($this->file);
$this->data->modification = filemtime($this->file);
$this->data->position = 0;
$this->update();
}
function save($pos) {
$this->data = new stdClass();
$this->data->size = filesize($this->file);
$this->data->modification = filemtime($this->file);
$this->data->position = $pos;
$this->update();
}
function update() {
return file_put_contents($this->lock, json_encode($this->data, JSON_PRETTY_PRINT));
}
}
It's not really clear how you want to use the output, but would something like this work?
$dat = (int) @file_get_contents("tracker.dat"); // 0 on the first run
$fp = fopen("/logs/syst.log", "r");
fseek($fp, $dat, SEEK_SET);
ob_start();
// alternatively you can do a while fgets if you want to interpret the file or do something
fpassthru($fp);
$pos = ftell($fp);
fclose($fp);
echo nl2br(ob_get_clean());
file_put_contents("tracker.dat", ftell($fp));
tracker.dat is just a text file that contains where the read position was after the previous run. I'm just seeking to that position and piping the rest to the output buffer.
Use tail -c <number of bytes> instead of a number of lines, and then check the file size. The rough idea is:
$old_file_size = 0;
$max_bytes = 512;
function last_lines($path) {
global $old_file_size, $max_bytes;
$new_file_size = filesize($path);
$pending_bytes = $new_file_size - $old_file_size;
if ($pending_bytes > $max_bytes) $pending_bytes = $max_bytes;
exec("tail -c $pending_bytes " . escapeshellarg($path), $output);
$old_file_size = $new_file_size;
return $output;
}
The advantage is that you can do away with all the special processing stuff and get good performance. The disadvantage is that you have to manually split the output into lines, and you could end up with unfinished lines. But this isn't a big deal; you can easily work around it by omitting the last line from the output (and subtracting the byte length of that last line from old_file_size).
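A sketch of that post-processing, under the same assumptions as the function above (exec() has already split $output into lines, so only the trailing fragment needs handling; if the chunk happened to end exactly on a newline, the last line is merely deferred to the next poll, not lost):
$partial = array_pop($output);      // the last line may be incomplete
$old_file_size -= strlen($partial); // re-read those bytes on the next poll
// $output now holds only whole lines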
Can I read a file in PHP starting from the end, for example if I only want to read the last 10-20 lines?
Also, as I read it, if the size of the file is more than 10MB I start getting errors.
How can I prevent this error?
For reading a normal file, we use the code:
$handle = fopen($file, "r");
if ($handle) {
while (($buffer = fgets($handle, 4096)) !== false) {
$i1++;
$content[$i1]=$buffer;
}
if (!feof($handle)) {
echo "Error: unexpected fgets() fail\n";
}
fclose($handle);
}
My file might go over 10MB, but I just need to read the last few lines. How do I do it?
Thanks
You can use fopen and fseek to navigate the file backwards from the end. For example:
$fp = @fopen($file, "r");
$pos = -2; // start just before the trailing newline at the end of the file
fseek($fp, $pos, SEEK_END);
while (fgetc($fp) != "\n") {
$pos = $pos - 1;
fseek($fp, $pos, SEEK_END);
}
$lastline = fgets($fp);
It's not pure PHP, but the common solution is to use the tac command, which is the reverse of cat and outputs the file in reverse order. Use exec() or passthru() to run it on the server and then read the results. Example usage:
<?php
$myfile = 'myfile.txt';
$command = "tac $myfile > /tmp/myfilereversed.txt";
exec($command);
$currentRow = 0;
$numRows = 20; // stops after this number of rows
$handle = fopen("/tmp/myfilereversed.txt", "r");
while (!feof($handle) && $currentRow <= $numRows) {
$currentRow++;
$buffer = fgets($handle, 4096);
echo $buffer."<br>";
}
fclose($handle);
?>
It depends how you interpret "can".
If you wonder whether you can do this directly (with a PHP function) without reading all the preceding lines, then the answer is: no, you cannot.
A line ending is an interpretation of the data, and you can only know where the line endings are if you actually read the data.
If it is a really big file, I'd not do that though.
It would be better to scan the file starting from the end, gradually reading blocks from the end towards the beginning of the file.
Update
Here's a PHP-only way to read the last n lines of a file without reading through all of it:
function last_lines($path, $line_count, $block_size = 512){
$lines = array();
// we will always have a fragment of a non-complete line
// keep this in here till we have our next entire line.
$leftover = "";
$fh = fopen($path, 'r');
// go to the end of the file
fseek($fh, 0, SEEK_END);
do{
// need to know whether we can actually go back
// $block_size bytes
$can_read = $block_size;
if(ftell($fh) < $block_size){
$can_read = ftell($fh);
}
// go back as many bytes as we can
// read them to $data and then move the file pointer
// back to where we were.
fseek($fh, -$can_read, SEEK_CUR);
$data = fread($fh, $can_read);
$data .= $leftover;
fseek($fh, -$can_read, SEEK_CUR);
// split lines by \n. Then reverse them,
// now the last line is most likely not a complete
// line which is why we do not directly add it, but
// append it to the data read the next time.
$split_data = array_reverse(explode("\n", $data));
$new_lines = array_slice($split_data, 0, -1);
$lines = array_merge($lines, $new_lines);
$leftover = $split_data[count($split_data) - 1];
}
while(count($lines) < $line_count && ftell($fh) != 0);
if(ftell($fh) == 0){
$lines[] = $leftover;
}
fclose($fh);
// Usually, we will read too many lines, correct that here.
return array_slice($lines, 0, $line_count);
}
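A usage sketch (the path is illustrative): note the function accumulates lines newest-first, so reverse them if you want file order.
$lines = last_lines('/var/log/app.log', 20);
echo implode("\n", array_reverse($lines));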
The following snippet worked for me.
$file = popen("tac $filename",'r');
while ($line = fgets($file)) {
echo $line;
}
Reference: http://laughingmeme.org/2008/02/28/reading-a-file-backwards-in-php/
If your code is not working and is reporting an error, you should include the error in your posts!
The reason you are getting an error is that you are trying to store the entire contents of the file in PHP's memory space.
The most efficient way to solve the problem would be, as Greenisha suggests, to seek to the end of the file and then go back a bit. But Greenisha's mechanism for going back a bit is not very efficient.
Consider instead the method for getting the last few lines from a stream (i.e. where you can't seek):
while (($buffer = fgets($handle, 4096)) !== false) {
$i1++;
$content[$i1]=$buffer;
unset($content[$i1-$lines_to_keep]);
}
So if you know that your max line length is 4096, then you would:
if (4096*$lines_to_keep < filesize($input_file)) {
fseek($fp, -4096*$lines_to_keep, SEEK_END);
}
Then apply the loop I described previously.
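Putting those two steps together, a minimal sketch (assuming $input_file and $lines_to_keep are defined, and 4096 is the assumed maximum line length):
$content = array();
$i1 = 0;
$fp = fopen($input_file, "r");
// jump straight to the region that can still contain the wanted lines
if (4096 * $lines_to_keep < filesize($input_file)) {
    fseek($fp, -4096 * $lines_to_keep, SEEK_END);
}
// rolling buffer: keep only the newest $lines_to_keep lines
while (($buffer = fgets($fp, 4096)) !== false) {
    $i1++;
    $content[$i1] = $buffer;
    unset($content[$i1 - $lines_to_keep]);
}
fclose($fp);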
Since C has some more efficient methods for dealing with byte streams, the fastest solution (on a POSIX/Unix/Linux/BSD system) would be simply:
$last_lines = system("tail -" . $lines_to_keep . " filename");
For Linux you can do
$linesToRead = 10;
exec("tail -n{$linesToRead} {$myFileName}" , $content);
You will get an array of lines in $content variable
Pure PHP solution
$f = fopen($myFileName, 'r');
$maxLineLength = 1000; // Real maximum length of your records
$linesToRead = 10;
fseek($f, -$maxLineLength*$linesToRead, SEEK_END); // Moves cursor back from the end of file
$res = array();
while (($buffer = fgets($f, $maxLineLength)) !== false) {
$res[] = $buffer;
}
$content = array_slice($res, -$linesToRead);
If you know about how long the lines are, you can avoid a lot of the black magic and just grab a chunk of the end of the file.
I needed the last 15 lines from a very large log file, and altogether they were about 3000 characters. So I just grab the last 8000 bytes to be safe, then read the file as normal and take what I need from the end.
$fh = fopen($file, "r");
fseek($fh, -8192, SEEK_END);
$lines = array();
while($lines[] = fgets($fh)) {}
This is possibly even more efficient than the highest rated answer, which reads the file character by character, compares each character, and splits based on newline characters.
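To make that concrete, a small follow-up sketch using the same $lines array and the 15-line figure above: the 8192-byte cut usually lands mid-line, so drop the first (partial) entry and the trailing false that ended the loop, then keep the last 15 lines.
array_shift($lines);                // first entry is usually a partial line
array_pop($lines);                  // last entry is the false returned at EOF
$last15 = array_slice($lines, -15);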
Here is another solution. It doesn't have line length control in fgets(), you can add it.
/* Read file from end line by line */
$fp = fopen( dirname(__FILE__) . '\\some_file.txt', 'r');
$lines_read = 0;
$lines_to_read = 1000;
fseek($fp, 0, SEEK_END); //goto EOF
$eol_size = 2; // 2 on Windows (\r\n), 1 elsewhere
$eol_char = "\r\n"; // windows=\r\n, mac=\r, unix=\n
while ($lines_read < $lines_to_read) {
if (ftell($fp)==0) break; //break on BOF (beginning...)
do {
fseek($fp, -1, SEEK_CUR); //seek 1 by 1 char from EOF
$eol = fgetc($fp) . fgetc($fp); //search for EOL (remove 1 fgetc if needed)
fseek($fp, -$eol_size, SEEK_CUR); //go back for EOL
} while ($eol != $eol_char && ftell($fp)>0 ); //check EOL and BOF
$position = ftell($fp); //save current position
if ($position != 0) fseek($fp, $eol_size, SEEK_CUR); //move for EOL
echo fgets($fp); //read LINE or do whatever is needed
fseek($fp, $position, SEEK_SET); //set current position
$lines_read++;
}
fclose($fp);
Well, while searching for the same thing, I came across the following and thought it might be useful to others as well, so I'm sharing it here:
/* Read file from end line by line */
function tail_custom($filepath, $lines = 1, $adaptive = true) {
// Open file
$f = #fopen($filepath, "rb");
if ($f === false) return false;
// Sets buffer size, according to the number of lines to retrieve.
// This gives a performance boost when reading a few lines from the file.
if (!$adaptive) $buffer = 4096;
else $buffer = ($lines < 2 ? 64 : ($lines < 10 ? 512 : 4096));
// Jump to last character
fseek($f, -1, SEEK_END);
// Read it and adjust line number if necessary
// (Otherwise the result would be wrong if file doesn't end with a blank line)
if (fread($f, 1) != "\n") $lines -= 1;
// Start reading
$output = '';
$chunk = '';
// While we would like more
while (ftell($f) > 0 && $lines >= 0) {
// Figure out how far back we should jump
$seek = min(ftell($f), $buffer);
// Do the jump (backwards, relative to where we are)
fseek($f, -$seek, SEEK_CUR);
// Read a chunk and prepend it to our output
$output = ($chunk = fread($f, $seek)) . $output;
// Jump back to where we started reading
fseek($f, -mb_strlen($chunk, '8bit'), SEEK_CUR);
// Decrease our line counter
$lines -= substr_count($chunk, "\n");
}
// While we have too many lines
// (Because of buffer size we might have read too many)
while ($lines++ < 0) {
// Find first newline and remove all text before that
$output = substr($output, strpos($output, "\n") + 1);
}
// Close file and return
fclose($f);
return trim($output);
}
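Usage is then simply (the path and line count are illustrative):
echo tail_custom('/var/log/syslog', 10);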
As Einstein said, everything should be made as simple as possible, but no simpler. At this point you are in need of a data structure: a LIFO structure, or simply put, a stack.
A more complete example of the "tail" suggestion above is provided here. This seems to be a simple and efficient method -- thank you. Very large files should not be an issue, and a temporary file is not required.
$out = array();
$ret = null;
// capture the last 30 lines of the log file into a buffer
exec('tail -30 ' . $weatherLog, $buf, $ret);
if ( $ret == 0 ) {
// process the captured lines one at a time
foreach ($buf as $line) {
$n = sscanf($line, "%s temperature %f", $dt, $t);
if ( $n > 0 ) $temperature = $t;
$n = sscanf($line, "%s humidity %f", $dt, $h);
if ( $n > 0 ) $humidity = $h;
}
printf("<tr><th>Temperature</th><td>%0.1f</td></tr>\n",
$temperature);
printf("<tr><th>Humidity</th><td>%0.1f</td></tr>\n", $humidity);
}
else { /* something bad happened */ }
In the above example, the code reads 30 lines of text output and displays the last temperature and humidity readings in the file (that's why the printf calls are outside of the loop, in case you were wondering). The file is filled by an ESP32 which appends to the file every few minutes, even when the sensor reports only nan. So thirty lines provide plenty of readings, and it should never fail. Each reading includes the date and time, so in the final version the output will include the time the reading was taken.
This is what I have so far:
<?php
$file = "18201010338AM16390621000846.png";
$test = file_get_contents($file, FILE_BINARY);
echo str_replace("\n","<br>",$test);
?>
The output is sorta what I want, but I really only need lines 3-7 (inclusive). This is what the output looks like now: http://silentnoobs.com/pbss/collector/test.php. I am trying to get the data from "PunkBuster Screenshot (±) AAO Bridge Crossing" to "Resulting: w=394 X h=196 sample=2". I think it'd be fairly straightforward to read through the file and store each line in an array: line[0] would need to be "PunkBuster Screenshot (±) AAO Bridge Crossing", and so on. All those lines are subject to change, so I can't just search for something finite.
I've tried for a few days now, and it doesn't help much that I'm poor at php.
The PNG file format defines that a PNG document is split up into multiple chunks of data. You must therefore navigate your way to the chunk you desire.
The data you want to extract seems to be defined in a tEXt chunk. I've written the following class to allow you to extract chunks from PNG files.
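For reference, after the 8-byte signature every chunk in a PNG stream has this layout (per the PNG specification), which is what the class below walks through:
// 4 bytes: big-endian length of the data field ('N' in unpack)
// 4 bytes: chunk type, e.g. "IHDR", "tEXt", "IEND" ('a4' in unpack)
// n bytes: chunk data
// 4 bytes: CRC over type + data (skipped with the +4 in fseek)
// a tEXt chunk's data is "keyword\0text"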
class PNG_Reader
{
private $_chunks;
private $_fp;
function __construct($file) {
if (!file_exists($file)) {
throw new Exception('File does not exist');
}
$this->_chunks = array ();
// Open the file
$this->_fp = fopen($file, 'rb');
if (!$this->_fp)
throw new Exception('Unable to open file');
// Read the magic bytes and verify
$header = fread($this->_fp, 8);
if ($header != "\x89PNG\x0d\x0a\x1a\x0a")
throw new Exception('Is not a valid PNG image');
// Loop through the chunks. Byte 0-3 is length, Byte 4-7 is type
$chunkHeader = fread($this->_fp, 8);
while ($chunkHeader) {
// Extract length and type from binary data
$chunk = @unpack('Nsize/a4type', $chunkHeader);
// Store position into internal array
if (!isset($this->_chunks[$chunk['type']]))
$this->_chunks[$chunk['type']] = array ();
$this->_chunks[$chunk['type']][] = array (
'offset' => ftell($this->_fp),
'size' => $chunk['size']
);
// Skip to next chunk (over body and CRC)
fseek($this->_fp, $chunk['size'] + 4, SEEK_CUR);
// Read next chunk header
$chunkHeader = fread($this->_fp, 8);
}
}
function __destruct() { fclose($this->_fp); }
// Returns all chunks of said type
public function get_chunks($type) {
if (!isset($this->_chunks[$type]))
return null;
$chunks = array ();
foreach ($this->_chunks[$type] as $chunk) {
if ($chunk['size'] > 0) {
fseek($this->_fp, $chunk['offset'], SEEK_SET);
$chunks[] = fread($this->_fp, $chunk['size']);
} else {
$chunks[] = '';
}
}
return $chunks;
}
}
You may use it like this to extract your desired tEXt chunk:
$file = '18201010338AM16390621000846.png';
$png = new PNG_Reader($file);
$rawTextData = $png->get_chunks('tEXt');
$metadata = array();
foreach($rawTextData as $data) {
$sections = explode("\0", $data);
if(count($sections) > 1) {
$key = array_shift($sections);
$metadata[$key] = implode("\0", $sections);
} else {
$metadata[] = $data;
}
}
<?php
$fp = fopen('18201010338AM16390621000846.png', 'rb');
$sig = fread($fp, 8);
if ($sig != "\x89PNG\x0d\x0a\x1a\x0a")
{
print "Not a PNG image";
fclose($fp);
die();
}
while (!feof($fp))
{
$data = unpack('Nlength/a4type', fread($fp, 8));
if ($data['type'] == 'IEND') break;
if ($data['type'] == 'tEXt')
{
list($key, $val) = explode("\0", fread($fp, $data['length']));
echo "<h1>$key</h1>";
echo nl2br($val);
fseek($fp, 4, SEEK_CUR);
}
else
{
fseek($fp, $data['length'] + 4, SEEK_CUR);
}
}
fclose($fp);
?>
It assumes a basically well-formed PNG file.
I ran into this problem a few days ago, so I made a library to extract the metadata (Exif, XMP and GPS) of a PNG in PHP, 100% native. I hope it helps. :) PNGMetadata
How about:
http://www.php.net/manual/en/function.getimagesize.php