An S0 logger produces a CSV file on a monthly basis. The file is updated every 5 minutes and can be retrieved at any moment. At the end of the month the file contains over 8,500 rows. When a new month starts, a new file is created.
The file format looks like this:
Datum / Uhrzeit (UTC);Main meter - Sales office (kWh);Meter 9;Temperature - Server room;Supply conductor air conditioner
01.06.12 00:00:00;438.220;0.001;274;155
01.06.12 00:05:00;438.240;0.001;274;203
01.06.12 00:10:00;438.259;0.001;275;134
01.06.12 00:15:00;438.283;0.001;274;176
01.06.12 00:20:00;438.303;0.001;274;206
The dates use the European format dd.mm.yy.
I want to split the monthly file into daily files with the filename yymmdd.csv and store these files for further use and processing. The column names are not needed.
During the day the data is updated every five minutes, but once a day is finished there is no need to reprocess its data, because nothing changes. I found that fgetcsv() is the most appropriate method, but how do I prevent reprocessing the old data, which is time-consuming and unnecessary?
Assuming the monthly file is always appended to.
You could keep a small file named e.g. 2012-january.csv.ptr. This file keeps the last position in the monthly file; if it doesn't exist, you start at the beginning.
After every successful read, you determine the file position using ftell(). When you reach the end, you write the last position into the .ptr file.
When the .ptr file exists, you seek back into the file using fseek() and then start processing as normal.
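A minimal sketch of that idea (the file names are placeholders and error handling is omitted):

$csv = '2012_06.csv';
$ptr = $csv . '.ptr';

$fp = fopen($csv, 'r');
if (file_exists($ptr)) {
    // resume where the previous run stopped
    fseek($fp, (int) file_get_contents($ptr));
} else {
    fgetcsv($fp, 0, ';'); // first run: skip the header row
}
while (($line = fgetcsv($fp, 0, ';')) !== false) {
    // ... write $line to its daily yymmdd.csv file ...
}
// remember how far we got for the next run
file_put_contents($ptr, ftell($fp));
fclose($fp);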
Can you use any database? A simple table would do for data storage and later processing.
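For illustration, a minimal sketch of that approach, assuming SQLite via PDO is available (the database, table, and column names are made up):

// Hypothetical schema: one row per 5-minute reading.
$db = new PDO('sqlite:readings.db');
$db->exec('CREATE TABLE IF NOT EXISTS readings (
    read_at TEXT PRIMARY KEY,   -- "dd.mm.yy hh:mm:ss" timestamp from column 1
    main_kwh REAL, meter9 REAL, temp INTEGER, supply INTEGER
)');

$insert = $db->prepare('INSERT OR IGNORE INTO readings VALUES (?, ?, ?, ?, ?)');
if ($fp = fopen('2012_06.csv', 'r')) {
    fgetcsv($fp, 0, ';');                    // skip the header row
    while (($line = fgetcsv($fp, 0, ';')) !== false) {
        $insert->execute($line);             // INSERT OR IGNORE makes re-runs harmless
    }
    fclose($fp);
}

The primary key on the timestamp means re-importing the same monthly file simply skips rows that are already stored, which also sidesteps the reprocessing problem.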
Here is the code:
if ($fp = fopen('log.csv', 'r')) {
    $line_number = 0;
    while ($line = fgetcsv($fp, 0, ';')) {
        if ($line_number++ == 0) {
            continue;
        }
        $date = explode(' ', $line[0]);
        $file = $date[0] . '.log';
        file_put_contents(
            'monthly/' . $file,
            implode(';', $line) . "\n",
            FILE_APPEND
        );
    }
    fclose($fp);
}
It reads the CSV file line by line, extracts the date part from the first column, creates a new file per date, and appends the data to it.
P.S.
The folder "monthly" must be writable.
Combining the answers, I figured this out:
It works, but sometimes when running the script again I miss one line in the split files, and I can't figure out why.
<?php
$pos1 = 0;
if (file_exists('2012_06.csv.ptr')) {
    $fh = fopen('2012_06.csv.ptr', 'r');
    $pos1 = fread($fh, 8192);
    //echo $pos1, 'a', "<BR>";
    fclose($fh);
}
if ($fp = fopen('2012_06.csv', 'r')) {
    fseek($fp, $pos1);
    $line_number = 0;
    while ($line = fgetcsv($fp, 0, ';')) {
        // Beware: this skips the first line read on every run, not just the
        // header on the first run - after seeking past the header it discards
        // a data line, which is the likely cause of the missing line.
        if ($line_number++ == 0) {
            continue;
        }
        $date = explode(' ', $line[0]);
        $file = $date[0] . '.log';
        file_put_contents(
            'Monthly/' . $file,
            implode(';', $line) . "\n",
            FILE_APPEND
        );
    }
    $pos2 = ftell($fp);
    //echo $pos2;
    fclose($fp);
    $fh = fopen('2012_06.csv.ptr', 'w');
    fwrite($fh, $pos2);
    fclose($fh);
}
?>
I have a file with keywords on each line. Each line starts with a number, then a comma, and after that the keyword (comma-separated like a CSV, but a text file). The file looks like this:
7,n00t
41,n01
13,n021
21,n02
18,n03
13,n04
15,n05
13,n06
18,n07
13,n08
14,n09
9,n0a
What I'm trying to do is run through the whole file and hash only the keywords, without the number before the comma.
What I tried is this:
$savePath = "test-file.txt";
$handle = fopen($savePath, "r+");
if ($handle) {
while (($line = fgets($handle)) !== false) {
$hash1 = substr($line, strpos($line, ",") + 1);
$hash2 = hash('ripemd160', $hash1);
fwrite($handle, $hash2);
}
fclose($handle);
} else {
echo "Can't open the file!";
}
It is working, but the problem is that it seems to be hashing the number before the comma as well, and on most of the lines I get one mangled string. This is the output:
d743dcc66de14a3430d806aad64a67345fd0b23d0007
75f32ebf42e3ffd70fc3f63d3a61fc6af0075c24000088
7b816ac9cbe2da6a6643538564216e441f55fe9f6,00009
f0ba52b83ffac69fddd8786d6f48e7700562f0170b
def75b09e253faea412f67e67a535595b00366dce
c0da025038ed83c687ddc430da9846ecb97f3998l
c0da025038ed83c687ddc430da9846ecb97f39985,0000r
c12530b4b78bde7bc000e4f15a15bcea013eaf8c
9c1185a5c5e9fc54612808977ee8f548b2258d31,00010
efa60a26277fde0514aec5b51f560a4ba25be3c111
0e25f9d48d432ff5256e6da30ab644d1ca726cb700123
ad6d049d794146aca7a169fd6cb3086954bf2c63012
Should be
7,d743dcc66de14a3430d806aad64a67345fd0b23d0007
41,75f32ebf42e3ffd70fc3f63d3a61fc6af0075c24000088
13,7b816ac9cbe2da6a6643538564216e441f55fe9f6,00009
21,f0ba52b83ffac69fddd8786d6f48e7700562f0170b
18,def75b09e253faea412f67e67a535595b00366dce
13,c0da025038ed83c687ddc430da9846ecb97f3998l
15,c0da025038ed83c687ddc430da9846ecb97f39985,0000r
13,c12530b4b78bde7bc000e4f15a15bcea013eaf8c
18,9c1185a5c5e9fc54612808977ee8f548b2258d31,00010
13,efa60a26277fde0514aec5b51f560a4ba25be3c111
14,0e25f9d48d432ff5256e6da30ab644d1ca726cb700123
9,ad6d049d794146aca7a169fd6cb3086954bf2c63012
Any ideas what the problem is?
The thing is, you are reading from and writing to the file at the same time, so the internal pointer is being juggled all the time. Instead, read all the lines, store the result in an array, fseek() the file pointer back to the beginning of the file, and then write the new lines one by one, as shown below.
Snippet:
<?php
$handle = fopen("test-file.txt", "r+");
if (!$handle) {
    throw new Exception("Can't open file!");
}
$newLines = [];
while (($line = fgets($handle)) !== false) {
    // rtrim() so the trailing newline is not hashed along with the keyword
    $hash = hash('ripemd160', rtrim(substr($line, strpos($line, ",") + 1), "\r\n"));
    $newLines[] = substr($line, 0, strpos($line, ",")) . "," . $hash;
}
fseek($handle, 0);
foreach ($newLines as $line) {
    fwrite($handle, $line . "\n");
}
fclose($handle);
The trouble is twofold:
You're trying to write to the file while you're reading it, without even changing the position of the file pointer, and vice-versa for the reads.
You're trying to overwrite 3-4 bytes of data with 40 bytes of data, which ends up clobbering most of the rest of the input you're trying to read.
If you want to change a file like this you need to create a new file, write your data to that, and then rename the new file over the old one.
Also, use fgetcsv() and fputcsv() to read and write CSV files; otherwise you're going to wind up fighting with edge cases when your input data gets more complex.
$savePath = "test-file.txt";
$in_h = fopen($savePath, "r+");
$out_h = fopen($savePath.'.new', 'r+');
if ($in_h) {
while (($line = fgetcsv($in_h)) !== false) {
$line[1] = hash('ripemd160', $line[1]);
fputcsv($out_h, $line);
}
fclose($in_h);
fclose($out_h);
rename($savePath.'.new', $savePath);
} else {
echo "Can't open the file!";
}
This code takes 35 seconds to execute. How can I reduce the execution time? What should I change in this source code?
$file_handle = fopen("WMLG2_2017_07_11.log", "r");
while (!feof($file_handle)) {
$line = fgets($file_handle);
if (strpos($line, 'root#CLA-0 [WMLG2] >') !== false) {
$namafileA = explode('> ', $line);
$namafile = str_replace(' ', '_', $namafileA[1]);
$filenameExtension = $namafile.".txt";
$file = preg_replace('/[^A-Za-z0-9\-_.]/', '', $filenameExtension); // hapus special character kecuali "." dan "_"
} else {
$newfile = fopen("show_command_file_Tes2/$file", "a");
fwrite($newfile, $line);
}
}
fclose($file_handle);
I found some mistakes in the original code that could impact your performance, though I am not sure by how much.
If I understand correctly, you are opening a log file and sorting the messages into separate files.
You have not pasted an example of the log file, but I assume you have duplicate file targets; not every line of the log file has an individual target file.
Your code opens file handles but never closes them, and they stay open for the duration of the script. File handles that fall out of scope are not reliably closed by the garbage collector; you have to do it manually to release the resources.
Based on that, you should store the file pointers (or at least close them) and reuse a handle that is already open. As written, you open roughly one handle per line of the file without closing or reusing it.
Another thing I noticed: your lines may be long ones, and that is a rare case where PHP's strpos() function could be slower than a regex matching the string at the correct position. Without the log file I can't say for sure, because preg_match() is a pretty expensive function on simple/short strings (strpos() is way faster).
If it's a log file, the line most likely starts with that "root#CLA"... string, so you should try to match it with an anchored pattern, using ^ (beginning of the string) or $ (end of the string).
<?php
$file_handle = fopen("WMLG2_2017_07_11.log", "r");
// you'll store your handles here
$targetHandles = [];
while (!feof($file_handle))
{
    $line = fgets($file_handle);
    if (strpos($line, 'root#CLA-0 [WMLG2] >') !== false)
    {
        $namafileA = explode('> ', $line);
        $namafile = str_replace(' ', '_', $namafileA[1]);
        $filenameExtension = $namafile . ".txt";
        $file = preg_replace('/[^A-Za-z0-9\-_.]/', '', $filenameExtension); // remove special characters except "." and "_"
    }
    else
    {
        // no $file defined, most likely nothing to write yet
        if (empty($file))
        {
            continue;
        }
        // if it's not open yet, open it once and cache the handle
        if (empty($targetHandles[$file]))
        {
            $targetHandles[$file] = fopen("show_command_file_Tes2/$file", "a");
        }
        // write the line to its target
        fwrite($targetHandles[$file], $line);
    }
}
// you should close your handles every time
foreach ($targetHandles as $handle)
{
    fclose($handle);
}
fclose($file_handle);
I am having this strange issue and can't figure it out.
On some websites this script works perfectly: same code, same server settings.
With PHP, there is a simple page-view hit counter that stores its count locally in a txt file.
I then echo out the value in the footer copyright area of my websites to give the client a quick statistic; it's pretty cool how fast it grows.
Anyway, I have a client: corner grill ny . com (I added spaces for SEO purposes).
On that website it's been working great for years.
Now on another website, and a bunch more (for example savianos . com), it breaks and the text value is blank.
This is the counter.php code
<?php
session_start();
$counter_name = "counter/hits.txt";
// Check if a text file exists. If not, create one and initialize it to zero.
if (!file_exists($counter_name)) {
    $f = fopen($counter_name, "w");
    fwrite($f, "0");
    fclose($f);
}
// Read the current value of our counter file
$f = fopen($counter_name, "r");
$counterVal = fread($f, filesize($counter_name));
fclose($f);
// Has the visitor been counted in this session?
// If not, increase the counter value by one
if (!isset($_SESSION['hasVisited'])) {
    $_SESSION['hasVisited'] = "yes";
    $counterVal++;
    $f = fopen($counter_name, "w");
    fwrite($f, $counterVal);
    fclose($f);
}
?>
Now, if I add a value to the txt file, like 1040, and go to the website, it starts to work. Then after a week or so I check it, and it's blank again.
Any ideas?
I am thinking this may be happening because the website gets a TON of views during dinner time on Friday night, and the simple script can't handle it, so while it's trying to write an updated number it just breaks, goes blank, and never starts back up again.
The structure is this.
/counter/ folder has
counter.php and a hits.txt file
On every page of the website, the very first thing is
<?php include ('counter/counter.php'); ?>
and in the footer of the website we have
<?php echo $counterVal; ?>
Your code looks fine, but let's understand the situation. You have a file which can be accessed concurrently by many users, because page visits can happen at the same time. That doesn't seem right: you have to lock the file manipulation for other users while someone is modifying it. Please have a look at:
Visits counter without database with PHP
It is most likely because you have two concurrent scripts that tried to open the file at once, and one of them failed. You have to use flock() when there are multiple instances of the script that could operate at the same time. Counters are some of the heaviest things to build on plain file reading and writing. I wrote this wrapper to easily implement file locking.
If you want to check out one of my counters in operation, try http://ozlu.org; that dynamic counter image is self-built. fileReadAll() reads the entire file in one shot. The file writer has only two modes, write or append. You can pass fileWriteAll() an array or a string and it will write it to the file. The function will not add any \n to format your text, so you have to add that yourself. The default mode for fileWriteAll() is w if you do not set the third argument.
function fileWriteAll($file, $content, $mode = "w") {
    $mode = $mode === "w" || $mode === "a" ? $mode : "w";
    $FILE = fopen($file, $mode);
    while (!flock($FILE, LOCK_EX)) { usleep(1); }
    if (is_array($content)) {
        for ($i = 0; $i < count($content); $i++) {
            fwrite($FILE, $content[$i]);
        }
    } else {
        fwrite($FILE, $content);
    }
    flock($FILE, LOCK_UN);
    fclose($FILE);
}

function fileReadAll($file) {
    $FILE = fopen($file, 'r');
    while (!flock($FILE, LOCK_SH)) { usleep(1); }
    $content = fread($FILE, filesize($file));
    flock($FILE, LOCK_UN);
    fclose($FILE);
    return $content;
}
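For instance, a counter built on those wrappers could look like this (a sketch; the file name is an assumption). Note that the lock is released between the read and the write, so two overlapping requests can still read the same value; a strict counter would read and increment under a single LOCK_EX:

$counterName = './views.txt'; // hypothetical path
if (!file_exists($counterName)) {
    fileWriteAll($counterName, '0');
}
$value = (int) fileReadAll($counterName) + 1;
fileWriteAll($counterName, (string) $value);
echo $value;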
Your modified code:
session_start();
$counterName = './views.txt';
if (!file_exists($counterName)) {
    $file = fopen($counterName, 'w');
    fwrite($file, '0');
    fclose($file);
}
$file = fopen($counterName, 'r');
$value = fread($file, filesize($counterName));
fclose($file);
if (!isset($_SESSION['visited'])) {
    $_SESSION['visited'] = 'yes';
    $value++;
    $file = fopen($counterName, 'w');
    fwrite($file, $value);
    fclose($file);
}
session_unset();
echo $value;
Can I read a file in PHP from the end, for example if I want to read only the last 10-20 lines?
Also, when I read the file, if its size is more than 10MB I start getting errors. How can I prevent this error?
For reading a normal file, we use this code:
if ($handle) {
    while (($buffer = fgets($handle, 4096)) !== false) {
        $i1++;
        $content[$i1] = $buffer;
    }
    if (!feof($handle)) {
        echo "Error: unexpected fgets() fail\n";
    }
    fclose($handle);
}
My file might go over 10MB, but I just need to read the last few lines. How do I do it?
Thanks
You can use fopen() and fseek() to navigate a file backwards from the end. For example:
$fp = @fopen($file, "r");
$pos = -2;
while (fgetc($fp) != "\n") {
    fseek($fp, $pos, SEEK_END);
    $pos = $pos - 1;
}
$lastline = fgets($fp);
It's not pure PHP, but the common solution is to use the tac command, which is the reverse of cat and loads the file in reverse order. Use exec() or passthru() to run it on the server and then read the result. Example usage:
<?php
$myfile = 'myfile.txt';
$command = "tac $myfile > /tmp/myfilereversed.txt";
exec($command);
$currentRow = 0;
$numRows = 20; // stops after this number of rows
$handle = fopen("/tmp/myfilereversed.txt", "r");
while (!feof($handle) && $currentRow <= $numRows) {
    $currentRow++;
    $buffer = fgets($handle, 4096);
    echo $buffer . "<br>";
}
fclose($handle);
?>
It depends how you interpret "can".
If you are wondering whether you can do this directly (with a PHP function) without reading all the preceding lines, then the answer is: no, you cannot.
A line ending is an interpretation of the data, and you can only know where the lines are if you actually read the data.
If it is a really big file, I would not do that, though.
It would be better to scan the file starting from the end, gradually reading blocks from the end towards the beginning of the file.
Update
Here's a PHP-only way to read the last n lines of a file without reading through all of it:
function last_lines($path, $line_count, $block_size = 512) {
    $lines = array();
    // we will always have a fragment of a non-complete line;
    // keep it around until we have our next entire line
    $leftover = "";
    $fh = fopen($path, 'r');
    // go to the end of the file
    fseek($fh, 0, SEEK_END);
    do {
        // need to know whether we can actually go back
        // $block_size bytes
        $can_read = $block_size;
        if (ftell($fh) < $block_size) {
            $can_read = ftell($fh);
        }
        // go back as many bytes as we can,
        // read them into $data, and then move the file pointer
        // back to where we were
        fseek($fh, -$can_read, SEEK_CUR);
        $data = fread($fh, $can_read);
        $data .= $leftover;
        fseek($fh, -$can_read, SEEK_CUR);
        // split lines by \n, then reverse them;
        // the last element is most likely not a complete line,
        // which is why we do not directly add it, but
        // prepend it to the data read the next time
        $split_data = array_reverse(explode("\n", $data));
        $new_lines = array_slice($split_data, 0, -1);
        $lines = array_merge($lines, $new_lines);
        $leftover = $split_data[count($split_data) - 1];
    } while (count($lines) < $line_count && ftell($fh) != 0);
    if (ftell($fh) == 0) {
        $lines[] = $leftover;
    }
    fclose($fh);
    // usually we will read too many lines; correct that here
    return array_slice($lines, 0, $line_count);
}
The following snippet worked for me.
$file = popen("tac $filename", 'r');
while ($line = fgets($file)) {
    echo $line;
}
Reference: http://laughingmeme.org/2008/02/28/reading-a-file-backwards-in-php/
If your code is not working and reporting an error, you should include the error in your post!
The reason you are getting an error is that you are trying to store the entire contents of the file in PHP's memory space.
The most efficient way to solve the problem is, as Greenisha suggests, to seek to the end of the file and then go back a bit. But Greenisha's mechanism for going back a bit is not very efficient.
Consider instead the method for getting the last few lines from a stream (i.e. where you can't seek):
while (($buffer = fgets($handle, 4096)) !== false) {
    $i1++;
    $content[$i1] = $buffer;
    unset($content[$i1 - $lines_to_keep]);
}
So if you know that your max line length is 4096, then you would:
if (4096 * $lines_to_keep < filesize($input_file)) {
    fseek($fp, -4096 * $lines_to_keep, SEEK_END);
}
Then apply the loop I described previously.
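Put together, the whole thing might look like this (a sketch; the file name and line count are placeholders):

$input_file = 'big.log'; // hypothetical path
$lines_to_keep = 20;
$content = array();
$i1 = 0;
if ($fp = fopen($input_file, 'r')) {
    // jump near the end so we only scan the tail of the file
    if (4096 * $lines_to_keep < filesize($input_file)) {
        fseek($fp, -4096 * $lines_to_keep, SEEK_END);
    }
    // rolling buffer: only the last $lines_to_keep lines survive
    while (($buffer = fgets($fp, 4096)) !== false) {
        $i1++;
        $content[$i1] = $buffer;
        unset($content[$i1 - $lines_to_keep]);
    }
    fclose($fp);
}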
Since C has some more efficient methods for dealing with byte streams, the fastest solution (on a POSIX/Unix/Linux/BSD system) would be to simply shell out to tail:
exec("tail -n " . $lines_to_keep . " filename", $last_lines);
For Linux you can do
$linesToRead = 10;
exec("tail -n{$linesToRead} {$myFileName}" , $content);
You will get an array of lines in the $content variable.
Pure PHP solution
$f = fopen($myFileName, 'r');
$maxLineLength = 1000; // real maximum length of your records
$linesToRead = 10;
fseek($f, -$maxLineLength * $linesToRead, SEEK_END); // move cursor back from the end of file
$res = array();
while (($buffer = fgets($f, $maxLineLength)) !== false) {
    $res[] = $buffer;
}
$content = array_slice($res, -$linesToRead);
If you know roughly how long the lines are, you can avoid a lot of the black magic and just grab a chunk off the end of the file.
I needed the last 15 lines from a very large log file, and altogether they were about 3000 characters. So I just grab the last 8000 bytes to be safe, then read the file as normal and take what I need from the end.
$fh = fopen($file, "r");
fseek($fh, -8192, SEEK_END);
$lines = array();
while($lines[] = fgets($fh)) {}
This is possibly even more efficient than the highest rated answer, which reads the file character by character, compares each character, and splits based on newline characters.
Here is another solution. It doesn't have line-length control in fgets(), but you can add it.
/* Read file from end line by line */
$fp = fopen(dirname(__FILE__) . '\\some_file.txt', 'r');
$lines_read = 0;
$lines_to_read = 1000;
fseek($fp, 0, SEEK_END); // go to EOF
$eol_size = 2;           // 2 on Windows, 1 everywhere else
$eol_char = "\r\n";      // mac = \r, unix = \n
while ($lines_read < $lines_to_read) {
    if (ftell($fp) == 0) break; // break on BOF (beginning of file)
    do {
        fseek($fp, -1, SEEK_CUR);         // seek 1 char at a time back from EOF
        $eol = fgetc($fp) . fgetc($fp);   // search for EOL (remove 1 fgetc if needed)
        fseek($fp, -$eol_size, SEEK_CUR); // go back past the EOL
    } while ($eol != $eol_char && ftell($fp) > 0); // check EOL and BOF
    $position = ftell($fp); // save current position
    if ($position != 0) fseek($fp, $eol_size, SEEK_CUR); // move past the EOL
    echo fgets($fp);        // read LINE or do whatever is needed
    fseek($fp, $position, SEEK_SET); // restore current position
    $lines_read++;
}
fclose($fp);
While searching for the same thing, I came across the following and thought it might be useful to others as well, so I'm sharing it here:
/* Read file from end line by line */
function tail_custom($filepath, $lines = 1, $adaptive = true) {
    // Open file
    $f = @fopen($filepath, "rb");
    if ($f === false) return false;
    // Set buffer size according to the number of lines to retrieve.
    // This gives a performance boost when reading a few lines from the file.
    if (!$adaptive) $buffer = 4096;
    else $buffer = ($lines < 2 ? 64 : ($lines < 10 ? 512 : 4096));
    // Jump to last character
    fseek($f, -1, SEEK_END);
    // Read it and adjust line number if necessary
    // (otherwise the result would be wrong if the file doesn't end with a blank line)
    if (fread($f, 1) != "\n") $lines -= 1;
    // Start reading
    $output = '';
    $chunk = '';
    // While we would like more
    while (ftell($f) > 0 && $lines >= 0) {
        // Figure out how far back we should jump
        $seek = min(ftell($f), $buffer);
        // Do the jump (backwards, relative to where we are)
        fseek($f, -$seek, SEEK_CUR);
        // Read a chunk and prepend it to our output
        $output = ($chunk = fread($f, $seek)) . $output;
        // Jump back to where we started reading
        fseek($f, -mb_strlen($chunk, '8bit'), SEEK_CUR);
        // Decrease our line counter
        $lines -= substr_count($chunk, "\n");
    }
    // While we have too many lines
    // (because of buffer size we might have read too many)
    while ($lines++ < 0) {
        // Find the first newline and remove all text before it
        $output = substr($output, strpos($output, "\n") + 1);
    }
    // Close file and return
    fclose($f);
    return trim($output);
}
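Usage is then a one-liner (the path and line count are just examples):

echo tail_custom('/var/log/system.log', 10);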
As Einstein said, everything should be made as simple as possible, but no simpler. At this point you are in need of a data structure: a LIFO data structure, or simply put, a stack.
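In that spirit, a sketch using PHP's SplStack (the file name and count are placeholders; note this still reads the whole file, so it mainly illustrates the data structure rather than saving memory):

$n = 10; // how many trailing lines we want
$stack = new SplStack();
$fh = fopen('somefile.log', 'r');
while (($line = fgets($fh)) !== false) {
    $stack->push($line); // newest line ends up on top
}
fclose($fh);
for ($i = 0; $i < $n && !$stack->isEmpty(); $i++) {
    echo $stack->pop(); // pops lines newest-first
}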
A more complete example of the "tail" suggestion above is provided here. This seems to be a simple and efficient method, thank you. Very large files should not be an issue, and a temporary file is not required.
$out = array();
$ret = null;
// capture the last 30 lines of the log file into a buffer
exec('tail -30 ' . $weatherLog, $buf, $ret);
if ($ret == 0) {
    // process the captured lines one at a time
    foreach ($buf as $line) {
        $n = sscanf($line, "%s temperature %f", $dt, $t);
        if ($n > 0) $temperature = $t;
        $n = sscanf($line, "%s humidity %f", $dt, $h);
        if ($n > 0) $humidity = $h;
    }
    printf("<tr><th>Temperature</th><td>%0.1f</td></tr>\n", $temperature);
    printf("<tr><th>Humidity</th><td>%0.1f</td></tr>\n", $humidity);
} else {
    // something bad happened
}
In the above example, the code reads the last 30 lines of text output and displays the last temperature and humidity readings in the file (that's why the printf calls are outside of the loop, in case you were wondering). The file is filled by an ESP32 which appends to the file every few minutes, even when the sensor reports only nan. Thirty lines therefore gets plenty of readings, so it should never fail. Each reading includes the date and time, so in the final version the output will include the time the reading was taken.
I have a script which, each time it is called, gets the first line of a file. Each line is known to have exactly the same length (32 alphanumeric chars) and terminates with "\r\n".
After getting the first line, the script removes it.
This is done in this way:
$contents = file_get_contents($file);
$first_line = substr($contents, 0, 32);
file_put_contents($file, substr($contents, 32 + 2)); // +2 because we also remove the \r\n
Obviously it works, but I was wondering whether there is a smarter (or more efficient) way to do this?
In my simple solution I basically read and rewrite the entire file just to take and remove the first line.
I came up with this idea yesterday:
function read_and_delete_first_line($filename) {
    $file = file($filename);
    $output = $file[0];
    unset($file[0]);
    file_put_contents($filename, $file);
    return $output;
}
There is no more efficient way to do this other than rewriting the file.
No need to create a second temporary file, nor to put the whole file in memory:
if ($handle = fopen("file", "c+")) {       // open the file in reading and editing mode
    if (flock($handle, LOCK_EX)) {         // lock the file, so no one can read or edit this file
        while (($line = fgets($handle, 4096)) !== FALSE) {
            if (!isset($write_position)) { // move each line to the previous position, except the first line
                $write_position = 0;
            } else {
                $read_position = ftell($handle);  // remember the current read position
                fseek($handle, $write_position);  // move to the previous position
                fputs($handle, $line);            // put the current line in the previous position
                fseek($handle, $read_position);   // return to the read position
                $write_position += strlen($line); // set the write position for the next loop
            }
        }
        fflush($handle);                     // write any pending change to file
        ftruncate($handle, $write_position); // drop the repeated last line
        flock($handle, LOCK_UN);             // unlock the file
    }
    fclose($handle);
}
This will shift the first line of a file; you don't need to load the entire file in memory like you do with the file() function. Maybe for small files it is a bit slower than file() (maybe, but I bet it is not), but it can manage large files without problems.
$firstline = false;
if ($handle = fopen($logFile, 'c+')) {
    if (!flock($handle, LOCK_EX)) { fclose($handle); }
    $offset = 0;
    $len = filesize($logFile);
    while (($line = fgets($handle, 4096)) !== false) {
        if (!$firstline) { $firstline = $line; $offset = strlen($firstline); continue; }
        $pos = ftell($handle);
        fseek($handle, $pos - strlen($line) - $offset);
        fputs($handle, $line);
        fseek($handle, $pos);
    }
    fflush($handle);
    ftruncate($handle, ($len - $offset));
    flock($handle, LOCK_UN);
    fclose($handle);
}
You can iterate over the file instead of putting it all in memory:
$handle = fopen("file", "r");
$first = fgets($handle,2048); #get first line.
$outfile="temp";
$o = fopen($outfile,"w");
while (!feof($handle)) {
$buffer = fgets($handle,2048);
fwrite($o,$buffer);
}
fclose($handle);
fclose($o);
rename($outfile,$file);
I wouldn't usually recommend opening up a shell for this sort of thing, but if you're doing this infrequently on really large files, there's probably something to be said for it:
$lines = `wc -l myfile` - 1;
`tail -n $lines myfile > newfile`;
It's simple, and it doesn't involve reading the whole file into memory.
I wouldn't recommend this for small files, or extremely frequent use though. The overhead's too high.
You could store positional info into the file itself. For example, the first 8 bytes of the file could store an integer. This integer is the byte offset of the first real line in the file.
So, you never delete lines anymore. Instead, deleting a line means altering the start position. fseek() to it and then read lines as normal.
The file will grow big eventually. You could periodically clean up the orphaned lines to reduce the file size.
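A sketch of that layout (the 8-byte header is the assumption described above, not any standard format):

$file = 'queue.dat'; // hypothetical file: an 8-byte zero-padded header, then the lines

$fp = fopen($file, 'c+');
$offset = (int) fread($fp, 8); // header = byte offset of the first live line
fseek($fp, 8 + $offset);
if (($first_line = fgets($fp)) !== false) {
    // "delete" the line by advancing the stored offset past it
    fseek($fp, 0);
    fwrite($fp, str_pad((string) ($offset + strlen($first_line)), 8, '0', STR_PAD_LEFT));
}
fclose($fp);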
But seriously, just use a database and don't do stuff like this.
Here's one way:
$contents = file($file, FILE_IGNORE_NEW_LINES);
$first_line = array_shift($contents);
file_put_contents($file, implode("\r\n", $contents));
There are countless other ways to do that also, but all the methods involve separating the first line somehow and saving the rest. You cannot avoid rewriting the whole file. An alternative take:
list($first_line, $contents) = explode("\r\n", file_get_contents($file), 2);
file_put_contents($file, $contents);
My problem was large files. I just needed to edit or remove the first line. This is the solution I used; it doesn't require loading the complete file into a variable. Currently it echoes, but you could always save the contents.
$fh = fopen($local_file, 'rb');
echo "add\tfirst\tline\n"; // add your new first line.
fgets($fh); // moves the file pointer to the next line.
echo stream_get_contents($fh); // flushes the remaining file.
fclose($fh);
I think this is best for any file size:
$myfile = fopen("yourfile.txt", "r") or die("Unable to open file!");
$ch=1;
while(!feof($myfile)) {
$dataline= fgets($myfile) . "<br>";
if($ch == 2){
echo str_replace(' ', ' ', $dataline)."\n";
}
$ch = 2;
}
fclose($myfile);
The solutions here didn't perform well for me.
My solution grabs the last line (not the first line; in my case it was not relevant whether I got the first or the last line) from the file and removes it from that file.
This is very quick, even with very large files (>150,000,000 lines).
function file_pop($file)
{
    if ($fp = @fopen($file, "c+")) {
        if (!flock($fp, LOCK_EX)) {
            fclose($fp);
            return false;
        }
        $pos = -1;
        $found = 0;
        while ($found < 2) {
            if (fseek($fp, $pos--, SEEK_END) < 0) { // cannot seek to position
                rewind($fp); // rewind to the beginning of the file
                break;
            }
            if (ord(fgetc($fp)) == 10) { // newline
                $found++;
            }
        }
        $lastpos = ftell($fp);    // get current position in the file
        $lastline = fgets($fp);   // get current line
        ftruncate($fp, $lastpos); // truncate file to last position
        flock($fp, LOCK_UN);      // unlock
        fclose($fp);              // close the file
        return trim($lastline);
    }
}
You could use the file() method.
To get the first line:
$content = file('myfile.txt');
echo $content[0];