Rotate text files based on size - PHP

I am trying to record hits to my website in a text file, and I want to limit the size of that file to some value. Once the limit is crossed, a new file should be created dynamically. With my current code a new file does get created once the limit is passed, but from then on all data is written to that new file only, regardless of its size.
public function storeActivityHitCount(Request $request)
{
    if($request->ajax() && isset($request->data)){
        $clientIP = request()->ip(); $i= 1;
        $date = date("Y-m-d h:i:sa");
        $data = $clientIP.', '.$date.', '.$request->data;
        $file = public_path().'/adminpanel/hits/activity/activity'.$i.'.txt';
        if(!file_exists($file)){
            fopen($file,"w+");
        }

        $filesize = filesize($file);
        if($filesize >= 76){ $i++;
            $file1 = public_path().'/adminpanel/hits/activity/activity'.$i.'.txt';
            if(!file_exists($file1)){
                fopen($file1,"w+");
            }
            $content = file_get_contents($file1);
            $content .= $data. PHP_EOL;
            $upload_success = file_put_contents($file1, $content);
        }else{
            $content = file_get_contents($file);
            $content .= $data. PHP_EOL;
            $upload_success = file_put_contents($file, $content);
        }

        if($upload_success){
            return Response::json(['status' => 'success'], 200);
        }
        return Response::json(['status' => 'failed'], 400);
    }
    abort(403);
}

Your original code, of course, only tried one other file and then wrote to it regardless of its size. You want to put that logic into a repeating structure. In other words, you want to keep looking for a different file while you keep finding full ones.
public function storeActivityHitCount(Request $request)
{
    if ($request->ajax() && isset($request->data)) {
        $clientIP = request()->ip();
        $date = date('Y-m-d h:i:sa');
        $data = $clientIP . ', ' . $date . ', ' . $request->data;

        $i = 1;
        $file = public_path() . '/adminpanel/hits/activity/activity' . $i . '.txt';
        // keep moving to the next file while the current one exists and is full
        while (file_exists($file) && filesize($file) >= 76 && $i <= 20) {
            $i++;
            $file = public_path() . '/adminpanel/hits/activity/activity' . $i . '.txt';
        }

        if (file_exists($file) && filesize($file) >= 76) {
            // the loop got away from us
            // do something?
            $upload_success = false;
        } else {
            $upload_success = file_put_contents($file, $data . PHP_EOL, \FILE_APPEND);
        }

        if ($upload_success) {
            return Response::json(['status' => 'success'], 200);
        }
        return Response::json(['status' => 'failed'], 400);
    }
    abort(403);
}
I put an upper limit of 20 iterations on the loop; you usually don't want a while loop without some kind of escape mechanism.
file_put_contents will create a file that doesn't exist, so you didn't need fopen (and you weren't calling fclose anyway). Furthermore, if you pass the FILE_APPEND flag, it will append to the existing file; there is no need to read the contents and concatenate onto them.
Try to keep your code consistent and readable. Multiple commands on one line, inconsistent whitespace around control structures and operators, inconsistent indentation: all these things end up making you work harder than you have to.
And, of course, this would all be better handled by a standard system tool like logrotate, which is probably already running on your server.

I would isolate the job of getting the activity log file into a function like this:
function getAvailableActivityLogFile(){
    $looking = true;
    $i = 0;
    while($looking){
        $i++;
        $file_path = public_path() . "/adminpanel/hits/activity/activity{$i}.txt";
        // If the file does not exist, or is smaller than 76 bytes, we have the right candidate
        if(!file_exists($file_path) || filesize($file_path) < 76){
            $looking = false;
            return fopen($file_path, 'a+');
        }
        // Otherwise keep looking on the next iteration. You could also add a cap
        // on loop iterations, or a maximum number of files to look for.
    }
}
Then you can use this function to get the next available file without spreading the availability logic around. In my version above you should use fwrite() to write to the file using the file pointer returned by the function.
With the a+ mode, you get a pointer that appends content to the file on every new fwrite to it.
You can also write the function to retrieve a path instead of a pointer.
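For instance, a minimal usage sketch inside storeActivityHitCount(), assuming $data has been built as in the original code:

$handle = getAvailableActivityLogFile();
$upload_success = fwrite($handle, $data . PHP_EOL);
fclose($handle);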

Related

How do I split a 6 GB CSV file into chunks using PHP

I'm a beginner-level developer learning PHP. The task I need to do is upload a 6 GB CSV file, which contains data, into the database. I need to access the data, i.e. read the file through the controller.php file, then split that huge CSV file into 10,000-row output CSV files and write the data into those output files. I have been at this task for a week already and haven't figured it out yet. Would you please help me solve this issue?
<?php

namespace App\Http\Controllers;

use Illuminate\Queue\SerializesModels;
use App\User;
use DateTime;
use Illuminate\Http\Request;
use Storage;
use Validator;
use GuzzleHttp\Client;
use GuzzleHttp\RequestOptions;
use Queue;
use App\model;

class Name extends Controller
{
    public function Post(Request $request)
    {
        if($request->hasfile('upload')){
            ini_set('auto_detect_line_endings', TRUE);
            $main_input = $request->file('upload');
            $main_output = 'output';
            $filesize = 10000;
            $input = fopen($main_input,'r');
            $rowcount = 0;
            $filecount = 1;
            $output = '';
            // echo "here1";
            while(!feof($input)){
                if(($rowcount % $filesize) == 0){
                    if($rowcount > 0) {
                        fclose($output);
                    }
                    $output = fopen(storage_path(). "/tmp/".$main_output.$filecount++ . '.csv','w');
                }
                $data = fgetcsv($input);
                print_r($data);
                if($data) {
                    fputcsv($output, $data);
                }
                $rowcount++;
            }
            fclose($output);
        }
    }
}
Maybe it's because you are creating a new $output file handle on each iteration.
I've made some adjustments so that we only open a file when $rowcount is 0 and close it when $filesize is reached. $rowcount also has to be reset to 0 each time we close the file.
public function Post(Request $request)
{
    if($request->hasfile('upload')){
        ini_set('auto_detect_line_endings', TRUE);
        $main_input = $request->file('upload');
        $main_output = 'output';
        $filesize = 10000;
        $input = fopen($main_input,'r');
        $rowcount = 0;
        $filecount = 1;
        $output = '';
        while(!feof($input)){
            if ($rowcount == 0) {
                $output = fopen(storage_path() . "/tmp/" . $main_output . $filecount++ . '.csv', 'w');
            }
            if(($rowcount % $filesize) == 0){
                if($rowcount > 0) {
                    fclose($output);
                    $rowcount = 0;
                    continue;
                }
            }
            $data = fgetcsv($input);
            if($data) {
                fputcsv($output, $data);
            }
            $rowcount++;
        }
        fclose($output);
    }
}
Here is a working example of splitting a CSV file by the number of lines (defined by $numberOfLines). Just set your path in $filePath and run the script in a shell, for example:
php -f convert.php
script code:
convert.php
<?php
$filePath = 'data.csv';
$numberOfLines = 10000;

$file = new SplFileObject($filePath);

// get the header of the csv
$header = $file->fgets();

$outputBuffer = '';
$outputFileNamePrefix = 'datasplit-';
$readLinesCount = 1;
$readLinesTotalCount = 1;
$suffix = 0;
$outputBuffer .= $header;

while ($currentLine = $file->fgets()) {
    $outputBuffer .= $currentLine;
    $readLinesCount++;
    $readLinesTotalCount++;
    if ($readLinesCount >= $numberOfLines) {
        $outputFilename = $outputFileNamePrefix . $suffix . '.csv';
        file_put_contents($outputFilename, $outputBuffer);
        echo 'Wrote ' . $readLinesCount . ' lines to: ' . $outputFilename . PHP_EOL;
        $outputBuffer = $header;
        $readLinesCount = 0;
        $suffix++;
    }
}

// write the remainder of the output buffer if it is not empty
if ($outputBuffer !== $header) {
    $outputFilename = $outputFileNamePrefix . $suffix . '.csv';
    file_put_contents($outputFilename, $outputBuffer);
    echo 'Wrote (last time) ' . $readLinesCount . ' lines to: ' . $outputFilename . PHP_EOL;
    $outputBuffer = '';
    $readLinesCount = 0;
}
You will not be able to convert that amount of data in one PHP execution if it is run from the web, because of the maximum execution time of PHP scripts, which is usually 30-60 seconds - and there is a reason for that, so don't even try to extend it to some huge number. If you want your script to run for hours you need to call it from the command line, but you can also call it in a similar way from another script (for example, the controller you have).
You do that this way:
exec('php -f convert.php');
and that's it.
The controller you have will not be able to tell whether all the data was converted, because it will be terminated before that happens. What you can do is write code in convert.php that updates some field in the database, and another controller in your application can read that and print the progress of the running convert.php to the user.
The other approach is to create a job (or jobs) that you can put in a queue, to be run by a job manager process with workers that take care of the conversion, but I think that would be overkill for your need.
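If you did go that route, a rough Laravel-flavoured sketch of such a job (the class name and wiring are assumptions, not part of the question):

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

class SplitCsvJob implements ShouldQueue
{
    use Queueable;

    public function handle()
    {
        // the same long-running call as above, now done by a queue worker
        exec('php -f convert.php');
    }
}

It would be dispatched from a controller with something like $this->dispatch(new \App\Jobs\SplitCsvJob());.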
Keep in mind that if you split something in one place and join it in another, you may get something wrong in the process. A method that assures you that the data was split, transferred, and joined successfully is to calculate a hash, e.g. SHA-1, of the whole 6 GB file before splitting, send that hash to the destination where all the small parts need to be combined, combine them into one 6 GB file, calculate the hash of that file, and compare it with the one that was sent. Keep in mind that after splitting, each small part of your data gets its own header row so that it is a valid, easy-to-import CSV file, whereas the original file has only one header row.
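A minimal sketch of that check (the paths are placeholders; it assumes the re-joined file is byte-identical to the original, i.e. the extra per-part header rows were stripped when combining):

// on the source machine, before splitting
$sourceHash = sha1_file('/path/to/original.csv');

// on the destination, after combining the parts back into one file
$joinedHash = sha1_file('/path/to/rejoined.csv');

if ($sourceHash === $joinedHash) {
    echo 'Transfer verified' . PHP_EOL;
} else {
    echo 'Hash mismatch - something was lost or corrupted' . PHP_EOL;
}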

PHP file stream data coming back in Binary

So I have a function that I CANNOT get to work. What's happening is it's coming back with data that's in binary and not the actual file. However, in the binary that I get back, the name of the particular file is included in the string. The data looks like this:
PK‹Mpt-BR/finalinport.html³)°S(ÉÈ,V¢‘ŒT…‚ü¢’ÒôÒÔâT…´Ì¼Ä ™” “I,QðT(ÏÌÉQ(-ÈÉOLQH„¨‡*Ê/B×]’X”žZ¢`£_`ÇPK.Ùô LePK‹M.Ùô Lept-BR/finalinport.htmlPKD
pt-BR is the directory and 'finalinport.html' is the file that I am trying to have downloaded. If I replace the second parameter of fwrite with just a plain string, then everything works and I get the string I wrote in a file inside the zip. But not when I'm using Stream->getContents(), which leads me to believe that something is going on with the stream. I cannot wrap my head around what could be happening. I've been on this for a week and a half now, so any suggestions would be great.
public function downloadTranslations(Request $request, $id)
{
    $target_locales = $request->input("target_locale");
    $has_source = $request->input("source");
    $client = new API(Auth::user()->access_token, ZKEnvHelper::env('API_URL', 'https://myaccount.com'));
    $document = Document::find($id);
    $job_document = JobDocument::where('document_id', $id)->first();
    $job = Job::find($job_document->job_id);

    $file = tempnam('tmp', 'zip');
    $zip = new ZipArchive();
    $zip->open($file, ZipArchive::OVERWRITE);

    $name_and_extension = explode('.', $document->name);

    if($target_locales == null){
        $target_locales = [];
        foreach ($job->target_languages as $target_language) {
            $target_locales[] = $target_language['locale'];
        }
    }

    foreach($target_locales as $target_locale){
        $translation = $client->downloadDocument($document->document_id, $target_locale);
        $filename = $name_and_extension[0] . ' (' . $target_locale . ').' . $name_and_extension[1];
        if($translation->get('message') == 'true') {
            // API brings back file in stream type
            $stream = Stream::factory($translation->get('body'));
            $newFile = tempnam(sys_get_temp_dir(), 'lingo');
            $handle = fopen($newFile, 'w');
            fwrite($handle, $stream->getContents());
            $zip->addFile($newFile, 'eh.html');
            fclose($handle);
        }
        else if($translation->get('message') == 'false'){
            // API brings back file contents
            $zip->addFromString($filename, $translation->get('body'));
        }
    }

    $translation = $client->downloadDocument($document->document_id, null, null);
    $filename = $name_and_extension[0] . ' (Source).' . $name_and_extension[1];
    $zip->addFromString($filename, $translation->get('body'));

    sleep(10);
    $zip->close();

    return response()->download($file, $name_and_extension[0] . '.zip')->deleteFileAfterSend(true);
}
I'm unfamiliar with PHP streams and I don't have a debugger set up, so I keep thinking it has something to do with how I am handling the stream. In the other condition (the else if) the data comes back as the content of the file (a string), while in the if branch it comes back as a stream resource, which I am unfamiliar with.
Stream::factory
is used in Guzzle 5. In Guzzle 6 the streams layer moved to PSR-7, so use \GuzzleHttp\Psr7\stream_for() to create the stream instead.
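For illustration, a hedged sketch of both variants (the Guzzle 6 line assumes the guzzlehttp/psr7 package is installed):

// Guzzle 5: GuzzleHttp\Stream\Stream
$stream = \GuzzleHttp\Stream\Stream::factory($translation->get('body'));

// Guzzle 6: streams are PSR-7 objects, created via guzzlehttp/psr7
$stream = \GuzzleHttp\Psr7\stream_for($translation->get('body'));

// the rest of the code can stay the same either way:
fwrite($handle, $stream->getContents());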

Text file contents will echo but not return php

I've been working on this for the last few weeks and can't find an alternate route. What I need to do is return the contents of a text file after the file has been read. I have two different logs that write errors to text files. The first log returns the correct variable that I ask for, but the second, even though I use the exact same method to fetch the variable, doesn't return anything. If I echo the variable then the correct string is displayed, but the variable returns nothing. Here is the function:
function GetNoticeLog($strDate){
    $logdate = preg_replace("/[^0-9]/", "_", $strDate );
    $strFileName = realpath('debuglogs/enotice_logs').'/ENOTICELOG_' . $logdate . '.txt';
    if(is_readable($strFileName)){
        $file = fopen($strFileName,"r");
        $contents = fread($file, filesize($strFileName));
        $fclose($file);
        return nl2br($contents);
    }
    else if(!is_readable($strFileName)){
        echo $strFileName." is unreadable";
    }
}
My question is why this function returns the needed string when executed for one log, but for the other log the content only shows up when echoed.
Try changing
$fclose($file);
to this
fclose($file);
With $fclose undefined, the original line is a variable function call, so PHP aborts with a fatal error ("Function name must be a string") before the return statement is ever reached.
I was able to get the code to work on my server. The only change I made here is the path to the file which is in the same directory as the PHP script.
<?php
function GetNoticeLog($strDate){
    $logdate = preg_replace("/[^0-9]/", "_", $strDate );
    //$strFileName = realpath('debuglogs/enotice_logs').'/ENOTICELOG_' . $logdate . '.txt';
    $strFileName = "blah.txt";
    if(is_readable($strFileName)){
        $file = fopen($strFileName,"r");
        $contents = fread($file, filesize($strFileName));
        fclose($file);
        return nl2br($contents);
    }
    else if(!is_readable($strFileName)){
        echo $strFileName." is unreadable";
    }
}

$stuff = GetNoticeLog("whatever");
echo $stuff;
?>

Tailing a log file and writing the results to a new file

I'm not sure how to word this, so I'll type it out and then edit and answer any questions that come up.
Currently on my local network device (PHP4 based) I'm using this to tail a live system log file: http://commavee.com/2007/04/13/ajax-logfile-tailer-viewer/
This works well; every second it loads an external page (logfile.php) that does a tail -n 100 logfile.log. The script doesn't do any buffering, so the results it displays onscreen are the last 100 lines from the log file.
The logfile.php contains :
<?
// logtail.php
$cmd = "tail -10 /path/to/your/logs/some.log";
exec("$cmd 2>&1", $output);
foreach($output as $outputline) {
    echo ("$outputline\n");
}
?>
This part is working well.
I have adapted the logfile.php page to write the $outputline to a new text file, simply using fwrite($fp,$outputline."\n");
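For reference, a minimal sketch of that adaptation (the copy path and file names are assumptions):

<?
// logtail.php, adapted to also append each tailed line to a new file
$cmd = "tail -n 100 /path/to/your/logs/some.log";
exec("$cmd 2>&1", $output);
$fp = fopen("/path/to/your/logs/tail_copy.log", "a");
foreach($output as $outputline) {
    echo ("$outputline\n");
    fwrite($fp, $outputline."\n");
}
fclose($fp);
?>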
Whilst this works I am having issues with duplication in the new file that is created.
Obviously, each time tail -n 100 runs it produces results, and the next time it runs it could produce some of the same lines; as this repeats I can end up with multiple duplicated lines in the new text file.
I can't directly compare the line I'm about to write to previous lines as there could be identical matches.
Is there any way I can compare this current block of 100 lines with the previous block and then only write the lines that don't match? Again, there's the possible issue that blocks A and B will contain identical lines that are both needed...
Is it possible to update logfile.php to note the position it last looked at in my logfile, then only read the next 100 lines from there and write those to the new file?
The log file could be up to 500 MB, so I don't want to read it all in each time.
Any advice or suggestions welcome..
Thanks
UPDATE # 16:30
I've sort of got this working using:
$file = "/logs/syst.log";
$handle = fopen($file, "r");

if(isset($_SESSION['ftell'])) {
    clearstatcache();
    fseek($handle, $_SESSION['ftell']);
    while ($buffer = fgets($handle)) {
        echo $buffer."<br/>";
        #ob_flush(); #flush();
    }
    fclose($handle);
    #$_SESSION['ftell'] = ftell($handle);
} else {
    fseek($handle, -1024, SEEK_END);
    fclose($handle);
    #$_SESSION['ftell'] = ftell($handle);
}
This seems to work, but it loads the entire file first and then just the updates.
How would I get it to start with the last 50 lines and then just the updates?
Thanks :)
UPDATE 04/06/2013
Whilst this works it's very slow with large files.
I've tried this code and it seems faster, but it doesn't just read from where it left off.
function last_lines($path, $line_count, $block_size = 512){
    $lines = array();
    // we will always have a fragment of a non-complete line
    // keep this in here till we have our next entire line.
    $leftover = "";

    $fh = fopen($path, 'r');
    // go to the end of the file
    fseek($fh, 0, SEEK_END);
    do{
        // need to know whether we can actually go back
        // $block_size bytes
        $can_read = $block_size;
        if(ftell($fh) < $block_size){
            $can_read = ftell($fh);
        }
        // go back as many bytes as we can
        // read them to $data and then move the file pointer
        // back to where we were.
        fseek($fh, -$can_read, SEEK_CUR);
        $data = fread($fh, $can_read);
        $data .= $leftover;
        fseek($fh, -$can_read, SEEK_CUR);
        // split lines by \n. Then reverse them,
        // now the last line is most likely not a complete
        // line which is why we do not directly add it, but
        // append it to the data read the next time.
        $split_data = array_reverse(explode("\n", $data));
        $new_lines = array_slice($split_data, 0, -1);
        $lines = array_merge($lines, $new_lines);
        $leftover = $split_data[count($split_data) - 1];
    } while(count($lines) < $line_count && ftell($fh) != 0);

    if(ftell($fh) == 0){
        $lines[] = $leftover;
    }
    fclose($fh);
    // Usually, we will read too many lines, correct that here.
    return array_slice($lines, 0, $line_count);
}
Is there any way this can be amended so it will read from the last known position?
Thanks
Introduction
You can tail a file by tracking the last position.
Example
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
$data = $tail->tail(100) ;
// Save $data to new file
TailLog is a simple class I wrote for this task. Here is a simple example to show it's actually tailing the file.
Simple Test
$file = __DIR__ . "/a.log";
$tail = new TailLog($file);
// Some Random Data
$data = array_chunk(range("a", "z"), 3);
// Write Log
file_put_contents($file, implode("\n", array_shift($data)));
// First Tail (2) Run
print_r($tail->tail(2));
// Run Tail (2) Again
print_r($tail->tail(2));
// Write Another data to Log
file_put_contents($file, "\n" . implode("\n", array_shift($data)), FILE_APPEND);
// Call Tail Again after writing Data
print_r($tail->tail(2));
// See the full content
print_r(file_get_contents($file));
Output
// First Tail (2) Run
Array
(
[0] => c
[1] => b
)
// Run Tail (2) Again
Array
(
)
// Call Tail Again after writing Data
Array
(
[0] => f
[1] => e
)
// See the full content
a
b
c
d
e
f
Real Time Tailing
while(true) {
$data = $tail->tail(100);
// write data to another file
sleep(5);
}
Note: tailing 100 lines does not mean it will always return 100 lines. It returns the new lines added; 100 is just the maximum number of lines to return. This might not keep up where you have heavy logging of more than 100 lines per second, if that ever happens.
Tail Class
class TailLog {
    private $file;
    private $data;
    private $timeout = 5;
    private $lock;

    function __construct($file) {
        $this->file = $file;
        $this->lock = new TailLock($file);
    }

    public function tail($lines) {
        $pos = - 2;
        $t = $lines;
        $fp = fopen($this->file, "r");
        $break = false;
        $line = "";
        $text = array();

        while($t > 0) {
            $c = "";
            // Search for the end of a line
            while($c != "\n" && $c != PHP_EOL) {
                if (fseek($fp, $pos, SEEK_END) == - 1) {
                    $break = true;
                    break;
                }
                if (ftell($fp) < $this->lock->getPosition()) {
                    break;
                }
                $c = fgetc($fp);
                $pos --;
            }
            if (ftell($fp) < $this->lock->getPosition()) {
                break;
            }
            $t --;
            $break && rewind($fp);
            $text[$lines - $t - 1] = fgets($fp);
            if ($break) {
                break;
            }
        }

        // Move to the end
        fseek($fp, 0, SEEK_END);
        // Save the position
        $this->lock->save(ftell($fp));
        // Close the file
        fclose($fp);
        return array_map("trim", $text);
    }
}
Tail Lock
class TailLock {
    private $file;
    private $lock;
    private $data;

    function __construct($file) {
        $this->file = $file;
        $this->lock = $file . ".tail";
        touch($this->lock);
        if (! is_file($this->lock))
            throw new Exception("can't create lock file");

        $this->data = json_decode(file_get_contents($this->lock));

        // Check that the lock file contains valid JSON and that the data in
        // the original file has not been deleted - you expect the data to
        // increase, not decrease
        if (! $this->data || $this->data->size > filesize($this->file)) {
            $this->reset();
        }
    }

    function getPosition() {
        return $this->data->position;
    }

    function reset() {
        $this->data = new stdClass();
        $this->data->size = filesize($this->file);
        $this->data->modification = filemtime($this->file);
        $this->data->position = 0;
        $this->update();
    }

    function save($pos) {
        $this->data = new stdClass();
        $this->data->size = filesize($this->file);
        $this->data->modification = filemtime($this->file);
        $this->data->position = $pos;
        $this->update();
    }

    function update() {
        // 128 = JSON_PRETTY_PRINT
        return file_put_contents($this->lock, json_encode($this->data, 128));
    }
}
Not really clear on how you want to use the output, but would something like this work?
$dat = is_file("tracker.dat") ? (int) file_get_contents("tracker.dat") : 0;
$fp = fopen("/logs/syst.log", "r");
fseek($fp, $dat, SEEK_SET);
ob_start();
// alternatively you can do a while fgets if you want to interpret the file or do something
fpassthru($fp);
$pos = ftell($fp);
fclose($fp);
echo nl2br(ob_get_clean());
file_put_contents("tracker.dat", $pos);
tracker.dat is just a text file that contains where the read position was after the previous run. I'm just seeking to that position and piping the rest to the output buffer.
Use tail -c <number of bytes> instead of a number of lines, and then check the file size. The rough idea is:
$old_file_size = 0;
$max_bytes = 512;

function last_lines($path) {
    global $old_file_size, $max_bytes;
    $new_file_size = filesize($path);
    $pending_bytes = $new_file_size - $old_file_size;
    if ($pending_bytes > $max_bytes) $pending_bytes = $max_bytes;
    exec("tail -c " . $pending_bytes . " " . escapeshellarg($path), $output);
    $old_file_size = $new_file_size;
    return $output;
}
The advantage is that you can do away with all the special processing stuff and get good performance. The disadvantage is that you have to split the output into lines yourself, and you could end up with unfinished lines. But this isn't a big deal; you can easily work around it by omitting the last line from the output (and subtracting that last line's byte count from $old_file_size).
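For instance, a minimal sketch of that workaround (using shell_exec() rather than exec(), since exec() strips the newlines you need for byte accounting):

$raw = shell_exec("tail -c " . $pending_bytes . " " . escapeshellarg($path));
$lines = explode("\n", $raw);
$partial = array_pop($lines);  // empty string if the chunk ended on a newline
// hold the unfinished line back so the next run re-reads it
$old_file_size = $new_file_size - strlen($partial);
// $lines now contains only complete lines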

File manipulation: search and replace in a CSV-like file - PHP

I need a script that finds and then replaces a certain line in a CSV-like file.
The file looks like this:
18:110327,98414,127500,114185,121701,89379,89385,89382,92223,89388,89366,89362,89372,89369
21:82297,79292,89359,89382,83486,99100
98:110327,98414,127500,114185,121701
24:82297,79292,89359,89382,83486,99100
Now I need to change the line for category 21.
This is what I've got so far.
The first 2 to 4 digits followed by ':' are a category number. Every number after that (followed by a ',') is the id of a page.
I access the ids I want (i.e. 82297 and so on) from the database.
//test 2
$sQry = "SELECT * FROM artikelen WHERE adviesprijs <>''";
$rQuery = mysql_query ($sQry);
if ( $rQuery === false )
{
    echo mysql_error ();
    exit ;
}

$aResult = array ();
while ( $r = mysql_fetch_assoc ($rQuery) )
{
    $aResult[] = $r['artikelid'];
}

$replace_val_dirty = join(",", $aResult);
$replace_val = "21:" . $replace_val_dirty;

// file location
$file = '../../data/articles/index.lst';

// read the file index.lst
$file1 = file_get_contents($file);

// strip the earlier article ids from index.lst
$file3 = '../../data/articles/index_grp21.lst';
$file3_contents = file_get_contents($file3);
$file2 = str_replace($file3_contents, $replace_val, $file1);

if (file_exists($file)) {
    echo "The file $file exists";
} else {
    echo "The file $file does not exist";
}
if (file_exists($file3)) {
    echo "The file $file3 exists";
} else {
    echo "The file $file3 does not exist";
}

// replace the data
$file_val = $file2;

// write the file
file_put_contents($file, $file_val);

// write index_grp98.lst
file_put_contents($file3, $replace_val);

mail('info#', 'Aanbieding catergorie geupdate', 'Aanbieding catergorie geupdate');
Can anyone point me in the right direction to do this?
Any help would be appreciated.
You need to open the original file and go through each line. When you find the line to be changed, change that line.
As you cannot edit the file while you read it, you write to a temporary file: you copy over line by line, and when a line needs a change, you change that line.
When you're done with the whole file, you copy over the temporary file to the original file.
Example Code:
$path = 'file';
$category = 21;
$articles = [111182297, 79292, 89359, 89382, 83486, 99100];
$prefix = $category . ':';
$prefixLen = strlen($prefix);
$newLine = $prefix . implode(',', $articles);
This part is just setting up the basics: The category, the IDs of the articles and then building the related strings.
Now opening the file to change the line in:
$file = new SplFileObject($path, 'r+');
$file->setFlags(SplFileObject::DROP_NEW_LINE | SplFileObject::SKIP_EMPTY);
$file->flock(LOCK_EX);
The file is locked so that no other process can edit the file while it gets changed. Next to that file, the temporary file is needed, too:
$temp = new SplTempFileObject(4096);
After setting up the two files, let's go over each line in $file and compare if it needs to be replaced:
foreach ($file as $line) {
    $isCategoryLine = substr($line, 0, $prefixLen) === $prefix;
    if ($isCategoryLine) {
        $line = $newLine;
    }
    $temp->fwrite($line."\n");
}
Now the temporary file already contains the changed line. Take note that I used the UNIX type of EOL (End Of Line) character (\n); depending on your concrete file type this may vary.
So now, the temporary file needs to be copied over to the original file. Let's rewind the file, truncate it and then write all lines again:
$file->seek(0);
$file->ftruncate(0);
foreach ($temp as $line) {
    $file->fwrite($line);
}
And finally you need to lift the lock:
$file->flock(LOCK_UN);
And that's it, in $file, the line has been replaced.
Example at once:
$path = 'file';
$category = 21;
$articles = [111182297, 79292, 89359, 89382, 83486, 99100];

$prefix = $category . ':';
$prefixLen = strlen($prefix);
$newLine = $prefix . implode(',', $articles);

$file = new SplFileObject($path, 'r+');
$file->setFlags(SplFileObject::DROP_NEW_LINE | SplFileObject::SKIP_EMPTY);
$file->flock(LOCK_EX);

$temp = new SplTempFileObject(4096);

foreach ($file as $line) {
    $isCategoryLine = substr($line, 0, $prefixLen) === $prefix;
    if ($isCategoryLine) {
        $line = $newLine;
    }
    $temp->fwrite($line."\n");
}

$file->seek(0);
$file->ftruncate(0);

foreach ($temp as $line) {
    $file->fwrite($line);
}

$file->flock(LOCK_UN);
Should work with PHP 5.2 and above; I used PHP 5.4 array syntax, so replace [111182297, ...] with array(111182297, ...) in case you're using PHP 5.2 / 5.3.
