PHP Piping Data with fopen

I have a piece of code, and the issue is that the file "data" is over 8 GB, so reading it all at once is very memory intensive. I want to reduce RAM usage and saw that fopen would be ideal; however, how could I explode this data?
This is my current code:
$data = file_get_contents("data");
$data = explode("|", $data);
foreach ($data as $d) { // rest of code
Theoretically, I need to open a stream, read it piece by piece, and close it. How would I go about this?
I've tried using fopen rather than file_get_contents, but errors started popping up, so I'm doing something wrong and would really like to learn.

You can use stream_get_line() to read your data block by block, with | as the delimiter character; see the PHP manual page for stream_get_line().
$fh = fopen('data', 'r'); // open the file in read-only mode
while (($d = stream_get_line($fh, 1000, '|')) !== false) // read until the next '|' (strict comparison so a "0" block doesn't stop the loop)
{
echo $d . PHP_EOL; // display one block of data
}
fclose($fh); // close the file
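If you want to keep your existing foreach shape, one option (a sketch, not part of the original answer; it assumes PHP 5.5+ for generators) is to wrap the same streaming read in a generator:
function pipe_blocks($path, $delimiter = '|', $maxLength = 1000) {
    $fh = fopen($path, 'r');
    while (($d = stream_get_line($fh, $maxLength, $delimiter)) !== false) {
        yield $d; // hand out one block at a time; the 8 GB file never sits in RAM
    }
    fclose($fh);
}
foreach (pipe_blocks('data') as $d) {
    // rest of code, unchanged
}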

Related

PHP File Handling (Download Counter): reading file data as a number, writing it as that plus 1

I'm trying to make a download counter in a website for a video game in PHP, but for some reason, instead of incrementing the contents of the downloadcount.txt file by 1, it takes the number, increments it, and appends it to the end of the file. How could I just make it replace the file contents instead of appending it?
Here's the source:
<?php
ob_start();
$newURL = 'versions/v1.0.0aplha/Dungeon1UP.zip';
//header('Location: '.$newURL);
//increment download counter
$file = fopen("downloadcount.txt", "w+") or die("Unable to open file!");
$content = fread($file,filesize("downloadcount.txt"));
echo $content;
$output = (int) $content + 1;
//$output = 'test';
fwrite($file, $output);
fclose($file);
ob_end_flush();
?>
The number in the file is supposed to increase by one every time, but instead, it gives me numbers like this: 101110121011101310111012101110149.2233720368548E+189.2233720368548E+189.2233720368548E+18
As correctly pointed out in one of the comments, for your specific case you can use fseek($file, 0) right before writing, such as:
fseek($file, 0);
fwrite($file, $output);
Or, even simpler, you can rewind($file) before writing; this will ensure that the next write happens at byte 0, i.e. the start of the file.
The reason the data ends up appended is that after the fread() call the file pointer sits at the end of what was just read, so the following fwrite() continues from there. Note also that "w+" truncates the file when it is opened, which is not what you want here; open the file in read/write mode, "r+", so the existing contents are kept:
fopen("downloadcount.txt", "r+")
Just make sure the file exists before writing!
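For example, a minimal guard before opening (a sketch; the filename is taken from the question):
// create the counter file with "0" if it doesn't exist yet,
// so that "r+" can open it without truncating anything
if (!file_exists("downloadcount.txt")) {
    file_put_contents("downloadcount.txt", "0");
}
$file = fopen("downloadcount.txt", "r+");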
Please see fopen modes here:
https://www.php.net/manual/en/function.fopen.php
And working code here:
https://bpaste.net/show/iasj
It will be much simpler to use file_get_contents/file_put_contents:
// update with more precise path to file:
$content = file_get_contents(__DIR__ . "/downloadcount.txt");
echo $content;
$output = (int) $content + 1;
// by default `file_put_contents` overwrites file content
file_put_contents(__DIR__ . "/downloadcount.txt", $output);
That appending comes down to where the file pointer is when you write rather than typecasting, but either way I would not encourage you to handle counts the file way. To count the number of downloads for a file, it's better to update a database row inside a transaction to handle concurrency properly, as doing it the file way can compromise accuracy.
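To illustrate the database route, here is a minimal sketch; the SQLite file counter.db and the downloads(name, hits) table are made-up names for illustration, not something from the question:
$pdo = new PDO('sqlite:' . __DIR__ . '/counter.db');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction(); // the transaction keeps concurrent increments accurate
$stmt = $pdo->prepare('UPDATE downloads SET hits = hits + 1 WHERE name = ?');
$stmt->execute(['Dungeon1UP.zip']);
$pdo->commit();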
You can get the content and check whether the file has data. If not, initialise it to 0, then just replace the content.
$fileContent = file_get_contents("downloadcount.txt");
$content = (!empty($fileContent) ? $fileContent : 0);
$content++;
file_put_contents('downloadcount.txt', $content);
Then check $content, or look directly at the content inside the file.

PHP Array Processing Ability Decreases

I need help processing files holding about 46k lines or more than 30MB of data.
My original idea was to open the file and turn each line into an array element. This worked the first time as the array held about 32k values total.
The second time the process was repeated, the array only held 1011 elements, and finally, the third time, it could only hold 100.
I'm confused and don't know much about the backend array processes. Can someone explain what is happening and fix the code?
function file_to_array($cvsFile){
$handle = fopen($cvsFile, "r");
$path = fread($handle, filesize($cvsFile));
fclose($handle);
//Turn the file into an array and separate lines to elements
$csv = explode(",", $path);
//Remove common double spaces
foreach ($csv as $key => $line){
$csv[$key] = str_replace(' ', '', str_getcsv($line));
}
array_filter($csv);
//get the row count for the file and array
$rows = count($csv);
$filerows = count(file($cvsFile)); //this no longer works
echo "File has $filerows and array has $rows";
return $csv;
}
The approach here can be split into two parts:
Optimized file reading and processing
Proper storage solution
Optimized file processing can be done like so:
$handle = fopen($cvsFile, "r");
$csv = [];
$rowsSucceed = 0;
$rowsFailed = 0;
if ($handle) {
    while (($line = fgets($handle)) !== false) { // reading the file line by line
        // Process the CSV line, check that it parsed correctly,
        // and count as you go
        if (trim($line) !== '') {
            $csv[] = str_getcsv($line);
            $rowsSucceed++;
        } else {
            $rowsFailed++;
        }
    }
    fclose($handle);
} else {
    // Error handling
}
$totalLines = $rowsSucceed + $rowsFailed;
Also, you can avoid array_filter() simply by not adding a processed line when it is empty, as the loop above does.
That keeps memory usage down during script execution.
Proper storage
Proper storage is needed here for performing operations on this amount of data. Repeated file reads are inefficient and expensive; using a simple file-based database like SQLite can help you a lot and improve the overall performance of your script.
For this purpose you should probably load your CSV directly into the database and then perform the count on the parsed data, avoiding excessive file line counts and re-reads.
It also gives you the further advantage of working with the data without keeping it all in memory.
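As a sketch of that idea (the data.db file and single-column schema are made up for illustration; adapt them to your real CSV columns):
$pdo = new PDO('sqlite:' . __DIR__ . '/data.db');
$pdo->exec('CREATE TABLE IF NOT EXISTS rows (line TEXT)');
$insert = $pdo->prepare('INSERT INTO rows (line) VALUES (?)');
$handle = fopen($cvsFile, 'r');
$pdo->beginTransaction(); // a single transaction makes bulk inserts much faster
while (($line = fgets($handle)) !== false) {
    $insert->execute([trim($line)]);
}
$pdo->commit();
fclose($handle);
echo $pdo->query('SELECT COUNT(*) FROM rows')->fetchColumn(); // count in SQL, not in PHP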
Your question says you want to "turn each line into an array element", but that is definitely not what you are doing. The code is quite clear: it reads the entire file into $path and then uses explode() to make one massive flat array of every element on every line. Then later you're trying to run str_getcsv() on each item, which of course isn't going to work; you've already exploded all the commas away.
Looping over the file using fgetcsv() makes more sense:
function file_to_array($cvsFile) {
$filerows = 0;
$csv = []; // initialise so count() below works even for an empty file
$handle = fopen($cvsFile, "r");
while ($line = fgetcsv($handle)) {
$filerows++;
// skip empty lines
if ($line[0] === null) {
continue;
}
//Remove common double spaces
$csv[] = str_replace(' ', '', $line);
}
//get the row count for the file and array
$rows = count($csv);
echo "File has $filerows and array has $rows";
fclose($handle);
return $csv;
}

Append at the beginning of the file in PHP [duplicate]

This question already has answers here:
Need to write at beginning of file with PHP
(10 answers)
Closed 9 years ago.
Hi, I want to append a row at the beginning of a file using PHP.
Let's say, for example, the file contains the following content:
Hello Stack Overflow, you are really helping me a lot.
And now I want to add a row on top of the previous one, like this:
www.stackoverflow.com
Hello Stack Overflow, you are really helping me a lot.
This is the code that I have at the moment in a script:
$fp = fopen($file, 'a+') or die("can't open file");
$theOldData = fread($fp, filesize($file));
fclose($fp);
$fp = fopen($file, 'w+') or die("can't open file");
$toBeWriteToFile = $insertNewRow.$theOldData;
fwrite($fp, $toBeWriteToFile);
fclose($fp);
I want an optimal solution for it, as I am using it in a PHP script. Here are some solutions I found on here:
Need to write at beginning of file with PHP
which says the following to append at the beginning:
<?php
$file_data = "Stuff you want to add\n";
$file_data .= file_get_contents('database.txt');
file_put_contents('database.txt', $file_data);
?>
And another one here:
Using php, how to insert text without overwriting to the beginning of a text file
says the following:
$old_content = file_get_contents($file);
fwrite($file, $new_content."\n".$old_content);
So my final question is: which is the best (I mean optimal) method to use among all of the above? Is there possibly anything better?
Looking forward to your thoughts on this!
function file_prepend($string, $filename) {
$fileContent = file_get_contents($filename);
file_put_contents($filename, $string . "\n" . $fileContent);
}
usage:
file_prepend("couldn't connect to the database", 'database.logs');
My personal preference when writing to a file is to use file_put_contents
From the manual:
This function is identical to calling fopen(), fwrite() and fclose()
successively to write data to a file.
Because the function automatically handles those three functions for me, I do not have to remember to close the resource after I'm done with it.
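For example, going by the manual's description, these two snippets do the same work (the filename is only illustrative):
// one call...
file_put_contents('example.txt', $data);
// ...instead of three
$fp = fopen('example.txt', 'w');
fwrite($fp, $data);
fclose($fp);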
There is no really efficient way to write before the first line in a file. Both solutions mentioned in your question create a new file, copying everything from the old one, then write the new data (and there is not much difference between the two methods).
If you are really after efficiency, i.e. avoiding the whole copy of the existing file, and you need the last inserted line to be the first in the file, it all depends on how you plan on using the file after it is created.
three files
Per your comment, you could create three files (header, content and footer) and output each of them in sequence; that would avoid the copy even if the header is created after the content.
work reverse in one file
This method keeps the whole file in memory (an array) when reading it back.
Since you know you create the content before the header, always write lines in reverse order: footer, content, then header:
function write_reverse($lines, $file) { // $lines is an array
for ($i = count($lines)-1; $i >= 0; $i--) fwrite($file, $lines[$i]);
}
then you call write_reverse() first with the footer, then the content and finally the header. Each time you want to add something at the beginning of the file, just write it at the end...
Then, to read the file back for output:
$lines = array();
while (($line = fgets($file)) !== false) $lines[] = $line;
// then print from the last one
for ($i = count($lines)-1; $i >= 0; $i--) echo $lines[$i];
Then there is another consideration: could you avoid using files at all, e.g. via PHP APC?
You mean prepending. I suggest you read a block and replace it with the next block without losing data:
<?php
$dataToBeAdded = "www.stackoverflow.com";
$file = "database.txt";
$handle = fopen($file, "r+");
$final_length = filesize($file) + strlen($dataToBeAdded);
// remember the bytes that are about to be overwritten
$existingData = fread($handle, strlen($dataToBeAdded));
rewind($handle);
$i = 1;
while (ftell($handle) < $final_length)
{
    // write the pending block, then pick up the block it displaced
    fwrite($handle, $dataToBeAdded);
    $dataToBeAdded = $existingData;
    $existingData = fread($handle, strlen($dataToBeAdded));
    fseek($handle, $i * strlen($dataToBeAdded));
    $i++;
}
?>

replacing a single line of a .txt file using php

I am trying to use a php call through AJAX to replace a single line of a .txt file, in which I store user-specific information. The problem is that if I use fwrite once getting to the correct line, it leaves any previous information which is longer than the replacement information untouched at the end. Is there an easy way to clear a single line in a .txt file with php that I can call first?
Example of what is happening: let's say I'm storing a favorite composer, and a user has "Beethoven" in their .txt file and wants to change it to "Mozart". When I fwrite "Mozart" over "Beethoven", I get "Mozartven" as the new line. I am using "r+" in the fopen call, as I only want to replace a single line at a time.
If this configuration data doesn't need to be made available to non-PHP apps, consider using var_export() instead. It's basically var_dump/print_r, but outputs the variable as parseable PHP code. This'd reduce your code to:
include('config.php');
$CONFIG['musician'] = 'Mozart';
file_put_contents('config.php', '<?php $CONFIG = ' . var_export($CONFIG, true) . ';');
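For reference, assuming $CONFIG holds only that one key, the generated config.php would then look something like this (var_export()'s default formatting):
<?php $CONFIG = array (
  'musician' => 'Mozart',
);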
This is code I wrote some time ago to delete a line from a file; it would have to be modified for replacement. Also, it only works correctly if the new content is shorter than the old line; for longer lines heavy modification would be required.
The key is the second while loop, in which all contents of the file after the change are rewritten at the correct position in the file.
<?php
$size = filesize('test.txt');
$file = fopen('test.txt', 'r+');
$lineToDelete = 3;
$counter = 1;
while ($counter < $lineToDelete) {
fgets($file); // skip
$counter++;
}
$position = ftell($file);
$lineToRemove = fgets($file);
$bufferSize = strlen($lineToRemove);
while ($newLine = fread($file, $bufferSize)) {
fseek($file, $position, SEEK_SET);
fwrite($file, $newLine);
$position = ftell($file);
fseek($file, $bufferSize, SEEK_CUR);
}
ftruncate($file, $size - $bufferSize);
echo 'Done';
fclose($file);
?>
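If the file is small, as per-user settings files usually are, a simpler alternative (a sketch; the filename and line number are illustrative) is to rewrite the whole file, which sidesteps the leftover-characters problem entirely:
// read all lines, replace one, write everything back
$lines = file('user.txt', FILE_IGNORE_NEW_LINES);
$lines[2] = 'Mozart'; // replace line 3 (arrays are 0-indexed)
file_put_contents('user.txt', implode(PHP_EOL, $lines) . PHP_EOL);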

Split big files using PHP

I want to split huge files (to be specific, tar.gz files) into multiple parts from PHP code. The main reason to do this is PHP's 2 GB limit on 32-bit systems.
So I want to split big files into multiple parts and process each part separately.
Is this possible? If yes, how?
My comment was voted up twice, so maybe my guess was onto something :P
If on a unix environment, try this...
exec('split -d -b 2048m file.tar.gz pieces');
split
Your pieces should be pieces00, pieces01, etc. (with -d, split uses numeric suffixes).
You could get the number of resulting pieces easily by using stat() in PHP to get the file size and then doing the simple math: (int) ceil($stat['size'] / (2048 * 1024 * 1024)) (I think).
A simple method (if using Linux based server) is to use the exec command and to run the split command:
exec('split Large.tar.gz -b 4096k SmallParts'); // 4MB parts
// split        - the app
// Large.tar.gz - the source file
// -b 4096k     - the split size
// SmallParts   - the output filename prefix
See here for more details: http://www.computerhope.com/unix/usplit.htm
Or you can use: http://www.computerhope.com/unix/ucsplit.htm
exec('csplit -k -s -f part_ -n 3 LargeFile.tar.gz');
PHP runs within a single thread, and the only way to get more is to create child processes using the fork mechanism (the pcntl extension).
This is not resource friendly. What I would suggest is to look into a language that can do this fast and effectively; I would suggest using node.js.
Just install node on the server and then create a small script, called node_split for instance, that can do the job on its own for you.
But I do strongly advise that you do not use PHP for this job; use exec instead, so the host operating system does the splitting.
HJSPLIT
http://www.hjsplit.org/php/
PHP itself might not be able to...
If you can figure out how to do this from your computer's command line,
you should be able to execute those commands using exec();
function split_file($source, $targetpath = '/split/', $lines = 1000){
    $i = 0;
    $j = 1;
    $date = date("m-d-y");
    $buffer = '';
    $handle = fopen($_SERVER['DOCUMENT_ROOT'].$source, "r") or die("Cannot open source file");
    while (!feof($handle)) {
        $buffer .= fgets($handle, 4096);
        $i++;
        if ($i >= $lines) {
            $fname = $_SERVER['DOCUMENT_ROOT'].$targetpath."part_".$date.$j.".txt";
            $fhandle = fopen($fname, "w");
            if (!$fhandle) {
                echo "Cannot open file ($fname)";
                break;
            }
            if (!fwrite($fhandle, $buffer)) {
                echo "Cannot write to file ($fname)";
            }
            fclose($fhandle);
            $j++;
            $buffer = '';
            $i = 0;
            $lines += 10; // add 10 to $lines after each iteration; modify or remove as required
        }
    }
    // flush whatever is left in the buffer as the final part
    if ($buffer !== '') {
        file_put_contents($_SERVER['DOCUMENT_ROOT'].$targetpath."part_".$date.$j.".txt", $buffer);
    }
    fclose($handle);
}
$handle = fopen('source/file/path', 'r');
$f = 1; // new file number
while (!feof($handle)) {
    $newfile = fopen('newfile/path/' . $f . '.txt', 'w'); // create a new file to write to, numbered
    for ($i = 1; $i <= 5000; $i++) { // copy 5000 lines
        $import = fgets($handle);
        fwrite($newfile, $import);
        if (feof($handle)) {
            break; // if the source file ends, break the loop
        }
    }
    fclose($newfile);
    $f++; // increment new file number
}
fclose($handle);
If you want to split files which are already on the server, you can do it (simply use the file functions fread, fopen, fwrite, fseek to read/write part of the file). If you want to split files which are uploaded from the client, I am afraid you cannot.
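As a sketch of that fopen/fseek/fread/fwrite approach (the function name and buffer size are my own choices, not from the answer), this copies one byte range of a big file into its own part file, streaming in small buffers so the range is never held in memory at once:
function copy_range($source, $dest, $offset, $length, $bufferSize = 8192) {
    $in = fopen($source, 'rb');
    $out = fopen($dest, 'wb');
    fseek($in, $offset); // jump to the start of this part
    $remaining = $length;
    while ($remaining > 0 && !feof($in)) {
        $chunk = fread($in, min($bufferSize, $remaining));
        if ($chunk === false || $chunk === '') break;
        fwrite($out, $chunk);
        $remaining -= strlen($chunk);
    }
    fclose($in);
    fclose($out);
}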
This would probably be possible in PHP, but PHP was built for web development, and trying to do this whole operation in one request will result in the request timing out.
You could however use another language like Java or C# and build a background process that you can notify from PHP to perform the operation, or even run it from PHP, depending on your security settings on the host.
Splits are named filename.part0, filename.part1, ...
<?php
function fsplit($file,$buffer=1024){
//open file to read
$file_handle = fopen($file,'r');
//get file size
$file_size = filesize($file);
//no of parts to split
$parts = $file_size / $buffer;
//store all the file names
$file_parts = array();
//path to write the final files (this directory must already exist)
$store_path = "splits/";
//name of input file
$file_name = basename($file);
for($i=0;$i<$parts;$i++){
//read buffer sized amount from file
$file_part = fread($file_handle, $buffer);
//the filename of the part
$file_part_path = $store_path.$file_name.".part$i";
//open the new file [create it] to write
$file_new = fopen($file_part_path,'w+');
//write the part of file
fwrite($file_new, $file_part);
//add the name of the file to part list [optional]
array_push($file_parts, $file_part_path);
//close the part file handle
fclose($file_new);
}
//close the main file handle
fclose($file_handle);
return $file_parts;
}
?>
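To reassemble the parts later, here is a matching sketch (fjoin() is a hypothetical counterpart, not part of the answer) that streams each part into one output file:
<?php
function fjoin(array $file_parts, $target) {
    $out = fopen($target, 'w');
    foreach ($file_parts as $part) { // parts in order: .part0, .part1, ...
        $in = fopen($part, 'r');
        while (!feof($in)) {
            fwrite($out, fread($in, 8192)); // copy in small chunks
        }
        fclose($in);
    }
    fclose($out);
}
?>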
