Reading a bin file with the PHP bin2hex function

I am trying to read a bin file that contains many pairs of 4-byte numbers, which I want to convert to hex and print to the screen. However, I am having a little trouble getting my head around this one. This is what I have so far from reading examples and documentation:
<?php
$handle = @fopen("files/bigbin1.bin", "r");
if ($handle) {
    while (!feof($handle)) {
        $hex = bin2hex($handle);
    }
    fclose($handle);
}
print_r($hex);
?>
I am 95% sure the error is in passing $handle over to bin2hex, but this being my first ever attempt at reading a bin file, I am slightly lost. The overall goal at some point will be to read the bin file into a database, but for now I am just trying to figure out what this file looks like on screen.

<?php
$handle = @fopen("files/bigbin1.bin", "r");
if ($handle) {
    while (!feof($handle)) {
        $hex = bin2hex(fread($handle, 4));
        print $hex . "\n";
    }
    fclose($handle);
}
?>
EDIT: Also, you should avoid using @ to suppress errors; it can make debugging extremely frustrating.
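Since the file holds pairs of 4-byte numbers, unpack() can also turn each chunk into an actual integer rather than just a hex string. A minimal sketch, assuming the values are unsigned 32-bit big-endian ("N"); swap in "V" (little-endian) or "l" (signed, machine byte order) depending on how the file was written:
<?php
$handle = fopen("files/bigbin1.bin", "rb");
if ($handle) {
    // Read 4 bytes at a time; stop on EOF or a short trailing chunk
    while (($bytes = fread($handle, 4)) !== false && strlen($bytes) === 4) {
        $value = unpack("Nnum", $bytes)["num"]; // "N" is an assumption
        printf("%08x = %u\n", $value, $value);  // hex and decimal
    }
    fclose($handle);
}
?>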

Related

how to count number of lines in a php file

I want to count the number of lines in a PHP file. I am using the CodeIgniter framework.
This is what I have tried so far:
$file = 'contractor.php';
$mypath = 'application/controller/' . $file;
$linecount = 0;
$handle = fopen($mypath, "r");
while (!feof($handle)) {
    $line = fgets($handle);
    $linecount++;
}
fclose($handle);
echo $linecount;
Currently, when I execute this inside a function, it just keeps loading. I want to get the number of lines in the file, like:
output: 202
Try this one, hope this will work for you :)
$file = 'getinvoice';
$no_of_lines = count(file($file));
echo "number of lines in $file: $no_of_lines";
Your code looks good.
I suppose that the function keeps loading because the file is too big.
Check this answer for an efficient approach to the problem:
Efficiently counting the number of lines of a text file. (200mb+)
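For reference, a minimal sketch of the chunked idea from that linked answer: read the file in blocks and count newline characters, rather than calling fgets() once per line. It assumes every line ends with "\n" (a file without a trailing newline counts one short):
function count_lines($path) {
    $count = 0;
    $handle = fopen($path, "rb");
    // 8 KB blocks; substr_count() does the per-block counting natively
    while (($chunk = fread($handle, 8192)) !== false && $chunk !== '') {
        $count += substr_count($chunk, "\n");
    }
    fclose($handle);
    return $count;
}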
$file = basename($_SERVER['PHP_SELF']);
$no_of_lines = count(file($file));
echo "There are $no_of_lines lines in $file"."\n";
If your files are quite small, you could try something like this.
$lines = file($filename);
$count = count($lines);

PHP Array Processing Ability Decreases

I need help processing files holding about 46k lines or more than 30MB of data.
My original idea was to open the file and turn each line into an array element. This worked the first time as the array held about 32k values total.
The second time the process was repeated, the array held only 1011 elements, and finally, the third time, it could only hold 100.
I'm confused and don't know much about the backend array processes. Can someone explain what is happening and fix the code?
function file_to_array($cvsFile) {
    $handle = fopen($cvsFile, "r");
    $path = fread($handle, filesize($cvsFile));
    fclose($handle);
    // Turn the file into an array and separate lines to elements
    $csv = explode(",", $path);
    // Remove common double spaces
    foreach ($csv as $key => $line) {
        $csv[$key] = str_replace(' ', '', str_getcsv($line));
    }
    array_filter($csv);
    // get the row count for the file and array
    $rows = count($csv);
    $filerows = count(file($cvsFile)); // this no longer works
    echo "File has $filerows and array has $rows";
    return $csv;
}
The approach here can be split in two:
1. Optimized file reading and processing
2. A proper storage solution
Optimized file reading can be done like so:
$handle = fopen($cvsFile, "r");
$rowsSucceed = 0;
$rowsFailed = 0;
$csv = [];
if ($handle) {
    while (($line = fgets($handle)) !== false) { // reading the file line by line
        // Parse the CSV line (str_getcsv() used here as an example),
        // check whether it parsed correctly, and count as you go
        $parsedLine = str_getcsv($line);
        if (!empty($parsedLine)) {
            $csv[] = $parsedLine;
            $rowsSucceed++;
        } else {
            $rowsFailed++;
        }
    }
    fclose($handle);
} else {
    // Error handling
}
$totalLines = $rowsSucceed + $rowsFailed;
Also, you can avoid array_filter() simply by not adding a processed line when it is empty. That keeps memory usage down during script execution.
Proper storage
Proper storage is needed for performing operations on this amount of data. Repeated file reads are inefficient and expensive; using a simple file-based database like SQLite can help a lot and increase the overall performance of your script.
For this purpose, you should probably parse your CSV directly into the database and then perform the count on the parsed data, avoiding excessive file line counts and the like.
It also gives you the further advantage of working with the data without keeping it all in memory.
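For illustration, a hedged sketch of that idea with PDO and SQLite; the database file, table name, and two-column CSV layout are assumptions:
$pdo = new PDO('sqlite:rows.db');
$pdo->exec('CREATE TABLE IF NOT EXISTS rows (col1 TEXT, col2 TEXT)');
$stmt = $pdo->prepare('INSERT INTO rows (col1, col2) VALUES (?, ?)');
$handle = fopen($cvsFile, 'r');
$pdo->beginTransaction(); // one transaction makes bulk inserts far faster
while (($line = fgetcsv($handle)) !== false) {
    if ($line[0] === null) {
        continue; // skip empty lines
    }
    $stmt->execute([$line[0], $line[1] ?? null]);
}
$pdo->commit();
fclose($handle);
// Count rows in the database instead of re-reading the file
echo $pdo->query('SELECT COUNT(*) FROM rows')->fetchColumn();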
Your question says you want to "turn each line into an array element" but that is definitely not what you are doing. The code is quite clear; it reads the entire file into $path and then uses explode() to make one massive flat array of every element on every line. Then later you're trying to run str_getcsv() on each item, which of course isn't going to work; you've already exploded all the commas away.
Looping over the file using fgetcsv() makes more sense:
function file_to_array($cvsFile) {
    $filerows = 0;
    $csv = [];
    $handle = fopen($cvsFile, "r");
    while ($line = fgetcsv($handle)) {
        $filerows++;
        // skip empty lines
        if ($line[0] === null) {
            continue;
        }
        // Remove common double spaces
        $csv[] = str_replace(' ', '', $line);
    }
    // get the row count for the file and array
    $rows = count($csv);
    echo "File has $filerows and array has $rows";
    fclose($handle);
    return $csv;
}

Fgets progress - easier way?

I read a big text file ~500MB and want to get the progress during my read operations.
To do so, I currently count the lines the file has and then compare that to the number I have already read. This needs two complete iterations over the file. Is there an easier way, using filesize and the fgets buffer size?
My current code looks like:
$lineTotal = 0;
while (fgets($handle) !== false) {
    $lineTotal++;
}
rewind($handle);
$linesDone = 0;
while (($line = fgets($handle)) !== false) {
    progressBar($linesDone += 1, $lineTotal);
}
Based on bytes rather than lines, but you can quickly get the total size of the file upfront with filesize:
$bytesTotal = filesize("input.txt");
Then, after you've opened the file, you can read each line and then get your current position within the file, something like:
progressBar(0, $bytesTotal);
while (($line = fgets($handle)) !== false) {
    doSomethingWith($line, 'presumably');
    progressBar(ftell($handle), $bytesTotal);
}
There are caveats about the fact that PHP integers may not handle files over 2G but, since you specified your files are about 500M, that shouldn't be an immediate problem.
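The progressBar() function in these snippets is assumed rather than defined anywhere; a minimal command-line version, matching the ($done, $total) calls above, might look like:
function progressBar($done, $total) {
    $pct = $total > 0 ? ($done / $total) * 100 : 100;
    printf("\rProgress: %6.2f%%", $pct); // \r redraws the same line
    if ($done >= $total) {
        echo "\n"; // finish the line at 100%
    }
}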

`Type L: not enough input` when unpacking the target data

I got the job done parsing data from the target file in binary form with the help of friends on Stack Overflow.
<?php
$handle = fopen('data', 'rb');
fread($handle, 64); // read and discard the first 64 bytes
while (!feof($handle)) {
    $bytes = fread($handle, 32);
    print_r(unpack("La/fb/fc/fd/fe/ff/fg/fh", $bytes));
    echo "<br/>";
}
echo "finish";
fclose($handle);
?>
I got the result; one last bug remains that I can't solve myself.
1. Why does unpack() report "Type L: not enough input, need 4, have 0"?
2. How can I fix it?
Change your loop to:
while ($bytes = fread($handle, 32)) {
    print_r(unpack("La/fb/fc/fd/fe/ff/fg/fh", $bytes));
    echo "<br/>";
}
feof($handle) doesn't become true until after you've tried to read at the end of the file.
So you're performing an extra fread(), which returns an empty string, and then trying to unpack an empty byte string.
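A quick way to see that behaviour, reusing the same 'data' file:
$handle = fopen('data', 'rb');
fseek($handle, 0, SEEK_END); // jump to the end without reading
var_dump(feof($handle));     // bool(false): no read has failed yet
fread($handle, 1);           // attempt a read past the end
var_dump(feof($handle));     // bool(true)
fclose($handle);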

Append at the beginning of the file in PHP [duplicate]

This question already has answers here:
Need to write at beginning of file with PHP
(10 answers)
Closed 9 years ago.
Hi, I want to append a row at the beginning of a file using PHP.
Let's say, for example, the file contains the following content:
Hello Stack Overflow, you are really helping me a lot.
And now I want to add a row on top of the previous one, like this:
www.stackoverflow.com
Hello Stack Overflow, you are really helping me a lot.
This is the code that I have at the moment in a script:
$fp = fopen($file, 'a+') or die("can't open file");
$theOldData = fread($fp, filesize($file));
fclose($fp);
$fp = fopen($file, 'w+') or die("can't open file");
$toBeWriteToFile = $insertNewRow . $theOldData;
fwrite($fp, $toBeWriteToFile);
fclose($fp);
I want an optimal solution for it, as I am using it in a PHP script. Here are some solutions I found on here:
Need to write at beginning of file with PHP
which says the following to append at the beginning:
<?php
$file_data = "Stuff you want to add\n";
$file_data .= file_get_contents('database.txt');
file_put_contents('database.txt', $file_data);
?>
And another one here:
Using php, how to insert text without overwriting to the beginning of a text file
says the following:
$old_content = file_get_contents($file);
fwrite($file, $new_content."\n".$old_content);
So my final question is: which of the above methods is the best (I mean optimal) to use? Is there possibly anything better?
Looking for your thoughts on this!
function file_prepend($string, $filename) {
    $fileContent = file_get_contents($filename);
    file_put_contents($filename, $string . "\n" . $fileContent);
}
Usage:
file_prepend("couldn't connect to the database", 'database.logs');
My personal preference when writing to a file is to use file_put_contents
From the manual:
This function is identical to calling fopen(), fwrite() and fclose()
successively to write data to a file.
Because the function automatically handles those three calls for me, I do not have to remember to close the resource after I'm done with it.
There is no really efficient way to write before the first line in a file. Both solutions mentioned in your question create a new file by copying everything from the old one, then writing the new data (and there is not much difference between the two methods).
If you are really after efficiency, i.e. avoiding the whole copy of the existing file, and you need the last inserted line to be the first in the file, it all depends on how you plan to use the file after it is created.
three files
Per your comment, you could create three files (header, content, and footer) and output each of them in sequence; that would avoid the copy even if the header is created after the content. A minimal sketch of that idea follows (the part file names are illustrative).
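// Append new data to the relevant part at any time
file_put_contents('report.header', "www.stackoverflow.com\n", FILE_APPEND);
file_put_contents('report.content', "Hello Stack Overflow, you are really helping me a lot.\n", FILE_APPEND);
// Later, stream the parts in order to produce the full file
foreach (['report.header', 'report.content', 'report.footer'] as $part) {
    if (is_file($part)) {
        readfile($part);
    }
}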
work reverse in one file
This method keeps the file in memory (an array) when reading it back.
Since you know you create the content before the header, always write lines in reverse order: footer, content, then header:
function write_reverse($lines, $file) { // $lines is an array
    for ($i = count($lines) - 1; $i >= 0; $i--) {
        fwrite($file, $lines[$i]);
    }
}
Then you call write_reverse() first with the footer, then the content, and finally the header. Each time you want to add something at the beginning of the file, just write at the end...
Then, to read the file for output:
$lines = array();
while (($line = fgets($file)) !== false) {
    $lines[] = $line;
}
// then print from the last one
for ($i = count($lines) - 1; $i >= 0; $i--) {
    echo $lines[$i];
}
Then there is another consideration: could you avoid using files at all, e.g. via PHP APC?
You mean prepending. I suggest you read a block and replace it with the next block, without losing data.
<?php
$dataToBeAdded = "www.stackoverflow.com";
$file = "database.txt";
$handle = fopen($file, "r+");
$final_length = filesize($file) + strlen($dataToBeAdded);
// Save the bytes the new data is about to overwrite
$existingData = fread($handle, strlen($dataToBeAdded));
rewind($handle);
$i = 1;
while (ftell($handle) < $final_length) {
    // Write the pending block, then save the next block it will overwrite
    fwrite($handle, $dataToBeAdded);
    $dataToBeAdded = $existingData;
    $existingData = fread($handle, strlen($dataToBeAdded));
    fseek($handle, $i * strlen($dataToBeAdded));
    $i++;
}
?>
