I want to know how I would read a specific line (or even specific character) with fgets() or fgetss().
For example, data.txt contains:
this is data 1
this is data 2
this is data 3
How would I read only data 2?
If it's possible, that is.
Also, how can I read only the last line? For example, if I were to use a+ to append to the end of the file, the last line would be the newest content.
Another question:
Is it possible to read through the whole file and check if something exists? If so, how?
Thanks in advance!
For a text file:
$fileName = 'file.txt';
$file = new \SplFileObject($fileName);
$file->setFlags(\SplFileObject::READ_CSV);
$seek = 1;
$file->seek($seek);
$line = $file->fgets();
The $line variable now contains the second line of the file.
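The question also asks about the last line and about checking whether something exists in the file. A rough sketch of both, assuming the same file.txt (the seek-past-the-end trick usually lands on the last line, though an empty line can appear if the file ends with a newline):
$file = new \SplFileObject('file.txt');
$file->seek(PHP_INT_MAX);          // seeking past the end stops at the last line
$lastLine = $file->current();

// Check whether a given string occurs anywhere in the file
$exists = strpos(file_get_contents('file.txt'), 'this is data 2') !== false;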
I am using php to get the contents of a webpage:
$fileContent = file_get_contents('http://insertwebsitenamehere.com');
Is there a way to seek to a line number and return the line at that location?
I know you can use SplFileObject::seek if the content is a file. Can I perform something similar without the need to turn it into a file?
You want file() instead
$fileContent = file('http://insertwebsitenamehere.com');
echo $fileContent[39]; //line 40
So I have a little issue with some PHP read functionality. What I am trying to do is basically grab data into an array from a file that is being continuously updated by a Python script reading values from a microcontroller. So basically, the file would look something like this:
ID, Datetime, Count, Name
ID, Datetime, Count, Name
ID, Datetime, Count, Name
What I need is for it to read the new line that is being appended at the end of the file and store it in an array. What I have so far opens the file for read access:
<?php
$myfile = fopen("read.txt", "r");
For storing the lines in an array, I figured something like array_map would be efficient:
$result = array();
// some loop
$parts = array_map('trim', explode(':', $line_of_text, 2));
$result[$parts[0]] = $parts[1];
However, I am not too sure how to structure the loop so it keeps reading the new lines that are added to the file without exiting the loop.
while (!feof($myfile)) {
    // how do I keep reading the newly appended lines here?
}
fclose($myfile);
?>
Any help would be appreciated!!
Can you do this?
Read the lines of the file into an array using $lines = file("filename");.
Use $lines[count($lines) - 1] to get the last line.
You can even trim off the empty lines before you do this.
Trim Empty Lines
Use this function:
$lines = array_filter($lines);
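Putting those pieces together, a minimal sketch (the filename is a placeholder; array_filter() preserves keys, so reindex before grabbing the last element):
$lines = file('read.txt', FILE_IGNORE_NEW_LINES);   // read all lines without trailing newlines
$lines = array_values(array_filter($lines));        // drop empty lines and reindex
$lastLine = $lines[count($lines) - 1];              // or simply end($lines)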
Since the file is continually being appended, you'd have to read until you hit the end of file, sleep for a while to let more data be appended, then read again.
e.g.
while (true) {
    while (!feof($file)) {
        $line = fgets($file);
        // ... process $line
    }
    sleep(15); // pause to let more data be appended
}
However, I'm not sure if PHP will cache the fact that it hit eof, and not try again once the sleep() finishes. It may be necessary to record your current position ftell(), close the file, reopen it, then fseek() to the stored location.
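For illustration, a sketch of that remember-the-position approach (the filename and the 15-second pause are assumptions carried over from above):
$path = 'read.txt';
$pos  = 0;

while (true) {
    $file = fopen($path, 'r');
    fseek($file, $pos);                          // continue where we left off
    while (($line = fgets($file)) !== false) {
        // process $line, e.g. explode/trim it into $result as in the question
    }
    $pos = ftell($file);                         // remember how far we got
    fclose($file);
    sleep(15);                                   // wait for more data to be appended
}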
I've come up with this solution:
$filename = "file.txt";
$file = fopen($filename, "r");
$lines = explode("\n", fread($file, filesize($filename)));
$last = $lines[count($lines)-1];
If the file is going to get big, it could take some time to parse, so it's also possible to adjust the fread() call so it only reads, for example, the last 100 characters.
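A rough sketch of that tail-only variant (the 100-byte window is an arbitrary assumption and must be at least as long as the longest possible last line):
$filename = "file.txt";
$file = fopen($filename, "r");
fseek($file, -100, SEEK_END);                // jump to 100 bytes before the end
$chunk = fread($file, 100);
fclose($file);

$lines = explode("\n", trim($chunk));
$last  = $lines[count($lines) - 1];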
I'm trying to open a file and determine if it is valid. It's valid if the first line is START and the last line is END.
I've seen different ways of getting the last line of a file, but they don't pay particular attention to the first line.
How should I go about this? I was thinking of loading the file contents into an array and checking $array[0] and $array[x] for START and END. But this seems wasteful given all the junk that could be in the middle.
If it's a valid file, I will be reading/processing the contents of the file between START and END.
Don't read the entire file into an array if it is not needed. If the file can be big, you can do it this way:
$h = fopen('text.txt', 'r');
$firstLine = fgets($h);             // read the first line
fseek($h, -3, SEEK_END);            // jump to the last three bytes
$lastThreeChars = fgets($h);
The memory footprint is much lower.
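A possible way to use the two values read above (this assumes END is the very last thing in the file, with no trailing newline):
if (trim($firstLine) === 'START' && $lastThreeChars === 'END') {
    // the file is valid; process the contents between START and END
}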
That's from me:
$lines = file($pathToFile, FILE_IGNORE_NEW_LINES); // strip trailing newlines so the comparisons work
if ($lines[0] == 'START' && end($lines) == 'END') {
// do stuff
}
Reading the whole file with fgets() will be efficient for small files. If your file is big, then:
open it and read the first line;
use a tail function (I didn't check it, but it looks OK) like the one found in the fseek() documentation on php.net, as sketched below.
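For illustration, here is such a tail-style helper (the function name and the 4096-byte window are assumptions; the window must be longer than the longest line):
function tailLastLine($path, $window = 4096) {
    $h = fopen($path, 'r');
    fseek($h, -$window, SEEK_END);           // if the file is smaller, reading simply starts at byte 0
    $chunk = fread($h, $window);
    fclose($h);

    $lines = array_filter(explode("\n", trim($chunk)));
    return end($lines);                      // last non-empty line
}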
I want to merge two large CSV files with PHP. These files are too big to even put into memory all at once. In pseudocode, I can think of something like this:
for i in file1
file3.write(file1.line(i) + ',' + file2.line(i))
end
But when I'm looping through a file using fgetcsv, it's not really clear how I would grab line n from a certain file without loading the whole thing into memory first.
Any ideas?
Edit: I forgot to mention that each of the two files has the same number of lines and they have a one-to-one relationship. That is, line 62,324 in file1 goes with line 62,324 in file2.
Not sure what operating system you're on, but if you're using Linux, using the paste command is probably a lot easier than trying to do this in PHP.
If this is a viable solution and you don't absolutely need to do it in PHP, you could try the following:
paste -d ',' file1 file2 > combined_file
Take a look at the fgets function. You could read a single line of each file, process them, and write them to your new file, then move on to the next line until you've reached the end of your file.
PHP: fgets
Specifically, look at the example titled "Example #1 Reading a file line by line" in the PHP manual. It's also important to note the return value of the fgets function.
Returns a string of up to length - 1 bytes read from the file pointed to by handle. If there is no more data to read in the file pointer, then FALSE is returned.
So, if it doesn't return FALSE you know you still have more lines to process.
You can use fgets().
$file1 = fopen('file1.txt', 'r');
$file2 = fopen('file2.txt', 'r');
$merged = fopen('merged.txt', 'w');
while (
($line1 = fgets($file1)) !== false
&& ($line2 = fgets($file2)) !== false) {
fwrite($merged, rtrim($line1, "\r\n") . ',' . $line2); // strip line1's newline so the two lines merge into one
}
fgets() reads one line from a file. As you can see, this code uses it on both files at the same time, writing the merged lines to a third file. The manual here:
http://php.net/fgets
http://php.net/fopen
http://php.net/fwrite
Try using fgets() to read one line from each file at a time.
I think the solution for this is to first map the byte offset at which each line begins (and some kind of key if you need one), and then build the new CSV using fread() and fwrite(). Since we then know the beginning and end of each line, we just need to seek and read, as shown below.
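For illustration (file names and the example line index are made up; the first pass records where each line of file2 starts so a later fetch needs only a single fseek()):
// Pass 1: record the byte offset at which each line of file2 begins.
$f2 = fopen('file2.csv', 'r');
$offsets = array();
while (($pos = ftell($f2)) !== false && fgets($f2) !== false) {
    $offsets[] = $pos;
}

// Later: fetch line $n (zero-based) with a single seek instead of a full scan.
$n = 62323;                                  // e.g. line 62,324 from the question
fseek($f2, $offsets[$n]);
$line = fgets($f2);
fclose($f2);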
Another way is to put it into MySQL (if that is possible) and then export it back to a new CSV.
How can I get a particular line in a 3 GB text file? The lines are delimited by \n, and I need to be able to get any line on demand.
How can this be done? Only one line needs to be returned, and I would not like to use any system calls.
Note: There is the same question elsewhere regarding how to do this in bash. I would like to compare it with the PHP equivalent.
Update: Each line is the same length the whole way through.
Without keeping some sort of index to the file, you would need to read all of it until you've encountered x number of \n characters. I see that nickf has just posted some way of doing that, so I won't repeat it.
To do this repeatedly in an efficient manner, you will need to build an index. Store some known file positions for certain (or all) line numbers once, which you can then use to seek to the right location using fseek.
Edit: if each line is the same length, you do not need the index.
$myfile = fopen($fileName, "r");
fseek($myfile, $lineLength * $lineNumber);
$line = fgets($myfile);
fclose($myfile);
Line number is 0 based in this example, so you may need to subtract one first. The line length includes the \n character.
There is little discussion of the problem and no mention is made of how the 'one line' should be referenced (by number, some value within it, etc.) so below is just a guess as to what you're wanting.
If you're not averse to using an object (it might be 'too high level', perhaps) and wish to reference the line by offset, then SplFileObject (available as of PHP 5.1.0) could be used. See the following basic example:
$file = new SplFileObject('myreallyhugefile.dat');
$file->seek(123456789); // seek to line 123456790
echo $file->current(); // or simply, echo $file
That particular method (seek) requires scanning through the file line-by-line. However, if as you say all the lines are the same length then you can instead use fseek to get where you want to go much, much faster.
$line_length = 1024; // each line is 1 KB long
$file->fseek($line_length * 1234567); // seek lots of bytes
echo $file->current(); // echo line 1234568
You said each line has the same length, so you can use fopen() in combination with fseek() to get a line quickly.
http://ch2.php.net/manual/en/function.fseek.php
The only way I can think to do it would be like this:
function getLine($fileName, $num) {
    $fh = fopen($fileName, 'r');
    $line = false;
    for ($i = 0; $i < $num && ($line = fgets($fh)); ++$i);
    fclose($fh);
    return $line;
}
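Usage might look like this (hypothetical file name; note the loop still reads every line up to $num, so each lookup is linear in the line number):
echo getLine('hugefile.txt', 1234567); // prints line 1,234,567 (1-based)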
While this is not a solution exactly, why do you need to pull one line out of a 3 GB text file? Is performance an issue, or can this run at a leisurely pace?
If you need to pull lots of lines out of this file at different points in time, I would definitely suggest putting this data into a DB of some kind. SQLite may be your friend here, as it's very simple, but it's not great with lots of scripts/people accessing it at one time.
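If that route helps, here is a hedged sketch of loading the lines into SQLite with PDO (table and file names are made up; the implicit rowid then serves as the line number):
$db = new PDO('sqlite:lines.db');
$db->exec('CREATE TABLE IF NOT EXISTS lines (line TEXT)');

$insert = $db->prepare('INSERT INTO lines (line) VALUES (?)');
$fh = fopen('huge.txt', 'r');
$db->beginTransaction();
while (($line = fgets($fh)) !== false) {
    $insert->execute(array(rtrim($line, "\r\n")));
}
$db->commit();
fclose($fh);

// Later: fetch line 1,234,567 directly (rowid is 1-based).
echo $db->query('SELECT line FROM lines WHERE rowid = 1234567')->fetchColumn();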