I have two text files and want to loop through both of them at the same time, combining each pair of lines (line 1 of the first text file with line 1 of the second text file, and so on for thousands of lines) and then running some function on the result.
I am familiar with looping through a single file; the code for that is given below:
$lines = file('data.txt');
foreach ($lines as $line) {
//some function
}
But how do I do this for two files and combine both lines?
Not sure what you mean by searching through the table, but to open both files and do stuff with them:
$file1 = fopen("/path/to/file1.txt", "r"); // Open the file with read-only access
$file2 = fopen("/path/to/file2.txt", "r");
$combined = fopen("/path/to/combined.txt", "w"); // In case you want to write the combined lines to a new file

while (!feof($file1) && !feof($file2)) {
    $line1 = trim(fgets($file1)); // Grab a line of the first file; trim() clips the carriage return/newline off the end (remove it if you don't need that)
    $line2 = trim(fgets($file2)); // Grab a line of the second file
    $combline = $line1 . $line2;
    fwrite($combined, $combline . "\r\n"); // Write to the combined file, adding a carriage return/newline to replace the one trimmed off
    // You can do whatever you like with $line1, $line2, or the combined $combline here.
}

fclose($file1);
fclose($file2);
fclose($combined);
Note: You might run into trouble if you hit the end of one file before the other, which happens if they aren't the same length. To keep going until both are exhausted, you would loop while either file still has data and use if statements to set $line1 or $line2 to "" once feof() is true for its file; once both hit end of file, the loop ends.
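A minimal sketch of that unequal-length handling, under the same placeholder file paths as above: loop until both files are exhausted and substitute an empty string for whichever side has run out.

$file1 = fopen("/path/to/file1.txt", "r");
$file2 = fopen("/path/to/file2.txt", "r");
$combined = fopen("/path/to/combined.txt", "w");

while (true) {
    $line1 = fgets($file1);
    $line2 = fgets($file2);
    if ($line1 === false && $line2 === false) {
        break; // both files are exhausted
    }
    // Use "" for whichever file has already ended.
    $line1 = ($line1 === false) ? "" : trim($line1);
    $line2 = ($line2 === false) ? "" : trim($line2);
    fwrite($combined, $line1 . $line2 . "\r\n");
}

fclose($file1);
fclose($file2);
fclose($combined);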
You can do this programmatically as Crayon and Tim have shown. If both files have the same number of lines, it will work. If the line counts differ, you will have to loop over the larger file to make sure you get all the lines, or check EOF on both.
To combine files line by line, I often use the Unix command paste, which is very fast. It also copes with files of different lengths. Run this on the command line:
paste file1 file2 > output.txt
See the man page for paste for the command-line options and field delimiters:
man paste
Example:
$file1 = fopen("file1.txt", "rb");
$file2 = fopen("file2.txt", "rb");
while (!feof($file1)) {
$combined = fread($file1, 8192) . " " . fread($file2, 8192);
// now insert $combined into db
}
fclose($file1);
fclose($file2);
You will want to use the longer of the two files in the while condition.
You may need to adjust the number of bytes read by fread() depending on how long your lines are.
Change " " to whatever delimiter you want.
Let's say we have a text file of around 1 GB in the following format:
...
li1
li2
li3
...
My task is to update the line li2 to line2.
The following will not work:
$fd = fopen("file", 'c+');
// ..
// code that loops till we reach li2 line..
// ..
$offset = ftell($fd);
// ..
fseek($fd, $offset );
fwrite($fd, "line2" . PHP_EOL);
Since it produces:
...
li1
line2
3
...
I'm expecting to have as result:
...
li1
line2
li3
...
Thanks
In my opinion we need a temporary file here: copy the unchanged lines across, write the changed line, then keep copying the rest unchanged. At the end, the original file can be replaced by renaming the temp file to its name.
As far as I know about file systems, you cannot simply delete or insert bytes in the middle of a file; or it is just too complicated to be worth it.
Alternatively, you can copy the file into memory and make your modifications there, but then you are resource dependent and your code may not run on some servers (e.g. a host with a low memory limit).
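A minimal sketch of that temp-file approach, assuming the line to replace is known exactly; the file names are placeholders:

$in  = fopen("file", "rb");
$out = fopen("file.tmp", "wb");

while (($line = fgets($in)) !== false) {
    if (rtrim($line, "\r\n") === "li2") {
        fwrite($out, "line2" . PHP_EOL); // write the replacement line
    } else {
        fwrite($out, $line);             // copy the unchanged line as-is
    }
}

fclose($in);
fclose($out);
rename("file.tmp", "file"); // the temp file takes the original's name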
Good luck!
If you know exactly what you need to change in the text, just use this:
$file_data = file_get_contents('some_name.txt');
$text = str_replace("li2", "line2", $file_data);
file_put_contents('some_name_changed.txt', $text);
Or read the file line by line into $text and then replace what you want.
I am using a PHP file which executes sed:
shell_exec("C:\\cygwin64\\bin\\bash.exe --login -c 'sed -i -r \'s/.{2}//\' $text_files_path/File.txt 2>&1'");
This statement deletes the first 2 characters from File.txt.
How can I delete the first 2 characters from each line in the file?
File.txt:
< TTGCATGCAAAAATTT
< AAAAAAATTTTGCTGA
< AAGGTTCCCCCTTAGT
Edit 1:
shell_exec("C:\\cygwin64\\bin\\bash.exe --login -c 'sed -i -r 's/^..//' $text_files_path/File.txt 2>&1'");
This works, but it concatenates all the lines together.
File.txt after the above command:
TTGCATGCAAAAATTTAAAAAAATTTTGCTGAAAGGTTCCCCCTTAGT
Please don't call sed via bash to do something that PHP can do natively. It's a complete anti-pattern. Worryingly, I have seen the exact same thing in another question quite recently...
I hope you've got plenty of free disk space:
$input_filename = "$text_files_path/File.txt";
$output_filename = 'path/to/temp/output.txt';

$input_file = fopen($input_filename, 'rb');
$output_file = fopen($output_filename, 'wb');

while (($line = fgets($input_file)) !== false) {
    fwrite($output_file, substr($line, 2));
}

fclose($input_file);
fclose($output_file);
rename($output_filename, $input_filename);
Open the input file for reading and the temporary output file for writing. Use binary mode in both cases to avoid issues related to different line endings on different systems.
Read each line of the input and write the substring from the second character to the temporary output.
Close both files and then overwrite the input with the temporary file.
Technically this could actually be implemented in-place but the resulting script would be much more complicated and you would run further risk of corrupting your input file if things went wrong.
If you just want to use PHP, then you can explode() the file contents into individual lines and then use substr() to drop the first two characters before joining the lines back into a single string separated by newlines:
// Set the results array.
$result = array();

// Read the file and split it into lines.
$file = file_get_contents($text_files_path . '/File.txt');
$lines = explode("\n", $file);

// Cut the first two characters off each line and add the rest to the results array.
foreach ($lines as $line) {
    $result[] = substr($line, 2);
}

// Join the lines back into a single string.
$result = implode("\n", $result);
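If you then want to write the result back out, file_put_contents() will do it, either over the original file as below or to a new name if you prefer:

file_put_contents($text_files_path . '/File.txt', $result);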
s/^..// should give you the result you need.
^ anchors the match to the start of the line, and each . matches any single character.
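For completeness, the same idea can be done natively in PHP with preg_replace() and the multiline modifier, with no shell call at all. Note this loads the whole file into memory, unlike the streaming approach above, and the filename is a placeholder:

$contents = file_get_contents('File.txt');
// With /m, ^ matches the start of every line, so the first two characters of each line are removed.
$contents = preg_replace('/^../m', '', $contents);
file_put_contents('File.txt', $contents);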
I want to read a CSV data file, load it into an array, edit it, and write it back to a file. I have been able to accomplish a single iteration of this with examples here on Stack Overflow. Thanks!
The trouble is that when I write the new data back to the file, both methods I have tried for writing the edited array add a newline at the end of the file. This creates an issue when loading the CSV data a second time: the second read produces an empty index in the array, which causes an error when writing the file.
Example #1:
foreach ($editArray as $row) {
    $writeStuff = implode(",", $row);
    fwrite($file_handle, $writeStuff);
    fwrite($file_handle, "\n");
}
Example #2:
foreach ($editArray as $row) {
    fputcsv($file_handle, $row);
}
This is the original csv data:
1/1/16,Yes,No
1/2/16,No,Yes
When written using the above it produces this data with the added newline:
1/1/16,Yes,No
1/2/16,No,Yes
This extra newline creates an issue when reading the file a second time. I get an error with both fputcsv() and implode(). I believe it is because of the empty index caused by the newline when I read the file the second time after the first write.
I could use a for loop with a conditional on the last fwrite() in the implode() Example #1, but that would seem clunky and not the way to do it.
Maybe there is a completely different way to handle this?
This is the expected behaviour of fputcsv():
fputcsv() formats a line (passed as a fields array) as CSV and writes it (terminated by a newline) to the specified file handle.
Since every line is terminated by a newline, you will end up with what looks like an extra blank line at the end of the file.
You should apply a fix on the second read, where that last line creates issues, by checking whether the line is empty before processing it.
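A short sketch of that check on the read side; the filename is a placeholder. According to the PHP manual, fgetcsv() returns an array containing a single null field for a blank line, so that is what gets skipped:

$file_handle = fopen('your_file.csv', 'r');
$editArray = array();

while (($row = fgetcsv($file_handle)) !== false) {
    if ($row === array(null)) {
        continue; // blank line - skip it
    }
    $editArray[] = $row;
}

fclose($file_handle);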
If you want to prevent adding a new line at the end of the file, you could build your data set with new lines where you need them (and where you don't) then write it once:
$writeStuff = [];

foreach ($editArray as $row) {
    $writeStuff[] = implode(',', $row);
}

fwrite($file_handle, implode(PHP_EOL, $writeStuff));
Also, I'm not sure how you load the file, but you could always skip empty lines - here's an example:
$editArray = file('your_filename.csv', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
Based upon the recommendation, I looked for a solution when reading and loading the file rather than when I wrote the file.
These are the solutions I came up with.
First Option:
while (!feof($file_handle)) {
    $tmp = fgetcsv($file_handle);
    if ($tmp != NULL) {
        $myArray[] = $tmp;
    }
}
fgetcsv() returns false when there is nothing left to read, so the loose != NULL comparison filters out the empty line at the end.
Second option: ditch fgetcsv() for file(). It ignores the trailing empty newline without any testing.
$data_Array = file($file);
foreach ($data_Array as $line) {
    $myArray[] = explode(",", $line);
}
This seems to work. Additionally, the example given earlier with implode() and PHP_EOL also seems to work. I may be missing something, but these work for now.
I have a text file and I want to delete the lines that match a search query.
The file is read into an array line by line. I want to make it work like http://keywordshitter.com/
The logic is,
SEARCH --> IN ARRAY --> OUTPUT IS ARRAY WITHOUT "QUERY OF SEARCH"
Code I have tried:
$fileset = file_get_contents("file.txt");
$line = explode("\n", $fileset);
$content = array_search("query",$line);
print_r($content);
My file.txt:
one
two
three
apple
map
I have used array_search() but it is not working.
You can do the search like this:
$fileset = file("file.txt"); // file() reads the entire file into an array, one line per element
$len = count($fileset);
for ($i = 0; $i < $len; $i++) {
    if (trim($fileset[$i]) == "query") { // trim() strips the trailing newline that file() keeps
        $fileset[$i] = "\n"; // blank the line but keep its newline
        break; // then we stop searching
    }
}
$fileset_improve = implode($fileset); // glue the lines back together
$handle = fopen("file.txt", "w");     // open the file in write mode
fwrite($handle, $fileset_improve);    // write the file back with the improved lines
fclose($handle);                      // close the opened file
Remember, this will leave the matched line blank...
If you want, you can re-arrange the whole array, i.e. shift the following elements back by one index to reduce the line count, but this will increase the complexity of your code.
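If you do want the matching line removed entirely rather than left blank, here is a short sketch using array_filter(); "query" stands in for whatever search term you are using:

$fileset = file("file.txt", FILE_IGNORE_NEW_LINES);

// Keep only the lines that do not match the search term; array_values() re-indexes the array.
$kept = array_values(array_filter($fileset, function ($line) {
    return trim($line) !== "query";
}));

file_put_contents("file.txt", implode(PHP_EOL, $kept) . PHP_EOL);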
Hope this will work for you.
Thanks
Use PHP_EOL in your explode() call instead of "\n". PHP_EOL holds the correct line-break character(s) for the server platform.
I want to merge two large CSV files with PHP. These files are too big to even fit into memory all at once. In pseudocode, I can think of something like this:
for i in file1
file3.write(file1.line(i) + ',' + file2.line(i))
end
But when I'm looping through a file using fgetcsv, it's not really clear how I would grab line n from a certain file without loading the whole thing into memory first.
Any ideas?
Edit: I forgot to mention that each of the two files has the same number of lines and they have a one-to-one relationship. That is, line 62,324 in file1 goes with line 62,324 in file2.
Not sure what operating system you're on, but if you're using Linux, using the paste command is probably a lot easier than trying to do this in PHP.
If this is a viable solution and you don't absolutely need to do it in PHP, you could try the following:
paste -d ',' file1 file2 > combined_file
Take a look at the fgets function. You could read a single line of each file, process them, and write them to your new file, then move on to the next line until you've reached the end of your file.
PHP: fgets
Specifically, look at the example titled "Example #1 Reading a file line by line" in the PHP manual. It's also important to note the return value of the fgets function:
Returns a string of up to length - 1 bytes read from the file pointed to by handle. If there is no more data to read in the file pointer, then FALSE is returned.
So, if it doesn't return FALSE you know you still have more lines to process.
You can use fgets().
$file1 = fopen('file1.txt', 'r');
$file2 = fopen('file2.txt', 'r');
$merged = fopen('merged.txt', 'w');

while (($line1 = fgets($file1)) !== false
    && ($line2 = fgets($file2)) !== false) {
    // Strip the newline from the first line so the comma sits between the two values.
    fwrite($merged, rtrim($line1, "\r\n") . ',' . $line2);
}

fclose($file1);
fclose($file2);
fclose($merged);
fgets() reads one line from a file. As you can see, this code uses it on both files at the same time, writing the merged lines to a third file. The manual here:
http://php.net/fgets
http://php.net/fopen
http://php.net/fwrite
Try using fgets() to read one line from each file at a time.
I think the solution for this is to first map the byte offset at which each line begins (and some kind of key if you need one), and then build the new CSV using fseek(), fread() and fwrite(). Since we then know the beginning and end of each line, we only need to seek and read.
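A rough sketch of that offset-mapping idea, assuming a purely positional merge (line N of file1 with line N of file2) and made-up file names; for a strictly sequential merge you could just fgets() both files in step as shown above, but the offset map becomes useful when you need random access or a keyed join:

$file2 = fopen('file2.csv', 'rb');

// First pass: record the byte offset at which every line of file2 begins.
$offsets = [];
while (!feof($file2)) {
    $offsets[] = ftell($file2);
    if (fgets($file2) === false) {
        array_pop($offsets); // nothing was actually read at this offset
    }
}

// Second pass: walk file1 line by line and pull the matching line of file2 by offset.
$file1 = fopen('file1.csv', 'rb');
$out   = fopen('file3.csv', 'wb');
$i = 0;

while (($line1 = fgets($file1)) !== false) {
    $line2 = '';
    if (isset($offsets[$i])) {
        fseek($file2, $offsets[$i]);
        $line2 = rtrim((string) fgets($file2), "\r\n");
    }
    fwrite($out, rtrim($line1, "\r\n") . ',' . $line2 . "\n");
    $i++;
}

fclose($file1);
fclose($file2);
fclose($out);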
Another way is to load the data into MySQL (if that is possible) and then export it back out to a new CSV.