I am having a lot of problems writing to a file within a foreach loop. It only ever writes either the line at the end of the array or the one at the start.
For Example:
A file contains such elements,
page.php?id=1
page.php?id=3
page.php?id=4
investor.php?id=1&la=1
page.php?id=15
page.php?id=13
page.php?id=14
The code will open this file and then split each line using explode with = as the delimiter, and will return these elements:
page.php?id
page.php?id
page.php?id
investor.php?id
page.php?id
page.php?id
page.php?id
Then it will pick the unique elements using the array_unique function and save them to a file. I have this code. Please help me.
$lines = file($fopen2);
foreach ($lines as $line)
{
    $rfi_links = explode("=", $line);
    echo $array = $rfi_links[0];
    $save1 = $rfi.$file.$txt;
    $fp = fopen("$save1", "w+");
    fwrite($fp, $array);
    fclose($fp);
}
$links_duplicate_removed = array_unique($array);
print_r($links_duplicate_removed);
"w+" would create a new file on each open, wiping out the old content.
"a+" solves the problem, but it's better to open the file for writing before the loop, and closing after it.
What does not quite make sense is that you always write the current URL to that file while overwriting its previous content. In every iteration of the foreach loop, you reopen that file, erase its content and write one URL to it. In the next iteration, you reopen exactly the same file and do it again. That's why you end up with only the last URL in that file.
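For illustration, applying both points (append mode instead of "w+", and opening the file once before the loop) to the original code would look roughly like this; it stops the overwriting but does not yet remove duplicates ($rfi, $file and $txt are the asker's own variables):
$lines = file($fopen2);
$save1 = $rfi.$file.$txt;
// Open once before the loop; 'a+' appends instead of truncating like 'w+'
$fp = fopen($save1, 'a+');
foreach ($lines as $line) {
    $rfi_links = explode('=', $line);
    fwrite($fp, $rfi_links[0] . PHP_EOL);
}
fclose($fp);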
You will need to collect all URLs in an array, throw out the duplicates and then write the unique ones to disk:
$lines = file($fopen2);
$urls = array(); // <-- create an empty array for the urls
foreach ($lines as $line) {
    $rfi_links = explode('=', $line, 2); // <-- you only need two parts, right?
    $urls[] = $rfi_links[0]; // <-- push the new URL onto the array
}
// Remove duplicates from the array
$links_duplicate_removed = array_unique($urls);
// Write the unique urls to the file:
file_put_contents($rfi.$file.$ext, implode(PHP_EOL, $links_duplicate_removed));
Another solution (much more inspired by your former method) is to open the file once, before starting to iterate over the lines:
$lines = file($fopen2);
$urls = array();
// Open the file once
$fp = fopen($rfi.$file.$ext, 'w');
foreach ($lines as $line) {
    $rfi_url = explode('=', $line, 2);
    // check if that url is new
    if (!in_array($rfi_url[0], $urls)) {
        // it is new, so add it to the array (= mark it as "already seen")
        $urls[] = $rfi_url[0];
        // Write the new url to the file
        fputs($fp, $rfi_url[0] . PHP_EOL);
    }
}
// Close the file
fclose($fp);
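A small variation on the loop above (my own suggestion, not required) avoids the linear search that in_array() does on every iteration by using the URL itself as an array key, since array keys are always unique:
$lines = file($fopen2);
$seen = array();
$fp = fopen($rfi.$file.$ext, 'w');
foreach ($lines as $line) {
    $rfi_url = explode('=', $line, 2);
    // isset() on a key is a cheap "have we written this one already?" check
    if (!isset($seen[$rfi_url[0]])) {
        $seen[$rfi_url[0]] = true;
        fputs($fp, $rfi_url[0] . PHP_EOL);
    }
}
fclose($fp);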
I'm coding a plugin that runs every day at 5am. It combines multiple CSV files (that have a .txt extension).
Currently, it is working... HOWEVER, the output format is incorrect.
The input will look like this:
"","","","","email#gmail.com","PARK PLACE 109 AVE","SOME RANDOM DATA","","","",""
And so on; this is only a partial row.
The output of this code does not return the same format. It produces something like this, without the " around columns that have no data:
,,,,email#gmail.com,"PARK PLACE 109 AVE","SOME RANDOM DATA",,,,
Here is the part of the function that combines everything:
function combine_and_email_csv_files() {
    // Get the current time and date
    $now = new DateTime();
    $date_string = $now->format('Y-m-d_H-i-s');
    // Get the specified directories
    $source_directory = get_option('csv_file_combiner_source_directory');
    $destination_directory = get_option('csv_file_combiner_destination_directory');
    // Load the CSV files from the source directory
    $csv_files = glob("$source_directory/*.txt");
    // Create an empty array to store the combined CSV data
    $combined_csv_data = array();
    // Loop through the CSV files
    foreach ($csv_files as $file) {
        // Load the CSV data from the file
        $csv_data = array_map('str_getcsv', file($file));
        // Add the CSV data to the combined CSV data array
        $combined_csv_data = array_merge($combined_csv_data, $csv_data);
    }
    // Create the combined CSV file
    $combined_csv_file = fopen("$destination_directory/$date_string.txt", 'w');
    // Write the combined CSV data to the file
    foreach ($combined_csv_data as $line) {
        fputcsv($combined_csv_file, $line);
    }
    // Close the combined CSV file
    fclose($combined_csv_file);
}
No matter what I've tried, it's not working. I'm missing something simple, I know.
Thank you Nigel!
So this thread, Forcing fputcsv to Use Enclosure For *all* Fields, helped me get there.
Using fputs instead of fputcsv and forcing "" on null values is the short answer for me. It works beautifully... code is below:
function combine_and_email_csv_files() {
    // Get the current time and date
    $now = new DateTime();
    $date_string = $now->format('Y-m-d_H-i-s');
    // Get the specified directories
    $source_directory = get_option('csv_file_combiner_source_directory');
    $destination_directory = get_option('csv_file_combiner_destination_directory');
    // Load the CSV files from the source directory
    $csv_files = glob("$source_directory/*.txt");
    // Create an empty array to store the combined CSV data
    $combined_csv_data = array();
    // Loop through the CSV files
    foreach ($csv_files as $file) {
        // Load the CSV data from the file
        $csv_data = array_map('str_getcsv', file($file));
        // Add the CSV data to the combined CSV data array
        $combined_csv_data = array_merge($combined_csv_data, $csv_data);
    }
    // Create the combined CSV file
    $combined_csv_file = fopen("$destination_directory/$date_string.txt", 'w');
    // Write the combined CSV data to the file
    foreach ($combined_csv_data as $line) {
        // Enclose each value in double quotes
        $line = array_map(function($val) {
            if (empty($val)) {
                return "\"\"";
            }
            return "\"$val\"";
        }, $line);
        // Convert the line array to a CSV formatted string
        $line_string = implode(',', $line) . "\n";
        // Write the string to the file
        fputs($combined_csv_file, $line_string);
    }
    // Close the combined CSV file
    fclose($combined_csv_file);
}
Thank you, Sammitch!
After much haggling with this problem... Sammitch pointed out: why not just concatenate the files? Simplicity is the ultimate sophistication... right?
*Note: this will only work for my specific circumstance. All I'm doing now is concatenating the files, checking that each file ends with a newline, and skipping the CSV manipulation entirely.
Code below:
function combine_and_email_csv_files() {
    // Get the current time and date
    $now = new DateTime();
    $date_string = $now->format('Y-m-d_H-i-s');
    // Get the specified directories
    $source_directory = get_option('csv_file_combiner_source_directory');
    $destination_directory = get_option('csv_file_combiner_destination_directory');
    // Load the files from the source directory
    $files = glob("$source_directory/*.txt");
    // Create the combined file
    $combined_file = fopen("$destination_directory/$date_string.txt", 'w');
    // Loop through the files
    foreach ($files as $file) {
        // Read the contents of the file
        $contents = file_get_contents($file);
        // Ensure that the file ends with a newline character
        if (substr($contents, -1) != "\n") {
            $contents .= "\n";
        }
        // Write the contents of the file to the combined file
        fwrite($combined_file, $contents);
    }
    // Close the combined file
    fclose($combined_file);
}
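If the exported files ever grow large, a memory-friendlier variation of the same loop inside that function (my own suggestion, not part of the answer above) streams each file into the target instead of loading it whole with file_get_contents():
foreach ($files as $file) {
    $in = fopen($file, 'r');
    // Copy the whole file directly into the combined file
    stream_copy_to_stream($in, $combined_file);
    // Ensure a trailing newline so the next file starts on its own line
    fseek($in, -1, SEEK_END);
    if (fgetc($in) !== "\n") {
        fwrite($combined_file, "\n");
    }
    fclose($in);
}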
I have a .csv file which I can use with Google maps API to successfully create map data.
What I'm looking to do is merge 2 (or more) .csv files and display the TOTAL data on the Google map in the same way. They are all in the same format.
I have the paths to the 2 csv files and if need be, a blank .csv file in the same directory where the files could be merged to...
Unfortunately, the .csv files all have an initial 'header row' which it would be awesome to omit when merging...
If anyone can point me in the right direction, I'd be very happy. Thanks
edit: I've tried:
$data1 = file_get_contents('google_map_data.csv');
$data2 = file_get_contents('google_map_data2.csv');
$TOTALdata = "google_map_dataALL.csv";
function joinFiles(array $files, $result)
{
    if (!is_array($files)) {
        throw new Exception('`$files` must be an array');
    }
    $wH = fopen($result, "w+");
    foreach ($files as $file) {
        $fh = fopen($file, "r");
        while (!feof($fh)) {
            fwrite($wH, fgets($fh));
        }
        fclose($fh);
        unset($fh);
        fwrite($wH, "\n"); // usually the last line doesn't have a newline
    }
    fclose($wH);
    unset($wH);
}
joinFiles(array($data1, $data2), $TOTALdata);
I'm assuming both files are small, so loading them all in one go should be OK.
The code loads both files, then removes the first line from the second one. It also removes any trailing end of line from the first file, but adds its own to ensure there is always a newline between them...
$data1 = file_get_contents('google_map_data.csv');
$data2 = file_get_contents('google_map_data2.csv');
$TOTALdata = "google_map_dataALL.csv";
$data2 = ltrim(strstr($data2, PHP_EOL));
file_put_contents($TOTALdata, rtrim($data1).PHP_EOL.$data2);
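If there are more than two files, the same idea generalizes. A sketch (the file names are placeholders) that keeps the first file's header row and drops the headers of all the others:
$files = array('google_map_data.csv', 'google_map_data2.csv', 'google_map_data3.csv');
$out = '';
foreach ($files as $i => $file) {
    $data = file_get_contents($file);
    if ($i > 0) {
        // Drop the header row of every file except the first
        $data = ltrim(strstr($data, PHP_EOL));
    }
    $out .= rtrim($data) . PHP_EOL;
}
file_put_contents('google_map_dataALL.csv', $out);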
My text file is sample.txt. I want to exclude the first row from the text file and store the other rows in a MySQL database.
ID Name EMail
1 Siva xyz#gmail.com
2 vinoth xxx#gmail.com
3 ashwin yyy#gmail.com
Now I want to read the data from the text file, except the first row (ID, Name, EMail), and store it in the MySQL db, because I have already created fields in the database with the same names.
I have tried
$handle = @fopen($filename, "r"); // read lines one by one
while (!feof($handle)) // Loop till end of file.
{
    $buffer = fgets($handle, 4096); // Read a line.
}
print_r($buffer); // It shows all the text.
Please let me know how to do this?
Thanks.
Regards,
Siva R
It's easier if you use file() since it will get all rows in an array instead:
// Get all rows in an array (and tell file() not to include the trailing newlines)
$rows = file($filename, FILE_IGNORE_NEW_LINES);
// Remove the first element (first row) from the array
array_shift($rows);
// Now do what you want with the rest
foreach ($rows as $lineNumber => $row) {
    // do something cool with the row data
}
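Since the goal is to store those rows in MySQL, here is a minimal, hypothetical sketch of what the loop body could do with PDO; the DSN, credentials, table name and the whitespace-splitting of each row are assumptions on my part, not part of the original answer:
$pdo = new PDO('mysql:host=localhost;dbname=test', 'dbuser', 'dbpass');
$stmt = $pdo->prepare('INSERT INTO users (id, name, email) VALUES (?, ?, ?)');
foreach ($rows as $row) {
    // Each row looks like "1 Siva xyz#gmail.com"; split it on whitespace
    $cols = preg_split('/\s+/', trim($row));
    if (count($cols) === 3) {
        $stmt->execute($cols);
    }
}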
If you want to get it all as a string again, without the first row, just implode it with a new line as glue:
// The FILE_IGNORE_NEW_LINES flag stripped the line breaks, so add them back as glue
$content = implode("\n", $rows);
Note: As @Don'tPanic pointed out in his comment, using file() is simple and easy, but not advisable if the original file is large, since it will read the whole thing into memory as an array (and arrays take more memory than strings). He also correctly recommended the FILE_IGNORE_NEW_LINES flag, just so you know :-)
You can just call fgets once before your while loop to get the header row out of the way.
$firstline = fgets($handle, 4096);
while (!feof($handle)) // Loop till end of file.
{ ...
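Putting that together, a complete version of the loop might look roughly like this (the database insert itself is left as a comment):
$handle = fopen($filename, "r");
// Read and throw away the header row (ID Name EMail)
$firstline = fgets($handle, 4096);
while (($buffer = fgets($handle, 4096)) !== false) {
    // $buffer is now one data row, e.g. "1 Siva xyz#gmail.com"
    // ...parse it and insert it into the database here
}
fclose($handle);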
I have a huge issue: I can't find any way to sort the array entries. My code:
<?php
error_reporting(0);
$lines = array();
$fp = fopen('file.txt', 'r');
$i = 0;
while (!feof($fp))
{
    $line = fgets($fp);
    $line = trim($line);
    $lines[] = $line;
    $oneline = explode("|", $line);
    if ($i > 30) {
        $fz = fopen('users.txt', 'r');
        while (!feof($fz))
        {
            $linez = fgets($fz);
            $linez = trim($linez);
            $lineza[] = $linez;
            $onematch = explode(",", $linez);
            if (strpos($oneline[1], $onematch[1])) {
                echo $onematch[0], $oneline[4], '<br>';
            }
            else {
            }
            rewind($onematch);
        }
    }
    $i++;
}
fclose($fp);
?>
The thing is, I want to sort the items that are echoed by $oneline[4]. I have tried several other posts from Stack Overflow, but was not able to find a solution.
The answer to your question is that in order to sort $oneline[4], which seems to contain a string value, you need to apply the following steps:
1. split the string into an array: $oneline[4] = explode(',', $oneline[4])
2. sort the resulting array: sort($oneline[4])
3. combine the array back into a string: $oneline[4] = implode(',', $oneline[4])
As I got the impression that variable naming is low on the list of priorities, I'm re-using the $oneline[4] variable, mostly to clarify which part of the code I am referring to.
That being said, there are other improvements you should be making if you want to be on speaking terms with your future self (in case you need to work on this code in a couple of months):
- Choose a single coding style and stick to it; the original code looked like it was copy/pasted from at least 4 different sources (mostly inconsistent quote marks and curly braces).
- Try to limit repeating costly operations, such as opening files, whenever you can (to be fair, agents.data could contain 31 lines and users.txt would then be opened only once, resulting in me looking like a fool).
I have updated your code sample to try to show what I mean by the points above.
<?php
error_reporting(0);
$lines = array();
$users = false;
$fp = fopen('http://20.19.202.221/exports/agents.data', 'r');
while ($fp && !feof($fp)) {
    $line = trim(fgets($fp));
    $lines[] = $line;
    $oneline = explode('|', $line);
    // if we have $users (starts as false, is turned into an array
    // inside this if-block) or if we have collected 30 or more
    // lines (this condition is only checked while $users = false)
    if ($users || count($lines) > 30) {
        // your code sample implies the users.txt to be small enough
        // to process several times; consider using some form of
        // caching like this
        if (!$users) {
            // always initialize what you intend to use
            $users = [];
            $fz = fopen('users.txt', 'r');
            while ($fz && !feof($fz)) {
                $users[] = explode(',', trim(fgets($fz)));
            }
            // always close whatever you open.
            fclose($fz);
        }
        // walk through $users, which contains the exploded contents
        // of each line in users.txt
        foreach ($users as $onematch) {
            if (strpos($oneline[1], $onematch[1])) {
                // now, the actual question: how to sort $oneline[4]
                // as the requested example was not available at the
                // time of writing, I assume it to be a string like: 'b,d,c,a'
                // first, explode it into an array
                $oneline[4] = explode(',', $oneline[4]);
                // now sort it using the sort function of your liking
                sort($oneline[4]);
                // and implode the sorted array back into a string
                $oneline[4] = implode(',', $oneline[4]);
                echo $onematch[0], $oneline[4], '<br>';
            }
        }
    }
}
fclose($fp);
I hope this doesn't offend you too much, just trying to help and not just providing the solution to the question at hand.
I am building a small application that does some simple reporting based on CSV files, the CSV files are in the following format:
DATE+TIME,CLIENTNAME1,HAS REQUEST BLABLA1,UNIQUE ID
DATE+TIME,CLIENTNAME2,HAS REQUEST BLABLA2,UNIQUE ID
DATE+TIME,CLIENTNAME1,HAS REQUEST BLABLA1,UNIQUE ID
DATE+TIME,CLIENTNAME2,HAS REQUEST BLABLA2,UNIQUE ID
Now I am processing this using the following function:
function GetClientNames() {
    $file = "backend/AllAlarms.csv";
    $lines = file($file);
    arsort($lines);
    foreach ($lines as $line_num => $line) {
        $line_as_array = explode(",", $line);
        echo '<li><i class="icon-pencil"></i>' . $line_as_array[1] . '</li>';
    }
}
I am trying to retrieve only the Clientname values, but I only want the unique values.
I have tried several different ways of approaching this; I understand I need to use the unique_array function, but I have no clue exactly how to use it.
I've tried this:
function GetClientNames() {
    $file = "backend/AllAlarms.csv";
    $lines = file($file);
    arsort($lines);
    foreach ($lines as $line_num => $line) {
        $line_as_array = explode(",", $line);
        $line_as_array[1] = unique_array($line_as_array[1]);
        echo '<li><i class="icon-pencil"></i>' . $line_as_array[1] . '</li>';
    }
}
But this gives me a very, very dirty result with hundreds of spaces instead of the correct data.
I would recommend using the fgetcsv() function when reading in CSV files. In the wild, CSV files can be quite complicated to handle with a naive explode() approach:
// this array will hold the results
$unique_ids = array();
// open the csv file for reading
$fd = fopen('t.csv', 'r');
// read the rows of the csv file, every row returned as an array
while ($row = fgetcsv($fd)) {
    // change the 3 to the column you want
    // using the keys of the array to make the final values unique, since php
    // arrays can't contain duplicate keys
    $unique_ids[$row[3]] = true;
}
var_dump(array_keys($unique_ids));
You can also collect values and use array_unique() on them later. You probably want to split the "reading in" and the "writing out" part of your code too.
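Applied to the original GetClientNames() function, that array_unique() route could look roughly like this (a sketch; the client name is assumed to be column index 1, as in the question):
function GetClientNames() {
    $file = "backend/AllAlarms.csv";
    $names = array();
    // Collect every client name first
    if (($fd = fopen($file, 'r')) !== false) {
        while (($row = fgetcsv($fd)) !== false) {
            if (isset($row[1])) {
                $names[] = $row[1];
            }
        }
        fclose($fd);
    }
    // Then throw out the duplicates and print the list items
    foreach (array_unique($names) as $name) {
        echo '<li><i class="icon-pencil"></i>' . $name . '</li>';
    }
}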
Try using array_unique().
Docs: http://php.net/manual/en/function.array-unique.php