I have some values in an Excel file and I want all of them to become array elements; remember, the file also has other data in it.
I know one way is to copy them one by one into an array initialization statement.
Here is a sample list, which is just a part of the whole list:
Bay Area
Greater Boston
Greater Chicago
Greater Dallas-Ft. Worth
Greater D.C.
Las Vegas
Greater Houston
Greater LA
Greater New York
Greater San Diego
Seattle
South Florida
It is easy to initialize an array with values when there are not many items, like
$array = array('Bay Area' => 'Bay Area', 'Greater Boston' => 'Greater Boston', ....)
// and so on
But I have 70-80 items, so it is a very tedious task to initialize the array like that.
So, guys, is there any alternative or shortcut to assign the array from such a list of values?
Is there any automatic array generator tool?
If you copied them to a file with each one on its own line, you could read the file in PHP like this:
$myArray = file('myTextFile', FILE_IGNORE_NEW_LINES); // one element per line, without the trailing newlines
// set the keys of the array equal to the values
$myArray = array_combine($myArray, $myArray);
You could export the data from Excel to a CSV, then read the file into PHP like so:
$myArray = array();
if (($file = fopen("myTextFile", "r")) !== FALSE) {
    while (($data = fgetcsv($file)) !== FALSE) {
        foreach ($data as $value) {
            $myArray[$value] = $value;
        }
    }
    fclose($file);
}
$array = explode("\n", file_get_contents('yourfile.txt'));
For more complex cases of loading CSV files in PHP, maybe use fgetcsv(), or even PHPExcelReader for XLS files.
EDIT (after question edit)
(Removed my poor solution, as ohmusama's file() + array_combine() is clearly nicer)
This one:
$string_var = "
Bay Area
Greater Boston
Greater Chicago
Greater Dallas-Ft. Worth
";
$array_var = explode("\n", $string_var);
Get Notepad++, paste the column from Excel there (or open an exported text file), and do a simple search and replace with a regex: search for "(.*)\n" and replace with "'\1'," (the surrounding quotes not included). This gives you a long list like:
'Bay Area','Greater Boston','Greater Chicago',
which you can paste straight into an array() initializer. Hard-coding the array this way would be the fastest option in terms of PHP execution time.
I think this looks better:
$a[] = "Bay Area";
$a[] = "Greater Boston";
$a[] = "Greater Chicago";
For creating such a text file, use a formula in Excel (I don't have Excel at hand, but it looks something like this):
=CONCATENATE("$a[] = ",CHAR(34),A1,CHAR(34),";")
Then export only that column.
Related
I have a form that sends an email. I have a list of words to ban, and they are manually entered in an array. Each word from the array that is found gets a point, and enough points eventually reject the mail send. I want to put the words into a file to read from instead, because although this works, it is slow to update, especially across several domains. Sorry for my lack of skill. Thanks.
$badwords = array("word1", "word2", "word3");
foreach ($badwords as $word)
if (strpos(strtolower($_POST['comments']), $word) !== false
As the badwords add up, the point value increase to a limit which then rejects the send.
Excuse me, I was not clear, evidently. I want to take the EXISTING array of bad words and put them in a file, in some sort of order and format (one per line, or comma separated?). I want that file to be read by the existing script.
So maybe it theoretically looks like :
$badwords = badwords.php and so on....
Thanks
I'm not sure if that's exactly what you need, but try it.
This code should solve what you need: find the 'badwords' from the 'badwords' list in the 'message', count the occurrences of each of the 'badwords', and add 1 penalty point to '$penalty' for every match (including duplicates).
The code ignores the difference between uppercase and lowercase letters.
Set the list:
$badwords = ['world', 'car', 'cat', 'train',];
$message = "World is small. I love music and my car. But I also love to
travel by train. I like animals, especially my cat.";
We will initialize the variable for counting penalty points.
$penalty = 0;
Now we need to go through the 'message' once for each entry in the 'badwords' array. We will use a 'for' loop.
for ($k = 0; $k <= count($badwords) - 1; $k++):
    preg_match_all("/$badwords[$k]/i", $message, $out[]);
endfor;
We have now made a total of 4 passes (from 0 to 3) through the message. Using a regular expression, we store the word matches in an 'out' array, creating a multidimensional array. Now we need to go through this 'out' array, so we reduce its dimensions.
foreach ($out as &$value):
$value = $value[0];
endforeach;
We will now go through the 'out' array again using a 'for' loop and count the number of values in each element. Based on those counts we assign 1 penalty point for each match, including duplicates.
for($n = 0; $n <= count($out)-1; $n++):
$penalty += count($out[$n]);
endfor;
The result is the number of points awarded.
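As a quick check (just a usage line, assuming the sample $badwords and $message above), printing the counter should give 4, one match for each of the four words:
echo $penalty; // 4 for the sample $message above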
Here is the source of the PHP, on PHP Fiddle:
http://phpfiddle.org/main/code/jzyw-hva6
In words.php:
<?php
$words = ["filter","these","words","out"];
In your main script:
<?php
include "words.php";
print_r($words);
Result:
Array
(
[0] => filter
[1] => these
[2] => words
[3] => out
)
Figured it out.
In the root of my webspace I made a file called words.php:
<?php
$badwords = array("000", "adult", etc
then added an include (as the words are counted, there can be more than one) to my main file:
include "../badwords.php"; // the array list was here
and went on to the foreach statement,
and removed this original line from that main file:
$badwords = array("word1", "word2", "word3");
Seems to be working. Thanks
You have helped me many times, but for this problem I haven't found a solution yet.
I have two CSVs which I have to compare to get the differences.
Both CSVs look like this:
https://stackoverflow.com
https://google.com
Both files are about 10 MB
So far I do this:
$array1 = array_map('str_getcsv', file($file1));
$array2 = array_map('str_getcsv', file($file2));
$diff = array_diff(array_map('serialize', $array1), array_map('serialize', $array2));
It works very nicely as long as I have unlimited memory.
And that's the problem ;-) I don't have unlimited memory, because the server is not the same as before.
So now the question is:
How can I reduce the memory usage of this, or how else can I compare the two files?
Please don't think of file size or anything like that.
I need the real differences between the files.
Like, one file contains
https://stackoverflow.com
and the other
https://google.com
so the difference is both of them :-)
Thanks for your help, guys.
Read file1 into the keys of an associative array. Then read file2 line by line, removing those entries from the array.
$file1 = array_flip(file("file1.csv", FILE_IGNORE_NEW_LINES));
$fd2 = fopen("file2.csv", "r");
$diff = array();
while ($line = fgets($fd2)) {
    $line = rtrim($line, "\r\n"); // remove the trailing newline
    if (!array_key_exists($line, $file1)) {
        // line is only in file2, add it to the result
        $diff[] = $line;
    } else {
        // line is in both files, remove it from $file1
        unset($file1[$line]);
    }
}
fclose($fd2);
// Remaining keys in $file1 are unique to that file
$diff = array_merge($diff, array_keys($file1));
If reading the first file into an array and then flipping it takes too much memory, you could build that lookup array with an fgets() loop as well (although the garbage collector should clean up the temporary array created by file()).
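A minimal sketch of that variant (assuming the same file names as above):
$file1 = array();
$fd1 = fopen("file1.csv", "r");
while ($line = fgets($fd1)) {
    $file1[rtrim($line, "\r\n")] = true; // the line itself is the key, so lookups stay cheap
}
fclose($fd1);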
Well, my question is very simple, but I couldn't find a proper answer anywhere. What I need is a way to read a .txt file and, if there is a duplicated line, remove ALL of them, not preserving one. For example, a .txt containing the following:
1234
1233
1232
1234
The output should be:
1233
1232
Because the code has to delete the duplicated line, it must remove all copies of it. I searched all over the web, but it always points to answers that remove duplicated lines while preserving one of them, like this, this or that.
I'm afraid the only way to do this is to read line x and check it against the whole .txt; if an equal line is found, delete both, and if not, move on to the next line. But the .txt file I'm checking has 50 million lines (~900 MB). I don't know how much memory I would need for this kind of task, so I'd appreciate some help here.
Read the file line by line, and use the line contents as the key of an associative array whose values are a count of the number of times the line appears. After you're done, write out all the lines whose value is only 1. This will require as much memory as all the unique lines.
$lines = array();
$fd = fopen("inputfile.txt", "r");
while ($line = fgets($fd)) {
    $line = rtrim($line, "\r\n"); // ignore the newline
    if (array_key_exists($line, $lines)) {
        $lines[$line]++;
    } else {
        $lines[$line] = 1;
    }
}
fclose($fd);
$fd = fopen("outputfile.txt", "w");
foreach ($lines as $line => $count) {
    if ($count == 1) {
        fputs($fd, $line . PHP_EOL); // add the newlines back
    }
}
fclose($fd);
I doubt there is one and only one function that does all of what you want to do. So, this breaks it down into steps...
First, can we load a file directly into an array? See the documentation for the file command
$lines = file('mytextfile.txt', FILE_IGNORE_NEW_LINES); // drop trailing newlines so identical lines compare equal
Now, I have all of the lines in an array. I want to count how many of each entry I have. See the documentation for the array_count_values command.
$counts = array_count_values($lines);
Now, I can easily loop through the array and delete any entries where the count>1
foreach ($counts as $value => $cnt) {
    if ($cnt > 1) {
        unset($counts[$value]);
    }
}
Now, I can turn the array keys (which are the values) into an array.
$nondupes = array_keys($counts);
Finally, I can write the contents out to a file.
file_put_contents('myoutputfile.txt', implode(PHP_EOL, $nondupes));
I think I have a far more elegant solution:
$array = array('1', '1', '2', '2', '3', '4'); // array with some unique values, some not unique
$array_count_result = array_count_values($array); // count values occurences
$result = array_keys(array_filter($array_count_result, function ($value) { return ($value == 1); })); // filter and isolate only unique values
print_r($result);
gives:
Array
(
[0] => 3
[1] => 4
)
It's been years since I've used PHP and I am more than a little rusty.
I am trying to write a quick script that will open a large file and split it into an array and then look for similar occurrences in each value. For example, the file consist of something like this:
Chapter 1. The Beginning
Art. 1.1 The story of the apple
Art. 1.2 The story of the banana
Art. 1.3 The story of the pear
Chapter 2. The middle
Art. 1.1 The apple gets eaten
Art. 1.2 The banana gets split
Art. 1.3 Looks like the end for the pear!
Chapter 3. The End
…
I would like the script to automatically tell me that two of the values have the string "apple" in it and return "Art. 1.1 The Story of the apple" and "Art. 1.1 The apple gets eaten", and then also does the same for the banana and pear.
I am not looking to search through the array for a specific string; I just need it to count occurrences and report what and where.
I have already got the script to open a file and split it into an array. I just can't figure out how to find the similar occurrences.
<?php
$file = fopen("./index.txt", "r");
$blah = array();
while (!feof($file)) {
    $blah[] = fgets($file);
}
fclose($file);
var_dump($blah);
?>
Any help would be appreciated.
This solution is not perfect, as it counts every single word in the text, so you may have to modify it to better serve your needs, but it gives accurate statistics about how many times each word is mentioned in the file, and also exactly on which rows.
$blah = file('./index.txt') ;
$stats = array();
foreach ($blah as $key => $row) {
    $words = array_map('trim', explode(' ', $row));
    foreach ($words as $word) {
        if (empty($stats[$word])) {
            $stats[$word]['rows'] = $key . ", ";
            $stats[$word]['count'] = 1;
        } else {
            $stats[$word]['rows'] .= $key . ", ";
            $stats[$word]['count']++;
        }
    }
}
print_r($stats);
I hope this idea helps you get going, and that you can polish it further to better suit your needs!
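For instance, to keep only the words that appear more than once, which is what the question is after, you could filter the $stats array (a small sketch built on the code above; the threshold of 1 is my own assumption):
$shared = array_filter($stats, function ($info) {
    return $info['count'] > 1;
});
print_r($shared); // for the sample text this includes "apple" and "banana" (and common filler words like "The")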
I'm using the following code to pull the definition of a word from a tab-delimited file with only two columns (word, definition). Is this the most efficient code for what I'm trying to do?
<?php
$haystack = file("dictionary.txt");
$needle = 'apple';
$flipped_haystack = array_flip($haystack);
foreach($haystack as $value)
{
$haystack = explode("\t", $value);
if ($haystack[0] == $needle)
{
echo "Definition of $needle: $haystack[1]";
$defined = "1";
break;
}
}
if($defined != "1")
{
echo "$needle not found!";
}
?>
Right now you're doing a lot of pointless work:
1) load the file into a per-line array
2) flip the array
3) iterate over and explode every value of the array
4) test that exploded value
You can't really avoid step 1, but why do you have to do all that useless "busy work" in steps 2 and 3?
e.g. if your dictionary text was set up something like this:
word:definition
then a simple:
$matches = preg_grep("/^$needle:/", $haystack);
would do the trick for you, with far less code.
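To actually pull the definition out of the matched line (a small follow-up sketch, still assuming the hypothetical word:definition format above):
if ($matches) {
    // take the first matching line and split off the definition part
    list(, $definition) = explode(':', rtrim(reset($matches)), 2);
    echo "Definition of $needle: $definition";
} else {
    echo "$needle not found!";
}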
No. Most likely a trie would be more efficient, and you haven't sorted your dictionary, nor are you using a binary or ternary search tree. If you need to search a huge dictionary, your method is simply too slow.
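As an illustration of the sorted-dictionary idea (a rough sketch, not your current code: it assumes dictionary.txt has been pre-sorted by the word column and every line is the word, a tab, then the definition):
$haystack = file("dictionary.txt", FILE_IGNORE_NEW_LINES); // pre-sorted by word
$needle = 'apple';
$definition = null;
$lo = 0;
$hi = count($haystack) - 1;
while ($lo <= $hi) {
    $mid = (int)(($lo + $hi) / 2);
    list($word, $def) = explode("\t", $haystack[$mid], 2);
    $cmp = strcmp($needle, $word);
    if ($cmp == 0) {
        $definition = $def; // found it after roughly log2(n) comparisons
        break;
    } elseif ($cmp < 0) {
        $hi = $mid - 1; // needle sorts before this word, search the lower half
    } else {
        $lo = $mid + 1; // needle sorts after this word, search the upper half
    }
}
echo $definition !== null ? "Definition of $needle: $definition" : "$needle not found!";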
Is this the most efficient code for what I'm trying to do?
Surely not.
To find only one needle you are processing all the entries.
I will be building up to have 100,000+ entries.
Use a database then.
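For example, with SQLite through PDO (a hedged sketch: the table name, file names and the one-time import step are assumptions, not something from the question):
// one-time import of the tab-delimited dictionary into SQLite
$db = new PDO('sqlite:dictionary.sqlite');
$db->exec('CREATE TABLE IF NOT EXISTS dictionary (word TEXT PRIMARY KEY, definition TEXT)');
$insert = $db->prepare('INSERT OR REPLACE INTO dictionary (word, definition) VALUES (?, ?)');
foreach (file('dictionary.txt', FILE_IGNORE_NEW_LINES) as $line) {
    $insert->execute(explode("\t", $line, 2));
}

// afterwards each lookup is a single indexed query instead of a scan over 100,000+ lines
$needle = 'apple';
$stmt = $db->prepare('SELECT definition FROM dictionary WHERE word = ?');
$stmt->execute(array($needle));
$definition = $stmt->fetchColumn();
echo $definition !== false ? "Definition of $needle: $definition" : "$needle not found!";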