Which is faster, a regexp search or array search? - php

Which is faster:
a regexp search of the contents of a large file for a specific pattern, or
an array_search over a large array to match a value at any index?

Other things being equal, I would expect the array search to always be faster: it does not have to read a file, and it does not have to parse and execute a regex.

It will depend on the type of data you have, the type of data you are searching for, and the amount of it. You really need to try it and find out what works for your case; there isn't a right answer any of us can give you without knowing the context and the specific implementations.
Take a look at a benchmarking class (e.g. PEAR's Benchmark package) if you want to gather some metrics and figure it out.
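If you just want quick numbers yourself, a minimal timing sketch along those lines (data.txt and the needle are placeholders; run each test many times for stable results):
// load the file once so both tests start from memory
$lines = file('data.txt', FILE_IGNORE_NEW_LINES);
$haystack = implode("\n", $lines);

$t0 = microtime(true);
$index = array_search('needle', $lines);        // linear scan of the array
$t1 = microtime(true);
$found = preg_match('/^needle$/m', $haystack);  // regex scan of the raw text
$t2 = microtime(true);

printf("array_search: %.6fs  preg_match: %.6fs\n", $t1 - $t0, $t2 - $t1);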

How to robustly check Wikipedia pages via API using search terms of different casing

I have a website which allows users to submit photos of wildlife. Once uploaded, they can identify the species in the photo, for example "Polar bear".
This triggers me to get information from Wikipedia about that species, using the name as the search term:
$query = "http://en.wikipedia.org/w/api.php?action=query&rvprop=content&format=json&titles=" . urlencode($query);
$pages = file_get_contents($query);
Such a query returns one of the following:
An array of pageids, which I can then query for that page's content
Nothing, because there simply isn't any match
A REDIRECT result, which allows me to resolve the page under its proper name
The problem I have has to do with casing. For example, the search term "Milky stork" returns nothing, not even a redirect, while "Milky Stork" does work. Capitalizing each word in the query is not a solution either, since some page titles contain lowercase words, and then the capitalized query fails. There's no consistency.
I'm looking for a way to make this more robust. It shouldn't be that a query fails because of wrong casing, which cannot even be predicted on the user's side.
Does anyone know of a solution for this? Other than trying every possible combination of casings?
Note: Some may suggest to use dbpedia instead, but this is no solution for my total needs.
Unfortunately, there is no easy solution - read http://www.mediawiki.org/wiki/API:Opensearch#Note_on_case_sensitivity
You can instead try using opensearch to find the appropriate casing (if the normal query returns nothing usable):
http://en.wikipedia.org/w/api.php?action=opensearch&search=milky+stork&namespace=0&suggest=
will give you
["milky stork",["Milky Stork"]]
I think trying every possible combination is a viable solution. So, your query might look like:
http://en.wikipedia.org/w/api.php?action=query&rvprop=content&format=json&titles=Milky stork|Milky Stork
Note that the first letter is not case-sensitive on Wikipedia.

PHP library for word clustering/NLP?

What I am trying to implement is a rather trivial "take search results (as in title & short description), cluster them into meaningful named groups" program in PHP.
After hours of googling and countless searches on SO (yielding interesting results as always, albeit nothing really useful) I'm still unable to find any PHP library that would help me handle clustering.
Is there such a PHP library out there that I might have missed?
If not, is there any FOSS that handles clustering and has a decent API?
Like this:
Use a list of stopwords, get all words or phrases not in the stopwords, count the occurrences of each, and sort in descending order.
The stopword list needs to cover all common English terms. It should also account for punctuation: either preg_replace the punctuation so it becomes a separate word first, e.g. "Something, like this." -> "Something , like this .", or just remove all punctuation (as the code below does).
$content = strtolower($content);                       // normalize case first
$content = preg_replace('/[^a-z\s]/', ' ', $content);  // remove punctuation
$words = preg_split('/\s+/', $content, -1, PREG_SPLIT_NO_EMPTY);
$stopwords = 'the|and|is|your|me|for|where|etc...';
$stopwords = explode('|', $stopwords);
$stopwords = array_flip($stopwords);                   // so isset() gives O(1) lookups
$result = array();
$temp = array();
foreach ($words as $s) {
    if (isset($stopwords[$s]) OR strlen($s) < 3) {
        // a stopword (or very short word) ends the current phrase
        if (sizeof($temp) > 0) {
            $result[] = implode(' ', $temp);
            $temp = array();
        }
    } else {
        $temp[] = $s;
    }
}
if (sizeof($temp) > 0) $result[] = implode(' ', $temp);
$phrases = array_count_values($result);                // phrase => occurrence count
arsort($phrases);                                      // highest frequency first
Now you have an associative array in order of the frequency of terms that occur in your input data.
How you do the matching is up to you, and it depends largely on the length of the strings in the input data.
I would see if any of the top 3 array keys match any of the top 3 from any other item in the data. Those then become your groups.
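A rough sketch of that grouping step (assuming $items maps each result to its $phrases array from the code above; the overlap rule is just one possible choice):
$groups = array();
foreach ($items as $id => $phrases) {
    $top = array_slice(array_keys($phrases), 0, 3); // top 3 terms for this item
    if (empty($top)) continue;
    $placed = false;
    foreach ($groups as $label => $group) {
        if (array_intersect($top, $group['terms'])) {
            $groups[$label]['items'][] = $id;       // shares a top term: same group
            $placed = true;
            break;
        }
    }
    if (!$placed) {
        $groups[$top[0]] = array('terms' => $top, 'items' => array($id));
    }
}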
Let me know if you have any trouble with this.
"... cluster them into meaningful groups" is a bit to vague, you'll need to be more specific.
For starters you could look into K-Means clustering.
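To make that concrete, here is a bare-bones k-means loop over 2-D points (for search results you would first turn each one into a numeric feature vector, e.g. term counts; the seeding here is deliberately naive):
function kmeans(array $points, $k, $iterations = 20) {
    $centroids = array_slice($points, 0, $k); // naive seeding: first $k points
    $clusters = array();
    for ($it = 0; $it < $iterations; $it++) {
        $clusters = array_fill(0, $k, array());
        // assignment step: each point joins its nearest centroid
        foreach ($points as $p) {
            $best = 0;
            $bestDist = PHP_INT_MAX;
            foreach ($centroids as $i => $c) {
                $d = pow($p[0] - $c[0], 2) + pow($p[1] - $c[1], 2);
                if ($d < $bestDist) { $bestDist = $d; $best = $i; }
            }
            $clusters[$best][] = $p;
        }
        // update step: move each centroid to the mean of its members
        foreach ($clusters as $i => $members) {
            if (empty($members)) continue;
            $n = count($members);
            $centroids[$i] = array(
                array_sum(array_column($members, 0)) / $n,
                array_sum(array_column($members, 1)) / $n,
            );
        }
    }
    return $clusters;
}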
Have a look at this page and website:
PHP/ir: Information Retrieval and other interesting topics
EDIT: You could try some data mining yourself by cross-referencing search results with something like the open directory (dmoz) RDF data dump and then enumerating the matching categories.
EDIT2: And here is a dmoz/category question that also mentions "Faceted Search"!
Dmoz/Monster algorithme to calculate count of each category and sub category?
If you're doing this for English only, you could use WordNet: http://wordnet.princeton.edu/. It's a lexicon widely used in research which provides, among other things, sets of synonyms for English words. The shortest distance between two words could then serve as a similarity metric to do clustering yourself as zaf proposed.
Apparently there is a PHP interface to WordNet here: http://www.foxsurfer.com/wordnet/. It came up in this question: How to use word Net with php, but I have not tried it. However, interfacing with a command line tool from PHP yourself is feasible as well.
You could also have a look at Programming Collective Intelligence (Chapter 3 : Discovering Groups) by Toby Segaran which goes through just this use case using Python. However, you should be able to implement things in PHP once you understand how it works.
Even though it is not PHP, the Carrot2 project offers several clustering engines and can be integrated with Solr.
This may be way off, but check out OpenCalais. They have a web service which allows you to pass in a block of text, and it will pass you back a parseable response of things that it found in the text, such as places, people, facts etc. You could use these categories to build your "clouds" and to choose which results to display.
I've used this library a few times in php and it's always been quite easy to work with.
Again, this might not be relevant to what you're trying to do. Maybe you could post an example of what you're trying to accomplish?
If you can pre-define the filters for your faceted search (the named groups) then it will be much easier.
Rather than relying on an algorithm that uses the current searcher's input and their particular results to generate the filter list, you would use an aggregate of the most commonly performed searches by all users and then tag results with them if they match.
You would end up with a table (or something) of URLs in a many-to-many join to a table of tags, so each result url could have several appropriate tags.
When the user searches, you simply match their search against the full index. But for the filters, you take the top results from among the current resultset.
I can work up fuller query examples if you want.
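As a rough starting point, a sketch of the lookup side (all table and column names here are invented):
// fetch the result URLs carrying a given filter tag, via the many-to-many join
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT u.url
       FROM urls u
       JOIN url_tags ut ON ut.url_id = u.id
       JOIN tags t      ON t.id = ut.tag_id
      WHERE t.name = ?'
);
$stmt->execute(array('some-filter-tag'));
$urls = $stmt->fetchAll(PDO::FETCH_COLUMN);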

How can PHP string indexed arrays be achieved efficiently in C++

I’ve been playing around with searching text in big lists and found that using a PHP array seems to be a quick way of doing it.
E.g. if you had loads of place names and associated postcodes you could read them into a PHP array like this:
$place['place name here'] = "postcode";
Then to look up you just take the place you want to look up and plug it in to the array:
$postcode_sought = $place['place I want to look up'];
I thought I could speed this up using C++ but of course C++ does not allow (as far as I know) arrays with a string as the index.
The only way I can think to do it is to create vectors for the place and postcode and loop through the place vector looking for a match but the repeated string comparisons take forever as I'd expected. I also experimented with hashing the text but I still couldn’t get it anywhere near as fast as PHP.
I think PHP is written in C, so my question is: how does C manage to provide this string-indexing functionality for PHP?
I’m not looking for the actual code or anything, it just seems to me that there must be some fundamental technique that is used for this and I was just wondering if there is anyone out there who could briefly explain it.
Thanks in advance.
I thought I could speed this up using C++ but of course C++ does not allow (as far as I know) arrays with a string as the index.
It does. You can use std::map as an associative array.
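A minimal C++ sketch (std::unordered_map is the closer analogue to PHP's hash-table-backed arrays; std::map is tree-based and keeps keys ordered):
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // a string-indexed "array", backed by a hash table
    std::unordered_map<std::string, std::string> place;
    place["place name here"] = "postcode";

    auto it = place.find("place I want to look up");
    if (it != place.end())
        std::cout << it->second << '\n';
    else
        std::cout << "not found\n";
    return 0;
}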
You could try using Berkeley DB. Back in the day it was the fastest, but by default it is disk-oriented. I don't know whether you can run it purely in memory, but you can always mount its directory from tmpfs.
PHP's arrays are hash tables internally. You can also get quite far by writing a binary search: sort the keys, check the key in the middle, then repeat on the half that could contain the key until you've found it. You can also hash the keys (e.g. with MD5()) so that lookups compare short fixed-length digests instead of arbitrary-length strings.
C and C++ only allow integer types as raw array indexes, and in C a string isn't even a distinct type; it's just an array of chars.
As stated above, use std::map or similar.

Need an algorithm to find near-duplicate text values

I run a photo website where users are free to enter any tag they like, even tags not used before. As a result, a photo may sometimes be tagged as "insect" whilst somebody else tags it as "insects".
I'd like to keep the free-tagging capability, yet would like a way to filter out such near-duplicates. The total collection of tags is currently at 1,500. My idea is to read all of them from the DB into memory and then run an algorithm on them that flags "suspects".
My idea of a suspect is that x% of the characters in the string are the same (same char and order), where x is configurable. I could probably code a really inefficient way to do this but I was wondering if there is an existing solution to this problem?
Edit: Forgot to mention: just sorting the tags isn't enough, as that would require me to go through the entire set to find dupes.
There are some flaws in your logic. For example, what happens when the plural of a word is different from the singular (e.g. person vs. people, or even candy vs. candies)?
If English is the primary language, check out Soundex which allows phonetic matches. Also consider using a crowd-sourced synonym model where users can create links to existing tags.
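PHP ships with soundex() (and the similar metaphone()) built in, so a phonetic check is only a few lines (the tag values here are illustrative):
$existing = array('insect', 'polar bear');
$newTag = 'insects';
foreach ($existing as $tag) {
    if (soundex($tag) === soundex($newTag)) {
        // "insect" and "insects" map to the same Soundex code
        echo "'$newTag' may be a duplicate of '$tag'\n";
    }
}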
Maybe the algorithm you are looking for is approximate string matching:
http://en.wikipedia.org/wiki/Approximate_string_matching
Given a word, you can match it against the list of words, and if the 'distance' is small enough, add it to the suspects.
A fast implementation is to use dynamic programming like the Needleman–Wunsch algorithm.
I have made a blog example of this in C# where you can configure the 'distance' using a matrix character lookup file.
http://kunuk.wordpress.com/2010/10/17/dynamic-programming-example-with-c-using-needleman-wunsch-algorithm/
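In PHP specifically, the built-in levenshtein() function already gives you an edit distance, so a brute-force pass over 1,500 tags is quite feasible (the relative threshold below is a made-up starting point):
$tags = array('insect', 'insects', 'stork', 'milky stork');
$suspects = array();
$n = count($tags);
for ($i = 0; $i < $n; $i++) {
    for ($j = $i + 1; $j < $n; $j++) {
        $dist = levenshtein($tags[$i], $tags[$j]);
        $maxLen = max(strlen($tags[$i]), strlen($tags[$j]));
        // flag pairs whose edit distance is small relative to their length
        if ($dist <= max(1, (int)(0.2 * $maxLen))) {
            $suspects[] = array($tags[$i], $tags[$j], $dist);
        }
    }
}
print_r($suspects);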
Is "either contains either" fine? You could do a SQL query something like this, if your images are in a database (which would only make sense):
SELECT * FROM ImageTags WHERE INSTR('theNewTag', TagName) > 0 OR INSTR(TagName, 'theNewTag') > 0 LIMIT 1;
If you really want to do this efficiently, I would suggest some sort of JavaScript implementation that displays possibilities as the user is typing in a tag. Not only will it save the user time by showing suggestions as they type, it will also automatically stop them from typing "suspects" when "suspect" shows up as a suggestion. That is, of course, unless they really want "suspects".
You could load a huge list of words and narrow them down as the user types. I get the feeling that this could be kept very simple, especially if you only want to anticipate correctly spelled words. If someone misses a letter, they'll probably go back to fix it when they see a list of suggestions that isn't at all what they meant to type. And when they do type a word correctly, it'll pop up in the suggestions.
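The server side of such an autocomplete can stay tiny; a sketch (the tag list and the q parameter are placeholders):
// suggest.php - returns up to 5 JSON tag suggestions for ?q=prefix
$tags = array('insect', 'insects', 'stork', 'polar bear'); // e.g. loaded from the DB
$q = strtolower(isset($_GET['q']) ? $_GET['q'] : '');
$matches = array();
foreach ($tags as $tag) {
    if ($q !== '' && strpos(strtolower($tag), $q) === 0) {
        $matches[] = $tag;
        if (count($matches) >= 5) break;
    }
}
header('Content-Type: application/json');
echo json_encode($matches);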

Ideas for importing text data using PHP array functions

I am new to PHP and am asking for some coding help. I have little experience with PHP; I have gone to the php.net site and read a couple of books to get some ideas on how to perform this task.
There seem to be many functions, and I am confused about which would be the best fit (i.e. fgetcsv, explode(), regex?) for extracting the data in the file. Then I would need assistance printing/displaying this information in an orderly fashion.
Here is what I need to do:
import/read in a txt file that is delimited (see sample)
The attributes are not always ordered and some records will have missing attributes.
dynamically create a web table (HTML) to present this data
Sample records:
attribute1=value;attribute2=value;attribute3=value;attribute4=value;
attribute1=value;attribute2=value;attribute4=value;
attribute1=value;attribute2=value;attribute3=value;
How do I go about this? What would be best practice? From my research it seems I should create an array, perhaps a multidimensional one? Thank you for your time and insight, and I hope my question is clear.
Seems like homework; if so, best to tag it as such.
You will want to look into file(), foreach() and explode(), given that the data is delimited by ';'.
The number of attributes should not matter if some are missing, but it all depends on how you set up the display. Given that some are missing, though, you will need to know the largest number of attributes present in order to set up the table correctly and avoid issues.
Best of luck!
I would first use the file() function, which will give you an array with each line as an element. Then a couple of explodes and loops to get through it all: first explode on ';', then loop through each of those pieces and explode on '='.
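A hedged sketch of that whole flow (the file name is a placeholder; attribute names come straight from the data):
$rows = array();
$keys = array();
foreach (file('records.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
    $row = array();
    foreach (explode(';', rtrim($line, ';')) as $pair) {
        list($k, $v) = explode('=', $pair, 2);
        $row[$k] = $v;
        $keys[$k] = true;            // remember every attribute ever seen
    }
    $rows[] = $row;
}
$keys = array_keys($keys);

// one column per attribute; a blank cell where a record is missing one
echo "<table>\n<tr>";
foreach ($keys as $k) echo "<th>" . htmlspecialchars($k) . "</th>";
echo "</tr>\n";
foreach ($rows as $row) {
    echo "<tr>";
    foreach ($keys as $k) {
        echo "<td>" . htmlspecialchars(isset($row[$k]) ? $row[$k] : '') . "</td>";
    }
    echo "</tr>\n";
}
echo "</table>\n";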
