I'm using two regular expressions to pull assignments out of MySQL queries and using them to create an audit trail. One of them is the 'picky' one that requires quoted column names/etc., the other one does not.
Both of them are tested and parse the values out correctly. The issue I'm having is that with certain queries the 'picky' regexp is actually just causing Apache to segfault.
I tried a variety of things to confirm this was the cause, up to and including leaving the regexp in the code but modifying the conditional so it never ran (to rule out some sort of compile-time issue). No issues. It's only when the regexp runs against specific queries that it segfaults, and I can't find any obvious pattern to tell me why.
The code in question:
if ($picky)
preg_match_all("/[`'\"]((?:[A-Z]|[a-z]|_|[0-9])+)[`'\"] *= *'((?:[^'\\\\]|\\\\.)*)'/", $sql, $matches);
else
preg_match_all("/[`'\"]?((?:[A-Z]|[a-z]|_|[0-9])+)[`'\"]? *= *[`'\"]?([^`'\" ,]+)[`'\"]?/", $sql, $matches);
The only difference between the two is that the first makes the quotes around the column name mandatory (no question marks) and only allows single quotes around the value. Replacing the first regexp with the second (for testing purposes) and using the same data removes the issue - it is definitely something to do with the regexp.
The specific SQL that is causing me grief is available at:
http://stackoverflow.pastebin.com/m75c2a2a0
Interestingly enough, when I remove the highlighted section, it all works fine. Trying to submit the highlighted section by itself causes no error.
I'm pretty perplexed as to what's going on here. Can anyone offer any suggestions as to further debugging or a fix?
EDIT: Nothing terribly exciting, but for the sake of completeness here's the relevant log entry from Apache (/var/log/apache2/error.log - There's nothing in the site's error.log. Not even a mention of the request in the access log.)
[Thu Dec 10 10:08:03 2009] [notice] child pid 20835 exit signal Segmentation fault (11)
One of these for each request containing that query.
EDIT2: On the suggestion of Kuroki Kaze, I tried gibberish of the same length and got the same segfault. I sat and tried a bunch of different lengths and found the limit: 6035 characters works fine, 6036 segfaults.
EDIT3: Changing the values of pcre.backtrack_limit and pcre.recursion_limit in php.ini mitigated the problem somewhat. Apache no longer segfaults, but my regexp no longer matches all of the matches in the string. Apparently this is a long-known (from 2007) bug in PHP/PCRE:
http://bugs.php.net/bug.php?id=40909
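For reference, the expensive part is the value pattern (?:[^'\\]|\\.)* - PCRE burns one level of recursion per character of the quoted value, which is what overflows the stack on long strings. A possible mitigation (a sketch of the classic "unrolled loop" rewrite; untested against this exact data) keeps the recursion proportional to the number of escape sequences instead:
// Sketch: [A-Za-z0-9_]+ replaces the character alternation, and the value
// part is "unrolled" so plain runs of characters cost no extra recursion:
preg_match_all(
    "/[`'\"]([A-Za-z0-9_]+)[`'\"] *= *'([^'\\\\]*(?:\\\\.[^'\\\\]*)*)'/",
    $sql,
    $matches
);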
EDIT4: I posted the code in the answers below that I used to replace this specific regular expression, as the workarounds weren't acceptable for my purpose (this is a product for sale, so we can't guarantee php.ini changes, and a regexp that only partially matches removes functionality we require). The code I posted is released into the public domain with no warranty or support of any kind. I hope it can help someone else. :)
Thank you everyone for the help!
Adam
I have been hit with a similar preg_match-related issue, with the same Apache segfault. The only difference is that the preg_match causing it is built into the CMS I'm using (WordPress).
The "workaround" that was offered was to change these settings in php.ini:
[Pcre]
;PCRE library backtracking limit.
;pcre.backtrack_limit=100000
pcre.recursion_limit=200000000
pcre.backtrack_limit=100000000
The trade-off is that for rendering larger pages (in my case, > 200 rows, where one of the columns holds up to a 1500-character text description), you'll get pretty high CPU utilization, and I'm still seeing the segfaults. Just not as frequently.
My site's close to end-of-life, so I don't really have much need (or budget) to look for a real solution. But maybe this can mitigate the issue you're seeing.
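If you can't rely on php.ini access, both settings can also be raised at runtime, and preg_last_error() will tell you when a limit was actually hit instead of failing silently. A minimal sketch ($pattern and $subject stand in for your own values; the limits are illustrative):
// Raise the PCRE limits for this request only (illustrative values):
ini_set('pcre.backtrack_limit', '100000000');
ini_set('pcre.recursion_limit', '200000000');
$result = preg_match_all($pattern, $subject, $matches);
// preg_* functions return false and set an error code when a limit trips:
if ($result === false && preg_last_error() !== PREG_NO_ERROR) {
    error_log('PCRE limit hit, error code: ' . preg_last_error());
}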
Interestingly enough, when I remove the highlighted section, it all works fine. Trying to submit the highlighted section by itself causes no error.
What about the size of the submission? If you pass gibberish of equal length, what happens?
EDIT: splitting and merging will look something like this:
$strings = explode("\n", $sql);
$matches = array(array(), array(), array());
foreach ($strings as $string) {
    preg_match_all("/[`'\"]?((?:[A-Z]|[a-z]|_|[0-9])+)[`'\"]? *= *[`'\"]?([^`'\" ,]+)[`'\"]?/", $string, $matches_temp);
    $matches[0] = array_merge($matches[0], $matches_temp[0]);
    $matches[1] = array_merge($matches[1], $matches_temp[1]);
    $matches[2] = array_merge($matches[2], $matches_temp[2]);
}
Given that this only needs to match against the queries when saving pages or performing other infrequently executed operations, I felt the performance hit of the following code was acceptable. It parses the SQL query ($sql) and places name => value pairs into $data. It seems to be working well and handles large queries fine.
$data = array();   // collected name => value pairs
$quoted = '';      // current quote character, or '' when outside quotes
$escaped = false;  // true when the previous character was a backslash
$key = '';
$value = '';
$target = 'key';   // which of $key/$value we are currently filling
$len = strlen($sql);
for ($i = 0; $i < $len; $i++)
{
    if ($escaped)
    {
        // Previous character was a backslash: take this one literally.
        $$target .= $sql[$i];
        $escaped = false;
    }
    else if ($quoted != '')
    {
        if ($sql[$i] == '\\')
            $escaped = true;
        else if ($sql[$i] == $quoted)
            $quoted = ''; // closing quote
        else
            $$target .= $sql[$i];
    }
    else
    {
        if ($sql[$i] == '\'' || $sql[$i] == '`')
        {
            // Opening quote: start collecting a fresh key or value.
            $quoted = $sql[$i];
            $$target = '';
        }
        else if ($sql[$i] == '=')
            $target = 'value';
        else if ($sql[$i] == ',')
        {
            // End of one assignment: store the pair and start over.
            $target = 'key';
            $data[$key] = $value;
            $key = '';
            $value = '';
        }
    }
}
// Store the final pair (there is no trailing comma after the last assignment).
if ($value != '')
    $data[$key] = $value;
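As a quick illustration of the behaviour (hypothetical input, not from the original query data):
// Example: given
//   $sql = "UPDATE users SET `name` = 'Ann', `age` = '42' WHERE id = 7";
// the loop above leaves:
//   $data == array('name' => 'Ann', 'age' => '42')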
Thank you everyone for the help and direction!
Initial text
I created a console command in a Laravel project. The command takes HTML data from one table and checks two pattern matches for each record with preg_match. If a match is found, it updates the record in a second table that shares an attribute with the record from the first table currently in focus in the foreach loop.
The number of records is approx. 3500.
After approx. 150 iterations the command slows down dramatically, and it takes a whole day for it to finish.
I read all the similar issues on this forum but they didn't help me. Not even the answer about forcing garbage collection.
Code is like following:
$ras = RecordsA::all();
$pattern = '/===this is the pattern===/';
foreach ($ras as $ra) {
    $html = $ra->html;
    $rb = RecordB::where("url", $ra->url)->first();
    $rb->phone = preg_match($pattern, $html, $matches) ? $matches[1] : $rb->phone;
    $rb->save();
}
I was searching for possible issue about preg_match performance but it was unsuccessful.
Did anybody meet such problem?
For MMMTroy update
I forgot to say that I also tried custom code similar to yours:
$counter = DB::select("select count(*) as count from records_a")->first();
// Pattern for Wiktor Stribiżew :)
$pattern = '/Telefon:([^<]+)</';
for ($i = 0; $i < $counter->count; $i += 150) {
    $ras = RecordsA::limit(150)->offset($i)->get();
    foreach ($ras as $ra) {
        $html = $ra->html;
        $rb = RecordB::where("url", $ra->url)->first();
        $rb->phone = preg_match($pattern, $html, $matches) ? $matches[1] : $rb->phone;
        $rb->save();
    }
}
"Pagination via OFFSET" is Order(N*N). You would be better off with Order(N), so "remember where you left off".
More discussion.
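A sketch of "remember where you left off" in Eloquent, assuming records_a has an auto-incrementing id column (Laravel's chunkById helper implements the same idea):
$pattern = '/Telefon:([^<]+)</';
$lastId = 0;
do {
    // Keyset pagination: the WHERE clause seeks straight to the next batch
    // instead of rescanning and discarding the skipped rows like OFFSET does.
    $ras = RecordsA::where('id', '>', $lastId)
        ->orderBy('id')
        ->limit(150)
        ->get();
    foreach ($ras as $ra) {
        $rb = RecordB::where("url", $ra->url)->first();
        $rb->phone = preg_match($pattern, $ra->html, $matches) ? $matches[1] : $rb->phone;
        $rb->save();
        $lastId = $ra->id;
    }
} while ($ras->count() > 0);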
There is a good chance you are running out of memory. Laravel has a handy method to "chunk" results, which dramatically reduces memory usage by limiting the number of items you are looping over at once. Try something like this.
$pattern = '/===this is the pattern===/';
RecordsA::chunk(100, function ($ras) use ($pattern) {
    foreach ($ras as $ra) {
        $html = $ra->html;
        $rb = RecordB::where("url", $ra->url)->first();
        $rb->phone = preg_match($pattern, $html, $matches) ? $matches[1] : $rb->phone;
        $rb->save();
    }
});
What this is doing is grabbing 100 records at a time, looping through those, and then grabbing the next 100 from the database using an offset. This prevents the entire result set from being held in memory at once.
Does your database grow while you loop through it? What happens if RecordB is not found and first() returns null? It feels to me like your RecordB table is growing, causing the lookup query to slow down.
I recently had similar problems and was hitting memory limits. There is one thing that is the number-one cause of slowdowns and memory leaks here:
The query log, DB::$queryLog (disable it with DB::disableQueryLog();). Every time a query is executed, its query string is stored in that variable.
Perhaps that is what's causing it, but otherwise the code looks fine to me.
OK, so I shave my head, but if I had hair I wouldn't need a razor because I'd have torn it all out tonight. It's gone 3am and what looked like a simple solution at 00:30 has become far from it.
Please see the code extract below..
$psusername = substr($list[$count],16);
if ($psusername == $psu_value){
$answer = "YES";
}
else {
$answer = "NO";
}
$psusername holds the value "normann" which is taken from a URL in a text based file (url.db)
$psu_value also holds the value "normann" which is retrieved from a cookie set on the user's computer (or a parameter in the browser address bar - URL).
However, and I'm sure you can guess my problem, the variable $answer contains "NO" from the test above.
All the PHP I know I've picked up from Google searches and you guys here, so I'm no expert, which is perhaps evident.
Maybe this is a schoolboy error, but I cannot figure out what I'm doing wrong. My assumption is that the data types differ. Ultimately, I want to compare the two variables and have a TRUE result when they contain the same information (i.e. normann = normann).
So if you very clever fellows can point out why two variables echo what appears to be the same information but are in fact different, it'd be a very useful lesson for me and make my users very happy.
Do they echo the same thing when you do:
echo gettype($psusername) . "\n" . gettype($psu_value);
Since I can't see what data is stored in the array $list (or the index $count), I cannot suggest a full solution to your problem.
But I can suggest that you insert this code right before the if statement:
var_dump($psusername);
var_dump($psu_value);
and see why the two variables are not identical.
The var_dump function will output the content stored in the variable along with its type (string, integer, array, etc.), so you can figure out why the if statement is returning false.
Since it looks like you have non-printable characters in your string, you can strip them out before the comparison. This will remove whatever is not printable in your character set:
$psusername = preg_replace("/[[:^print:]]/", "", $psusername);
0D 0A is a newline sequence: the first byte is the carriage return (CR) character and the second is the line feed (LF) character. They are also known as \r and \n.
You can just trim it off using trim().
$psusername = trim($psusername);
Or if it only occurs at the end of the string then rtrim() would do the job:
$psusername = rtrim($psusername);
If you are getting the values from the file using file() then you can pass FILE_IGNORE_NEW_LINES as the second argument, and that will remove the new line:
$contents = file('url.db', FILE_IGNORE_NEW_LINES);
I just want to thank all who responded. After viewing my logfile's output in hex format, I realised it was the carriage return characters causing the variables to mismatch, and as mentioned I was able to resolve it by trimming them with the following code:
$psusername = preg_replace("/[^[:alnum:]]/u", '', $psusername);
I also know that the system within which the profiles and usernames are created allow both upper and lower case values to match, so I took the precaution of building that functionality into my code as an added measure of completeness.
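For completeness, a minimal sketch of the case-insensitive comparison described, assuming both values have already been cleaned as above:
// Case-insensitive comparison of the cleaned values:
if (strcasecmp($psusername, $psu_value) == 0) {
    $answer = "YES";
} else {
    $answer = "NO";
}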
And I'm happy to say, the code functions perfectly now.
Once again, thanks for your responses and suggestions.
I have some code running and it's been working fine, but the site in question has started producing a duplicate when a value in the array is "morphsuit".
The code:
if (isset($sort2)) {
    $sort2 = array_unique($sort2);
    foreach ($sort2 as $value) {
        $f_dress .= '<li>'.$value.'</li>';
    }
}
else {
    $f_dress = '';
}
All the other entries pulled from the DB are OK, but I'm getting a double when the value is "morphsuit".
Anyone know why?
The values aren't exactly the same - the most likely cause is that there's some kind of non-printable character embedded in one or the other (or both); things like whitespace, in-line HTML, or control characters.
Try running var_dump() on the values and pay attention to the length portion of the output when it says something like string(9) "morphsuit" vs. string(2031) "morphsuit" (I invented the number there, but you get the idea).
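If hidden whitespace turns out to be the culprit, normalizing the values before deduplicating usually clears it up; a minimal sketch, assuming plain leading/trailing whitespace:
// Trim each entry (removes stray \r, \n, spaces) before de-duplicating:
$sort2 = array_unique(array_map('trim', $sort2));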
I want to extract the content of a specific div in an external webpage. The div looks like this:
<dt>Win rate</dt><dd><div>50%</div></dd>
My target is the "50%". I'm currently using this PHP code to extract the content:
function getvalue($parameter, $content) {
    preg_match($parameter, $content, $match);
    return isset($match[1]) ? $match[1] : null;
}
$parameter = '#<dt>Win rate</dt><dd><div>(.*)</div></dd>#';
$content = file_get_contents('https://somewebpage.com');
Everything works fine; the problem is that this method takes too much time, especially if I have to use it several times with different $content.
I would like to know if there's a better (faster, simpler, etc.) way to accomplish the same thing. Thanks!
You may use DOMDocument::loadHTML and navigate your way to the given node.
$content = file_get_contents('https://somewebpage.com');
$doc = new DOMDocument();
$doc->loadHTML($content);
Now to get to the desired node, you may use method DOMDocument::getElementsByTagName, e.g.
$dds = $doc->getElementsByTagName('dd');
foreach($dds as $dd) {
// process each <dd> element here, extract inner div and its inner html...
}
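Alternatively, DOMXPath can target the value directly rather than looping over every <dd>; a sketch assuming the exact markup shown in the question:
libxml_use_internal_errors(true); // real-world HTML is rarely perfectly valid
$doc = new DOMDocument();
$doc->loadHTML($content);
$xpath = new DOMXPath($doc);
// Find the <div> inside the <dd> that follows the 'Win rate' <dt>:
$nodes = $xpath->query("//dt[. = 'Win rate']/following-sibling::dd[1]/div");
if ($nodes->length > 0) {
    echo $nodes->item(0)->textContent; // e.g. "50%"
}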
Edit: I see the point #pebbl has made about DOMDocument being slower. Indeed it is; however, parsing HTML with preg_match is asking for trouble. In that case, I'd also recommend looking at an event-driven SAX XML parser. It is much more lightweight, faster, and less memory-intensive, as it does not build a tree. You may take a look at XML_HTMLSax for such a parser.
There are basically three main things you can do to improve the speed of your code:
Off load the external page load to another time (i.e. use cron)
On a Linux-based server I would know what to suggest, but seeing as you use Windows I'm not sure what the equivalent would be. Cron for Linux allows you to fire off scripts at scheduled times - in the background - so not using a browser. Basically I would recommend that you create a script whose sole purpose is to go and fetch the website pages at a particular time offset (depending on how frequently you need to update your data) and then write those webpages to files on your local system.
$listOfSites = array(
    'http://www.something.com/page.htm',
    'http://www.something-else.co.uk/index.php',
);
$dirToContainSites = getcwd() . '/sites';
foreach ( $listOfSites as $site ) {
    $content = file_get_contents( $site );
    /// I've just simply converted the URL into a filename here; there are
    /// better ways of handling this, but this at least keeps things simple.
    /// The following just converts any non-letter or non-number into an
    /// underscore... so, http___www_something_com_page_htm
    $file_name = preg_replace('/[^a-z0-9]/i', '_', $site);
    file_put_contents( $dirToContainSites . '/' . $file_name, $content );
}
Once you've created this script, you then need to set the server up to execute it as regularly as you need. You can then modify your front-end script that displays the stats to read from the local files, which will give a significant speed increase.
You can find out how to read files from a directory here:
http://uk.php.net/manual/en/function.dir.php
Or the simpler method (but prone to possible problems) is just to re-step your array of sites, convert the URLs to file names using the preg_replace above, and then check for the file's existence in the folder.
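A minimal sketch of that read-back step, reusing $dirToContainSites from above (glob keeps it short):
/// re-read the previously fetched pages and run the extraction on each
foreach ( glob($dirToContainSites . '/*') as $file ) {
    $content = file_get_contents( $file );
    /// ... run your existing matching logic against $content here ...
}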
Cache the result of calculating your statistics
It's quite likely, this being a stats page, that you'll want to visit it quite frequently (not as frequently as a public page, but still). If the same page is visited more often than the cron-based script is executed, then there is no reason to do all the calculation again. So basically all you have to do to cache your output is something similar to the following:
$cachedVersion = getcwd() . '/cached/stats.html';
/// check to see if there is a cached version of this page
if ( file_exists($cachedVersion) ) {
    /// if so, load it and echo it to the browser
    echo file_get_contents($cachedVersion);
}
else {
    /// start output buffering so we can catch what we send to the browser
    ob_start();
    /// DO YOUR STATS CALCULATION HERE AND ECHO IT TO THE BROWSER LIKE NORMAL
    /// end output buffering and grab the contents so we now have a string
    /// of the page we've just generated
    $content = ob_get_contents(); ob_end_clean();
    /// write the content to the cached file for next time
    file_put_contents($cachedVersion, $content);
    echo $content;
}
Once you start caching things you need to be aware of when you should delete or clear your cache - otherwise if you don't your stats output will never change. With regards to this situation, the best time to clear your cache is at the point you go and fetch the external web pages again. So you should add this line to the bottom of your "cron" script.
$cachedVersion = getcwd() . '/cached/stats.html';
if ( file_exists($cachedVersion) ) {
    unlink( $cachedVersion ); /// will delete the file
}
There are other speed improvements you could make to the caching system (you could even record the modified times of the external webpages and load only when they have been updated) but I've tried to keep things easy to explain.
Don't use a HTML Parser for this situation
Scanning a HTML file for one particular unique value does not require the use of a full-blown or even lightweight HTML Parser. Using RegExp incorrectly seems to be one of those things lots of programmers starting out fall into, and it is a question that is always asked. This has led to lots of automatic knee-jerk reactions from more experienced coders to automatically adhere to the following logic:
if ( $askedAboutUsingRegExpForHTML ) {
    $automatically->orderTheSillyPersonToUse( $HTMLParser );
} else {
    $soundAdvice = $think->about( $theSituation );
    print $soundAdvice;
}
HTMLParsers should be used when the target within the markup is not so unique, or your pattern to match relies on such flimsy rules that it'll break the second an extra tag or character occurs. They should be used to make your code more reliable, not if you want to speed things up. Even parsers that do not build a tree of all the elements will still be using some form of string searching or regular expression notation, so unless the library-code you are using has been compiled in an extremely optimised manner, it will not beat well coded strpos/preg_match logic.
Considering I have not seen the HTML you are hoping to parse, I could be way off, but from what I've seen of your snippet it should be quite easy to find the value using a combination of strpos and preg_match. Obviously if your HTML is more complex and might have random multiple occurrences of <dt>Win rate</dt><dd><div>50%</div></dd> it will cause problems - but even so, a HTML Parser would have the same problem.
$offset = 0;
/// loop through the occurrences of 'Win rate'
while ( ($p = stripos($html, 'win rate', $offset)) !== FALSE ) {
    /// grab out a snippet of the surrounding HTML to speed up the RegExp
    /// (note: substr's third argument is a length, not an end position)
    $snippet = substr($html, $p, 50);
    /// I've extended your RegExp to try and account for 'white space' that could
    /// occur around the elements. The following won't take into account any random
    /// attributes that may appear, so if you find some pages aren't working - echo
    /// out the $snippet var using something like "echo '<xmp>'.$snippet.'</xmp>';"
    /// and that should show you what is appearing that is breaking the RegExp.
    if ( preg_match('#^win\s+rate\s*</dt>\s*<dd>\s*<div>\s*([0-9]+%)\s*<#i', $snippet, $regs) ) {
        /// once you are here your % value will be in $regs[1];
        break; /// exit the while loop as we have found our 'Win rate'
    }
    /// advance the offset past this occurrence for the next loop
    /// (leaving it at $p would re-find the same spot forever)
    $offset = $p + 1;
}
Gotchas to be aware of
If you are new to PHP, as you state in a comment above, then the above may seem rather complicated - which it is. What you are trying to do is quite complex, especially if you want to do it optimally and fast. However, if you work through the code I've given and research any bits that you aren't sure of / haven't heard of (php.net is your friend), it should give you a better understanding of a good way to achieve what you are doing.
Guessing ahead however, here are some of the problems you might face with the above:
File permission errors - in order to be able to read and write files to and from the local operating system, you will need to have the correct permissions to do so. If you find you cannot write files to a particular directory, it might be that the host you are using won't allow you to do so. If this is the case, you can either contact them to ask about how to get write permission to a folder, or if that isn't possible you can easily change the code above to use a database instead.
I can't see my content - when using output buffering, all the echo and print commands do not get sent to the browser; they instead get saved up in memory. PHP should automatically output all the stored content when the script exits, but if you use a command like ob_end_clean() this actually wipes the 'buffer', so all the content is erased. This can lead to confusing situations when you know you are echoing something... but it just isn't appearing.
(Mini Disclaimer :) I've typed all the above manually so you may find there are PHP errors, if so, and they are baffling, just write them back here and StackOverflow can help you out)
Instead of trying not to use preg_match, why not just trim your document contents down in size? For example, you could dump everything before <body and everything after </body>. Then preg_match will be searching less content to begin with.
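Something along these lines (a sketch; it assumes both tags are present in the page):
// Keep only the <body>...</body> portion before running preg_match:
$start = stripos($content, '<body');
$end   = stripos($content, '</body>');
if ($start !== false && $end !== false) {
    $content = substr($content, $start, $end - $start);
}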
Also, you could try to run each of these processes as a pseudo-separate thread, so that they aren't happening one at a time.
We have a PHP script that scrapes search engine results pages and outputs clients' website positions into a bespoke report suite for their domains.
Google changed something in the first week of February which prevented our script from detecting the domain on the page and I haven't currently got the original developer in the office nor can any of our other staff resolve this.
I'm pretty sure I know where the issue lies in the script; it's just that, as I'm not a developer, I'm unsure what each line is actually doing. Our script uses the relevant classes from the search results to determine where what we're looking for is actually situated.
The script itself still runs and outputs the HTML fine. It's purely just the part of the script that says look for 'domain' on page that isn't being detected.
I appreciate that you're probably going to need a lot more information from me in order to advise what the issue is and I am happy to provide the file/coding as necessary. I would be prepared to pay for a fix on this too if necessary.
Below is where I feel the issue is occurring:-
// Note our use of ===. Simply == would not work as expected
// because the position of 'a' was the 0th (first) character.
if ($pos4 === false) {
    $mystring5 = $val[0];
    $findme5 = $prevlink;
    $pos5 = @strpos($mystring5, $findme5);
    // Note our use of ===. Simply == would not work as expected
    // because the position of 'a' was the 0th (first) character.
    if ($pos5 === false) {
        $serp = $serp + 1;
        echo '<b>'.$serp.'.</b> '.$val[0].'<br /><br />';
        $link = get_string_between($val[1], 'href="', '" onmousedown');
        $link = str_replace('https://', '', $link);
        $link = str_replace('http://', '', $link);
        $link = str_replace('www.', '', $link);
        $prevlink = $link;
        $prevlink = str_replace(strstr($prevlink, '/'), "", $prevlink);
        $sitelen = strlen($row_site_check['website_name']);
        $sitefrom_link = substr($link, 0, $sitelen);
        if ($sitefrom_link == $row_site_check['website_name']) {
            $site_found = 1;
            $rank_postion = $serp;
            $site_link = $link;
            $con = mysql_connect("localhost", "dbname", "dbpass");
            if (!$con)
            {
                die('Could not connect: ' . mysql_error());
            }
Any help would be greatly appreciated.
Thanks.
Check out the Google rank scraper (php, opensource)
I have been using software based on it daily since it was released, and as far as I can tell there was no change to Google's layout in February that broke anything.
I'm not sure if you'll like the answer, but the reason is likely that the Rank Scraper I linked uses DOM to parse Google's HTML, while you seem to rely on regular expressions and string operations.
I've personally tried to make a scraper based on such methods in the past and found that it requires a lot of maintenance work to keep running - sometimes real ugly workarounds.
When using DOM, small changes usually don't damage anything, and otherwise adapting the code tends to be easier.
In the past few years the DOM code of that parser has worked without major interruption; only twice did a small change have to be made. And Google changed a lot on their site in that time - it just didn't cause ill effects.
The DOM functions of the above-linked checker can be found in the functions.php file:
function process_raw($htmdata, $page)
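For illustration only - this is not the linked scraper's actual code, and Google's markup changes over time - the DOM approach looks roughly like this (reusing $htmdata and $row_site_check from the snippets above):
// Parse the fetched SERP HTML with DOM instead of string operations (sketch):
libxml_use_internal_errors(true); // scraped HTML is rarely valid; silence warnings
$doc = new DOMDocument();
$doc->loadHTML($htmdata);
$xpath = new DOMXPath($doc);
// Walk every anchor and compare its href against the client's domain:
foreach ($xpath->query('//a[@href]') as $a) {
    $href = $a->getAttribute('href');
    if (strpos($href, $row_site_check['website_name']) !== false) {
        // ... record the ranking position for this result here ...
    }
}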