MediaWiki + Graphviz + Image maps + Pagelinks - php

Background: Working with MediaWiki 1.19.1, Graphviz 2.28.0, Extension:GraphViz 0.9 on WAMP stack (Server 2008, Apache 2.4.2, MySQL 5.5.27, PHP 5.4.5). Everything is working great and as expected for the basic functionality of rendering a clickable image from a Graphviz diagram using the GraphViz extension in MediaWiki.
Problem: The links in the image map are not added to the MediaWiki pagelinks table. I understand why they aren't added, but it becomes an issue: there is no way to follow the links back with the 'What links here' functionality.
Desired solution: During the processing of the diagram in the GraphViz extension, I would like to use the generated .map file to then create a list of wikilinks to add on the page to get picked up by MediaWiki and added to the pagelinks table.
Details:
This GraphViz extension code:
<graphviz border='frame' format='png'>
digraph example1 {
    // define nodes
    nodeHello [
        label="I say Hello",
        URL="Hello"
    ]
    nodeWorld [
        label="You say World!",
        URL="World"
    ]
    // link nodes
    nodeHello -> nodeWorld
}
</graphviz>
Generates an image of the two linked nodes (screenshot not reproduced here), along with this image map code in a corresponding .map file on the server:
<map id="example1" name="example1">
<area shape="poly" id="node1" href="Hello" title="I say Hello" alt="" coords="164,29,161,22,151,15,137,10,118,7,97,5,77,7,58,10,43,15,34,22,31,29,34,37,43,43,58,49,77,52,97,53,118,52,137,49,151,43,161,37"/>
<area shape="poly" id="node2" href="World" title="You say World!" alt="" coords="190,125,186,118,172,111,152,106,126,103,97,101,69,103,43,106,22,111,9,118,5,125,9,133,22,139,43,145,69,148,97,149,126,148,152,145,172,139,186,133"/>
</map>
From that image map file, I would like to be able to extract the href and title to build wikilinks like so:
[[Hello|I say Hello]]
[[World|You say World!]]
I'm guessing that since the .map file is essentially XML, I could just use XPath to query the file, but that is just a guess. PHP is not my strongest area, and I don't know the best way to approach the XML/XPath option, or whether that is even the best way to pull that info from the file.
Once I have that collection/array of wikilinks from the .map file, I'm sure I can hack up the GraphViz.php extension file to add them to the contents of the page so they get added to the pagelinks table.
Progress: I had a bit of a Rubber Duck Problem Solving moment right as I submitted the question. I realized that since I had well-formed data in the image map, XPath was probably the way to go. It was fairly trivial to pull the data I needed, especially since I found that the map file contents were still stored in a local string variable.
$xml = new SimpleXMLElement( $map );
$links = '';
foreach ( $xml->area as $item ) {
    $links .= "[[" . $item->attributes()->href . "|" . $item->attributes()->title . "]]";
}
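For reference, the same extraction also works with an explicit XPath query instead of iterating the area children. A minimal sketch against the same $map string (untested here):
$xml = new SimpleXMLElement( $map );
$links = array();
// xpath() returns an array of matching SimpleXMLElement objects
foreach ( $xml->xpath( '//area' ) as $area ) {
    // attributes can be read with array syntax on the element
    $links[] = '[[' . (string)$area['href'] . '|' . (string)$area['title'] . ']]';
}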
Final Solution: See my accepted answer below.
Thanks for taking a look. I appreciate any assistance or direction you can offer.

I finally worked through all of the issues and now have a fairly decent solution that renders the graph nicely, provides a list of links, and registers the links with the wiki. My solution doesn't fully support all of the capabilities of the current GraphViz extension, because there is functionality we do not need and I do not want to support. Here are the assumptions/limitations of this solution:
Does not support MscGen: We only have a need for Graphviz.
Does not support imageAtrributes: We wanted to control the format and presentation ourselves, and there seemed to be inconsistencies in the imageAtrributes implementation that would cause further support issues.
Does not support wikilinks: While it would be nice to provide consistent link usage across the wiki and the GraphViz extension, the reality is that Graphviz is a completely different markup environment. The current extension 'supports' wikilinks, but the implementation is a little weak and leaves room for confusion. Example: wikilinks support an optional link description, but Graphviz already uses the node label for the description. So you end up ignoring the wikilink description and telling users 'Yes, we support wikilinks, but don't use the description part.' Since we aren't really using wikilinks correctly anyway, we just implement a regular link and avoid the confusion entirely.
Here is what the output looks like: the graph rendered inside a bordered table, with the internal/external/invalid link lists printed beneath it (screenshot not reproduced here).
Here are the changes that were made:
Comment out this line:
// We don't want to support wikilinks so don't replace them
//$timelinesrc = rewriteWikiUrls( $timelinesrc ); // if we use wiki-links we transform them to real urls
Replace this block of code:
// clean up map-name
$map = preg_replace( '#<ma(.*)>#', ' ', $map );
$map = str_replace( '</map>', '', $map );
if ( $renderer == 'mscgen' ) {
    $mapbefore = $map;
    $map = preg_replace( '/(\w+)\s([_:%#/\w]+)\s(\d+,\d+)\s(\d+,\d+)/',
        '<area shape="$1" href="$2" title="$2" alt="$2" coords="$3,$4" />',
        $map );
}
/* Produce html
 */
if ( $wgGraphVizSettings->imageFormatting )
{
    $txt = imageAtrributes( $args, $storagename, $map, $outputType, $wgUploadPath ); // if we want borders/position/...
} else {
    $txt = '<map name="' . $storagename . '">' . $map . '</map>' .
        '<img src="' . $wgUploadPath . '/graphviz/' . $storagename . '.' . $outputType . '"' .
        ' usemap="#' . $storagename . '" />';
}
With this code:
$intHtml = '';
$extHtml = '';
$badHtml = '';
// Wrap the map/area info with top level nodes and load into xml object
$xmlObj = new SimpleXMLElement( $map );
// What does map look like before we start working with it?
wfDebugLog( 'graphviz', 'map before: ' . $map . "\n" );
// loop through each of the <area> nodes
foreach ( $xmlObj->area as $areaNode ) {
    wfDebugLog( 'graphviz', "areaNode: " . $areaNode->asXML() . "\n" );
    // Get the data from the XML attributes
    $hrefValue = (string)$areaNode->attributes()->href;
    $textValue = (string)$areaNode->attributes()->title;
    wfDebugLog( 'graphviz', '$hrefValue before: ' . $hrefValue . "\n" );
    wfDebugLog( 'graphviz', '$textValue before: ' . $textValue . "\n" );
    // For the text fields, multiple spaces (" ") in the Graphviz source (label)
    // turn into a regular space followed by encoded non-breaking spaces
    // ("&#160;") in the .map file, which SimpleXML then decodes into actual
    // non-breaking space characters in the local variables.
    // The following two options appear to convert/decode the characters
    // appropriately. Leaving the lines commented out for now, as we have
    // not seen a graph in the wild with multiple spaces in the label -
    // just happened to stumble on the scenario.
    // See http://www.php.net/manual/en/simplexmlelement.asxml.php
    // and http://stackoverflow.com/questions/2050723/how-can-i-preg-replace-special-character-like-pret-a-porter
    //$textValue = iconv("UTF-8", "ASCII//TRANSLIT", $textValue);
    //$textValue = html_entity_decode($textValue, ENT_NOQUOTES, 'UTF-8');
    // Now we need to deal with whitespace characters like tabs and newlines,
    // and deal with them correctly when there are multiple occurrences.
    // Unfortunately, the \n and \t values in the variable aren't actual
    // tab or newline characters but the literal characters '\' + 't' or '\' + 'n',
    // so the normally recommended regex '/\s+/u' for collapsing whitespace
    // does not work.
    // See http://stackoverflow.com/questions/6579636/preg-replace-n-in-string
    $hrefValue = preg_replace( "/( |\\\\n|\\\\t)+/", ' ', $hrefValue );
    $textValue = preg_replace( "/( |\\\\n|\\\\t)+/", ' ', $textValue );
    // check to see if the url matches any of the
    // allowed protocols for external links
    if ( preg_match( '/^(?:' . wfUrlProtocols() . ')/', $hrefValue ) ) {
        // external link
        $parser->mOutput->addExternalLink( $hrefValue );
        $extHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
    } else {
        $first = substr( $hrefValue, 0, 1 );
        if ( $first == '\\' || $first == '[' || $first == '/' ) {
            // potential UNC path, wikilink, absolute or relative path
            $hrefValue = '#InvalidLink';
            $badHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
            $textValue = 'Invalid link. Check Graphviz source.';
        } else {
            $title = Title::newFromText( $hrefValue );
            if ( is_null( $title ) ) {
                // invalid link
                $hrefValue = '#InvalidLink';
                $badHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
                $textValue = 'Invalid link. Check Graphviz source.';
            } else {
                // internal link
                $parser->mOutput->addLink( $title );
                $intHtml .= Linker::link( $title, $textValue ) . ', ';
                $hrefValue = $title->getFullURL();
            }
        }
    }
    $areaNode->attributes()->href = $hrefValue;
    $areaNode->attributes()->title = $textValue;
}
$map = $xmlObj->asXML();
// The contents of $map, which is now XML, gets embedded
// in the HTML sent to the browser, so we need to strip
// the XML version tag. We also strip the <map> tag because
// it will get replaced with a new one with the correct name.
$map = str_replace( '<?xml version="1.0"?>', '', $map );
$map = preg_replace( '#<ma(.*)>#', ' ', $map );
$map = str_replace( '</map>', '', $map );
// Let's see what it looks like now that we are done with it.
wfDebugLog( 'graphviz', 'map after: ' . $map . "\n" );
$txt = '' .
    '<table style="background-color:#f9f9f9;border:1px solid #ddd;">' .
    '<tr>' .
    '<td style="border:1px solid #ddd;text-align:center;">' .
    '<map name="' . $storagename . '">' . $map . '</map>' .
    '<img src="' . $wgUploadPath . '/graphviz/' . $storagename . '.' . $outputType . '"' .
    ' usemap="#' . $storagename . '" />' .
    '</td>' .
    '</tr>' .
    '<tr>' .
    '<td style="font:10px verdana;">' .
    'This Graphviz diagram links to the following pages:' .
    '<br /><strong>Internal</strong>: ' . ( $intHtml != '' ? rtrim( $intHtml, ' ,' ) : '<em>none</em>' ) .
    '<br /><strong>External</strong>: ' . ( $extHtml != '' ? rtrim( $extHtml, ' ,' ) : '<em>none</em>' ) .
    ( $badHtml != '' ? '<br /><strong>Invalid</strong>: ' . rtrim( $badHtml, ' ,' ) .
        '<br /><em>Tip: Do not use wikilinks ([]), UNC paths (\\) or relative links (/) when creating links in Graphviz diagrams.</em>' : '' ) .
    '</td>' .
    '</tr>' .
    '</table>';
Possible enhancements:
It would be nice if the list of links below the graph were sorted and de-duplicated; a possible sketch follows.
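One way to do that: collect each rendered link in an array inside the area loop instead of appending to a string, then de-dupe and sort before joining. A minimal untested sketch (the $intLinks array is a hypothetical replacement for $intHtml):
// inside the <area> loop, instead of $intHtml .= ... . ', ':
$intLinks[] = Linker::link( $title, $textValue );
// after the loop, before building $txt:
$intLinks = array_unique( $intLinks ); // drop duplicates
sort( $intLinks, SORT_STRING );        // alphabetical order
$intHtml = implode( ', ', $intLinks ); // no trailing comma to rtrim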

Related

preg_replace_callback(): Unknown modifier '/'

I need to search for and highlight a word.
My sentence is:
Please see our Author Guide for more information: http://digital-library.theiet.org/journals/author-guide.
you will be contacted shortly asking you to take a decision and sign either a copyright or Open Access licence form.
My code:
function find_highlight_word($word, $text) {
    $counter = 0;
    $text = preg_replace_callback($word, function($matches) use (&$counter) {
        $counter++;
        return '<b class="search_mark highlighted" id="matched_' . $counter . '">'
            . $matches[0]
            . '</b>';
    }, $text);
    return $text;
}
$word = '//';
$word = '/' . preg_quote($word) . '/i';
$this->find_highlight_word($word, $text);
When I search with '//', PHP throws an error.
You're correctly attempting to preg-quote your string, but you're not telling it what your delimiter is, so the // inside the string causes issues. Pass the delimiter as the second argument so it can be escaped as well:
$word = '/' . preg_quote($word, '/') . '/i';
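To illustrate with the value from the question ('//' as the search term), the delimiter-aware preg_quote() escapes the slashes and the pattern compiles cleanly:
$word = '//';
$pattern = '/' . preg_quote($word, '/') . '/i'; // yields '/\/\//i'
// the pattern is now valid and matches literal double slashes:
echo preg_replace($pattern, '<b>$0</b>', 'http://example.com');
// prints: http:<b>//</b>example.com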

Why use DOMDocument instead of PHP with HTML?

I still can't really wrap my head around the built-in DOMDocument class.
Why should I use it instead of just doing something like the following?
I would like to know the benefits.
$URI = $_SERVER['REQUEST_URI'];
$navArr = Config::get('navigation');
$navigation = '<ul id="nav">' . "\n";
foreach ($navArr as $name => $path) {
    $navigation .= ' <li' . ((in_array($URI, $path)) ? ' class="active"' : false) . '>' . $name . '</li>' . "\n";
}
$navigation .= '</ul>' . "\n\n";
return $navigation;
Here's the same example using DOMDocument:
$doc = new DOMDocument;
$list = $doc->appendChild($doc->createElement('ul'));
$list->setAttribute('id', 'nav');
foreach ($navArr as $name => $path) {
    $listItem = $list->appendChild($doc->createElement('li'));
    if (in_array($URI, $path)) {
        $listItem->setAttribute('class', 'active');
    }
    $link = $listItem->appendChild($doc->createElement('a'));
    $link->setAttribute('href', $path[1]);
    $link->appendChild($doc->createTextNode($name));
}
return $doc->saveHTML();
It's more verbose, but not too much, and possibly clearer what's happening at each step.
One benefit is character escaping: createTextNode and setAttribute ensure that the special HTML characters (quotes, ampersands and angle brackets) are escaped properly.
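A quick demonstration of that escaping (the label text is made up for the example):
$doc = new DOMDocument;
$li = $doc->appendChild($doc->createElement('li'));
// the ampersand and angle brackets are escaped automatically
$li->appendChild($doc->createTextNode('Tom & Jerry <admins>'));
echo $doc->saveHTML(); // <li>Tom &amp; Jerry &lt;admins&gt;</li>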
In the end, though, for a larger application, you'd probably want to use an actual templating language like Twig for generating HTML, as the templates are more readable and extensible.

Find All images on a Page using preg_replace

How can I find all image links using "preg_replace"? I'm having a hard time understanding how to implement the regex.
What I've tried so far:
$pattern = '~(http://pics.-[^0-9]*.jpg)(http://pics.-[^0-9]*.jpg)(</a>)~';
$result = preg_replace($pattern, '$2', $content);
preg_replace(), as the name suggests, replaces something. You want to use preg_match_all().
<?php
// The \\2 is an example of backreferencing. This tells pcre that
// it must match the second set of parentheses in the regular expression
// itself, which would be the ([\w]+) in this case. The extra backslash is
// required because the string is in double quotes.
$html = "<b>bold text</b><a href=howdy.html>click me</a>";
preg_match_all("/(<([\w]+)[^>]*>)(.*?)(<\/\\2>)/", $html, $matches, PREG_SET_ORDER);
foreach ($matches as $val) {
    echo "matched: " . $val[0] . "\n";
    echo "part 1: " . $val[1] . "\n";
    echo "part 2: " . $val[2] . "\n";
    echo "part 3: " . $val[3] . "\n";
    echo "part 4: " . $val[4] . "\n\n";
}
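Applied to the actual task of finding image links (the URLs below are made-up examples, and the pattern is a simple sketch rather than a bulletproof HTML parser):
$content = '<img src="http://pics.example.com/a1.jpg"><a href="#"><img src="http://pics.example.com/b2.jpg"></a>';
// capture the src attribute of every <img> tag
preg_match_all('~<img[^>]+src=["\']?([^"\' >]+)~i', $content, $matches);
print_r($matches[1]); // => the two image URLs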
Another easy way to find all image links from a web page is to use the Simple HTML DOM parser:
// Requires the Simple HTML DOM library
include 'simple_html_dom.php';
// Create DOM from URL or file
$html = file_get_html('http://www.google.com/');
// Find all images
foreach ($html->find('img') as $element) {
    echo $element->src . '<br>';
}
This is a very simple way to get all image links from any web page.
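If you'd rather avoid a third-party library, the built-in DOMDocument can do the same job. A minimal sketch (the URL is a placeholder):
$doc = new DOMDocument;
// @ suppresses warnings from imperfect real-world HTML
@$doc->loadHTML(file_get_contents('http://www.example.com/'));
foreach ($doc->getElementsByTagName('img') as $img) {
    echo $img->getAttribute('src') . '<br>';
}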

PHP - Do I need any UTF-8 encoding/decoding?

Ok, I am writing comments to a UTF-8 file, which I then read within the code below to remove the text in between those comments. My question is: do I need anything different here to do this successfully for UTF-8 files, or will the code below work? Basically, I am wondering if I need the utf8_decode and/or utf8_encode functions, or perhaps the iconv function?
// This holds the current file we are working on.
$lang_file = 'files/DreamTemplates.russian-utf8.php';
// Can't read from the file if it doesn't exist now can we?
if (!file_exists($lang_file))
    continue;
// This helps to remove the language strings for the template, since the comment is unique
$template_begin_comment = '// ' . ' Template - ' . $lang_file . ' BEGIN...';
$template_end_comment = '// ' . ' Template - ' . $lang_file . ' END!';
$fp = fopen($lang_file, 'rb');
$content = fread($fp, filesize($lang_file));
fclose($fp);
// Searching within the string, extracting only what we need.
$start = strpos($content, $template_begin_comment);
$end = strpos($content, $template_end_comment);
// We can't do this unless both are found.
if ($start !== false && $end !== false)
{
    $begin = substr($content, 0, $start);
    $finish = substr($content, $end + strlen($template_end_comment));
    $new_content = $begin . $finish;
    // Write it into the file.
    $fo = fopen($lang_file, 'wb');
    fwrite($fo, $new_content);
    fclose($fo);
}
Thanks for your help concerning UTF-8 encoding and decoding of strings, even if they are commented strings.
When I write the PHP comments into the UTF-8 file, I am not using any conversion. Should I be? The string definitions between the PHP comments are already encoded in UTF-8, however, and seem to work fine within the file. Any help appreciated here.
No, you don't need to do any conversions.
Also, your extraction code will be reliable in the sense that it won't mangle multibyte characters, although you might want to make sure the end position occurs after the start position.
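That ordering check is a one-line addition to the condition in your code:
// only proceed when both markers are found AND the end comes after the start
if ($start !== false && $end !== false && $start < $end)
{
    // ... same extraction and rewrite as before ...
}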
To do this I would use preg_replace instead:
$content = file_get_contents($lang_file);
$template_begin_comment = '// ' . ' Template - ' . $lang_file . ' BEGIN...';
$template_end_comment = '// ' . ' Template - ' . $lang_file . ' END!';
// find from begin comment to end comment
// replace with emptiness
// keep track of how many replacements have been made
$new_content = preg_replace('/' .
    preg_quote($template_begin_comment, '/') .
    '.*?' .
    preg_quote($template_end_comment, '/') . '/s',
    '',
    $content,
    -1,
    $replace_count
);
if ($replace_count) {
    // if replacements have been made, write the file back again
    file_put_contents($lang_file, $new_content);
}
Because your match patterns contain only ASCII, this approach is safe enough: the rest of the content is copied verbatim.
Disclaimer
The above code is not tested; if there's anything wrong, just let me know.

Can't read txt file correctly to PHP array

I have a working script that shows YouTube videos on a page, but I need it to read the addresses into an array from a separate (txt) file. I have tried several solutions but have not yet found a single working one.
Any help from smarter people is welcome!
My working code uses an array like this:
$video = array('JObJ7vuppGE', 'BsmE6leTbkc', 'zWtj4TEVLvU');// array of youtube videos
What I'm planning to use instead:
$tube = $_SERVER['DOCUMENT_ROOT'] .'/youtube.txt';// IDs of youtube videos inside
$lines = file($tube);
foreach($lines as $line_num => $line)
$video = array($line);// array of youtube videos
$v = preg_replace("/[^A-Za-z0-9\-\_]/", "", $_GET["v"]); // make sure "v" parameter is sanitized
if (!empty($v) && in_array($v, $video)) // check that parameter "v" is not empty and is one of your videos
{
    echo '<object width="853" height="505"><param name="movie" value="http://www.youtube.com/v/' . $v . '&hl=en_US&fs=1&hd=1&color1=0xe1600f&color2=0xfebd01"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param>
<embed src="http://www.youtube.com/v/' . $v . '&hl=en_US&fs=1&hd=1&color1=0xe1600f&color2=0xfebd01" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="853" height="505"></embed>
</object>';
}
else
{
    foreach ($video as $v)
    {
        echo '<div class="tube">
<a href="/sites/tube.php?v=' . $v . '" class="thickbox">
<img style="border: 2px solid #000; margin: 5px;" alt="" src="http://i1.ytimg.com/vi/' . $v . '/default.jpg"/>
</a>';
        $xmlData = simplexml_load_string(utf8_encode(file_get_contents('http://gdata.youtube.com/feeds/api/videos/' . $v . '?fields=title,yt:recorded')));
        $title = (string)$xmlData->title;
        $entry = $xmlData;
        $namespaces = $entry->getNamespaces(true);
        $yr = $entry->children((string)$namespaces['yt']);
        // get <yt:recorded> node for the date and convert yyyy-mm-dd to dd.mm.yyyy
        $year = substr($yr->recorded, 0, 4);
        $month = substr($yr->recorded, 5, 2);
        $day = substr($yr->recorded, 8, 2);
        $recorddate = $day . "." . $month . "." . $year;
        echo '<p>' . $recorddate . '<br>' . $title . '<br>
</p>
</div>';
    }
}
Unfortunately, I am only able to read the first line of the txt file; at least, only the first video shows on the page. My approach is obviously wrong.
And don't tell me, I know my code is a mess. Feel free to modify and improve it, as long as it still works as planned.
I just copied your code across to my local install of XAMPP.
Without a 'v' parameter in the query string, a list of videos was shown, each displaying a thumbnail and title within a link that redirected back to the same page, but with a 'v' parameter included.
Clicking the thumbnail loaded the page showing a YouTube player for the selected video.
I see no problems here...
From what I understand of your question, you want to read video IDs from multiple files and combine them into one PHP array? Am I correct?
Also, I suggest using the Zend_Gdata API for the YouTube functionality; it's simple, with many other options you can try.
I think the main idea / question was to store the IDs in a txt file and read them, rather than having them stored in an array, right?
<?php
/**
* #author - Sephedo
* #for - dogcat # Stackoverflow
* #question - http://stackoverflow.com/questions/6936958/cant-read-txt-file-correctly-to-php-array
*/
$filename = "youtube.txt";
// Open the file
if( $handle = @fopen( $filename, "r") )
{
    // Cycle each line until end
    while(! feof( $handle ) )
    {
        $line = fgets($handle); // Read the line
        if ($line !== false)
            $youtube[] = trim($line); // trim the newline so in_array() comparisons match
    }
    fclose( $handle ); // Close the file when done
}
if(! isset( $youtube ) ) $youtube = array();
foreach( $youtube as $videoId )
{
    echo $videoId; // Etc etc etc.....
}
?>
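For what it's worth, the same thing can be done in one call with file() flags, which also avoids the trailing-newline problem that breaks the in_array() check in the question's code:
// FILE_IGNORE_NEW_LINES strips the newline from each element;
// FILE_SKIP_EMPTY_LINES drops blank lines
$video = file($_SERVER['DOCUMENT_ROOT'] . '/youtube.txt',
    FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);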
