PHP - Do I need any UTF-8 encoding/decoding?

Ok, I am writing comments to a UTF-8 file, which I then read within the function below to remove the text in between those comments. My question is: do I need anything different here to do this successfully for UTF-8 files, or will the code below work? Basically, I am wondering if I need the utf8_decode and/or utf8_encode functions, or perhaps the iconv function?
// This holds the current file we are working on.
$lang_file = 'files/DreamTemplates.russian-utf8.php';

// Can't read from the file if it doesn't exist now can we?
if (!file_exists($lang_file))
    continue;

// This helps to remove the language strings for the template, since the comment is unique
$template_begin_comment = '// ' . ' Template - ' . $lang_file . ' BEGIN...';
$template_end_comment = '// ' . ' Template - ' . $lang_file . ' END!';

$fp = fopen($lang_file, 'rb');
$content = fread($fp, filesize($lang_file));
fclose($fp);

// Searching within the string, extracting only what we need.
$start = strpos($content, $template_begin_comment);
$end = strpos($content, $template_end_comment);

// We can't do this unless both are found.
if ($start !== false && $end !== false)
{
    $begin = substr($content, 0, $start);
    $finish = substr($content, $end + strlen($template_end_comment));
    $new_content = $begin . $finish;

    // Write it into the file.
    $fo = fopen($lang_file, 'wb');
    @fwrite($fo, $new_content);
    fclose($fo);
}
Thanks for your help concerning UTF-8 encoding and decoding of strings, even if they are commented strings.
When I write the PHP comments into the UTF-8 file I am not using any conversion. Should I be? The string definitions between the PHP comments are already encoded in UTF-8, however, and seem to work fine within the file. Any help is appreciated.

No, you don't need to do any conversions.
Also, your extraction code is reliable in the sense that it won't mangle multibyte characters, although you might want to make sure the end position occurs after the start position.
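A minimal sketch of that extra guard, reusing the question's own variables ($content, $start, $end and $template_end_comment as defined above):
// Splice out everything from the begin comment through the end comment,
// but only when the end comment actually follows the begin comment;
// strpos()/substr() operate on bytes, so UTF-8 text outside the match
// passes through untouched.
if ($start !== false && $end !== false && $end > $start) {
    $new_content = substr($content, 0, $start)
                 . substr($content, $end + strlen($template_end_comment));
}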

To do this I would use preg_replace instead:
$content = file_get_contents($lang_file);

$template_begin_comment = '// ' . ' Template - ' . $lang_file . ' BEGIN...';
$template_end_comment = '// ' . ' Template - ' . $lang_file . ' END!';

// find from begin comment to end comment
// replace with emptiness
// keep track of how many replacements have been made
$new_content = preg_replace(
    '/' . preg_quote($template_begin_comment, '/') .
    '.*?' .
    preg_quote($template_end_comment, '/') . '/s',
    '',
    $content,
    -1,
    $replace_count
);

if ($replace_count) {
    // if replacements have been made, write the file back again
    file_put_contents($lang_file, $new_content);
}
Because the match pattern contains only ASCII, this approach is safe: everything else is copied verbatim, so multibyte sequences pass through untouched.
Disclaimer
The above code is not tested; if there's anything wrong, just let me know.
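To illustrate why this is safe for UTF-8 content (a hypothetical example, not part of the original answer): a UTF-8 payload between two ASCII markers is either removed whole or copied byte-for-byte, never mangled.
// The pattern itself is pure ASCII, so the Cyrillic text is treated as
// opaque bytes: the marked span is deleted cleanly, nothing else changes.
$content = "before\n// X BEGIN... Привет, мир! // X END!\nafter";
echo preg_replace(
    '/' . preg_quote('// X BEGIN...', '/') . '.*?' . preg_quote('// X END!', '/') . '/s',
    '',
    $content
); // prints "before\n\nafter"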

Related

preg_replace_callback(): Unknown modifier '/'

I need to search for and highlight a word.
My sentence is
Please see our Author Guide for more information: http://digital-library.theiet.org/journals/author-guide.
you will be contacted shortly asking you to take a decision and sign either a copyright or Open Access licence form.
My code
function find_highlight_word($word) {
    $text = preg_replace_callback($word, function($matches) use (&$counter) {
        $counter++;
        return '<b class="search_mark highlighted" id="matched_' . $counter . '">'
            . substr($matches[0], 0, strlen($matches[0]))
            . '</b>';
    }, $text);
    return $text;
}
$word = '//';
$word = '/' . preg_quote($word) . '/i';
$this->find_highlight_word($word);
When I search with '//', PHP throws an error.
You're correctly attempting to preg-quote your string, but you're not telling it what your delimiter is, so the // inside the string causes issues. Pass the delimiter as the second argument so it can be escaped as well:
$word = '/' . preg_quote($word, '/') . '/i';
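For illustration (a small hypothetical snippet; per the PHP manual, / is not a special regular expression character, so preg_quote() only escapes it when it is passed as the delimiter):
$word = '//';
echo preg_quote($word);       // "//"   -> '/' . '//' . '/i' = "////i",
                              //          which fails: Unknown modifier '/'
echo preg_quote($word, '/');  // "\/\/" -> "/\/\//i", a valid pattern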

file_get_contents false when URL has spaces (encoding everything not working)

So, the problem is in this line:
$imageString = file_get_contents($image_url);
With URLs that have a space character it doesn't work. But if I do
$imageString = file_get_contents(urlencode($image_url));
nothing works either; I keep receiving false in the variable.
The URL is of this kind:
https://s3-eu-central-1.amazonaws.com/images/12/Screenshot from 2016-04-28 18 15:54:20.png
Use this function:
function escapefile_url($url) {
    $parts = parse_url($url);
    $path_parts = array_map('rawurldecode', explode('/', $parts['path']));
    return
        $parts['scheme'] . '://' .
        $parts['host'] .
        implode('/', array_map('rawurlencode', $path_parts))
    ;
}
echo escapefile_url("http://example.com/foo/bar bof/some file.jpg") . "\n";
echo escapefile_url("http://example.com/foo/bar+bof/some+file.jpg") . "\n";
echo escapefile_url("http://example.com/foo/bar%20bof/some%20file.jpg") . "\n";
I've faced the same problem, and if you search about it you will see everyone telling you to use urlencode(), but no, urlencode() won't work in this situation.
I used @Akram Wahid's answer and it worked perfectly, so I recommend it for use with file_get_contents().
If you wonder what escapefile_url() in @Akram Wahid's answer does, here is a little explanation:
It simply takes the URL apart as an array and then uses rawurlencode() to encode all the parts that contain special characters, leaving the main domain (e.g. http://example.com) alone.
So what's the difference? Here is an example using both urlencode() and escapefile_url() to clarify:
echo escapefile_url("http://example.com/foo/bar bof/some file.jpg") . "<br>";
// http://example.com/foo/bar%20bof/some%20file.jpg
echo urlencode("http://example.com/foo/bar bof/some file.jpg") . "<br>";
// http%3A%2F%2Fexample.com%2Ffoo%2Fbar+bof%2Fsome+file.jpg
If you want to apply @Akram Wahid's solution to URLs that may also contain GET arguments, then an updated version would be this:
function escapefile_url($url) {
    $parts = parse_url($url);
    $path_parts = array_map('rawurldecode', explode('/', $parts['path']));
    return
        $parts['scheme'] . '://' .
        $parts['host'] .
        implode('/', array_map('rawurlencode', $path_parts)) .
        (isset($parts['query']) ? '?' . rawurldecode($parts['query']) : '')
    ;
}
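Applied to the question's scenario it would then be used like this (a sketch; $image_url is the variable from the question and escapefile_url() is defined above):
// Encode only the path segments, then fetch the image.
$imageString = file_get_contents(escapefile_url($image_url));
if ($imageString === false) {
    // the fetch still failed: check the URL, DNS, or the HTTP response
}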

MediaWiki + Graphviz + Image maps + Pagelinks

Background: Working with MediaWiki 1.19.1, Graphviz 2.28.0, Extension:GraphViz 0.9 on WAMP stack (Server 2008, Apache 2.4.2, MySQL 5.5.27, PHP 5.4.5). Everything is working great and as expected for the basic functionality of rendering a clickable image from a Graphviz diagram using the GraphViz extension in MediaWiki.
Problem: The links in the image map are not added to the MediaWiki pagelinks table. I get why they aren't added but it becomes an issue if there is no way to follow the links back with the 'What links here' functionality.
Desired solution: During the processing of the diagram in the GraphViz extension, I would like to use the generated .map file to then create a list of wikilinks to add on the page to get picked up by MediaWiki and added to the pagelinks table.
Details:
This GraphViz extension code:
<graphviz border='frame' format='png'>
digraph example1 {
    // define nodes
    nodeHello [
        label="I say Hello",
        URL="Hello"
    ]
    nodeWorld [
        label="You say World!",
        URL="World"
    ]
    // link nodes
    nodeHello -> nodeWorld
}
</graphviz>
Generates this image:
And this image map code in a corresponding .map file on the server:
<map id="example1" name="example1">
<area shape="poly" id="node1" href="Hello" title="I say Hello" alt="" coords="164,29,161,22,151,15,137,10,118,7,97,5,77,7,58,10,43,15,34,22,31,29,34,37,43,43,58,49,77,52,97,53,118,52,137,49,151,43,161,37"/>
<area shape="poly" id="node2" href="World" title="You say World!" alt="" coords="190,125,186,118,172,111,152,106,126,103,97,101,69,103,43,106,22,111,9,118,5,125,9,133,22,139,43,145,69,148,97,149,126,148,152,145,172,139,186,133"/>
</map>
From that image map file, I would like to be able to extract the href and title to build wikilinks like so:
[[Hello|I say Hello]]
[[World|You say World!]]
I'm guessing that since the .map file is essentially XML, I could just use XPath to query the file, but that is just a guess. PHP is not my strongest area and I don't know the best approach to the XML/XPath option, or whether that is even the best way to pull that info from the file.
Once I got that collection/array of wikilinks from the .map file, I'm sure I can hack up the GraphViz.php extension file to add it to the contents of the page to get it added to the pagelinks table.
Progress: I had a bit of a Rubber Duck Problem Solving moment right as I submitted the question. I realized that since I had well-formed data in the image map, XPath was probably the way to go. It was fairly trivial to pull the data I needed, especially since I found that the map file contents were still stored in a local string variable.
$links = '';
$xml = new SimpleXMLElement( $map );
foreach ($xml->area as $item) {
    $links .= "[[" . $item->attributes()->href . "|" . $item->attributes()->title . "]]";
}
Final Solution: See my accepted answer below.
Thanks for taking a look. I appreciate any assistance or direction you can offer.
I finally worked through all of the issues and now have a fairly decent solution to render the graph nicely, provide a list of links, and register the links with wiki. My solution doesn't fully support all of the capabilities of the current GraphViz extension as it is written as there is functionality we do not need and I do not want to support. Here are the assumptions / limitations of this solution:
Does not support MscGen: We only have a need for Graphviz.
Does not support imageAtrributes: We wanted to control the format and presentation and it seemed like there were inconsistencies in the imageAttributes implementation that would then cause further support issues.
Does not support wikilinks: While it would be nice to provide consistent link usage across the wiki and the Graphviz extension, the reality is that Graphviz is a completely different markup environment. While the current extension 'supports' wikilinks, the implementation is a little weak and leaves room for confusion. Example: wikilinks support an optional link description, but Graphviz already uses the node label for the description. So you end up ignoring the wikilink description and telling users, 'Yes, we support wikilinks, but don't use the description part.' Since we aren't really using wikilinks correctly anyway, we just implement regular links and avoid the confusion entirely.
Here is what the output looks like:
Here are the changes that were made
Comment out this line:
// We don't want to support wikilinks so don't replace them
//$timelinesrc = rewriteWikiUrls( $timelinesrc ); // if we use wiki-links we transform them to real urls
Replace this block of code:
// clean up map-name
$map = preg_replace( '#<ma(.*)>#', ' ', $map );
$map = str_replace( '</map>', '', $map );
if ( $renderer == 'mscgen' ) {
    $mapbefore = $map;
    $map = preg_replace( '/(\w+)\s([_:%#/\w]+)\s(\d+,\d+)\s(\d+,\d+)/',
        '<area shape="$1" href="$2" title="$2" alt="$2" coords="$3,$4" />',
        $map );
}

/* Produce html
 */
if ( $wgGraphVizSettings->imageFormatting )
{
    $txt = imageAtrributes( $args, $storagename, $map, $outputType, $wgUploadPath ); // if we want borders/position/...
} else {
    $txt = '<map name="' . $storagename . '">' . $map . '</map>' .
        '<img src="' . $wgUploadPath . '/graphviz/' . $storagename . '.' . $outputType . '"' .
        ' usemap="#' . $storagename . '" />';
}
With this code:
$intHtml = '';
$extHtml = '';
$badHtml = '';

// Wrap the map/area info with top level nodes and load into xml object
$xmlObj = new SimpleXMLElement( $map );

// What does map look like before we start working with it?
wfDebugLog( 'graphviz', 'map before: ' . $map . "\n" );

// loop through each of the <area> nodes
foreach ($xmlObj->area as $areaNode) {
    wfDebugLog( 'graphviz', "areaNode: " . $areaNode->asXML() . "\n" );

    // Get the data from the XML attributes
    $hrefValue = (string)$areaNode->attributes()->href;
    $textValue = (string)$areaNode->attributes()->title;
    wfDebugLog( 'graphviz', '$hrefValue before: ' . $hrefValue . "\n" );
    wfDebugLog( 'graphviz', '$textValue before: ' . $textValue . "\n" );

    // For the text fields, multiple spaces (" ") in the Graphviz source (label)
    // turns into a regular space followed by encoded representations of
    // non-breaking spaces ("   ") in the .map file which then turns
    // into the following in the local variables: ("   ").
    // The following two options appear to convert/decode the characters
    // appropriately. Leaving the lines commented out for now, as we have
    // not seen a graph in the wild with multiple spaces in the label -
    // just happened to stumble on the scenario.
    // See http://www.php.net/manual/en/simplexmlelement.asxml.php
    // and http://stackoverflow.com/questions/2050723/how-can-i-preg-replace-special-character-like-pret-a-porter
    //$textValue = iconv("UTF-8", "ASCII//TRANSLIT", $textValue);
    //$textValue = html_entity_decode($textValue, ENT_NOQUOTES, 'UTF-8');

    // Now we need to deal with the whitespace characters like tabs and newlines
    // and also deal with them correctly to replace multiple occurrences.
    // Unfortunately, the \n and \t values in the variable aren't actually
    // tab or newline characters but literal characters '\' + 't' or '\' + 'n'.
    // So the normally recommended regex '/\s+/u' to replace the whitespace
    // characters does not work.
    // See http://stackoverflow.com/questions/6579636/preg-replace-n-in-string
    $hrefValue = preg_replace("/( |\\\\n|\\\\t)+/", ' ', $hrefValue);
    $textValue = preg_replace("/( |\\\\n|\\\\t)+/", ' ', $textValue);

    // check to see if the url matches any of the
    // allowed protocols for external links
    if ( preg_match( '/^(?:' . wfUrlProtocols() . ')/', $hrefValue ) ) {
        // external link
        $parser->mOutput->addExternalLink( $hrefValue );
        $extHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
    }
    else {
        $first = substr( $hrefValue, 0, 1 );
        if ( $first == '\\' || $first == '[' || $first == '/' ) {
            // potential UNC path, wikilink, absolute or relative path
            $hrefValue = '#InvalidLink';
            $badHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
            $textValue = 'Invalid link. Check Graphviz source.';
        }
        else {
            $title = Title::newFromText( $hrefValue );
            if ( is_null( $title ) ) {
                // invalid link
                $hrefValue = '#InvalidLink';
                $badHtml .= Linker::makeExternalLink( $hrefValue, $textValue ) . ', ';
                $textValue = 'Invalid link. Check Graphviz source.';
            }
            else {
                // internal link
                $parser->mOutput->addLink( $title );
                $intHtml .= Linker::link( $title, $textValue ) . ', ';
                $hrefValue = $title->getFullURL();
            }
        }
    }

    $areaNode->attributes()->href = $hrefValue;
    $areaNode->attributes()->title = $textValue;
}
$map = $xmlObj->asXML();

// The contents of $map, which is now XML, gets embedded
// in the HTML sent to the browser so we need to strip
// the XML version tag and we also strip the <map> because
// it will get replaced with a new one with the correct name.
$map = str_replace( '<?xml version="1.0"?>', '', $map );
$map = preg_replace( '#<ma(.*)>#', ' ', $map );
$map = str_replace( '</map>', '', $map );

// Let's see what it looks like now that we are done with it.
wfDebugLog( 'graphviz', 'map after: ' . $map . "\n" );

$txt = '' .
    '<table style="background-color:#f9f9f9;border:1px solid #ddd;">' .
    '<tr>' .
    '<td style="border:1px solid #ddd;text-align:center;">' .
    '<map name="' . $storagename . '">' . $map . '</map>' .
    '<img src="' . $wgUploadPath . '/graphviz/' . $storagename . '.' . $outputType . '"' . ' usemap="#' . $storagename . '" />' .
    '</td>' .
    '</tr>' .
    '<tr>' .
    '<td style="font:10px verdana;">' .
    'This Graphviz diagram links to the following pages:' .
    '<br /><strong>Internal</strong>: ' . ( $intHtml != '' ? rtrim( $intHtml, ' ,' ) : '<em>none</em>' ) .
    '<br /><strong>External</strong>: ' . ( $extHtml != '' ? rtrim( $extHtml, ' ,' ) : '<em>none</em>' ) .
    ( $badHtml != '' ? '<br /><strong>Invalid</strong>: ' . rtrim( $badHtml, ' ,' ) .
    '<br /><em>Tip: Do not use wikilinks ([]), UNC paths (\\) or relative links (/) when creating links in Graphviz diagrams.</em>' : '' ) .
    '</td>' .
    '</tr>' .
    '</table>';
Possible enhancements:
It would be nice if the list of links below the graph were sorted and de-duped.
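A minimal sketch of that enhancement (my assumption, not tested against the extension: collect the links in arrays instead of comma-joined strings, then de-dupe and sort before rendering):
// Hypothetical variant: inside the loop, gather internal links into an
// array instead of appending to $intHtml...
$intLinks[] = Linker::link( $title, $textValue );
// ...then, after the loop, de-dupe, sort, and join once:
$intLinks = array_unique( $intLinks );
sort( $intLinks );
$intHtml = implode( ', ', $intLinks );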

PHP file_put_contents reverse

Ok, sorry if it's a stupid question, I'm a beginner.
I'm making a small shoutbox, just for practice.
It inserts the shout info into a txt file.
My problem is that it lists the text from top to bottom, and I would like to do this reversed.
if (isset($_POST['submit'])) {
    $text = $_POST['text'];
    if (!empty($text)) {
        $text = $_POST['text'];
        $name = $_POST['name'];
        $time = date("H:i");
        $content = "<div class='text'><em>" . $time . "</em>
<span class='c11'><b><a href='userinfo_php_willbe_here.php' target='_blank'>" . htmlspecialchars($name) . "</a>:</b></span>
" . htmlspecialchars($text) . "
</div>\n";
        file_put_contents($file, $content, FILE_APPEND | LOCK_EX);
    }
}
Here is my code.
I was googling around without much luck; maybe I wasn't looking hard enough.
Could someone please give me a hint?
Thank you.
There is no way to do this with one function call. You need to read the content from your target file, prepend the new data in PHP, and rewrite the whole file (see file_get_contents).
$fileContent = file_get_contents($file);
$fileContent = $content . $fileContent;
file_put_contents($file, $fileContent, LOCK_EX);
You can also use array_reverse like so:
// Data in file separated by new-line
$data = explode("\n", file_get_contents("filename.txt"));
foreach (array_reverse($data) as $value) {
    echo $value . "\n";
}
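Note that each shout in the question's code spans several lines, so splitting on a bare newline would break entries apart mid-div. A sketch that splits on the closing tag instead (assuming the "</div>\n" terminator from the question's $content):
// Each appended shout ends with "</div>\n", so use that as the entry
// separator, then re-attach it after reversing the order.
$entries = array_filter(explode("</div>\n", file_get_contents($file)));
foreach (array_reverse($entries) as $entry) {
    echo $entry . "</div>\n";
}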
You can only prepend to a file by means of reading it and writing afterwards.
file_put_contents($file, $content . file_get_contents($file), LOCK_EX);

PHP Editing of Files Via FTP

Below is a script I am using to modify some files with placeholder strings. The .htaccess file sometimes gets truncated. It's about 2,712 bytes in size before editing and will vary in size after editing depending on the length of the domain name. When it gets truncated, it ends up around 1,400 bytes in size.
$d_parts = explode('.', $vals['domain']);
$ftpstring = 'ftp://' . $vals['username']
    . ':' . $vals['password']
    . '@' . $vals['ftp_server']
    . '/' . $vals['web_path']
;

$stream_context = stream_context_create(array('ftp' => array('overwrite' => true)));

$htaccess = file_get_contents($ftpstring . '.htaccess');
$htaccess = str_replace(array('{SUB}', '{DOMAIN}', '{TLD}'), $d_parts, $htaccess);
file_put_contents($ftpstring . '.htaccess', $htaccess, 0, $stream_context);

$constants = file_get_contents($ftpstring . 'constants.php');
$constants = str_replace('{CUST_ID}', $vals['cust_id'], $constants);
file_put_contents($ftpstring . 'constants.php', $constants, 0, $stream_context);
Is there a bug in file_get_contents(), str_replace(), or file_put_contents()? I have done quite a bit of searching and haven't found any reports of this happening for others.
Is there a better method of accomplishing this?
SOLUTION
Based on Wrikken's response, I started using file pointers with ftp_fget/ftp_fput, but ended up with zero-length files being written back. I stopped using file pointers and switched to ftp_get/ftp_put, and now everything seems to be working:
$search = array('{SUB}', '{DOMAIN}', '{TLD}', '{CUST_ID}');
$replace = explode('.', $vals['site_domain']);
$replace[] = $vals['cust_id'];

$tmpfname = tempnam(sys_get_temp_dir(), 'config');

foreach (array('.htaccess', 'constants.php') as $file_name) {
    $remote_file = $dest_path . $file_name;
    if (!@ftp_get($conn_id, $tmpfname, $remote_file, FTP_ASCII, 0)) {
        echo $php_errormsg;
    } else {
        $contents = file_get_contents($tmpfname);
        $contents = str_replace($search, $replace, $contents);
        file_put_contents($tmpfname, $contents);
        if (!@ftp_put($conn_id, $remote_file, $tmpfname, FTP_ASCII, 0)) {
            echo $php_errormsg;
        }
    }
}

unlink($tmpfname);
With either passive or active FTP, I've never had much luck using the file-family of functions with the ftp:// wrappers, usually hitting exactly that kind of truncation problem. I usually just revert to the ftp_* functions with passive transfers, which do make the code harder to switch over, but they work flawlessly for me.
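For completeness, a minimal sketch of that plain-FTP setup (hypothetical: it reuses the $vals array from the question for the credentials):
// Connect, log in, and switch to passive mode before any transfer;
// $conn_id is then usable with the ftp_get()/ftp_put() calls above.
$conn_id = ftp_connect($vals['ftp_server']);
ftp_login($conn_id, $vals['username'], $vals['password']);
ftp_pasv($conn_id, true);
// ... transfers ...
ftp_close($conn_id);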
