PHP prints out HTML instead of processing the HTML - php

I hope someone can help me with this. The code below is from a WordPress Plugin. From the plugin option page I am entering some simple html -
<h3>Showname</h3><p>with DJ Top</p>
The code below (a WordPress shortcode handler) is used to display what I've entered, but it prints the HTML source rather than rendering it, i.e. the tags appear on the page instead of a heading and a paragraph.
Can anyone point me in the right direction so that $showName renders the HTML rather than printing it out verbatim, tags and all?
Many thanks
Rob
function showtime_schedule_handler($atts, $content=null, $code=""){
    global $wpdb;
    global $showtimeTable;

    // Get the current schedule, divided into days
    $daysOfTheWeek = array("Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday");
    $schedule = array();
    $output = '';

    foreach ($daysOfTheWeek as $day) {
        // Add this day's shows HTML to the $output string
        $showsForThisDay = $wpdb->get_results( $wpdb->prepare( "SELECT * FROM $showtimeTable WHERE dayOfTheWeek = %s ORDER BY startTime", $day ) );

        // Check to make sure this day has shows before saving the header
        if ($showsForThisDay){
            $output .= '<h2>'.$day.'</h2>';
            $output .= '<ul class="showtime-schedule">';

            foreach ($showsForThisDay as $show){
                $showName   = $show->showName;
                $startClock = $show->startClock;
                $endClock   = $show->endClock;
                $linkURL    = $show->linkURL;

                // Wrap the show name in a link when a URL is set
                if ($linkURL){
                    $showName = '<a href="'.$linkURL.'">'.$showName.'</a>';
                }

                $output .= '<li><strong>'.$startClock.'</strong> - <strong>'.$endClock.'</strong>: '.$showName.'</li>';
            }

            $output .= '</ul>';
        }
    }

    return $output;
}

You're outputting the page with a variant of the text/plain MIME type. You'll need to send a text/html Content-Type header so the browser parses the output as HTML.
For instance:
<?php
header('Content-Type: text/html', true);
echo showtime_schedule_handler($atts, $content);
?>

Instead of
return $output;
try
return html_entity_decode($output);
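If the markup was escaped when the plugin option was saved, the stored string contains HTML entities rather than tags, which is why they show up verbatim. A minimal illustration of what html_entity_decode() does (the sample string is just an assumption about what ends up stored):
$stored = '&lt;h3&gt;Showname&lt;/h3&gt;&lt;p&gt;with DJ Top&lt;/p&gt;';
echo html_entity_decode($stored); // prints <h3>Showname</h3><p>with DJ Top</p>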

Related

Grab data from more than 100 reddit posts - php

I have been trying to get data out of new reddit posts, but there's a limitation where you can't get data from more than 100 posts. Can anybody help me get around this? Below is my code:
$output = "";
for($digit=0; $digit<1000; $digit+=25){
$jsondata = trim(file_get_contents("http://www.reddit.com/new/.json?count=$digit"));
$json = json_decode($jsondata, true);
$moviesChildren = $json['data']['children'];
foreach($moviesChildren as $movie){
$output .= '"'.$movie["data"]["title"].'", ';
$output .= $movie["data"]["ups"].", ";
$output .= $movie["data"]["num_comments"].", ";
$output .= $movie["data"]["domain"]."\n\r";
$output .= "<br />";
}
}
echo $output;
What is the output you get, and what are you expecting instead?
First off, you will want to follow the API rules about authentication or else you'll be quickly limited, and possibly banned.
Listings have before and after attributes to help with pagination. You will need to pass those into your subsequent GETs in order to fetch the next page.
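A rough sketch of what paging with the after token could look like using plain file_get_contents (unauthenticated, so treat it as illustration only; the field names mirror the code in the question, and the 10-page cap is an arbitrary choice):
<?php
$after = null;
$output = '';
for ($page = 0; $page < 10; $page++) {
    // limit=100 is the maximum page size; "after" points at the last post of the previous page
    $url = "http://www.reddit.com/new/.json?limit=100";
    if ($after) {
        $url .= "&after=" . urlencode($after);
    }
    $json = json_decode(trim(file_get_contents($url)), true);
    foreach ($json['data']['children'] as $post) {
        $output .= '"' . $post['data']['title'] . '", '
                 . $post['data']['ups'] . ", "
                 . $post['data']['num_comments'] . ", "
                 . $post['data']['domain'] . "<br />\n";
    }
    $after = $json['data']['after']; // empty once there are no more pages
    if (!$after) {
        break;
    }
}
echo $output;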

What is causing my PHP page to render html tags as text (and what can I do to fix it)?

I have the following PHP code. When the block from just after the comment // ... more stuff in here to the end of that comment block is commented out (as it is now), my page renders the HTML from the second PHP block as expected, as in my first screenshot.
If I uncomment that block, the HTML tags are rendered as plain text (the second screenshot), which is not what I want; I want the rendered HTML as in the first screenshot. The only thing of note in the commented block is a function call that curls a web page, parses the HTML with DOMXPath-type calls, and returns an array with 3 elements. How can I get the original rendering of the HTML back, and what am I possibly doing that is ruining it? I tried echo instead of print and that makes no difference.
I honestly did search for the answer on here and found lots of pages describing how to do just the opposite of what I want, so please be gentle with me. I was surprised that I couldn't find a similar question, and I know there has to be an easy answer here. Thanks!
<?php
// ... more stuff in here
/*
include("../../includes/curl_fx.php");
if ($doAppend === "parcel") {
    $lines = explode(PHP_EOL, $Data);
    foreach ($lines as $line) {
        if (strpos($line, "http") > 0) {
            $start = stripos(strval($line), "http");
            $fullLength = strlen($line);
            $urlLength = ($fullLength - $start);
            $fullUrl = substr($line, $start, $urlLength);
            $arraySDAT = getSDAT($fullUrl);
            $line .= ", " . $arraySDAT[0] . ", " . $arraySDAT[1] . ", " . $arraySDAT[2] . "\n";
            fwrite($Handle, $line);
        }
    }
}
*/
?>
<?php
if ($DataAdded === true) {
    print "<h2>YourFile.txt</h2>Data has been added.<br />Close this window or tab to return to the web map.<br />";
} else {
    print "Data may not have been added. Check the file.<br />";
}
fclose($Handle);
print $doAppendAnswer;
print "<br />";
?>
EDIT: Here is the function.
<?php
function getSDAT($fullUrl = "") {
    $ch = curl_init($fullUrl);
    if (! $ch) {
        die( "Cannot allocate a new PHP-CURL handle" );
    }
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    header("Content-type: text");
    curl_close($ch);
    libxml_use_internal_errors(true);
    libxml_clear_errors();
    $doc = DOMDocument::loadHTML($data);
    $xpath = new DOMXPath($doc);
    $ownName1query = '//table/tr/td/span[@id="MainContent_MainContent_cphMainContentArea_ucSearchType_wzrdRealPropertySearch_ucDetailsSearch_dlstDetaisSearch_lblOwnerName_0"][@class="text"]';
    $ownName2query = '//table/tr/td/span[@id="MainContent_MainContent_cphMainContentArea_ucSearchType_wzrdRealPropertySearch_ucDetailsSearch_dlstDetaisSearch_lblOwnerName2_0"][@class="text"]';
    $ownAddrquery  = '//table/tr/td/span[@id="MainContent_MainContent_cphMainContentArea_ucSearchType_wzrdRealPropertySearch_ucDetailsSearch_dlstDetaisSearch_lblMailingAddress_0"][@class="text"]';
    $entries = $xpath->query($ownName1query);
    foreach ($entries as $entry) {
        $ownname1 = $entry->nodeValue;
    }
    $entries = $xpath->query($ownName2query);
    foreach ($entries as $entry) {
        $ownname2 = $entry->nodeValue;
    }
    $entries = $xpath->query($ownAddrquery);
    $pattern = '#<br\s*/?>#i';
    $replacement = ", ";
    $i = 0;
    foreach ($entries as $entry) {
        $ownAddr = $entry->nodeValue;
        if (!$entry->childNodes == 0) {
            $ownAddr = $doc->saveHTML($entry);
        }
        $ownAddr2 = preg_replace($pattern, $replacement, $ownAddr, 15, $count); // replace <br/> with a comma
        $ownAddr3 = strip_tags($ownAddr2);
    }
    return array($ownname1, $ownname2, $ownAddr3);
}
Your problem is:
header("Content-type: text");
Just remove that. Why is it there?
As mentioned, it's the header that is causing you the problem. The header decides what type of content the current document claims to have and how the browser should treat it - the HTTP-level counterpart of information you might otherwise put in the <head>...</head> part of an HTML page. You can use it for declaring the content type, controlling the cache, redirecting, and so on.
When you use header("Content-type: text"), you are declaring that the content of the current document, "yourdocument.php", is plain text instead of the default, which is HTML.
header("Content-type: text/html");
echo "<html>This would make mypage.php behave as an HTML</html>";
// This is usually unnecessary since text/html is already the default header
header("Content-type: text/javascript");
echo "this would make mypage.php behave as a javascript";
header("Content-type: text/css");
echo "this would make mypage.php behave as a CSS";
header('Content-type: image/jpeg');
readfile("source/to/my/file.jpg");
// this would make mypage.php display file.jpg and act as a jpg
header('Content-type: image/png');
readfile("source/to/my/file.png");
// this would make mypage.php display file.png and act as a png
header('Content-type: image/gif');
readfile("source/to/my/file.gif");
// this would make mypage.php display file.gif and act as a gif
header('Content-type: image/x-icon');
readfile("source/to/my/file.ico");
// this would make mypage.php display file.ico and act as an icon
header('Content-type: image/x-win-bitmap');
readfile("source/to/my/file.cur");
// this would make mypage.php display file.cur and act as a cursor

PHP Simple HTML DOM Scrape External URL

I'm trying to build a personal project of mine, however I'm a bit stuck when using the Simple HTML DOM class.
What I'd like to do is scrape a website and retrieve all the content, and its inner HTML, that matches a certain class.
My code so far is:
<?php
error_reporting(E_ALL);
include_once("simple_html_dom.php");

// use curl to get html content
$url = 'http://www.peopleperhour.com/freelance-seo-jobs';
$html = file_get_html($url);

// Get all data inside the <div class="item-list">
foreach ($html->find('div[class=item-list]') as $div) {
    // get all div's inside "item-list"
    foreach ($div->find('div') as $d) {
        // get the inner HTML
        $data = $d->outertext;
    }
}
print_r($data);
echo "END";
?>
All I get with this is a blank page with "END", nothing else outputted at all.
It seems your $data variable is being overwritten on each iteration, so only the last div's HTML survives. Try concatenating instead:
$data = "";
foreach ($html->find('div[class=item-list]') as $div) {
    // get all divs inside "item-list"
    foreach ($div->find('div') as $d) {
        // get the inner HTML
        $data .= $d->outertext;
    }
}
print_r($data);
I hope that helps.
I think you may want something like this:
$url = 'http://www.peopleperhour.com/freelance-seo-jobs';
$html = file_get_html($url);
foreach ($html->find('div.item-list div.item') as $div) {
    echo $div . '<br />';
}
This will give you something like this (if you add the proper style sheet, it'll be displayed nicely)

str_replace PHP script can't handle foreign characters such as umlauts robustly

The following script does not always correctly catch and convert foreign characters. Could someone show me what I'm missing to get it to be more robust?
<?php
include("../index_head.inc.php");
$content = implode("", (@file("current.txt")));
$url = "http://XXXXXX.html?no_body=1";
$content = file_get_contents($url, 'r');
if (isset($_GET['showcurrent']) && $_GET['showcurrent'] == '')
{
    $content = substr($content, 1, strpos($content, "<hr ") - 1);
}
else
{
    $content = str_replace("<br style=\"clear:both\" />\n</p>", "</p>", $content);
    $content = str_replace("ck1\"><img", "ck1\" target=_blank><img", $content);
};
$content = str_replace("<h3>current</h3>", "", $content);
echo "<div id=\"service\" style=\"width: 660px;padding-left:5px\">", str_replace("current.html", "current.html", $content), "</div>";
include("../index_footer.inc.php");
?>
New information: Pekka, you gave me the idea to check how the page emits without str_replace():
<?php
include("../index_head.inc.php");
$content = implode("", (@file("current.txt")));
$url = "XXXXXX.html?no_body=1";
$content = file_get_contents($url, 'r');
echo "<div id=\"service\" style=\"width: 660px;padding-left:5px\">", $content, "</div>";
It seems the problem lies elsewhere because I get the same mangling even without using str_replace()! If you can help me get this sorted out, I would sure appreciate it. I have seen your wish list. ;)
Did you set the charset in PHP?
Try this:
header('Content-Type: text/html; charset=utf-8');
If that doesn't work, check whether your file is already saved as UTF-8 before the str_replace; if it isn't, convert it first:
utf8_encode ( string $data );
In the opposite case use:
utf8_decode( string $data );
Hope it helps!
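For a slightly more robust version of the same idea, you could convert only when the fetched content is not already valid UTF-8. A minimal sketch, assuming the mbstring extension is available:
$content = file_get_contents($url);
if (!mb_check_encoding($content, 'UTF-8')) {
    // utf8_encode() treats its input as ISO-8859-1 and converts it to UTF-8
    $content = utf8_encode($content);
}
header('Content-Type: text/html; charset=utf-8');
echo $content;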
Thank you SBO - It sure did help! I simply changed the code to:
<?php
include("../index_head.inc.php");
$content = implode("", (@file("current.txt")));
$url = "http://XXXXXX.html?no_body=1";
$content = file_get_contents(utf8_encode($url), 'r');
if (isset($_GET['showcurrent']) && $_GET['showcurrent'] == '')
{
    $content = substr($content, 1, strpos($content, "<hr ") - 1);
}
else
{
    $content = str_replace("<br style=\"clear:both\" />\n</p>", "</p>", $content);
    $content = str_replace("ck1\"><img", "ck1\" target=_blank><img", $content);
};
$content = str_replace("<h3>current</h3>", "", $content);
echo "<div id=\"service\" style=\"width: 660px;padding-left:5px\">", str_replace("current.html", "current.html", utf8_decode($content)), "</div>";
include("../index_footer.inc.php");
?>
and everything is working fine.
Thank you very much for your help.

An Ajax request is taking 6 seconds to complete, not sure why

I am working on a user interface, "dashboard" of sorts which has some div boxes on it, which contain information relevant to the current logged in user. Their calendar, a todo list, and some statistics dynamically pulled from a google spreadsheet.
I found here:
http://code.google.com/apis/spreadsheets/data/3.0/reference.html#CellFeed
that specific cells can be requested from the sheet with a url like this:
spreadsheets.google.com/feeds/cells/0AnhvV5acDaAvdDRvVmk1bi02WmJBeUtBak5xMmFTNEE/1/public/basic/R3C2
I briefly looked into Zend GData, but it seemed way more complex than what I was trying to do.
So instead I wrote two php functions: (in hours.php)
1.) does a file_get_contents() of the generated url, based on the parameters row, column, and sheet
2.) uses the first in a loop to find which column number is associated with the given name.
So basically I do an ajax request using jQuery that looks like this:
// begin js function
function ajaxStats(fullname)
{
$.ajax({
url: "lib/dashboard.stats.php?name="+fullname,
cache: false,
success: function(html){
document.getElementById("stats").innerHTML = html;
}
});
}
// end js function
// begin file hours.php
<?php
function getCol($name)
{
    $r = 1;
    $c = 2;
    while (getCell($r, $c, 1) != $name)
    {
        $c++;
    }
    return $c;
}

function getCell($r, $c, $sheet)
{
    $baseurl = "http://spreadsheets.google.com/feeds/cells/";
    $spreadsheet = "0AnhvV5acDaAvdDRvVmk1bi02WmJBeUtBak5xMmFTNEE/";
    $sheetID = $sheet . "/";
    $vis = "public/";
    $proj = "basic/";
    $cell = "R" . $r . "C" . $c;
    $url = $baseurl . $spreadsheet . $sheetID . $vis . $proj . $cell . "";

    $xml = file_get_contents($url);

    // Sometimes the data is not xml formatted,
    // so let's try to remove the url
    $urlLen = strlen($url);
    $xmlWOurl = substr($xml, $urlLen);
    // then find the Z (in the datestamp, assuming it's always there)
    $posZ = strrpos($xmlWOurl, "Z");
    // then substr from z2end
    $data = substr($xmlWOurl, $posZ + 1);
    // if the result has more than ten characters then something went wrong
    // and most likely it is xml formatted
    if (strlen($data) > 10)
    {
        // Assuming we have xml
        $datapos = strrpos($xml, "<content type='text'>");
        $datapos += 21;
        $datawj = substr($xml, $datapos);
        $endcont = strpos($datawj, "</content>");
        return substr($datawj, 0, $endcont);
    }
    else
        return $data;
}
?>
// End hours.php
// Begin dashboard.stats.php
<?php
session_start();
// This file is requested using ajax from the main dashboard because it takes so long to load,
// so as not to slow down the usage of the rest of the page.
if (!empty($_GET['name']))
{
    include "hours.php";
    // Get the column for which C#R1 = the user's name
    $col = getCol($_GET['name']);
    // then get a cell from each of the sheets for that user,
    // assuming they are in the same column of each sheet
    $s1 = getCell(3, $col, 1);
    $s2 = getCell(3, $col, 2);
    $s3 = getCell(3, $col, 3);
    $s4 = getCell(3, $col, 4);
    // Store the results in session variables,
    // so next time I want this, I don't need to fetch it
    $_SESSION['fhrs'] = $s1;
    $_SESSION['fdol'] = $s2;
    $_SESSION['chrs'] = $s3;
    $_SESSION['bhrs'] = $s4;
}
//print_r($_SESSION);
?>
<!-- and finally output the information formatted for the widget -->
<strong>You have:</strong><br/>
<ul style="padding-left: 10px;">
    <li> <strong><?php echo $_SESSION['fhrs']; ?></strong> fundraising hours<br/></li>
    <li>earned $<strong><?php echo $_SESSION['fdol']; ?></strong> fundraising<br/></li>
    <li> <strong><?php echo $_SESSION['chrs']; ?></strong> community service hours<br/></li>
    <li> <strong><?php echo $_SESSION['bhrs']; ?></strong> build hours <br/></li>
</ul>
// end dashboard.stats.php
I think where I am losing my 4 seconds is the while loop in getCol() [hours.php].
How can I improve this, and reduce my loading time?
Should I just scrap this, and go to Zend GData?
If it is that while loop, should I try to store each user's column number from the spreadsheet in the user database that also authenticates login?
I didn't have a proper break in the while loop; it continued looping even after it found the right person.
Plus each request to the Google spreadsheet takes time, about 0.025 seconds per request.
I also spoke with a user of ZendGdata and they said the requests weren't much faster.
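For reference, one way to avoid repeating the column lookup on every request is to cache the result once it has been found, for example in the session. A rough sketch (the statsCol cache key and the empty-cell stop condition are assumptions, not part of the original code):
function getCol($name)
{
    // Reuse a previously found column number if we already looked this user up
    if (isset($_SESSION['statsCol'][$name])) {
        return $_SESSION['statsCol'][$name];
    }
    $c = 2;
    // Walk the header row until we hit the name or an empty cell (assumed to mark the end)
    while (true) {
        $cell = getCell(1, $c, 1);
        if ($cell == $name || $cell == "") {
            break;
        }
        $c++;
    }
    $_SESSION['statsCol'][$name] = $c;
    return $c;
}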
