file_get_contents not working with external sites - PHP

I've been using this for over 2 months and it worked fine until a few days ago, when an error message appeared.
I use the Steam API to get some info about players.
$url = "http://steamcommunity.com/id/CGaKeepoN/?xml=1";
The page is not blank; it contains an XML document. So my first thought was that my host had turned allow_url_fopen off, but they haven't (I asked them).
I also tried using error_reporting(E_ALL); ini_set('display_errors', 1);
And that's what I get:
Warning: simplexml_load_file() [function.simplexml-load-file]: I/O warning : failed to load external entity "" on line 6
Notice: Trying to get property of non-object on line 7
Now I'm using this: $xml = simplexml_load_file(file_get_contents($url));
And I would love to keep using it, because installing cURL isn't an option right now. Do you know of a better (or a working) way to get this done? Or how to fix this error?
My full code:
error_reporting(E_ALL);
ini_set('display_errors', 1);
//$url = "http://steamcommunity.com/id/CGaKeepoN/?xml=1";
$url = "xml.txt";
// Note: allow_url_fopen cannot be changed with ini_set() at runtime,
// and the option name below contains a stray trailing space.
ini_set('allow_url_fopen ', 'ON');
$xml = file_get_contents($url) or die("file_get_contents failed");
$xml = simplexml_load_string($xml) or die("simplexml_load_string failed");
$profilepic = $xml->avatarIcon;
$pic = $xml->avatarFull;
$steamID = $xml->steamID;
$lastonline = $xml->stateMessage;
echo $xml;
echo $profilepic;
echo $pic;
echo $steamID;
echo $lastonline;
EDIT:
If I use the internal URL it loads the data, but when I try any URL that uses the http protocol it just triggers the "file_get_contents failed" error, even if the URL is my own website's. I'm willing to use cURL if there's no other solution. I also thought about making a PHP script that loads the data and saves it to a file on the server (and then running a cron job every 10 minutes), but it would use file_get_contents anyway...

file_get_contents returns a string, so use simplexml_load_string instead.
This code works for me, tested:
$url = "http://steamcommunity.com/id/CGaKeepoN/?xml=1";
$xml = simplexml_load_string(file_get_contents($url));
$profilepic = $xml->avatarIcon;
$pic = $xml->avatarFull;
$steamID = $xml->steamID;
$lastonline = $xml->stateMessage;
var_dump($url); // -> string(45) "http://steamcommunity.com/id/CGaKeepoN/?xml=1"
var_dump($xml); // -> bool(false)
echo $xml;
echo $profilepic;
echo $pic;
echo $steamID;
echo $lastonline;
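Since the asker mentions cURL as a fallback, here is a minimal sketch of that route (my own, assuming the cURL extension is available; the URL is the one from the question):
// cURL fallback sketch: fetch the profile XML without relying on the
// fopen wrappers that file_get_contents() needs.
$url = "http://steamcommunity.com/id/CGaKeepoN/?xml=1";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow any redirects
curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // fail after 10 seconds
$body = curl_exec($ch);
curl_close($ch);
if ($body === false) {
    die("cURL request failed");
}
$xml = simplexml_load_string($body);
if ($xml === false) {
    die("simplexml_load_string failed");
}
echo $xml->steamID;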

Related

PHP: scraping images from Bing with the Panther library shows Chrome port 9515 already in use

I am trying to scrape images from an RSS feed: I take the first 3 words of each title, search for them on Bing Images, and try to scrape the images. My code is working, but I always get the error that port 9515 is already in use. I have already added code to kill the process on that port, but it's not working; please help me out. I took reference from this URL to build my code: https://www.thoughtfulcode.com/php-web-scraping/
Code
include('vendor/autoload.php');
error_reporting(E_ERROR | E_PARSE);
$url = "https://timesofindia.indiatimes.com/rssfeedstopstories.cms";
$xml = simplexml_load_file($url);
$array = json_decode(json_encode($xml), true);
$description = array();
$i_size = sizeof($array['channel']['item']) - 1;
for ($i = 0; $i < sizeof($array['channel']['item']); $i++) {
    $title = $array['channel']['item'][$i]['title'];
    $keyword_array = explode(" ", $title);
    $keyword = $keyword_array[0] . ' ' . $keyword_array[1] . ' ' . $keyword_array[2];
    download_feed_image($keyword);
    if ($i_size == $i) {
        echo "done";
    }
}
function download_feed_image($keyword) {
    try {
        // A new Chrome client (and a new chromedriver on port 9515) is
        // started on every call, which is what triggers the port conflict.
        $client = \Symfony\Component\Panther\Client::createChromeClient();
        $crawler = $client->request('GET', 'https://www.bing.com/images/search?q=' . $keyword . '&form=HDRSC2&first=1&cw=1349&ch=657');
        $fullPageHtml = $crawler->html();
        $pageH1 = $crawler->filter('.iusc')->attr('href');
        $img_tag = null;
        parse_str($pageH1, $img_tag);
        $file_name = basename($img_tag['mediaurl']);
        file_put_contents($file_name, file_get_contents($img_tag['mediaurl']));
    } catch (Exception $e) {
        echo $e->getMessage();
    } finally {
        $client->quit();
    }
    exec("kill -9 $(lsof -t -i:9515)");
}
Just restarting the computer helps to kill the Chrome browser instance, or, if you use Ubuntu, type pkill chrome in a terminal.
$client->quit();
does not work when the browser is busy and not answering :(
Before you quit the Chrome browser you need to close it so its temporary data is deleted:
$client->close();
$client->quit();
P.S. You can see the Chrome driver status here:
http://127.0.0.1:9515/status
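An alternative (a minimal sketch of my own, not from the answer above): create the Panther client once and reuse it for every keyword, so only one chromedriver instance ever binds port 9515. The changed download_feed_image() signature and the $keywords array are my assumptions.
use Symfony\Component\Panther\Client;
include('vendor/autoload.php');
// Create one client (one chromedriver on port 9515) for the whole run.
$client = Client::createChromeClient();
// $keywords would be built from the RSS titles as in the question.
foreach ($keywords as $keyword) {
    download_feed_image($client, $keyword);
}
// Shut Chrome down once, after all keywords have been processed.
$client->quit();
function download_feed_image(Client $client, $keyword) {
    $crawler = $client->request('GET', 'https://www.bing.com/images/search?q=' . urlencode($keyword));
    $href = $crawler->filter('.iusc')->attr('href');
    parse_str($href, $img_tag);
    if (isset($img_tag['mediaurl'])) {
        file_put_contents(basename($img_tag['mediaurl']), file_get_contents($img_tag['mediaurl']));
    }
}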

XML from AS2 to PHP

I am working on an old Flash AS2 application that worked fine until something happened.
Most likely it's the Flash 13 upgrade, but I can't figure out why.
The PHP version on the server didn't change.
I have the following function in Flash that packs XML with another function and sends it to printcard.php:
var xmlDoc:Object=toXML();
xmlDoc.send(_global.phpPath + "printcard.php","_blank");
printcard.php should take the POSTed XML and do some work with it...
$data = GET_POST_XML();
$xml = new XML($data);
$arrCardPage = $xml->getBranches("card", "CardPage");
$cardPage = $arrCardPage[0];
And the really ancient GET_POST_XML() function that worked fine until recently:
global $HTTP_POST_VARS, $HTTP_RAW_POST_DATA;
if ($HTTP_RAW_POST_DATA == null || !isset($HTTP_RAW_POST_DATA)) {
    $xmldoc = '';
    reset($HTTP_POST_VARS);
    while (list($k, $v) = each($HTTP_POST_VARS)) {
        $xmldoc .= $k . '=' . $v;
    }
    $xmldoc = stripslashes($xmldoc);
    $xmldoc = str_replace('<?xml_version', '<?xml version', $xmldoc);
    return $xmldoc;
} else {
    return $HTTP_RAW_POST_DATA;
}
The problem is that $data is empty - I have no XML.
In phpinfo I have:
_POST["<card_id"]:
\"0\" shared=\"0\" doubleside=\"1\" BgColorPicker=\"0\" bwColors=\"1\" showBg=\"1\" name=\"\"><CardPage h=\"17.99\" w=\"46.99\"><layerFront><CardLayer bg=\"16777215\" bgImageURL=\"\"><elements><OvalElement bgAlpha=\"100\" lineAlpha=\"100\" bgColor=\"16777215\" lineColor=\"0\" lineSize=\"0.35\" useFill=\"true\" useLine=\"true\" rotation=\"0\" h=\"7.76\" w=\"22.93\" y=\"4.58\" x=\"22.57\" /></elements></CardLayer></layerFront><layerBack><CardLayer bg=\"16777215\" bgImageURL=\"\"><elements /></CardLayer></layerBack></CardPage></card>
What did I miss?
Use
$data = file_get_contents('php://input');
instead of
$data = GET_POST_XML();
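As a minimal sketch (my own, assuming the Flash movie POSTs the raw XML document as the request body), the replacement could look like this:
function GET_POST_XML() {
    // php://input exposes the raw request body regardless of the
    // always_populate_raw_post_data ini setting.
    $data = file_get_contents('php://input');
    return $data === false ? '' : $data;
}
$data = GET_POST_XML();
if ($data === '') {
    die('No XML received');
}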
OK, here's an update in case someone is stuck with the same issue:
The send command is deprecated as of Flash version 13.
It doesn't send raw POST data anymore.
However, sendAndLoad still works fine.
I couldn't find anything on Google or in the official Adobe release notes about this.

No JSON data being returned, when the URL contains data

I built a very basic web API that, when called, prints some JSON data on the screen.
I'm calling the API with the following:
function getEnvironmentList() {
    $fullUrl = "localhost/serverList/api/rest.php?action=allenvironments&format=json";
    $jsonDataRaw = file_get_contents($fullUrl);
    return $jsonDataRaw;
}
$jsonData = getEnvironmentList();
echo "<PRE>";
var_dump(json_decode($jsonData, true));
echo "</PRE>";
I get the error Warning: file_get_contents(localhost/serverList/api/rest.php?action=allenvironments&format=json): failed to open stream: No error in C:\path\inc\getJSONdata.php on line 6
Yet when I visit that URL I see this
{"1":{"environmentID":"1","envName":"UAT","envCreatedBy":"mhopkins","envCreatedDtTm":"2013-06-30 00:34:57","envUpdatedBy":"mhopkins","envUpdatedDtTm":"2013-06-30 00:34:57"},"3":{"environmentID":"3","envName":"Platinum","envCreatedBy":"mhopkins","envCreatedDtTm":"2013-06-30 00:37:38","envUpdatedBy":"phpsense","envUpdatedDtTm":"2013-06-30 00:37:38"}}
I'm really confused about why the code can't seem to see the JSON data that is clearly there...
You forgot the http:// scheme.
$fullUrl = "http://localhost/serverList/api/rest.php?action=allenvironments&format=json";

How do I get data from requested server page?

I have two PHP pages:
client.php and server.php
server.php is on my web server; it opens my Amazon product page, gets the price data, serializes it, and returns it to client.php.
Now the problem I have is that server.php is getting the data, but when I return it and echo it after using unserialize(), it shows nothing. But if I echo it in server.php, it shows me all the data.
Why is this happening? Can anyone help me, please?
This is the code I have used:
client.php
$url = "http://www.myurl.com/iec/Server.php?asin=$asin&platform=$platform_variant";
$azn_data = file_get_contents($url);
$azn_data = unserialize($azn_data);
echo "\nReturned Data = $azn_data\n";
server.php
if(isset($_GET["asin"]))
{
$asin = $_GET["asin"];
$platform = $_GET["platform"];
echo "\nASIN = $asin\nPlatform = $platform";
//Below line gets all serialize price data for my product
$serialized_data = amazon_data_chooser($asin, $platform);
return($serialized_data);
}
else
{
echo "Warning: No Data Found!";
}
On server.php, you need to replace the following line:
return($serialized_data);
with this one:
echo $serialized_data;
because client.php reads the output of server.php; return is used to pass information from functions to calling code.
UPDATE:
Apart from the fix above, you're hitting a bug in the unserialize() function that shows up with certain special combinations of characters, which your data seems to contain. The workaround is to base64-encode the data after serializing it, like this:
In client.php:
$azn_data = unserialize(base64_decode($azn_data));
In server.php:
echo base64_encode($serialized_data);
Source for this fix here.
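To make the combined fix concrete, here is a minimal sketch of both sides (my own; it assumes amazon_data_chooser() returns the raw, unserialized data - if it already returns a serialized string, drop the serialize() call):
// server.php (sketch)
$payload = amazon_data_chooser($_GET["asin"], $_GET["platform"]);
// Serialize first, then base64-encode so special characters survive transport.
echo base64_encode(serialize($payload));
// client.php (sketch)
$azn_data = file_get_contents($url);
// Decode first, then unserialize: the reverse order of the server side.
$payload = unserialize(base64_decode($azn_data));
var_dump($payload);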
You are not serializing your data on the server side, so there is nothing to unserialize on the client side:
return(serialize($serialized_data));
Edit:
if(isset($_GET["asin"]))
{
$asin = $_GET["asin"];
$platform = $_GET["platform"];
echo "\nASIN = $asin\nPlatform = $platform";
//Below line gets all serialize price data for my product
$serialized_data = amazon_data_chooser($asin, $platform);
die(serialize($serialized_data));
}
else
{
echo "Warning: No Data Found!";
}

Grabbing Twitter Friends Feed Using PHP and cURL

So, in keeping with my last question, I'm working on scraping the friends feed from Twitter. I followed a tutorial to get this script written, pretty much step by step, so I'm not really sure what is wrong with it, and I'm not seeing any error messages. I've never really used cURL before, save from the shell, and I'm extremely new to PHP, so please bear with me.
<html>
<head>
<title>Twitcap</title>
</head>
<body>
<?php
function twitcap()
{
    // Set your username and password
    $user = 'osoleve';
    $pass = '****';
    // Set site in handler for cURL to download
    $ch = curl_init("https://twitter.com/statuses/friends_timeline.xml");
    // Set cURL's options
    curl_setopt($ch, CURLOPT_HEADER, 1);   // We want to see the header
    curl_setopt($ch, CURLOPT_TIMEOUT, 30); // Set timeout to 30s
    curl_setopt($ch, CURLOPT_USERPWD, $user.':'.$pass); // Set uname/pass
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // Do not send to screen
    // For debugging purposes, comment out when finished
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
    // Execute the cURL command
    $result = curl_exec($ch);
    // Remove the header
    // We only want everything after <?
    $data = strstr($result, '<?');
    // Return the data
    $xml = new SimpleXMLElement($data);
    return $xml;
}
$xml = twitcap();
echo $xml->status[0]->text;
?>
</body>
</html>
Wouldn't you actually need everything after "?>" ?
$data = strstr($result,'?>');
Also, are you using a free web host? I once had an issue where my hosting provider blocked access to Twitter due to people spamming it.
Note that if you use strstr, the returned string will actually include the needle string, so you have to strip the first 2 characters from it. I would rather recommend a combination of the functions substr and strpos!
Anyway, I think SimpleXML should be able to handle the header, meaning I think this step is not necessary!
Furthermore, if I open the URL I don't see the header you mention! And if strstr doesn't find the needle it returns false, so you don't have any data in your current script.
Instead of $data = strstr($result, '<?'); try this:
if (strpos($result, '?>') !== false) {
    $data = strstr($result, '?>');
} else {
    $data = $result;
}
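A small sketch of the substr/strpos combination recommended above (my own illustration): it skips past the needle instead of including it in the result.
$pos = strpos($result, '?>');
if ($pos !== false) {
    $data = substr($result, $pos + 2); // +2 skips the "?>" itself
} else {
    $data = $result; // needle not found: keep the full response
}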
