Save XML retrieved with REST and PHP

I can retrieve certain information with a REST command, and the data it shows (in the browser) is already XML. How do I save it to an XML file on the server after retrieving it?
I have already tried the $dom->save() method, but I seem to be doing something wrong. Any help would be appreciated. See below for the code (I want to save $response to an XML file).
<?php
require_once 'includes/rest_connector.php';
require_once 'includes/session.php';

// check to see if we start a new session or maintain the current one
checksession();

$rest = new RESTConnector();

$url = "/api/tax_codes/0/";
$rest->createRequest($url, "GET", null, $_SESSION['cookies'][0]);
$rest->sendRequest();

$response = $rest->getResponse();
$error = $rest->getException();

// save our session cookies
if ($_SESSION['cookies'] == null)
    $_SESSION['cookies'] = $rest->getCookies();

// display any error message
if ($error != null)
    echo $error;

// display the response
if ($response != null)
    echo $response;
else
    echo "There was no response.";
?>

RESTConnector is a class specific to Lightspeed. I solved it by using this:
libxml_use_internal_errors(true); // suppress libxml warnings if the XML is improperly formatted
file_put_contents("exportProduct.xml", $responseProduct);
So it was very easy in the end :)
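For anyone who wants a slightly more defensive version of the same fix, here is a minimal sketch applied to the $response from the code above (the file names and the parse check are illustrative additions, not part of the original code):

libxml_use_internal_errors(true); // keep malformed-XML warnings out of the output

// Optional sanity check: only treat the response as XML if it actually parses.
$doc = new DOMDocument();
if ($response && $doc->loadXML($response)) {
    // save() writes the XML to a file on the server
    $doc->save("tax_codes.xml");
} else {
    // Fall back to dumping the raw response for inspection
    file_put_contents("tax_codes_raw.xml", (string)$response);
}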

I don't know anything about the RESTConnector class, but I suppose you can try something like this:
$dom = new DOMDocument('1.0', 'utf-8');
$dom->preserveWhiteSpace = false;
$dom->formatOutput = true;
// asXML() assumes $response is a SimpleXMLElement; if it is already a raw
// XML string, pass it to loadXML() directly instead.
$dom->loadXML($response->asXML());
$dom->save($this->fileexport); // path of the file to export to

Related

PHP getElementById not working

So I'm trying to write a short function in PHP to check whether a server (or the backup) is available. The service provides two servers to use, and a page on each server that simply has "OK" in an element with id "server_status". I basically took the code they provided and adjusted it so that it produces the kind of output I need. I want to get an array of true or false (depending on whether one of the sites is available), and the correct page if it is. Right now the output every time is (false, "e404.html"), which is what I set it up to output if no conditions are met. Here is my code:
function checkURL() {
    $servers = array('tpeweb.paybox.com',   // primary URL
                     'tpeweb1.paybox.com'); // backup URL
    foreach ($servers as $server) {
        $doc = new DOMDocument();
        $doc->loadHTMLFile('https://'.$server.'/load.html');
        $server_status = "";
        $element = $doc->getElementById('server_status');
        if ($element) {
            $server_status = $element->textContent;
        }
        if ($server_status == "OK") {
            // Server is up and services are available
            return array(true, 'https://'.$server.'/cgi/MYchoix_pagepaiement.cgi');
        }
    }
    return array(false, 'e404.html');
}
Doing some output testing, it appears that I'm loading the document into $doc, but it doesn't fill $element. I'm new to PHP so I'm not quite sure what is wrong.
EDIT:
This is the original code that the service provided to make this check; I adjusted it because I needed to be able to actually output the link to use:
<?php
$servers = array('urlserver.paybox.com',   // primary URL
                 'urlserver1.paybox.com'); // backup URL
$serverOK = "";
foreach ($servers as $server) {
    $doc = new DOMDocument();
    $doc->loadHTMLFile('https://'.$server.'/load.html');
    $server_status = "";
    $element = $doc->getElementById('server_status');
    if ($element) {
        $server_status = $element->textContent;
    }
    if ($server_status == "OK") {
        // Server is up and services are available
        $serverOK = $server;
        break;
    }
    // else: server is up but services are not available
}
if (!$serverOK) {
    die("Error : no server found");
}
//echo 'Connecting to https://'.$server.'/cgi/MYchoix_pagepaiement.cgi';
?>
Thanks,
Adrian
Does your HTML file have a doctype declared?
From http://php.net/manual/en/domdocument.getelementbyid.php:
"For this function to work, you will need either to set some ID attributes with DOMElement::setIdAttribute or a DTD which defines an attribute to be of type ID."
It should be sufficient to include <!DOCTYPE html> at the very top of your HTML files, and to set $doc->validateOnParse = true; before calling getElementById().
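Here is a minimal sketch of that suggestion, with a DOMXPath lookup added as a fallback since it does not depend on the DTD at all (the URL is taken from the question; the XPath fallback is an extra suggestion, not part of the manual's advice):

$doc = new DOMDocument();
$doc->validateOnParse = true; // make the parser honor the DTD so id attributes register
@$doc->loadHTMLFile('https://tpeweb.paybox.com/load.html');

$element = $doc->getElementById('server_status');

// Fallback that works even without a doctype: query the id attribute directly.
if (!$element) {
    $xpath = new DOMXPath($doc);
    $element = $xpath->query('//*[@id="server_status"]')->item(0);
}

echo $element ? trim($element->textContent) : 'element not found';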

loadHTMLFile loads, but is empty? PHP

So I tried to get a fix for this earlier, but I think we were all going in the wrong direction. I'm trying to check two servers to make sure that at least one of them is active to make a call to. The service provides me with a page for each that simply has "OK" in a div with id="server_status". When I try to load the HTML file into a variable, loadHTMLFile returns true, but I can never pull the element I need from it. After doing some output testing with saveHTML(), it appears that the variable holding the DOMDocument is empty. Here's my code:
$servers = array('tpeweb.paybox.com',   // primary URL
                 'tpeweb1.paybox.com'); // backup URL
foreach ($servers as $server) {
    $doc = new DOMDocument();
    $doc->validateOnParse = true;
    $doc->loadHTMLFile('https://'.$server.'/load.html');
    $server_status = "";
    $docText = $doc->saveHTML();
    if ($doc) {
        echo "HTML should output here: ";
        echo $docText;
    }
    if (!$doc) {
        echo "HTML file not loaded";
    }
    $element = $doc->getElementById('server_status');
    if ($element) {
        $server_status = $element->textContent;
    }
    if ($server_status == "OK") {
        // Server is up and services are available
        return array(true, 'https://'.$server.'/cgi/MYchoix_pagepaiement.cgi');
    }
}
return array(false, 'e404.html');
All I get as output is "HTML should output here: " twice, and then it returns the array at the bottom. This is the code that they provided:
$servers = array('tpeweb.paybox.com',   // primary URL
                 'tpeweb1.paybox.com'); // backup URL
$serverOK = "";
foreach ($servers as $server) {
    $doc = new DOMDocument();
    $doc->loadHTMLFile('https://'.$server.'/load.html');
    $server_status = "";
    $element = $doc->getElementById('server_status');
    if ($element) {
        $server_status = $element->textContent;
    }
    if ($server_status == "OK") {
        // Server is up and services are available
        $serverOK = $server;
        break;
    }
    // else: server is up but services are not available
}
if (!$serverOK) {
    die("Error : no server found");
}
echo 'Connecting to https://'.$server.'/cgi/MYchoix_pagepaiement.cgi';
This also seems to be having the same problem. Could it be something with my PHP configuration? I'm on version 5.3.6.
Thanks,
Adrian
EDIT:
I tried it by passing the HTML in as a string instead of fetching it from the server, and it worked fine. However, fetching the HTML from the server into a string and using that in the PHP function results in the same issue. Any fixes?
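One plausible cause (a guess, not a confirmed diagnosis) is that PHP cannot open remote URLs directly, e.g. because allow_url_fopen is disabled, in which case loadHTMLFile silently yields an empty document. A sketch that fetches the page with cURL first and then parses the resulting string:

$ch = curl_init('https://tpeweb.paybox.com/load.html');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
$html = curl_exec($ch);
curl_close($ch);

$doc = new DOMDocument();
$doc->validateOnParse = true;
if ($html !== false && @$doc->loadHTML($html)) {
    $element = $doc->getElementById('server_status');
    echo $element ? $element->textContent : 'element not found';
} else {
    echo 'page could not be fetched or parsed';
}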

Enjin API (JSON) to PHP

I'm trying to get data from:
http://natomilcorp.com/api/get-users
But when I try it, nothing works.
$url = "http://natomilcorp.com/api/get-users";
$jsonString = file_get_contents($url);
$obj = json_decode($jsonString);
echo $obj->username;
I tried jQuery but I was not able to make it work.
I would like to get the username, lastseen, and datejoined.
I'd appreciate it if someone could help me with this.
The things that could have gone wrong are:
The website is not responding
The JSON is invalid
There is some error in your code
This code should tell you what is happening:
error_reporting(-1); // Turn on error reporting

$url = "http://natomilcorp.com/api/get-users";
$jsonString = file_get_contents($url);
if (!$jsonString) {
    die('Could not get data from: '.$url);
}

$obj = json_decode($jsonString);
if (!$obj) {
    // See the following URL to figure out what the error is.
    // Be sure to look at the comments, as there is a compatibility
    // function there:
    // http://php.net/manual/en/function.json-last-error-msg.php
    die('Something is wrong with the JSON');
}

echo $obj->username;
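If the endpoint returns a list of users rather than a single object (an assumption, since the question doesn't show the response shape), the three fields you mention would be read in a loop:

// Assumes the decoded JSON is an array of user objects, each exposing
// username, lastseen, and datejoined -- adjust to the API's real shape.
foreach ($obj as $user) {
    echo $user->username, ' ', $user->lastseen, ' ', $user->datejoined, "\n";
}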

How do I get data from requested server page?

I have two PHP pages:
client.php and server.php
server.php is on my web server; it opens my Amazon product page, gets the price data, serializes it, and returns it to client.php.
Now the problem I have is that server.php is getting the data, but when I return it and echo it after using unserialize(), it shows nothing. But if I echo it in server.php, it shows me all the data.
Why is this happening? Can anyone help me, please?
This is the code I have used:
client.php
$url = "http://www.myurl.com/iec/Server.php?asin=$asin&platform=$platform_variant";
$azn_data = file_get_contents($url);
$azn_data = unserialize($azn_data);
echo "\nReturned Data = $azn_data\n";
server.php
if (isset($_GET["asin"]))
{
    $asin = $_GET["asin"];
    $platform = $_GET["platform"];
    echo "\nASIN = $asin\nPlatform = $platform";
    // Below line gets all serialized price data for my product
    $serialized_data = amazon_data_chooser($asin, $platform);
    return($serialized_data);
}
else
{
    echo "Warning: No Data Found!";
}
On server.php, you need to replace the following line:
return($serialized_data);
with this one:
echo $serialized_data;
because client.php reads the output of server.php; return only passes information from a function back to its caller and does not write anything to the response.
UPDATE:
Apart from the fix above, you're hitting a bug in the unserialize() function that shows up with certain special combinations of characters, which your data seems to contain. The workaround is to base64-encode the serialized data before sending it, like this:
In client.php:
$azn_data = unserialize(base64_decode($azn_data));
In server.php:
echo base64_encode($serialized_data);
Source for this fix here.
You are not serializing your data on the server side, so there is nothing to unserialize on the client side.
return(serialize($serialized_data));
Edit:
if (isset($_GET["asin"]))
{
    $asin = $_GET["asin"];
    $platform = $_GET["platform"];
    echo "\nASIN = $asin\nPlatform = $platform";
    // Below line gets all serialized price data for my product
    $serialized_data = amazon_data_chooser($asin, $platform);
    die(serialize($serialized_data));
}
else
{
    echo "Warning: No Data Found!";
}
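For reference, a minimal matched pair combining both answers (a sketch under their assumptions; note the ASIN debug echo is dropped here, because any extra output before the payload would also corrupt the serialized string on the client side):

server.php:
$serialized_data = serialize(amazon_data_chooser($_GET["asin"], $_GET["platform"]));
die(base64_encode($serialized_data)); // emit only the encoded payload, nothing else

client.php:
$azn_data = file_get_contents($url);
$azn_data = unserialize(base64_decode($azn_data));
var_dump($azn_data); // echo can't print arrays/objects; var_dump shows the structure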

Grabbing Twitter Friends Feed Using PHP and cURL

So in keeping with my last question, I'm working on scraping the friends feed from Twitter. I followed a tutorial to get this script written, pretty much step by step, so I'm not really sure what is wrong with it, and I'm not seeing any error messages. I've never really used cURL before except from the shell, and I'm extremely new to PHP, so please bear with me.
<html>
<head>
<title>Twitcap</title>
</head>
<body>
<?php
function twitcap()
{
    // Set your username and password
    $user = 'osoleve';
    $pass = '****';

    // Set site in handler for cURL to download
    $ch = curl_init("https://twitter.com/statuses/friends_timeline.xml");

    // Set cURL's options
    curl_setopt($ch, CURLOPT_HEADER, 1);         // We want to see the header
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);       // Set timeout to 30s
    curl_setopt($ch, CURLOPT_USERPWD, $user.':'.$pass); // Set uname/pass
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // Do not send to screen

    // For debugging purposes, comment out when finished
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
    curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

    // Execute the cURL command
    $result = curl_exec($ch);

    // Remove the header
    // We only want everything after <?
    $data = strstr($result, '<?');

    // Return the data
    $xml = new SimpleXMLElement($data);
    return $xml;
}

$xml = twitcap();
echo $xml->status[0]->text;
?>
</body>
</html>
Wouldn't you actually need everything after "?>" ?
$data = strstr($result,'?>');
Also, are you using a free web host? I once had an issue where my hosting provider blocked access to Twitter due to people spamming it.
Note that if you use strstr, the returned string will actually include the needle string, so you have to strip the first two characters from the result.
I would rather recommend a combination of the functions substr and strpos, as sketched below.
Anyway, I think SimpleXML should be able to handle the header, meaning this step may not be necessary.
Furthermore, if I open the URL I don't see a header like that, and if strstr doesn't find the needle it returns false, so you don't have any data in your current script.
Instead of $data = strstr($result, '<?'); try this:
if (strpos($result, '?>') !== false) {
    $data = strstr($result, '?>');
} else {
    $data = $result;
}
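And a sketch of the substr/strpos combination recommended above, which also avoids keeping the needle in the result (splitting on the blank line is an assumption about the raw cURL output with CURLOPT_HEADER enabled):

// Split the raw response on the blank line that separates the HTTP headers
// from the body, instead of searching for XML markers.
$separator = "\r\n\r\n";
$pos = strpos($result, $separator);
if ($pos !== false) {
    $data = substr($result, $pos + strlen($separator));
} else {
    $data = $result; // no header block found; assume the whole response is the body
}
$xml = new SimpleXMLElement($data);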
