How can I make flexible XML data files for other websites? - PHP

I would like to know how I can give other websites' parsers an XML file or response based on the arguments they request.
For example, I have a show_data.php file that can take a range of parameters, apply them to a MySQL query, and then build a valid XML 1.0 string.
So at this point I am done with the data fetching and the XML formatting based on the request parameters.
Now, how would I share that XML with other websites for their XML parsers?
Do I simply output my XML string from the PHP file with the appropriate headers, or is there some other way?
Example:
1) www.example.com requests www.mypage.com/show_data.php?show=10
2) www.mypage.com/show_data.php sends XML data back to www.example.com
It's really hard to explain since I have not worked with XML and such before. Hope it makes some sense.
Thanks.

Well, when example.com makes the initial request, your page will process it and return the XML as the result. There's nothing special you need to do:
$xml = "";
// process the xml (build it - do what you need to do)
// returning the xml to the requester
header ("Content-Type:text/xml");
echo $xml;
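To make that concrete, here is a minimal sketch of what show_data.php could look like; the PDO connection details, the items table, and its columns are all hypothetical stand-ins for your actual schema:

<?php
// show_data.php - a sketch, not a drop-in implementation.
// Connection details and the "items" table are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Validate the request parameter before using it in the query.
$show = isset($_GET['show']) ? (int)$_GET['show'] : 10;

$stmt = $pdo->prepare('SELECT id, title FROM items LIMIT ?');
$stmt->bindValue(1, $show, PDO::PARAM_INT);
$stmt->execute();

// Build the XML document from the result set.
$doc  = new DOMDocument('1.0', 'UTF-8');
$root = $doc->createElement('items');
$doc->appendChild($root);

while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $item = $doc->createElement('item');
    $item->setAttribute('id', $row['id']);
    $title = $doc->createElement('title');
    $title->appendChild($doc->createTextNode($row['title'])); // text node handles escaping
    $item->appendChild($title);
    $root->appendChild($item);
}

// Send the XML back to the requesting site.
header('Content-Type: text/xml; charset=UTF-8');
echo $doc->saveXML();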

Related

PHP won't receive POST XML data

I'm building a service backend that is sent a "delivery report" after an SMS has been successfully delivered to a user.
The report itself is XML POSTed to our "endpoint" with the content type application/xml.
I'm using Postman to make sure everything is working correctly. I've made a test using regular JSON and can return the data without issues; however, no matter what I try with XML, I get no indication that anything is being sent to the server.
(Test with JSON)
(Test with XML)
Here's my simple PHP script:
<?php
header('Content-Type: application/xml; charset=utf-8');
print_r(json_decode(file_get_contents("php://input"), true));
print_r($HTTP_RAW_POST_DATA);
?>
I feel like I've tried everything. I've been looking at past issues posted to SO and elsewhere, and it simply won't work for me. I'm hoping for some answers that at least point me in the right direction here.
Cheers.
You're trying to json_decode XML data. You should use something like SimpleXML.
Instead of...
print_r(json_decode(file_get_contents("php://input"), true));
You should use ...
$xml = new SimpleXMLElement(file_get_contents("php://input"));
echo $xml->asXML();
You should be able to get the information by (for example)...
echo (string)$xml->id;
json_decode can't read XML; it seems you're trying to parse XML with json_decode. If you want to output the received XML, use echo (or, for debugging purposes, var_dump, e.g. var_dump(file_get_contents("php://input"));). If you want to parse the XML, use DOMDocument.
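Putting those pieces together, a minimal sketch of such an endpoint might look like this; the <id> element is a hypothetical field, adjust it to the actual report format:

<?php
// A sketch of an endpoint that accepts POSTed XML. The raw request body
// is read from php://input; $HTTP_RAW_POST_DATA is deprecated and was
// removed in PHP 7.
$raw = file_get_contents('php://input');

try {
    $xml = new SimpleXMLElement($raw);
} catch (Exception $e) {
    http_response_code(400);
    exit('Invalid XML');
}

// Access elements by name; <id> is a hypothetical element here.
$id = (string)$xml->id;

// Echo the document back (useful while testing with Postman).
header('Content-Type: application/xml; charset=utf-8');
echo $xml->asXML();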

How to Parse XML from a URL using PHP

I'm learning how to parse XML elements into an HTML document.
This takes a URL with an XML feed and reads the elements, but it isn't working. Also,
I want to take it a bit further but I simply haven't been able to: how can I read the XML from a URL, and use an XML element as the file name to create an HTML document from a template?
EDIT: this is what I tried.
I tried this just for the sake of knowing what I'm doing (apparently nothing, haha), so I could echo whether the information was right:
<?php
$url = "http://your_blog.blogspot.com/feeds/posts/default?alt=rss";
$xml = simplexml_load_file($url);
print_r($xml);
?>
Thank you for your time!
Generally, "cross-domain" requests would be forbidden by web browsers, per the same-origin security policy.
However, there is a mechanism that allows JavaScript on a web page to make XMLHttpRequests to another domain called Cross-origin resource sharing (CORS).
Read this about CORS:
http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
Check this article out about RSS feeds:
http://www.w3schools.com/php/php_ajax_rss_reader.asp
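If you fetch the feed server-side in PHP instead (no CORS involved), here is a rough sketch under the assumption that the feed is RSS and that the item's <title> element should become the file name; the URL and the inline template are placeholders:

<?php
// A sketch: fetch an RSS feed server-side and write one HTML file per
// item, named after the item's <title>.
$url = 'http://your_blog.blogspot.com/feeds/posts/default?alt=rss';
$xml = simplexml_load_file($url);
if ($xml === false) {
    exit('Could not load the feed.');
}

foreach ($xml->channel->item as $item) {
    $title = (string)$item->title;
    // Sanitize the title so it is safe to use as a file name.
    $file = preg_replace('/[^A-Za-z0-9_-]/', '_', $title) . '.html';

    // A trivial inline template; a real one could be loaded from disk.
    $html = '<html><head><title>' . htmlspecialchars($title)
          . '</title></head><body><h1>' . htmlspecialchars($title)
          . '</h1><div>' . htmlspecialchars((string)$item->description)
          . '</div></body></html>';

    file_put_contents($file, $html);
}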

Load/Modify/Save XML with javascript

Before this question gets stamped as a duplicate, I am sorry! I've read ALL the duplicate questions and if anything, it has confused me even more. So maybe this question is slightly different.
I've written a little JavaScript library that makes Ajax calls and fetches and parses information from the Facebook Graph API.
This enables me to pretty much show all my page statuses on my web page. However, I'm just about to launch, and I have done as much testing as I can.
Still, I'm sure errors will occur, and I've written many error catches blah blah blah.
What I want to do is save all my errors in an XML file.
So when an error occurs, I want the JavaScript to load the XML file from the server, add the errors, then save the changes.
I know how to load the XML doc using XMLHttpRequests, and I'm sure I can figure out how to modify the XML just by using DOM manipulation.
All I really want to know is: how do I save these changes? Does it save automatically?
Or do I have to "somehow" pass the updated XML to PHP and have it save the file?
I'm not quite sure how to go about it.
I would use MySQL and PHP, but that means "somehow" passing the error information to PHP and then saving it.
However, I'd much prefer XML, seeing as I'm the only person who will be reading the XML file.
Thanks very much.
Alex
Or do I have to "somehow" pass the updated XML to PHP and have it save the file?
Yes, you'll want to use an XML HTTP request to send the XML DOM to the server, where PHP can save it:
function postXML(xmlDOM, postURL, fileName, folderPath){
    try{
        // Create the XML HTTP request object (IE-specific; use
        // new XMLHttpRequest() in other browsers)
        var oXMLReq = new ActiveXObject("MSXML2.XMLHTTP.3.0");
        // Issue a synchronous HTTP POST
        oXMLReq.open("POST", postURL, false);
        // Set HTTP request headers
        if(fileName != null){ oXMLReq.setRequestHeader("uploadFileName", fileName); } // name for the file saved on the server
        if(folderPath != null){ oXMLReq.setRequestHeader("uploadDir", folderPath); }  // folder the file should be saved in on the server
        // Send the XML
        oXMLReq.send(xmlDOM.xml);
        return oXMLReq.responseText;
    }catch(e){
        return "postXML failed - check network connection to server";
    }
}
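On the server, the receiving script could look roughly like this sketch; it reads the uploadFileName/uploadDir headers set by the JavaScript above (custom request headers show up in $_SERVER with an HTTP_ prefix), and the path handling is deliberately simplified:

<?php
// A sketch of the receiving end: read the POSTed XML from php://input
// and write it to disk. A real script must validate the paths to
// prevent directory traversal.
$fileName = isset($_SERVER['HTTP_UPLOADFILENAME']) ? $_SERVER['HTTP_UPLOADFILENAME'] : 'errors.xml';
$dir      = isset($_SERVER['HTTP_UPLOADDIR']) ? $_SERVER['HTTP_UPLOADDIR'] : 'logs';

// Reject anything that is not a plain file name.
$fileName = basename($fileName);

$xml = file_get_contents('php://input');
if ($xml === false || $xml === '') {
    http_response_code(400);
    exit('No XML received');
}

file_put_contents($dir . '/' . $fileName, $xml);
echo 'saved';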

Get all content with file_get_contents()

I'm trying to retrieve a webpage that has XML data using file_get_contents().
$get_url_report = 'https://...'; // GET URL
$str = file_get_contents($get_url_report);
The problem is that file_get_contents gets only the secure content of the page and returns only some strings without the XML. In Internet Explorer on Windows, if I open $get_url_report directly, it warns me, asking whether I want to display everything. If I click Yes, it then shows me the XML, which is what I want to store in $str. Any ideas on how to retrieve the XML data from the webpage $get_url_report into a string?
You should already be getting the pure XML if the URL is correct. If you're having trouble, perhaps the URL is expecting you to be logged in or something similar. Use a var_dump($str) and then view source on that page to see what you get back.
Either way, there is no magic way to get any linked content from the XML. All you would get is the XML itself and would need further PHP code to process and get any links/images/data from it.
Verify that OpenSSL is enabled in your PHP; a good example of how to do it:
How to get file_get_contents() to work with HTTPS?
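A small sketch of that check combined with the fetch; the URL is a placeholder:

<?php
// Confirm the openssl extension is loaded before fetching an https:// URL.
if (!extension_loaded('openssl')) {
    exit('The openssl extension is required for https:// URLs.');
}

$get_url_report = 'https://example.com/report.xml'; // placeholder URL
$str = file_get_contents($get_url_report);
if ($str === false) {
    exit('Request failed.');
}

// Parse the fetched XML into an object for further processing.
$xml = simplexml_load_string($str);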

How to detect if a page is an RSS or ATOM feed

I'm currently building a new online feed reader in PHP. One of the features I'm working on is feed auto-discovery. If a user enters a website URL, the script will detect that it's not a feed and look for the real feed URL by parsing the HTML for the proper <link> tag.
The problem is, the way I'm currently detecting whether the URL is a feed or a website only works part of the time, and I know it can't be the best solution. Right now I'm taking the cURL response and running it through simplexml_load_string; if it can't parse it, I treat it as a website. Here is the code.
$xml = @simplexml_load_string( $site_found['content'] );
if( !$xml ) // this is a website, not a feed
{
// handle website
}
else
{
// parse feed
}
Obviously, this isn't ideal. Also, when it runs into an HTML website that it can parse, it thinks it's a feed.
Any suggestions on a good way of detecting the difference between a feed or non-feed in PHP?
I would sniff for the various unique identifiers those formats have:
Atom: Source
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
RSS 0.90: Source
<rdf:RDF
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns="http://my.netscape.com/rdf/simple/0.9/">
Netscape RSS 0.91
<rss version="0.91">
etc. etc. (See the 2nd source link for a full overview).
As far as I can see, separating Atom and RSS should be pretty easy by looking for <feed> and <rss> tags, respectively. Plus you won't find those in a valid HTML document.
You could make an initial check to tell HTML and feeds apart by looking for <html> and <body> elements first. To avoid problems with invalid input, this may be a case where using regular expressions (over a parser) is finally justified for once :)
If it doesn't match the HTML test, run the Atom / RSS tests on it. If it is not recognized as a feed, or the XML parser chokes on invalid input, fall back to HTML again.
What that looks like in the wild - whether feed providers always conform to those rules - is a different question, but you should already be able to recognize a lot this way.
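A rough sketch of that sniffing order (the patterns are deliberately loose and only look at the start of the document):

<?php
// Sniff the format from the first chunk of the response body.
function detect_feed_type($content) {
    $head = substr($content, 0, 1024);

    // HTML test first, so parseable HTML is not mistaken for a feed.
    if (preg_match('/<(html|body)[\s>]/i', $head)) {
        return 'html';
    }
    if (preg_match('/<feed[\s>]/i', $head)) {
        return 'atom';
    }
    if (preg_match('/<(rss|rdf:RDF)[\s>]/i', $head)) {
        return 'rss';
    }
    return 'unknown'; // fall back to treating it as HTML
}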
I think your best choice is checking the Content-Type header; I assume that's the way Firefox (or any other browser) does it. Besides, if you think about it, the Content-Type is indeed the way the server tells user agents how to process the response content. Almost any decent HTTP server sends a correct Content-Type header.
Nevertheless, you could try to identify RSS/Atom in the content as a second choice if the first one "fails" (this criterion is up to you).
An additional benefit is that you only need to request the headers instead of the entire document, thus saving bandwidth, time, etc. You can do this with cURL like this:
<?php
$ch = curl_init("http://sample.com/feed");
// CURLOPT_NOBODY sets the HTTP request method to HEAD instead of GET
// (the default), so the server sends only the HTTP headers, no content.
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_exec($ch);
$conType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
if (is_rss($conType)) {         // You need to implement is_rss($conType)
    // TODO
} elseif (is_html($conType)) {  // You need to implement is_html($conType)
    // Search for an rss <link> in the html
} else {
    // Error: the page has no rss/atom feed
}
?>
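For completeness, minimal sketches of the two helper checks against common feed and HTML MIME types (servers are not always consistent, so treat the lists as a starting point):

<?php
// Hedged sketches of the helpers left unimplemented above.
function is_rss($conType) {
    return (bool)preg_match(
        '#application/(rss\+xml|atom\+xml|rdf\+xml|xml)|text/xml#i',
        $conType
    );
}

function is_html($conType) {
    return stripos($conType, 'text/html') !== false;
}
?>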
Why not try to parse your data with a component built specifically to parse RSS/Atom feeds, like Zend_Feed_Reader?
With that, if the parsing succeeds, you'll be pretty sure that the URL you used is indeed a valid RSS/Atom feed.
And I should add that you could use such a component to parse feeds in order to extract their information, too: no need to re-invent the wheel, parsing the XML "by hand" and dealing with special cases yourself.
Use the Content-Type HTTP response header to dispatch to the right handler.
