I am reading an XML file and loading it in my PHP file as
$xml = simplexml_load_file($url);
where $url is an external link.
How can I echo an error if the URL fails to load, or if it does not respond within a specific time?
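A minimal sketch of one way to report both failures, assuming a plain echo is enough; the 5-second timeout is an arbitrary choice:
libxml_use_internal_errors(true); // collect XML parse errors instead of printing warnings

// give up if the host does not respond within 5 seconds
$context = stream_context_create(array('http' => array('timeout' => 5)));
$data = @file_get_contents($url, false, $context);

if ($data === false) {
    echo 'The URL failed to load or did not respond in time.';
} else {
    $xml = simplexml_load_string($data);
    if ($xml === false) {
        echo 'The response was not valid XML.';
    }
}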
I have a URL provided by a wholesaler. The URL generates an XML file which I need to save on my server.
I use PHP's file_get_contents and file_put_contents to do that:
$savepath = "path_to_my_server_folder";
$xmlurl = "http://usistema.eurodigital.lt/newxml/xmlfile.aspx?xml=labas&code=052048048048051057049050048049052";
file_put_contents($savepath.'eurodigital.xml', file_get_contents($xmlurl));
The file is generated on my server, but its content is empty. I have no problems with other XML files when I provide a direct XML URL, but in this case the file is generated dynamically by an .aspx page. The XML URL above is the actual URL I use; when I open it in a browser, the XML file gets saved to my device.
Can you help me with moving the XML to my server? What function should I use? Is it even possible to do that using PHP 5? "allow_url_fopen" is ON.
Thank you in advance!
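One alternative worth sketching is cURL, since a dynamically generated .aspx feed sometimes returns an empty body unless redirects are followed or a browser-like User-Agent is sent; both settings below are assumptions, not known requirements of that endpoint:
$ch = curl_init($xmlurl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);      // return the body instead of echoing it
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);      // follow any redirect the .aspx page issues
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0');  // some servers send nothing without a User-Agent
$xmlData = curl_exec($ch);

if ($xmlData === false) {
    echo 'cURL error: ' . curl_error($ch);
} else {
    file_put_contents($savepath . 'eurodigital.xml', $xmlData);
}
curl_close($ch);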
I need to load a gzipped XML file from an external link, but I don't know how to do it.
Right now I upload the XML file to my server and then change the XML address in the variable every 3 days.
The code I'm currently using to load the XML is this:
$xmlFile = new SimpleXMLElement('xml/abcd_201407281213_12074_28203833.xml', 0, true);
The link given to me for downloading the XML file from their server looks like this:
http://nameofthewebsite.com/folder/28203833C78648187.xml?ticket=BA57F0D9FF910FE4DB517F4EC1A275A2&gZipCompress=yes
Can someone help me, please?
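A sketch of one way to handle it, assuming the response body is the raw gzip data (gzdecode() needs PHP 5.4+; if the content already arrives decompressed, skip that step):
$url = 'http://nameofthewebsite.com/folder/28203833C78648187.xml?ticket=BA57F0D9FF910FE4DB517F4EC1A275A2&gZipCompress=yes';

$compressed = file_get_contents($url);  // download the gzipped payload
$xmlString  = gzdecode($compressed);    // inflate it back to plain XML text

$xmlFile = new SimpleXMLElement($xmlString);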
I am new to PHP. I am downloading an XML file from a web service using PHP. I can download the file using this code:
$sourcefile = "http...com?querystring=string";
$destinationfile = 'data\description.xml';
$xml = file_get_contents($sourcefile);
file_put_contents($destinationfile, $xml);
But when I open the XML file, it has &lt; where < should be and &gt; where > should be.
I added this line of code to decode it before saving it to file, which fixes the above problem:
$xml = html_entity_decode($xml);
This doesn't seem to me to be the right way to go about it. Also, a weird piece of text shows up in the XML file, which prevents me from parsing it:
I tried using str_replace() on $xml right before decoding it (and also after decoding it), but that wouldn't get rid of it.
What is the correct way to download an XML file from a web service using GET in PHP, and will it get rid of that weird string ()?
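A hedged guess, since the stripped-out characters are not visible here: the weird text at the start of a downloaded XML file is often a UTF-8 byte-order mark, and escaped &lt;/&gt; usually mean the service returns the XML already entity-encoded. A sketch that strips a BOM and only decodes entities when the payload actually arrives escaped (the starts-with check is a simplistic heuristic):
$xml = file_get_contents($sourcefile);

// strip a UTF-8 byte-order mark (0xEF 0xBB 0xBF) if the service prepends one
if (substr($xml, 0, 3) === "\xEF\xBB\xBF") {
    $xml = substr($xml, 3);
}

// decode entities only when the payload starts escaped
if (strpos($xml, '&lt;') === 0) {
    $xml = html_entity_decode($xml);
}

file_put_contents($destinationfile, $xml);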
I load an XML file from a service provider, and then my HTML displays the images in the necessary place. However, I wish to cache all of these files locally, instead of having the browser load them from the remote server each time.
Here is a sample of my XML file...
feed.xml
<URI>http://imt.boatwizard.com/images/1/14/77/3801477_-1_20120229071449_0_0.jpg</URI>
<URI>http://imt.boatwizard.com/images/1/40/6/3794006_-1_20120814035230_16_0.jpg</URI>
<URI>http://imt.boatwizard.com/images/1/21/74/4012174_-1_20120706051335_21_0.jpg</URI>
Can someone please help me write the PHP to loop through the XML and download each image?
1) Download the image
2) Rename the image URL in the XML to match the local file
3) Save the XML
Thanks!
I guess you should do something like this
// parse your... ehm... xml
$xml = new SimpleXMLElement($xml_content);

// extract the URI elements (adjust the XPath to your real feed structure)
$result = $xml->xpath('//URI');

$outxml = '';

// loop through the URIs
foreach ($result as $node) {
    $url = (string) $node;

    // download each image with cURL
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $image = curl_exec($ch);
    curl_close($ch);

    // save it into your local folder ('images/' is just a placeholder path)
    $newpath_img = 'images/' . basename(parse_url($url, PHP_URL_PATH));
    file_put_contents($newpath_img, $image);

    // if everything went ok, add a new line to the output xml
    $outxml .= '<URI>' . basename($newpath_img) . '</URI>' . "\n";
}

// dump the outxml
file_put_contents('newxml.xml', $outxml);
I don't want to download the whole web page. It will take time and it needs a lot of memory.
How can I download a portion of that web page? Then I will parse it.
Suppose I need to download only the <div id="entryPageContent" class="cssBaseOne">...</div>. How can I do that?
You can't download a portion of a page by asking for "only this piece of HTML". HTTP only supports byte ranges for partial downloads and has no concept of HTML/XML document trees.
So you'll have to download the entire page, load it into a DOM parser, and then extract only the portion(s) you need.
e.g.
$html = file_get_contents('http://example.com/somepage.html');

$dom = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely perfectly valid
$dom->loadHTML($html);

$div = $dom->getElementById('entryPageContent');
$content = $dom->saveHTML($div); // saveHTML($node) needs PHP 5.3.6+
Using this:
curl_setopt($ch, CURLOPT_RANGE, "0-10000");
will make cURL download only roughly the first 10 KB of the page. It also only works if the server supports range requests; many pages generated by interpreted scripts (CGI, PHP, ...) ignore the Range header.
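For context, a minimal sketch of where that option fits (the URL is a placeholder):
$ch = curl_init('http://example.com/somepage.html');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_RANGE, '0-10000');  // ask the server for roughly the first 10 KB only
$partialHtml = curl_exec($ch);
curl_close($ch);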