I am trying to use an RSS feed from a domain that does not have a crossdomain.xml file, so I am going to put a web service in the middle: it will fetch the RSS feed from a URL (let's say the URL is www.example.com/feed) and simply print it to a page.
The service would work like www.mywebservice.com/feed.php?word=something, and that would print the RSS feed from www.example.com/feed?q=word.
I used:
<?php
$word = $_GET["word"];
$ch = curl_init("http://example.com/feed.php?word=" . urlencode($word)); // urlencode keeps the query string well-formed
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, 0);
$data = curl_exec($ch);
curl_close($ch);
print $data;
?>
But this did not work; it gives me "SYSTEM ERROR: we're sorry but a serious error has occurred in the system". I am on shared hosting.
Any help?
The readfile function reads a file and writes it to the output buffer.
readfile('http://example.com/feed.rss');
A URL can be used as a filename with this function if the fopen wrappers have been enabled. See fopen() for more details on how to specify the filename. See the List of Supported Protocols/Wrappers for links to information about what abilities the various wrappers have, notes on their usage, and information on any predefined variables they may provide.
If you need to do anything with the XML, use one of PHP's many XML libraries, preferably DOM, but there is also SimpleXml or XMLReader. As an alternative, you could use Zend_Feed from the Zend Framework as a standalone component to work with the RSS feed.
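For example, a minimal DOM sketch (the feed URL and the item/title element names are assumptions about a typical RSS 2.0 document):
// A sketch only: load the feed with DOM and print each item's title.
$dom = new DOMDocument();
$dom->load('http://example.com/feed'); // needs allow_url_fopen for remote URLs
foreach ($dom->getElementsByTagName('item') as $item) {
    $titles = $item->getElementsByTagName('title');
    if ($titles->length > 0) {
        echo $titles->item(0)->textContent, "\n";
    }
}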
If you cannot enable allow_url_fopen on your server, try cURL like Matchu suggested or go with Artefacto's suggestion.
Consider doing this with mod_rewrite (using the P flag) or setting up a reverse proxy with ProxyPass.
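A rough sketch, assuming an Apache server with mod_rewrite and mod_proxy enabled (the paths are placeholders):
# .htaccess: proxy requests for /feed to the remote host
RewriteEngine On
RewriteRule ^feed$ http://www.example.com/feed [P]
# or, in the main server config, a reverse proxy:
ProxyPass /feed http://www.example.com/feed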
Since you say you can't do the fancy URL-file-opening shortcuts due to server restrictions, you will need to use PHP's cURL module to send an HTTP request.
If you also want to parse the XML and process it further, be sure to look into SimpleXML. It lets you parse and manipulate the feed.
So at the end I ended up doing this:
$word = $_GET["word"];
$url = "http://www.example.com/feed.php?q=" . urlencode($word); // urlencode keeps the query string well-formed
// note: PHP's error-suppression operator is @ (a leading # would comment the line out)
$curl = @curl_init($url);
@curl_setopt($curl, CURLOPT_HEADER, FALSE);
@curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
@curl_setopt($curl, CURLOPT_FOLLOWLOCATION, TRUE);
@curl_setopt($curl, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
$source = @curl_exec($curl);
@curl_close($curl);
print $source;
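One refinement worth considering (an assumption on my part, not something from the original question): send an XML content type before the print above, so feed readers treat the output as RSS:
header('Content-Type: application/rss+xml; charset=utf-8'); // must run before any output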
I hope this is considered an answer, not an edit (if it should be an edit, please tell me so I can delete this answer and edit the post).
I have a URL like this: https://facebook.com/5. I want to get the HTML of that page, just like view source.
I tried using file_get_contents but that didn't return the correct content.
Am I missing something?
Is there another function I can use?
If I can't get the HTML of that page, what special thing did the developer do while coding the site to prevent this?
But does this task have to be done using PHP?
Since this sounds like a web-scraping task, I think you would get more use out of CasperJS.
With it, you can target precisely what you want to retrieve from the Facebook page rather than grabbing the whole content, which I assume, as of this writing, is generated by multiple requests and rendered through a virtual DOM.
Please note that I haven't tried retrieving content from Facebook, but I've done this with multiple services.
Good luck!
You may want to use curl instead: http://php.net/manual/en/curl.examples.php
Edit:
Here is an example of mine:
$url = 'https://facebook.com/5';
$ssl = true; // verify the peer's SSL certificate
$ch = curl_init();
$timeout = 3; // connection timeout in seconds
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, $ssl);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow redirects
$data = curl_exec($ch);
curl_close($ch);
Note that depending on the website's vhost configuration, a trailing slash on the URL can make a difference.
Edit: Sorry for the undefined variable; I copied this out of a helper method I use. Now it should be alright.
Yet another Edit:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
By adding this option you will follow the redirects that are apparently happening in your example. Since you said it was an example, I didn't actually run it before. Now I did, and it works.
I want to allow the user to specify, via a URL variable, which file they would like to download from a remote server, e.g. /download.php?url=fvr_anim_foxintro_V4_01.jpg
<?php
$url = $_GET['url'];
header("Location: http://fvr.homestead.com/files/animation/" . $url);
?>
The above is purely an example I grabbed from Google Images. The problem is I do not want the end user to see where the file originally comes from, so the server would need to fetch the file and pass it along to the end user. Is there a method of doing this?
I find many examples for files hosted on the server but none for serving files hosted on a remote server; in other words, I would be passing them along. The files would be quite large (up to 100MB).
Thanks in advance!
You can use cURL for this:
<?php
$url = "http://share.meebo.com/content/katy_perry/wallpapers/3.jpg";
$ch = curl_init();
$timeout = 0;
curl_setopt ($ch, CURLOPT_URL, $url);
curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
// Getting binary data
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);
$image = curl_exec($ch);
curl_close($ch);
// output to browser
header("Content-type: image/jpeg");
echo $image;
?>
Source: http://forums.phpfreaks.com/topic/120308-solved-curl-get-image/
Of course, this example is just for an image (as you've suggested) but you can use cURL for all kinds of remote data retrieval via HTTP GET, POST, PUT, DELETE, etc. Search around the web for "php curl" and you'll find an endless supply of information.
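One caveat for files as large as the 100MB you mention: CURLOPT_RETURNTRANSFER buffers the entire response in memory. A sketch of an alternative, using CURLOPT_WRITEFUNCTION to stream each chunk straight to the browser (the URL and MIME type here are placeholders):
// Sketch: stream a large remote file to the client chunk by chunk.
// The callback runs for every chunk cURL receives; it must return the bytes handled.
function stream_chunk($ch, $chunk) {
    echo $chunk;
    flush(); // push the chunk out to the browser immediately
    return strlen($chunk);
}
$url = "http://www.example.com/files/large-file.jpg"; // placeholder
header("Content-type: image/jpeg"); // placeholder MIME type
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, 'stream_chunk');
curl_exec($ch);
curl_close($ch);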
The ideal solution would be to use PHP's cURL Library, but if you're using shared hosting keep in mind this library may be disabled.
Assuming you can use cURL, you simply send a Content-type header with the appropriate MIME type, then echo the result of curl_exec().
To get a basic idea of how to use the cURL library, look at the example under the curl_init() function.
I've got a site, mysite.com, which has an RSS/XML feed located at mysite.com/feed.
I have to read this feed in a PHP script, but simplexml_load_file returns an empty result. How can I get the "real" path to the feed? I assume this file is created using some clever .htaccess and such.
I think it has nothing to do with .htaccess. To get the feed you have two options: one is file_get_contents, the other is cURL.
$xml = file_get_contents('http://mysite.com/feed');
and
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,'http://mysite.com/feed');
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$xml = curl_exec($ch);
curl_close($ch);
Now $xml holds the XML as a string, so you can use simplexml_load_string to parse it.
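For example (a sketch; the channel/item structure is an assumption about a standard RSS 2.0 feed):
$feed = simplexml_load_string($xml);
if ($feed !== false) {
    foreach ($feed->channel->item as $item) {
        echo $item->title, "\n"; // print each entry's title
    }
}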
An alternative is RSS PHP; see its documentation for more details.
There is an epic lack of PHP cURL love on the Internet for beginners like me. I was wondering how to use cURL to download & display an ICS file (They're plain text to me...) in my PHP code. Unless fopen() is 1,000 times easier, I'd like to stick with cURL for this one.
If your webserver allows it, file_get_contents() is even easier.
echo file_get_contents('http://www.example.com/path/to/your/file.ics');
If you cannot open URLs with file_get_contents(), check out the many existing Stack Overflow answers on the topic, which I believe should be fine for a beginner.
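One quick way to find out which approach your host allows is to check the allow_url_fopen setting (a sketch; the URL is the one from the question):
if (ini_get('allow_url_fopen')) {
    // URL wrappers are on, so file_get_contents() works on URLs
    echo file_get_contents('http://www.example.com/path/to/your/file.ics');
} else {
    // otherwise fall back to cURL, as in the answer below
}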
If remote file_get_contents is not enabled, cURL can indeed do this.
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, 'http://example.com/file.ics');
// this is the key option - sets curl_exec to return the HTTP response
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
$file_contents = curl_exec($curl);
curl_close($curl); // free the handle
echo $file_contents; // the ICS file is plain text, so it can be printed directly
This question is simple. What function would I use in a PHP script to load data from a URL into a string?
cURL is usually a good solution: http://www.php.net/curl
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://www.example.com/");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// execute the request; with RETURNTRANSFER set, the response is returned as a string
$html = curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
I think you are looking for
$url_data = file_get_contents("http://example.com/examplefile.txt");
With the http stream wrapper enabled you can use file_get_contents to fetch HTTP resources (simple GET requests by default; a stream context lets you set the method, headers, and a timeout). For more complicated HTTP requests, use cURL if it is installed. Check php.net for more info.
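For example, here is a sketch of a stream context that sets the method and a timeout (the values are assumptions):
// build a context for the http wrapper, then pass it to file_get_contents()
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'timeout' => 5, // seconds
    ),
));
$data = file_get_contents('http://www.example.com/', false, $context);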
Check out Snoopy, a PHP class that simulates a web browser:
include "Snoopy.class.php";
$snoopy = new Snoopy;
$snoopy->fetchtext("http://www.example.com");
$html = $snoopy->results;