I want to get the audio and logo URLs for each episode of a podcast using PHP.
First I tried using SimplePie to parse the feed, but it is too complex for me. I only found how to get a podcast episode's logo; I can't find a way to get the URL of the audio file from the RSS.
Next I tried the package linked below. But when I try to use it, I get an error like this: "Error Call to undefined function Lukaswhite\PodcastFeedParser\Parser()".
https://github.com/lukaswhite/podcast-feed-parser
PS: If you are trying this yourself, note that this repository's README file is incorrect. Instead of
composer require lukaswhite/php-feed-parser
use this:
composer require lukaswhite/podcast-feed-parser
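For context, this is roughly how far I got with SimplePie; get_enclosure() looks like it should expose the audio file and get_thumbnail()/get_image_url() the artwork, but I'm not sure this is the right approach (the feed URL below is just a placeholder, and I'm assuming SimplePie was installed via Composer):
<?php
require 'vendor/autoload.php';

$feed = new SimplePie();
$feed->set_feed_url('https://example.com/podcast/feed.xml'); // placeholder feed URL
$feed->init();

foreach ($feed->get_items() as $item) {
    // The audio file should live in the item's <enclosure>.
    $enclosure = $item->get_enclosure();
    $audio     = $enclosure ? $enclosure->get_link() : null;

    // Episode image if present, otherwise fall back to the channel image.
    $thumb = $item->get_thumbnail();
    $logo  = $thumb ? $thumb['url'] : $feed->get_image_url();

    echo $item->get_title() . "\n";
    echo "  audio: " . $audio . "\n";
    echo "  logo:  " . $logo . "\n";
}
?>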
Related
I was trying to parse the RSS feed for the PHP tag from http://stackoverflow.com, and I wanted to use something other than the DOM model, so I looked into SimpleXML. This is my code:
<?php
error_reporting(-1);
$xml = file_get_contents('https://stackoverflow.com/feeds/tag/php');
$loaded = simplexml_load_string($xml) or die("There is a problem");
$str1 = $loaded["entry"]["entry"][0]->title;
echo $str1;
?>
But nothing is displayed on the screen, and no error is displayed either!
The sample data from https://stackoverflow.com/feeds/tag/php can be found at
http://gourabt2.cloudapp.net/sample-data/sample_data.xml
Any help would be very much appreciated! Thanks! Cheers!
You use array access in SimpleXML to access attributes, so:
$loaded["entry"]
returns the attribute named "entry" from the document element.
Use arrow access to get the element named "entry" instead:
$loaded->entry
This returns the element named "entry".
Additionally, take care with namespaces. Parsing a feed with SimpleXML has already been covered in existing Q&A material, so please refer to that as well.
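For example, applying those two rules to this feed would look roughly like the sketch below. Elements in the feed's default (Atom) namespace are reachable with plain arrow access, while elements in another namespace need children() with that namespace URI; the rank namespace used here is only an assumption about what the Stack Overflow feed contains, so swap in whatever namespaced elements your feed actually has.
<?php
$xml    = file_get_contents('https://stackoverflow.com/feeds/tag/php');
$loaded = simplexml_load_string($xml) or die("There is a problem");

// Element in the default Atom namespace: plain arrow access works.
echo (string) $loaded->entry[0]->title . "\n";

// Element in another namespace: use children() with the namespace URI.
// The rank extension URI below is only an illustrative assumption.
$re = $loaded->entry[0]->children('http://purl.org/atompub/rank/1.0');
echo (string) $re->rank . "\n";
?>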
Try this out.
<?php
$xml = file_get_contents('http://stackoverflow.com/feeds/tag/php');
$loaded = simplexml_load_string($xml) or die("There is a problem");

// Each Atom <entry> is an element, so arrow access works here.
foreach ($loaded->entry as $post) {
    echo $post->title . "\n";
}
Output:
join 2 tables - group by id order by date price asc
using php variable in an sql query
Clear browser cache memory by php code
There is a error in parsing the rss from stackoverflow.com. using SimpleXML in PHP
Modify Laravel Validation message response
chained dropdown from database
Php database handling
How to load model with Codeigniter Webservice - Nusoap Server?
multiple report download php adwords api
Unable to confirm Amazon SNS subscription for SES bounces
Comparing if values exist in database
Better way to CURL to WCF
PHP SaaS Facebook app: Host Page Tab ID and User ID
PHP and Mysql - running over PDOStatement
How to change form textbox value in Zend form annotation?
connect Android with PHP, MySQL , unfortunately app has stopped in android emulator
Auto increment a SESSION key ID
Call PHP function in a class from a HTML form
PHP SQL Preventing Duplicate User names: Catching Exception vs Select Query
I am only able to grab the first group of text in between the tr need helping fixing my code
How to run an external program in php
How to connect to php in android using an async task?
PHP Return HTML (Laravel / Lumen Framework)
prestashop smarty if statement
Cakephp OR condition Implementation
Progress bar HTML5/PHP
preg_match file url from jwplayer
Does Cloudflare Cache HTML5 Video Embedded on PHP Page?
how to INSERT INTO JOIN
PHP web service returns "403 forbidden" error
That's because you have the wrong path. Here is the correction:
(string)$loaded->entry->title;
I'm trying to learn REST, and thought it might be good to start with a PHP REST client such as Httpful. I just can't seem to get it to work. I downloaded the httpful.phar file and placed it in my working directory, then created a simple PHP file with the following contents, taken from an example on their site:
<?php
// Point to where you downloaded the phar
include('httpful.phar');
$uri = "https://www.googleapis.com/freebase/v1/mqlread?query=%7B%22type%22:%22/music/artist%22%2C%22name%22:%22The%20Dead%20Weather%22%2C%22album%22:%5B%5D%7D";
$response = Request::get($uri)->send();
echo 'The Dead Weather has ' . count($response->body->result->album) . " albums.\n";
?>
I've tried multiple examples on the site, but only get a blank page when I load it in my browser.
Thanks for any help you can give!
This library uses namespaces. Either use the fully qualified class name or import the class with a use statement.
With the fully qualified class name:
\Httpful\Request::get($uri)->send();
With a use statement:
use Httpful\Request;
Request::get($uri)->send();
Sadly, the sample code on the website is very incomplete, but you can get the hint from the sample under "INSTALL OPTION 1: PHAR" or from the actual source code inside the phar.
http://phphttpclient.com/
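Putting that together, the script from the question would look something like this minimal sketch (it simply adds the use statement; the Freebase endpoint it calls has since been retired, so treat the URI purely as the example it was):
<?php
// Import the namespaced class so the short name "Request" resolves.
use Httpful\Request;

// Point to where you downloaded the phar.
include('httpful.phar');

$uri = "https://www.googleapis.com/freebase/v1/mqlread?query=%7B%22type%22:%22/music/artist%22%2C%22name%22:%22The%20Dead%20Weather%22%2C%22album%22:%5B%5D%7D";

$response = Request::get($uri)->send();
echo 'The Dead Weather has ' . count($response->body->result->album) . " albums.\n";
?>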
I have a MediaWiki installation, and I'm writing a custom script that reads some database entries and produces custom output for the client.
However, the text is in wiki format, and I need to convert it to HTML. Is there some PHP API I could call -- well, there must be, but what exactly, and how?
Which files do I need to include, and what do I call?
You use the global object $wgParser to do this:
<?php
// Bootstrap MediaWiki so that $wgParser, Title and ParserOptions are available.
require(dirname(__FILE__) . '/includes/WebStart.php');

$output = $wgParser->parse(
    "some ''wikitext''",
    Title::newFromText('Some page title'),
    new ParserOptions()
);

echo $output->getText();
?>
Although I have no idea whether doing it this way is a good practice, or whether there is some better way.
All I found is dumpHTML.php, which will dump your whole MediaWiki; or, maybe better, the API:Parsing wikitext page, which says:
If you are interested in simply getting the rendered content of a
page, you can bypass the api and simply add action=render to your url,
like so: /w/index.php?title=API:Parsing_wikitext&action=render
Once you add action=render, it seems you can get the HTML page; don't you think?
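As a rough illustration of that approach (the wiki base URL and page title below are placeholders for your own installation):
<?php
// Fetch the rendered HTML of a page by appending action=render to its URL.
$wiki  = 'https://www.mediawiki.org/w/index.php';   // placeholder base URL
$title = 'API:Parsing_wikitext';                    // placeholder page title

$html = file_get_contents($wiki . '?title=' . rawurlencode($title) . '&action=render');
echo $html;
?>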
Hope this helps.
Regards.
I am trying to develop a content grabber using PHP cURL. I need to retrieve content from a URL, e.g. http://mashable.com/2011/10/31/google-reader-backlash-sharebros-petition/, and store it in a CSV file. For example, if I enter a URL to extract data from, it should store the title, content and tags in the CSV, and then do the same for the next URL. Is there any snippet like that?
The following code dumps all of the content; I need to specifically pull out the title and the content of the post:
<?php
$homepage = file_get_contents('http://mashable.com/2011/10/28/occupy-wall-street-donations/');
echo strip_tags($homepage);
?>
There are many ways. Essentially, you want to parse an HTML file. strip_tags is one way, but a dirty one.
I recommend using the DOMDocument class for this (there are plenty of other approaches covered here on stackoverflow.com). The rest is standard PHP; writing to and reading from a CSV file is well documented on php.net.
Example for getting the links on a website (not by me):
http://php.net/manual/en/class.domdocument.php#95894
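A rough sketch of that approach: it takes the title from the <title> tag, the tags from a meta keywords element, and falls back to strip_tags for the body text, then appends one row per URL with fputcsv. The selectors are only assumptions; a real site will usually need site-specific XPath queries.
<?php
$url  = 'http://mashable.com/2011/10/28/occupy-wall-street-donations/';
$html = file_get_contents($url);

$doc = new DOMDocument();
libxml_use_internal_errors(true);   // real-world HTML is rarely well formed
$doc->loadHTML($html);
libxml_clear_errors();

$xpath = new DOMXPath($doc);

$titleNode = $doc->getElementsByTagName('title')->item(0);
$title     = $titleNode ? trim($titleNode->nodeValue) : '';

// Assumes the tags are exposed as <meta name="keywords">; not every site does this.
$tags = $xpath->evaluate('string(//meta[@name="keywords"]/@content)');

// Crude fallback for the article text; replace with a site-specific XPath query.
$content = trim(strip_tags($html));

// Append one row per URL to the CSV.
$fh = fopen('articles.csv', 'a');
fputcsv($fh, array($url, $title, $tags, $content));
fclose($fh);
?>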
I'm getting the following error using the PHP client on my server (connecting via FBML). I've included the appropriate PHP files (facebook, etc.):
Fatal error:
Call to undefined method FacebookRestClient::feed_publishUserAction()
in ..../index.php on line 50
I'm trying to use the example given.
Any ideas?
You might want to take a quick browse/grep through your Facebook API files (facebookapi_php5_restlib.php) and make sure that the feed_publishUserAction() method exists. Perhaps you're using an older version of the API library?
OMG, I found the answer.
The facebookapi_php5_restlib.php that facebook.com provided you is a piece of outdated shit; i.e. you won't be able to find the word feed_publishUserAction anywhere in that facebookapi_php5_restlib.php file.
HOWEVER, the official Facebook smiley demo from this Facebook wiki page contains a more complete facebookapi_php5_restlib.php, along with the feed_publishUserAction function.
Which finger would you like to show to the Facebook developer staff?