I want to read the RSS feed for my videos from Vimeo using a PHP script, so I can update a database table with the video data. However, even though I can read the feed in my browser, Vimeo returns a 500 error.
I looked at their API documentation, which is about as clear as mud and wants me to create an API. I don't want an API for visitors to use; I just want to read a feed (which would normally be simple using cURL).
I haven't found any help at all in their documentation; hopefully someone can make it a bit clearer for me.
Thanks in advance.
I finally managed to resolve this. Feeling rather dumb now: the content of the cURLed RSS feed from Vimeo did not display on screen when echoed, because the browser tried to render the XML tags. The content was returned, though; I needed to view the page source to see it.
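For reference, here is a minimal sketch of that fix, assuming a standard RSS 2.0 feed. The `$rss` sample below is made up, and the live-fetch part is shown only as comments; the feed URL and User-Agent string there are assumptions, not anything Vimeo documents.

```php
<?php
// Hypothetical sample standing in for Vimeo's RSS output (standard RSS 2.0).
$rss = <<<XML
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>My Videos</title>
    <item>
      <title>First video</title>
      <link>https://vimeo.com/123456789</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
    </item>
  </channel>
</rss>
XML;

// In real use, fetch the feed with cURL instead (untested sketch; some
// servers answer 500 or 403 to requests without a User-Agent header):
// $ch = curl_init('https://vimeo.com/USERNAME/videos/rss');
// curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; FeedReader/1.0)');
// $rss = curl_exec($ch);
// curl_close($ch);

// Escape the raw XML before echoing it while debugging; otherwise the
// browser swallows the tags and the page looks blank:
// echo '<pre>' . htmlspecialchars($rss) . '</pre>';

// Parse the feed into rows ready for a database insert.
$feed = simplexml_load_string($rss);
$videos = [];
foreach ($feed->channel->item as $item) {
    $videos[] = [
        'title' => (string) $item->title,
        'link'  => (string) $item->link,
        'date'  => (string) $item->pubDate,
    ];
}
```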
Related
I read some RSS feeds, and as the text they only give a short piece of the original message.
I tried taking the URL of the original article and fetching its content.
But I don't know how to extract the relevant text part of the page,
without the menu, advertising, footer text, etc.
Does anybody have an idea how I can do this better?
Maybe there is a PHP library that does this well?
Facebook, for example, does a good job here: when I post a link to one of my websites (without any Facebook-specific code), it detects the relevant text part automatically.
I hope someone can help me a little.
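The usual heuristic can be sketched in plain PHP with DOMDocument: strip the obvious chrome (nav, header, footer, scripts), then keep the container holding the most paragraph text. This is only a rough sketch of the idea; real readability-style libraries score candidate elements much more carefully.

```php
<?php
// Rough "readability" heuristic: drop non-content elements, then pick the
// <div> containing the most paragraph text (falling back to <body>).
function extract_main_text(string $html): string
{
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // tolerate real-world, invalid HTML
    $doc->loadHTML($html);
    libxml_clear_errors();

    $xpath = new DOMXPath($doc);
    $junk = $xpath->query('//script|//style|//nav|//header|//footer|//aside');
    foreach (iterator_to_array($junk) as $node) {
        $node->parentNode->removeChild($node);
    }

    $best = $doc->getElementsByTagName('body')->item(0);
    $bestLen = 0;
    foreach ($doc->getElementsByTagName('div') as $div) {
        $len = 0;
        foreach ($div->getElementsByTagName('p') as $p) {
            $len += strlen(trim($p->textContent));
        }
        if ($len > $bestLen) {
            $bestLen = $len;
            $best = $div;
        }
    }
    return trim(preg_replace('/\s+/', ' ', $best->textContent));
}
```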
Basically I am trying to implement the YouTube spfjs (https://youtube.github.io/spfjs/documentation/start/) dynamic navigation with a progress bar.
I need a PHP/jQuery version of this Ruby solution: http://lorefnon.me/2015/11/26/boost-your-content-focussed-site-with-structured-page-fragments.html
I'm having issues handling the ?spf=navigate request and returning the JSON responses; my JSON file's contents aren't loading over the initial page contents.
I've had a look through all the online docs, but there is really limited information out there at the moment. Hopefully this will help quite a few people, as it seems an awesome framework. Thanks
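For what it's worth, the server side of this can be sketched in a few lines of PHP. SPF appends ?spf=navigate to requests made from intercepted links, and expects a JSON response whose "title" and "body" fields describe the fragments to swap in. The fragment ID 'content' below is an assumption; it must match an element ID in your own page template.

```php
<?php
// Build an SPF JSON response: "title" updates document.title, and "body"
// maps element IDs on the page to their replacement HTML.
function build_spf_response(string $title, array $fragments): string
{
    return json_encode([
        'title' => $title,
        'body'  => $fragments,
    ]);
}

// SPF marks its navigation requests with ?spf=navigate; answer those with
// JSON fragments, and everything else with the normal full page.
if (isset($_GET['spf']) && $_GET['spf'] === 'navigate') {
    header('Content-Type: application/json');
    echo build_spf_response('About Us', [
        'content' => '<h1>About</h1><p>Fragment swapped in by SPF.</p>',
    ]);
    exit;
}
// ...otherwise render the full HTML page as usual.
```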
I'm looking for examples of using the Reddit API. I want to pull images from a certain subreddit (http://www.reddit.com/r/VillagePorn) and put them on a webpage. I've seen other websites do it (mainly imgur.com/r/*) and I can't figure out how.
I tried http://www.reddit.com/r/VillagePorn/.xml, but that just returns the thumbnail of each picture, not even the link itself.
What should I do?
You can check out the Reddit API if you'd like. You can append /.json to any Reddit link, and it fetches the information for that page, including the source link for each picture.
I'm not sure how you're creating this page, but this question might help you out too.
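To make the /.json approach concrete, here is a sketch of filtering a listing down to direct image links. The `$json` sample below is a hand-abbreviated imitation of Reddit's real listing shape (`data.children[].data.url`); in live use you would first fetch https://www.reddit.com/r/VillagePorn/.json with cURL and a descriptive User-Agent.

```php
<?php
// Abbreviated stand-in for a subreddit listing fetched from /.json.
$json = <<<JSON
{"data": {"children": [
  {"data": {"title": "A scenic village", "url": "https://i.imgur.com/abc123.jpg"}},
  {"data": {"title": "Discussion", "url": "https://www.reddit.com/r/VillagePorn/comments/xyz/"}}
]}}
JSON;

$listing = json_decode($json, true);
$images = [];
foreach ($listing['data']['children'] as $child) {
    $url = $child['data']['url'];
    // Keep only direct image links; self posts and comment pages fall through.
    if (preg_match('/\.(jpe?g|png|gif)$/i', $url)) {
        $images[] = $url;
    }
}
```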
I'm building a website and am looking for a way to implement a certain feature that Facebook has. The feature I am looking for is the link inspector. I'm not sure that is what it is actually called, for that matter. It's best I give you an example so you know exactly what I am looking for.
When you post a link on Facebook, for example a link to a YouTube video (or any other website for that matter), Facebook automatically inspects the page it leads to and imports information like the page title, the favicon, and some other images, and then adds them to your post as a way of giving (what I think is) a brief preview of the page to anyone reading the post.
I already have a feature that allows users to share a link (or URL). What I want is to do something useful with the URL: to display something other than just a plain link to a webpage, and to give someone viewing a shared link (in the form of a post) some useful insight into the page the URL leads to.
What I'm looking for is a script, or a tutorial, or at the very least someone to point me in the right direction, to help me accomplish this (preferably using PHP).
I've tried googling it, but I don't know exactly what such a feature would be called, and Google isn't helpful when you don't know exactly what you're looking for.
I figure someone out there, in this vast knowledge basket called Stack Overflow, can help me with this. Can anyone help?
You would first scan the post for URLs using a regex, then parse the pages those links reference with a PHP DOMDocument. You can use the parsed document to obtain any information you need from the webpage.
DOMDocument:
http://php.net/manual/en/class.domdocument.php
DOMDocument->load (loads a file, aka a webpage):
http://php.net/manual/en/domdocument.load.php
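Putting those two manual pages together, the inspection step might look like the sketch below. The og:image fallback is an extra assumption on my part, but sites that care about previews usually declare one, and Facebook itself prefers Open Graph tags.

```php
<?php
// Facebook-style link inspection: given a page's HTML, pull out the
// <title>, any Open Graph image, and candidate <img> thumbnails.
function inspect_page(string $html): array
{
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);   // tolerate real-world, invalid HTML
    $doc->loadHTML($html);
    libxml_clear_errors();

    $titleNode = $doc->getElementsByTagName('title')->item(0);
    $title = $titleNode ? trim($titleNode->textContent) : '';

    // Prefer an explicitly declared preview image, if the site provides one.
    $ogImage = '';
    foreach ($doc->getElementsByTagName('meta') as $meta) {
        if ($meta->getAttribute('property') === 'og:image') {
            $ogImage = $meta->getAttribute('content');
            break;
        }
    }

    // Otherwise fall back to the images embedded in the page.
    $images = [];
    foreach ($doc->getElementsByTagName('img') as $img) {
        $images[] = $img->getAttribute('src');
    }

    return ['title' => $title, 'og_image' => $ogImage, 'images' => $images];
}
```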
The link goes through http://www.facebook.com/l.php; you pass a URL to it and Facebook filters it.
Can someone tell me what happens when I enter a link into the Facebook status update form and it loads up a mini info preview of the website? (I'm guessing it's RSS or something?)
How do I implement this on my site using PHP?
What do I need to learn to be able to implement that?
It scrapes the page you are linking to; it doesn't have anything to do with RSS.
By looking at the HTML of the page, it can get the page title for you and find all the images that could be used as a thumbnail.
Take a look at HTTP requests or cURL in the PHP manual for ways to get webpage content.
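As a starting point, here is a hedged sketch of the fetching half; the User-Agent string and options are just reasonable defaults, not anything Facebook-specific.

```php
<?php
// Fetch a webpage body with cURL; returns null on network or HTTP errors.
function fetch_url(string $url): ?string
{
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,  // return the body instead of printing it
        CURLOPT_FOLLOWLOCATION => true,  // follow redirects
        CURLOPT_TIMEOUT        => 10,
        CURLOPT_USERAGENT      => 'Mozilla/5.0 (compatible; LinkPreview/1.0)',
    ]);
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return ($body !== false && $status < 400) ? $body : null;
}

// $html = fetch_url('https://example.com/');
// Then parse $html with DOMDocument to pull out the title and images.
```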