Using the Vimeo API & caching responses in MODX - PHP

Our site uses the Vimeo PHP library (https://github.com/vimeo/vimeo.php).
Currently I'm calling the library within snippets, e.g.:
require_once("____/autoload.php");
$vimeo = new \Vimeo\Vimeo(____AuthKeys, etc.___);
...
$videos = $vimeo->request('/me/albums/____')['body']['data'];
...
But this means far more calls to the API than necessary ... right?
Vimeo recommends caching the response, but I'm not sure how to do that in MODX.
I'm guessing the first 3 lines only need to be run once, then cached ... until we make changes to our Vimeo account (add videos, albums, etc.).
What's the best way to accomplish this?
The only part that changes from snippet to snippet is the $vimeo->request... portion ... is there a way to avoid repeating everything else at the start of our snippets?

You can use getCache to cache the complete output for a longer period of time, but if you want to cache data inside your snippet, you can use the modCacheManager for that.
For example, that might look like this:
require_once("____/autoload.php");
$vimeo = new \Vimeo\Vimeo(____AuthKeys, etc.___);
...
$cacheManager = $modx->getCacheManager();
$videos = $cacheManager->get('vimeo_videos');
if (empty($videos)) {
    $videos = $vimeo->request('/me/albums/____')['body']['data'];
    $cacheManager->set('vimeo_videos', $videos, 3600);
}
// Process $videos further
That will cache the data for one hour (note the 3600 in the set call).
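If you add videos or albums and don't want to wait for the lifetime to expire, you should be able to remove the cached entry manually; a minimal sketch, assuming the same 'vimeo_videos' key as above:
$modx->getCacheManager()->delete('vimeo_videos'); // forces the next request to refetch from Vimeo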

Related

How to display Now Playing on my website with Icecast2 and Liquidsoap

I couldn't find a better answer to this online; even the Liquidsoap documentation isn't helpful. What I want is to grab the current song title and artist being played on my streaming server (Icecast). I found a forum where people said they were able to do it, but they didn't explain how. Here's the Liquidsoap script they used:
def apply_metadata(m) =
  title = m["title"]
  artist = m["artist"]
  album = m["album"]
  [("artist","#{artist}"),("title","#{album} - #{title}")]
end
centovacast.callback_autodj := fun(s) -> map_metadata(apply_metadata,s)
This script, I believe, is also for Centova and AutoDJ only, and I don't use those technologies (I'm using Ubuntu 16.04, Icecast2, Liquidsoap, PHP, HTML5/CSS).
Is this possible to do using the tools I'm currently using?
I used to use websockets to get the metadata but I found it frustrating that it was always out of sync.
The only way to solve it is to have the metadata encoded into the stream so you receive it at the same time as the audio.
I did a bit of digging around to find out how the Icecast servers do it, and put together a service worker script which adds the necessary header to your request to obtain a stream including metadata, and then extracts it for you.
The code is here and there is a simple demo here.
I hope this helps. In any case, I think weserWEB has already said something similar here.
If you're using Icecast version 2.4.4, you can consume the metadata from this endpoint (if not, consider upgrading to that version):
http://<your-ipaddress-or-domain>:<port_number>/status-json.xsl
Just open that URL in any browser and you will get JSON with the name of the song and the title you are streaming; you have to configure your streaming client properly first, of course.
Then you can get the metadata from that endpoint without problems in your PHP. You can use cURL to get the JSON data; Liquidsoap isn't necessary.
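A minimal sketch of that cURL approach (the host, port and single-mountpoint handling are assumptions; adjust them to your server):
$ch = curl_init('http://your-server:8000/status-json.xsl');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$json = curl_exec($ch);
curl_close($ch);
$status = json_decode($json, true);
// With one mountpoint "source" is a single object; with several it is a list.
$source = $status['icestats']['source'];
if (isset($source[0])) {
    $source = $source[0];
}
echo isset($source['title']) ? $source['title'] : 'No metadata';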
I'm not sure why you are dragging the source client into this.
A proper stream sent to an Icecast mountpoint will have metadata for currently playing audio.
This has been pointed out elsewhere. Icecast since 2.4.1 provides a proper JSON metadata export.
Querying JSON from within a website is very much a solved problem and considered an exercise for the inclined reader.
Why don't you grab this from Icecast directly?
PHP:
function get_icecast_info($server_ip, $server_port, $admin_user, $admin_password) {
    $index = @file_get_contents("http://".$admin_user.":".$admin_password."@".$server_ip.":".$server_port."/admin/stats.xml");
    if ($index) {
        $xml = new DOMDocument();
        if (!$xml->loadXML($index)) return false;
        $arr = array();
        $listItem = $xml->getElementsByTagName("source");
        foreach ($listItem as $element) {
            if ($element->childNodes->length) {
                foreach ($element->childNodes as $i) {
                    $arr[$element->getAttribute("mount")][$i->nodeName] = $i->nodeValue;
                }
            }
        }
        return $arr;
    }
    return false;
}
And this is the output (array):
$arr = get_icecast_info($ice_host, $ice_aport, $ice_user, $ice_pass);
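The array is keyed by mountpoint, so reading the currently playing title might look like this (assuming a mountpoint called /stream and a source client that sends title metadata):
if ($arr !== false && isset($arr['/stream']['title'])) {
    echo $arr['/stream']['title']; // currently playing title
}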

How can I do multithreading in PHP

I am trying to rewrite my code to support multithreading. It is simple code, but I can't figure out how to do it. Basically, what it does is:
request the first webpage with curl --> to get a unique id
use the unique id to request another page --> to get a session
use the session to request another page ---> sleep(), then do it again
That is what a single thread does, but I want to create a lot of threads at the same time.
What I did is create 3 separate files.
The first one creates 10 sessions and saves them in a txt file with other parameters (session1|unique_id1|parameter1|anotherparameter1).
The second file contains this code:
$sessions = file('sessions.txt');
$WshShell = new COM("WScript.Shell");
foreach ($sessions as $kk => $session) {
    if (!empty($session)) {
        $oExec = $WshShell->Run("php requests.php $kk", 0, false);
    }
}
It opens the txt file, and for each line it launches the requests file with the line number in argv.
The third file takes the line number, opens the sessions file, retrieves the parameters for that session and sends requests with that session.
So this is how I did my multithreading, but I feel like I wrote PHP code with rocks.
Now I want to rewrite it without having to open 10 separate PHP processes.
There really isn't a native way to do threading in PHP. The approach you took works, but I would approach it differently. It's possible to fork processes in PHP; I've done this and it works well.
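For example, a minimal forking sketch using the pcntl extension might look like this (a sketch only: pcntl is POSIX-only, so it won't work on Windows, and do_requests() stands in for your own cURL/sleep loop):
$sessions = file('sessions.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$children = array();
foreach ($sessions as $kk => $session) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("Could not fork\n");
    } elseif ($pid === 0) {
        // Child process: run the request loop for this session, then exit.
        do_requests($session); // placeholder for your own logic
        exit(0);
    }
    // Parent process: remember the child PID and keep forking.
    $children[] = $pid;
}
// Wait for all children to finish.
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}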
One approach is to use some messaging system like RabbitMQ and distribute the work that way. Basically an Actor or Pub-sub model.
Another approach that might work well for you would be "pthreads". http://php.net/manual/en/book.pthreads.php
I've not tried this method myself so I cannot give you details as to how well it does or doesn't work.
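Based on the manual, an untested sketch of the pthreads approach might look something like this (it requires a thread-safe (ZTS) PHP build with the pthreads extension; SessionWorker and the request logic are placeholders):
class SessionWorker extends Thread {
    private $session;
    public function __construct($session) {
        $this->session = $session;
    }
    public function run() {
        // Do the cURL requests for $this->session here.
    }
}
$threads = array();
foreach (file('sessions.txt') as $session) {
    $t = new SessionWorker(trim($session));
    $t->start();
    $threads[] = $t;
}
foreach ($threads as $t) {
    $t->join();
}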
Hope this helps!

Defining good caching scenarios

I need to know if I can improve the way I cache my API calls from inside my CodeIgniter app. The way I do it right now, in an HMVC pattern, is like this:
Controller HOME == calls to => module app/application/modules/apis/controllers/c_$api == loads library => app/application/libraries/$api ==> the library returns the response to the module's controller_X, and the controller invokes the view with the data it has
//Note: My app does not use the Twitter API, but others
Inside the apis module is where all the APC caching happens, like so:
// Load up drivers
$this->load->library('driver');
$this->load->driver('cache', array('adapter' => 'apc'));
// Get Tweets from Cache
$tweets = $this->cache->get('my_tweets');
if ( ! $tweets)
{
// No tweets in the cache, so get new ones.
$url = 'http://api.twitter.com/1/statuses/user_timeline.json?screen_name=gaker&count=5';
$tweets = json_decode(file_get_contents($url));
$this->cache->save('my_tweets',$tweets, 300);
}
return $tweets;
as explained in this article: http://www.gregaker.net/2011/feb/12/codeigniter-reactors-caching-drivers/
So I was wondering:
Having 3 scenarios (home, query, result) in each apis module controller, do you think it would be a good idea to implement caching in each controller for all the scenarios? Example:
//for each api1, api2 ... apiX, apply this:
//home
$this->cache->save('api_home',$api_home, 300);
//query
$this->cache->save("api_$query", $api_{$query}, 300); // I don't know for sure if $api_{$query} works or not, so don't hang me because I haven't tried it.
//result
$this->cache->save("api_$queryId", $api_{$queryId}, 300);
Even though I cached the API call, do you think I should also cache the result in the controller that is calling the API module controller, with the same 3 scenarios (home, query and result)? Like so:
//modules/{home,fetch,article}/controllers/{home,fetch,article}.php
//home
$homeData['latest'][$api] = modules::run("apis/c_$api/data", array('action'=>'topRated'));
$this->cache->save('home_data', $home_data, 300);
//query
$searchResults[$api] = modules::run("apis/c_$api/data", $parameters);
$this->cache->save("search_results_$query", $search_results_{$query}, 300);
//article page
$result = modules::run("apis/c_$api/data", $parameters);
$this->cache->save("$api_article_$id", ${$api}_article_{$id}, 300);
So, what do you think? Is the practice mentioned above a good one, or just an awful, stupid one?
//Note, the suggested caching ideas were not tested... so, I don't know if ${$api}_article_{$id} will work or not (even though I suppose it will)
IMHO it is a good idea to cache API results if you don't need real-time data. If you don't care that you won't see new data for an hour, then by all means cache it for an hour. So for your first question, you just need to ask yourself: "How fresh does the content need to be for my application?" and implement caching accordingly.
For the second question: I don't see a lot of value in caching content if it's only been manipulated in simple ways. At that point you're using up space in your cache and not getting a lot of value. But if there are database, or other api calls being made using that data, then yes they should be cached using a technique similar to the above.
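For example, caching the composed search result could follow the same get/save pattern as above (a sketch only; the cache key, TTL and the extra processing are placeholders):
$cacheKey = 'search_results_' . md5($query);
$searchResults = $this->cache->get($cacheKey);
if ( ! $searchResults)
{
    $searchResults = array();
    // Expensive part: the API call plus any database work done with its data.
    $searchResults[$api] = modules::run("apis/c_$api/data", $parameters);
    $this->cache->save($cacheKey, $searchResults, 300);
}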
If you're that worried about processor load (the only reason to cache content after manipulation), your best bet is to look at something like Varnish or CloudFront.

PHP Parsing with simple_html_dom, please check

I made a simple parser to save all images per page with simple_html_dom and a GetImage class, but I had to make a loop inside the loop in order to go page by page, and I think something in my code is just not optimized, as it is very slow and always times out or exceeds memory. Could someone have a quick look at the code and maybe spot something really stupid that I did?
Here is the code, without the libraries included...
$pageNumbers = array(); // Array to hold number of pages to parse
$url = 'http://sitename/category/'; // target url
$html = file_get_html($url);
// Detect the paginator class and push into an array to find out how many pages to parse
foreach ($html->find('td.nav .str') as $pn) {
    array_push($pageNumbers, $pn->innertext);
}
// initializing the get image class
$image = new GetImage;
$image->save_to = $pfolder.'/'; // save to folder, value from post request.
// Start reading the pages array and parsing all images per page.
foreach ($pageNumbers as $ppp) {
    $target_url = 'http://sitename.com/category/'.$ppp; // Here I construct a page URL from the array to parse.
    $target_html = file_get_html($target_url); // Reading the page html to find all images inside next.
    // Final loop to find and save each image per page.
    foreach ($target_html->find('img.clipart') as $element) {
        $image->source = url_to_absolute($target_url, $element->src);
        $get = $image->download('curl'); // using cURL
        echo 'saved '.url_to_absolute($target_url, $element->src).'<br />';
    }
}
Thank you.
I suggest making a function to do the actual simple_html_dom processing.
I usually use the following 'template'... note the 'clean up memory' section.
Apparently there is a memory leak in PHP 5... at least I read that someplace.
function scraping_page($iUrl)
{
    // create HTML DOM
    $html = file_get_html($iUrl);
    // get text elements
    $aObj = $html->find('img');
    // do something with the element objects
    // clean up memory (prevent memory leaks in PHP 5)
    $html->clear(); // **** very important ****
    unset($html);   // **** very important ****
    return; // also can return something: array, string, whatever
}
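Applied to your loop, that might look roughly like this (a sketch based on your own code; the clear()/unset() after each page is the important part):
foreach ($pageNumbers as $ppp) {
    $target_url = 'http://sitename.com/category/'.$ppp;
    $target_html = file_get_html($target_url);
    foreach ($target_html->find('img.clipart') as $element) {
        $image->source = url_to_absolute($target_url, $element->src);
        $image->download('curl');
    }
    // Free the DOM before moving on to the next page.
    $target_html->clear();
    unset($target_html);
}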
Hope that helps.
You are doing quite a lot here, I'm not surprised the script times out. You download multiple web pages, parse them, find images in them, and then download those images... how many pages, and how many images per page? Unless we're talking very small numbers then this is to be expected.
I'm not sure what your question really is, given that, but I'm assuming it's "how do I make this work?". You have a few options; it really depends what this is for. If it's a one-off hack to scrape some sites, ramp up the memory and time limits, maybe chunk up the work a little, and next time write it in something more suitable ;)
If this is something that happens server-side, it should probably be happening asynchronously to user interaction - i.e. rather than the user requesting some page, which has to do all this before returning, this should happen in the background. It wouldn't even have to be PHP, you could have a script running in any language that gets passed things to scrape and does it.

RSS generator with caching function

Do you happen to know any good RSS generator script with a caching function? All the scripts I have found over the net so far don't support caching! I need the content of the RSS to be generated automatically from the database at a specified interval.
Thanks in advance
First, to add caching to the script, it seems like it wouldn't be too hard to put Zend_Feed and Zend_Cache together - or just wrap your current generation script with Zend_Cache.
Just set up the cache with your lifetime:
$frontendOptions = array(
    'lifetime' => 7200, // cache lifetime of 2 hours
    'automatic_serialization' => true
);
// Backend and cache_dir shown as an example; adjust to your setup.
$backendOptions = array('cache_dir' => '/tmp/');
$cache = Zend_Cache::factory('Core', 'File', $frontendOptions, $backendOptions);
Then check if the cache is still valid:
if (!($feed = $cache->load('myfeed'))) {
    // generate feed
    $cache->save($feed, 'myfeed');
}
//output $feed
I don't know how you form your RSS, but you can import an array structure to Zend_Feed:
$rssFeedFromArray = Zend_Feed::importArray($array, 'rss');
Of course the best way may be to just use your current feed generator and save the output to a file. Use that file as the RSS feed, then use cron/web hooks/queue/whatever to generate the static file. That would be simpler, and use less resources, than having the generation script do the caching.
//feedGen.php
//may require some output buffering if the feed generator outputs directly
$output = $myFeedGenerator->output();
file_put_contents('feed.rss', $output);
Now the feed link is /feed.rss, and you just run feedGen.php whenever it needs to be refreshed. Serving the static file (not even parsed by PHP) means less work for your server.
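For example, a crontab entry that rebuilds the feed every hour might look like this (the path is a placeholder):
0 * * * * php /path/to/feedGen.php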
