I wrote a home-made PHP script to fetch the latest Facebook posts from a page. It is simply executed from jQuery/AJAX code when my document is ready:
$("my_object").load("fb_feed.php");
Whether it runs on a local PHP server or on a VPS, the Time To First Byte is very long: at least 10s, 20s on average, sometimes more.
ini_set("allow_url_fopen", 1);

function getGraphUrl($param){
    return "https://graph.facebook.com/" . $param . "access_token=THE_TOKEN";
}

function getContent($param){
    $url = getGraphUrl($param);
    return json_decode(file_get_contents($url));
}

$feed = getContent("ID_OF_THE_PAGE/feed?");
$posts = array();

foreach ($feed->data as $i => $post) {
    if(isset($post->message)){
        // one extra Graph API call per post, just to fetch the object_id
        $object_id = getContent($post->id . "?fields=object_id&")->object_id;
        if(isset($object_id)){
            $img_url = "https://graph.facebook.com/" . $object_id . "/picture";
            // getimagesize() fetches the image over HTTP: another request per post
            $short = (getimagesize($img_url)[1] < 170 ? true : false);
            if($short){
                $img_url = "images/empty.png";
            }
            array_push($posts, array('id' => $post->id, 'message' => $post->message,
                'img_url' => $img_url, 'date' => $post->created_time));
        }
    }
    if(count($posts) == 6){
        break;
    }
}
The extract of the PHP script which loads the posts 👆
Everything works; the only problem is the loading time, which is very, very long.
Is there a way to load faster, rather than building my own cache?
Or is my request poorly made?
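For what it's worth, most of the wait seems to come from the extra Graph API call plus the getimagesize() HTTP request made for every single post. Below is a minimal sketch of fetching the same data in one request via the fields parameter; the exact field names (full_picture in particular) are an assumption to verify against the Graph API version in use, and the 170 px height check is left out:

// Hedged sketch: one Graph API request instead of one extra call (plus getimagesize)
// per post. Field names are assumptions to check against the API version in use.
$feed = getContent("ID_OF_THE_PAGE/feed?fields=message,created_time,full_picture&limit=20&");

$posts = array();
foreach ($feed->data as $post) {
    if (isset($post->message)) {
        $posts[] = array(
            'id'      => $post->id,
            'message' => $post->message,
            'img_url' => isset($post->full_picture) ? $post->full_picture : "images/empty.png",
            'date'    => $post->created_time,
        );
        if (count($posts) == 6) {
            break;
        }
    }
}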
I have created this PHP script that prints out the specified number of latest YouTube videos from a YouTube channel. I'm setting maxResults to 10, but it returns hundreds of results. Please go through the code and help.
<?php
$API_Url = 'https://www.googleapis.com/youtube/v3/';
$API_Key = '...';
$channelId = 'UCX6OQ3DkcsbYNE6H8uQQuVA';

$parameter = [
    'id'   => $channelId,
    'part' => 'contentDetails',
    'key'  => $API_Key
];
$channel_URL = $API_Url . 'channels?' . http_build_query($parameter);
$json_details = json_decode(file_get_contents($channel_URL), true);
$playlist = $json_details['items'][0]['contentDetails']['relatedPlaylists']['uploads'];

$parameter = [
    'part'       => 'snippet',
    'playlistId' => $playlist,
    'maxResults' => 10,
    'key'        => $API_Key
];
$channel_URL = $API_Url . 'playlistItems?' . http_build_query($parameter);
$json_details = json_decode(file_get_contents($channel_URL), true);

$my_videos = [];
foreach($json_details['items'] as $video){
    //$my_videos[] = $video['snippet']['resourceId']['videoId'];
    $my_videos[] = array( 'v_id'=>$video['snippet']['resourceId']['videoId'], 'v_name'=>$video['snippet']['title'] );
}

while(isset($json_details['nextPageToken'])){
    $nxt_page_URL = $channel_URL . '&pageToken=' . $json_details['nextPageToken'];
    $json_details = json_decode(file_get_contents($nxt_page_URL), true);
    foreach($json_details['items'] as $video)
        $my_videos[] = $video['snippet']['resourceId']['videoId'];
}

print_r($my_videos);

//foreach($my_videos as $video){
//    if(isset($video)){
//        echo '<a href="https://www.youtube.com/watch?v='. $video['v_id'] .'">
//            <div>'. $video['v_name'] .'</div>
//            </a><br><br><br>';
//    }
//}
And the extra results it returns don't even have a title or an id.
The reason you get so many items is the while loop after the first request: it keeps calling the YouTube API for the next page of data until nextPageToken no longer appears in the response payload, so you can end up making anything from one to tens or hundreds of requests, depending on how many pages of data are available, each bringing back up to 10 additional items.
Each individual request returns 10 items at a time as you specified, but the cumulative total is obviously a lot more than that. Since you're only interested in the first 10, simply removing the while block stops the code from making any further requests after the first one, so no extra items are added to the array.
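A minimal sketch of the trimmed version, reusing the variables from the question ($API_Url, $API_Key and $playlist set exactly as above); the while/nextPageToken block is simply gone, so only the first page of up to 10 items is fetched:

// Assumes $API_Url, $API_Key and $playlist are already set as in the question.
$parameter = [
    'part'       => 'snippet',
    'playlistId' => $playlist,
    'maxResults' => 10,
    'key'        => $API_Key
];
$channel_URL  = $API_Url . 'playlistItems?' . http_build_query($parameter);
$json_details = json_decode(file_get_contents($channel_URL), true);

$my_videos = [];
foreach ($json_details['items'] as $video) {
    $my_videos[] = [
        'v_id'   => $video['snippet']['resourceId']['videoId'],
        'v_name' => $video['snippet']['title']
    ];
}
// no while/nextPageToken loop: no further pages are requested
print_r($my_videos);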
I am working on a project in PHP with the Google Photos API. I have an issue: if I pass optional parameters such as pageSize, they don't work and I still get all images.
$optParams = array(
    'pageSize' => 1,
);
$response = $photosLibraryClient->listMediaItems($optParams);
foreach ($response->iterateAllElements() as $item) {
    $id = $item->getId();
    $description = $item->getDescription();
    $mimeType = $item->getMimeType();
    $productUrl = $item->getProductUrl();
    $filename = $item->getFilename();
    echo '<br>';
    echo $filename;
}
I'm not 100% sure of this, but it seems iterateAllElements literally iterates over all elements available in the account, ignoring your specified pageSize (and even the default pageSize) by requesting everything from the API without any boundaries.
You can iterate over the returned pages by replacing iterateAllElements with iteratePages, but that also doesn't seem to work properly without an albumId; the API returns irregular page sizes, like the example below:
$optParams = array(
    'pageSize' => 5 // Changed
);
$response = $photosLibraryClient->searchMediaItems($optParams); // Changed the method
foreach ($response->iteratePages() as $key => $page) {
    echo "Page #{$key}<br>";
    foreach ($page as $item) {
        $id = $item->getId();
        $description = $item->getDescription();
        $mimeType = $item->getMimeType();
        $productUrl = $item->getProductUrl();
        $filename = $item->getFilename();
        echo '<br>';
        echo $filename;
    }
}
If the search or list were called without providing an albumId, the example above would return something like this:
[
[{...},{...},{...}],
[{...},{...},{...},{...},{...}],
[{...}],
[],
[{...},{...},{...},{...},{...}],
[]
]
If you find a good solution for this specific problem, please let me know.
P.S.: Their API behavior and its documentation are very weird and confusing.
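One pragmatic workaround, and this is my own assumption rather than anything the library documents: keep listMediaItems but break out of the iteration yourself once you have as many items as you need, so the iterator never gets a chance to request further pages:

// Hedged sketch: stop iterating manually after the desired number of items.
$wanted   = 5;                 // arbitrary example value
$count    = 0;
$response = $photosLibraryClient->listMediaItems(['pageSize' => $wanted]);
foreach ($response->iterateAllElements() as $item) {
    echo $item->getFilename() . '<br>';
    if (++$count >= $wanted) {
        break;                 // don't let the iterator fetch more pages
    }
}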
I have the following problem:
In a function, I put in an array with at least 700 names and get out an array with all information about their releases from the last 10 days.
The function fetches a JSON response via the iTunes API, which I want to use for further analysis.
Problem:
- While executing the function, it takes about 3 minutes to finish.
- The homepage is not reachable for others while I execute it:
(Error on server: (70007) The timeout specified has expired: AH01075: Error dispatching request to : (polling)) --> running out of memory?
Questions:
- How can I code this function more efficiently?
- How can I code it without using too much memory? Should I use unset(...)?
Code:
function getreleases($artists){
    # print_r($artists);
    $releases = array();
    foreach( $artists as $artist){
        $artist = str_replace(" ", "%20", $artist);
        $ituneslink = "https://itunes.apple.com/search?term=".$artist."&media=music&entity=album&limit=2&country=DE";
        $itunesstring = file_get_contents($ituneslink);
        $itunesstring = json_decode($itunesstring);  /* results decoded from JSON */
        if( ($itunesstring->resultCount) > 0 ){
            foreach ( $itunesstring->results as $value){
                if( (date_diff(date_create('now'), date_create($value->releaseDate))->format('%a')) < 10) {
                    #echo '<br>Gefunden: ' . $artist;
                    $releases[] = $value;
                }
            }
        }else{
            echo '<br><span style="color:red">Nicht gefunden bei iTunes: ' . $artist . '</span>';
        }
        unset($ituneslink);
        unset($itunesstring);
        unset($itunesstring2);
    }
    return $releases;
}
The problem lies in the fact that every time that function is executed, your server needs to make 700+ API calls, parse the data, and run your logic on it.
One potential solution is to use WordPress's transients to 'cache' the value (or perhaps even the whole output); this way it won't have to execute that strenuous function on every connection, it will just pull the data from the transient. You can set an expiry date for a transient, so you can have it refetch the information every X days/hours.
Take a look at this article from CSS Tricks that walks you through a simple example using transients.
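A minimal sketch of that idea, assuming the code runs inside WordPress (the transient key name and the 12-hour expiry are arbitrary choices):

// Hedged sketch: cache the whole result of getreleases() in a WordPress transient.
function get_cached_releases($artists) {
    $releases = get_transient('itunes_releases');      // key name is arbitrary
    if (false === $releases) {
        $releases = getreleases($artists);             // the expensive 700+ API calls
        set_transient('itunes_releases', $releases, 12 * HOUR_IN_SECONDS);
    }
    return $releases;
}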
But the problem is not fixed. While updating the data and pulling 700 items from the iTunes API inside the loop, the homepage runs out of memory and is not even reachable from my own computer. I tried a "timeout" or "sleep" so that the script only queries every few seconds, but it doesn't change anything.
One improvement I made: I changed "foreach" to "for" for memory reasons, so variables are not copied. Are there more problems?
I've got two loops in there. Maybe $itunesstring is being copied?
if(!function_exists('get_contents')){
    function get_contents(&$url){
        // if cURL is available, use it...
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HEADER, 0);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10);
        $cache = curl_exec($ch);
        curl_close($ch);
        return $cache;
    }
}

function getfromituneslink(&$link, &$name){
    $name = str_replace("'", "", $name);
    $name = substr($name, 0, 14);
    $result = get_transient("getlink_itunes_{$name}");
    if(false === $result){
        $result = get_contents($link);
        set_transient("getlink_itunes_{$name}", $result, 12*HOUR_IN_SECONDS);
    }
    return $result;
}

function getreleases(&$artists){
    $releases = array();
    while( 0 < count($artists)){
        $itunesstring = array();
        $artist = array_shift($artists);
        $artist = str_replace(" ", "%20", $artist);
        $ituneslink = "https://itunes.apple.com/search?term=".$artist."&media=music&entity=album&limit=2&country=DE";
        $itunesstring = getfromituneslink($ituneslink, $artist);
        unset($ituneslink);
        $itunesstring = json_decode($itunesstring);
        if( ($itunesstring->resultCount) > 0 ){
            #for($i=0; $i < (count($itunesstring->results))-1; ++$i)
            while( 0 < count($itunesstring->results)){
                $value = array_shift($itunesstring->results);
                #$value = &$itunesstring[results][$i];
                #foreach ( $itunesstring->results as $value)
                if( (date_diff(date_create('now'), date_create($value->releaseDate))->format('%a')) < 6) {
                    $releases[] = array($value->artistName, $value->collectionName, $value->releaseDate, str_replace("?uo=4", "", $value->collectionViewUrl));
                    unset($value);
                }
            }
        }else{
            echo '<br><span style="color:red">Nicht gefunden bei iTunes: ' . str_replace("%20", " ", $artist) . '</span>';
        }
        unset($ituneslink);
        unset($itunesstring);
    }
    return $releases;
}
I don't know where the problem is. :-(
Is there any other possibility to have the function fetch the information one item at a time?
I'm on the free tier of AWS, using the Laravel framework and the Facebook v2.5 SDK (Web). I'm trying to get the latest 10 posts from Facebook for approximately 600 users, which would be 6000 posts at most. Every time I run the query it gets through about 10 iterations, then the app completely crashes and goes offline, returning after a few minutes. Laravel isn't showing me any errors.
My code is:
/**
 * Get facebook users posts
 * @return \SammyK\LaravelFacebookSdk\LaravelFacebookSdk;
 */
public function posts(\SammyK\LaravelFacebookSdk\LaravelFacebookSdk $fb)
{
    // get posts
    $profiles_to_get = DB::table('facebook_profiles')->distinct('username')->get();
    $fb_admin_profile = DB::table('profiles')->where('social_media_type', "facebook")->first();
    $admin_fb_access_token = $fb_admin_profile->oauth_token;

    foreach ($profiles_to_get as $profile_to_get) {
        try {
            $response = $fb->get('/'.$profile_to_get->username.'?fields=posts.limit(10)', $admin_fb_access_token);
            $userNode = $response->getGraphUser();
            $posts = json_decode($userNode['posts']);

            foreach ($posts as $post)
            {
                isset($post->message) ? $fb_posts[] = array('account_id' => $profile_to_get->id,
                    'facebook_id' => $userNode->getID(),
                    'message_id' => $post->id,
                    'message' => $post->message,
                    'created_time' => $post->created_time->date,
                    'created_at' => Carbon::now(),
                    'updated_at' => Carbon::now(),
                ) : null;

                foreach ($fb_posts as $fb_post)
                {
                    $postDuplicateChecker = DB::table('facebook_posts')->where('message_id', $fb_post['message_id'])->get();
                    if($postDuplicateChecker == !null)
                    {
                        DB::table('facebook_posts')->where('message_id', $fb_post['message_id'])->update($fb_post);
                        $notification = "First notification";
                    }
                    else
                    {
                        DB::table('facebook_posts')->insert( $fb_post );
                        $notification = "Second notification";
                    }
                }

                if ($post > 0 && $post % 10 == 0)
                {
                    sleep(5);
                }
            }
        } catch(\Facebook\Exceptions\FacebookSDKException $e) {
            dd($e->getMessage());
        }
    }
    return Redirect::route('someroute', [ 'notification' => $notification]);
}
I've tried setting the query timeout to 300 so it doesn't time out, and making the loop sleep after every 10 requests to reduce the load. I also have other apps running on the same server, and they never go offline when this app crashes.
My question is: is there any way to optimize the code so that I don't have to upgrade the server, or is upgrading my only choice?
The answer was to batch the query using array_chunk and split the process into smaller pieces that can be handled more easily.
array array_chunk ( array $array , int $size [, bool $preserve_keys = false ] )
Reference: http://php.net/manual/en/function.array-chunk.php
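A minimal sketch of that idea against the code in the question; the batch size of 50 is an arbitrary example, and the ->all() call assumes get() returned a Collection (use the result directly if it is already a plain array):

// Hedged sketch: process the profiles in smaller batches instead of one long run.
$profiles = is_array($profiles_to_get) ? $profiles_to_get : $profiles_to_get->all();
$chunks   = array_chunk($profiles, 50);   // 50 is an arbitrary example batch size

foreach ($chunks as $chunk) {
    foreach ($chunk as $profile_to_get) {
        $response = $fb->get('/'.$profile_to_get->username.'?fields=posts.limit(10)', $admin_fb_access_token);
        // ... same post handling as in the question ...
    }
    sleep(5); // pause between batches so the rest of the app stays responsive
}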
I am trying to download a RapidShare file using its "download" subroutine as a free user. The following is the code I use to get a response from the subroutine.
function rs_download($params)
{
    $url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=download&fileid=".$params['fileid']."&filename=".$params['filename'];
    $reply = @file_get_contents($url);
    if(!$reply)
    {
        return false;
    }
    $result_arr = array();
    $result_keys = array(0=>'hostname', 1=>'dlauth', 2=>'countdown_time', 3=>'md5hex');
    if( preg_match("/DL:(.*)/", $reply, $reply_matches) )
    {
        $reply_altered = $reply_matches[1];
    }
    else
    {
        return false;
    }
    foreach( explode(',', $reply_altered) as $index => $value )
    {
        $result_arr[ $result_keys[$index] ] = $value;
    }
    return $result_arr;
}
For instance, trying to download this:
http://rapidshare.com/files/440817141/AutoRun__live-down.com_Champ.rar
I pass the fileid (440817141) and the filename (AutoRun__live-down.com_Champ.rar) to rs_download(...) and I get a response just as RapidShare's API doc says.
The RapidShare API doc (see "sub=download") says to call the server hostname with the download authentication string, but I couldn't figure out what form the URL should take.
Any suggestions? I tried
$download_url = "http://$the-hostname/$the-dlauth-string/files/$fileid/$filename"
and a couple of other variations of the above; nothing worked.
I use cURL to download the file, like the following:
$cr = curl_init();
$fp = fopen("d:/downloaded_files/file1.rar", "w");
// set curl options
$curl_options = array(
    CURLOPT_URL            => $download_url,
    CURLOPT_FILE           => $fp,
    CURLOPT_HEADER         => false,
    CURLOPT_CONNECTTIMEOUT => 0,
    CURLOPT_FOLLOWLOCATION => true
);
curl_setopt_array($cr, $curl_options);
curl_exec($cr);
curl_close($cr);
fclose($fp);
The above cURL code doesn't seem to work; nothing gets downloaded. Probably the download URL is incorrect.
I also tried this format for the download URL:
"http://rs$serverid$shorthost.rapidshare.com/files/$fileid/$filename"
With this, cURL creates a file entry but that's all it does (it writes a 0/1 KB file).
Here is the code that I use to get the serverid, shorthost, and a few other values from RapidShare.
function rs_checkfile($params)
{
    $url = "http://api.rapidshare.com/cgi-bin/rsapi.cgi?sub=checkfiles_v1&files=".$params['fileids']."&filenames=".$params['filenames'];
    // the response from rapidshare would be a string something like:
    // 440817141,AutoRun__live-down.com_Champ.rar,47768,20,1,l3,0
    $reply = @file_get_contents($url);
    if(!$reply)
    {
        return false;
    }
    $result_arr = array();
    $result_keys = array(0=>'file_id', 1=>'file_name', 2=>'file_size', 3=>'server_id',
        4=>'file_status', 5=>'short_host', 6=>'md5');
    foreach( explode(',', $reply) as $index => $value )
    {
        $result_arr[ $result_keys[$index] ] = $value;
    }
    return $result_arr;
}
rs_checkfile(...) takes comma-separated file IDs and filenames (no commas when calling for a single file).
Thanks in advance for any suggestions.
You start by requesting ?sub=download&fileid=X&filename=Y, which returns $hostname,$dlauth,$countdown,$md5hex. Since you're a free user, you have to wait $countdown seconds and then call ?sub=download&fileid=X&filename=Y&dlauth=Z on that hostname to perform the actual download.
There's a working implementation in Python here that would probably answer any of your other questions.
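A rough PHP sketch of that flow, reusing rs_download() from the question; the path on the download host (/cgi-bin/rsapi.cgi) is my assumption, mirrored from the API host:

// Hedged sketch of the two-step flow: wait $countdown seconds, then call the
// returned hostname again with the dlauth string appended.
$fileid   = '440817141';
$filename = 'AutoRun__live-down.com_Champ.rar';

$info = rs_download(array('fileid' => $fileid, 'filename' => $filename));
if ($info !== false) {
    sleep((int) $info['countdown_time']);   // free users must wait first

    $download_url = 'http://' . $info['hostname'] . '/cgi-bin/rsapi.cgi'
        . '?sub=download&fileid=' . $fileid
        . '&filename=' . $filename
        . '&dlauth=' . $info['dlauth'];

    // feed $download_url into the cURL snippet from the question
}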