How to get Google +1 count for current page in PHP? - php

I want to get the count of Google +1s for the current web page. I want to do this in PHP and then write the number of shares or +1s to a database. How can I get the +1 count in PHP?
Thanks in advance.

This one works for me and is faster than the CURL one:
function getPlus1($url) {
    $html = file_get_contents("https://plusone.google.com/_/+1/fastbutton?url=" . urlencode($url));
    $doc = new DOMDocument();
    $doc->loadHTML($html);
    $counter = $doc->getElementById('aggregateCount');
    return $counter->nodeValue;
}
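Usage is straightforward; for example (assuming Google still serves the aggregateCount element):
echo getPlus1('http://stackoverflow.com/');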
Here are similar functions for Tweets, Pins and Facebook shares:
function getTweets($url) {
    $json = file_get_contents("http://urls.api.twitter.com/1/urls/count.json?url=" . urlencode($url));
    $ajsn = json_decode($json, true);
    $cont = $ajsn['count'];
    return $cont;
}
function getPins($url) {
    $json = file_get_contents("http://api.pinterest.com/v1/urls/count.json?callback=receiveCount&url=" . urlencode($url));
    $json = substr($json, 13, -1); // strip the JSONP "receiveCount(...)" wrapper
    $ajsn = json_decode($json, true);
    $cont = $ajsn['count'];
    return $cont;
}
function getFacebooks($url) {
    $xml = file_get_contents("http://api.facebook.com/restserver.php?method=links.getStats&urls=" . urlencode($url));
    $xml = simplexml_load_string($xml);
    $shares = $xml->link_stat->share_count;
    $likes = $xml->link_stat->like_count;
    $comments = $xml->link_stat->comment_count;
    return $likes + $shares + $comments;
}
Note: the Facebook number is the sum of likes and shares (some people say comments as well; I haven't verified this), so use whichever combination you need.
This will only work if your PHP settings allow opening external URLs; check the allow_url_fopen setting.
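If allow_url_fopen is disabled and you cannot change it, a small cURL wrapper can stand in for the file_get_contents() calls above. A minimal sketch, assuming the cURL extension is available (the helper name fetch_url is mine):
function fetch_url($url) {
    // Use file_get_contents() when allow_url_fopen is enabled, otherwise fall back to cURL.
    if (ini_get('allow_url_fopen')) {
        return file_get_contents($url);
    }
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    $result = curl_exec($curl);
    curl_close($curl);
    return $result;
}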
Hope this helps.

function get_plusones($url) {
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, "https://clients6.google.com/rpc");
    curl_setopt($curl, CURLOPT_POST, 1);
    curl_setopt($curl, CURLOPT_POSTFIELDS, '[{"method":"pos.plusones.get","id":"p","params":{"nolog":true,"id":"' . $url . '","source":"widget","userId":"#viewer","groupId":"#self"},"jsonrpc":"2.0","key":"p","apiVersion":"v1"}]');
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-type: application/json'));
    $curl_results = curl_exec($curl);
    curl_close($curl);
    $json = json_decode($curl_results, true);
    return intval($json[0]['result']['metadata']['globalCounts']['count']);
}
echo get_plusones("http://www.stackoverflow.com");
from internoetics.com

The cURL and API way listed in the other posts here no longer works.
There is still at least 1 method, but it's ugly and Google clearly doesn't support it. You just rip the variable out of the JavaScript source code for the official button with a regular expression:
function shinra_gplus_get_count($url) {
    $contents = file_get_contents(
        'https://plusone.google.com/_/+1/fastbutton?url=' . urlencode($url)
    );
    preg_match('/window\.__SSR = {c: ([\d]+)/', $contents, $matches);
    if (isset($matches[0]))
        return (int) str_replace('window.__SSR = {c: ', '', $matches[0]);
    return 0;
}
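Usage mirrors the other snippets; the function returns 0 when the counter cannot be found in the markup:
echo shinra_gplus_get_count('http://stackoverflow.com/');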

The following PHP script has worked well so far for retrieving the Google+ count for both shares and +1's.
$url = 'http://nike.com';
$gplus_type = true ? 'shares' : '+1s'; // flip the boolean to switch between 'shares' and '+1s'
/**
 * Get Google+ shares or +1's.
 * See our post at stackoverflow.com/a/23088544/328272.
 */
function get_gplus_count($url, $type = 'shares') {
  $curl = curl_init();

  // According to stackoverflow.com/a/7321638/328272 we should use certificates
  // to connect through SSL, but they also offer the following easier solution.
  curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
  curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);

  if ($type == 'shares') {
    // Use the default developer key AIzaSyCKSbrvQasunBoV16zDH9R33D88CeLr9gQ, see
    // tomanthony.co.uk/blog/google_plus_one_button_seo_count_api.
    curl_setopt($curl, CURLOPT_URL, 'https://clients6.google.com/rpc?key=AIzaSyCKSbrvQasunBoV16zDH9R33D88CeLr9gQ');
    curl_setopt($curl, CURLOPT_POST, 1);
    curl_setopt($curl, CURLOPT_POSTFIELDS, '[{"method":"pos.plusones.get","id":"p","params":{"nolog":true,"id":"' . $url . '","source":"widget","userId":"#viewer","groupId":"#self"},"jsonrpc":"2.0","key":"p","apiVersion":"v1"}]');
    curl_setopt($curl, CURLOPT_HTTPHEADER, array('Content-type: application/json'));
  }
  elseif ($type == '+1s') {
    curl_setopt($curl, CURLOPT_URL, 'https://plusone.google.com/_/+1/fastbutton?url=' . urlencode($url));
  }
  else {
    throw new Exception('No $type defined, possible values are "shares" and "+1s".');
  }

  $curl_result = curl_exec($curl);
  curl_close($curl);

  if ($type == 'shares') {
    $json = json_decode($curl_result, true);
    return intval($json[0]['result']['metadata']['globalCounts']['count']);
  }
  elseif ($type == '+1s') {
    libxml_use_internal_errors(true);
    $doc = new DOMDocument();
    $doc->loadHTML($curl_result);
    $counter = $doc->getElementById('aggregateCount');
    return $counter->nodeValue;
  }
}
// Get Google+ count.
$gplus_count = get_gplus_count($url, $gplus_type);
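You can then print or store the result, e.g.:
echo $gplus_count; // prints the share count, since $gplus_type evaluates to 'shares' above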

Google does not currently have a public API for getting the +1 count for URLs. You can file a feature request here. You can also use the reverse-engineered method mentioned by @DerVo. Keep in mind, though, that this method could change and break at any time.

I've assembled this code to read the count directly from the iframe used by the social button.
I haven't tested it at scale, so you may have to slow down requests and/or change the user agent :).
This is my working code:
function get_plusone($url)
{
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, "https://plusone.google.com/_/+1/fastbutton?bsv&size=tall&hl=it&url=" . urlencode($url));
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    $html = curl_exec($curl);
    curl_close($curl);
    $doc = new DOMDocument();
    $doc->loadHTML($html);
    $counter = $doc->getElementById('aggregateCount');
    return $counter->nodeValue;
}
Usage is the following:
echo get_plusones('http://stackoverflow.com/');
Result is: 3166

I had to merge a few ideas from different answers and URLs to get this to work for me:
function getPlusOnes($url) {
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, "https://plusone.google.com/_/+1/fastbutton?url=" . urlencode($url));
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    $html = curl_exec($curl);
    curl_close($curl);
    $doc = new DOMDocument();
    $doc->loadHTML($html);
    $counter = $doc->getElementById('aggregateCount');
    return $counter->nodeValue;
}
All I had to do was update the URL, but I wanted to post a complete option for those interested.
echo getPlusOnes('http://stackoverflow.com/');
Thanks to Cardy for this approach; I then just had to find a URL that worked for me.

I've released a PHP library that retrieves counts for the major social networks. It currently supports Google, Facebook, Twitter and Pinterest.
The techniques used are similar to the ones described here, and the library provides a mechanism to cache the retrieved data. It also has some other nice features: it is installable through Composer, fully tested, and supports HHVM.
http://dunglas.fr/2014/01/introducing-the-socialshare-php-library/

Related

Error when trying to get Instagram Embed page HTML code

I'm trying to get the HTML code of Instagram's embed pages for my API, but it returns a strange error and I do not know what to do now, because I'm new to PHP. The code works on other websites.
I already tried it on other websites like apple.com, and the strange thing is that when I call this function on the 'normal' post page it works; the error only appears when I call it on the '/embed' URL.
This is my PHP Code:
<?php
if (isset($_GET['url'])) {
    $filename = $_GET['url'];
    $file = file_get_contents($filename);
    $dom = new DOMDocument;
    libxml_use_internal_errors(true);
    $dom->loadHTML($file);
    libxml_use_internal_errors(false);
    $bodies = $dom->getElementsByTagName('body');
    assert($bodies->length === 1);
    $body = $bodies->item(0);
    for ($i = 0; $i < $body->children->length; $i++) {
        $body->remove($body->children->item($i));
    }
    $stringbody = $dom->saveHTML($body);
    echo $stringbody;
}
?>
I call the API like this:
https://api.com/get-website-body.php?url=http://instagr.am/p/BoLVWplBVFb/embed
My goal is to get the body of the website, as I do when I call this code on https://apple.com, for example.
You can use the direct URL to scrape the data if you use cURL, and it's faster than file_get_contents. Here is the cURL code for the different URLs; it scrapes the body data alone.
if (isset($_GET['url'])) {
    // $website_url = 'https://www.instagram.com/instagram/?__a=1';
    // $website_url = 'https://apple.com';
    // $website_url = $_GET['url'];
    $website_url = 'http://instagr.am/p/BoLVWplBVFb/embed';
    $curl = curl_init();
    //curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, FALSE);
    curl_setopt($curl, CURLOPT_HEADER, false);
    curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($curl, CURLOPT_URL, $website_url);
    curl_setopt($curl, CURLOPT_REFERER, $website_url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows NT 6.1; rv:8.0) Gecko/20100101 Firefox/66.0');
    $str = curl_exec($curl);
    curl_close($curl);
    $json = json_decode($str, true); // only useful for the ?__a=1 style URLs that return JSON
    print_r($str); // just taking the page as it is
    // Taking the body part alone; handle it as you wish
    $dom = new DOMDocument;
    libxml_use_internal_errors(true);
    $dom->loadHTML($str);
    libxml_use_internal_errors(false);
    $bodies = $dom->getElementsByTagName('body');
    foreach ($bodies as $key => $value) {
        print_r($value); // you will get all content of the body here
    }
}
NOTE: with this approach you don't need to call https://api.com/get-website-body.php?url=... at all.

Github api request with php curl

I am trying to get the latest commit from GitHub using the API, but I am encountering some errors and am not sure what the problem is with the cURL requests. CURLINFO_HTTP_CODE gives me 000.
What does it mean if I get 000, and why is it not getting the contents of the URL?
function get_json($url) {
    $base = "https://api.github.com";
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $base . $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
    //curl_setopt($curl, CONNECTTIMEOUT, 1);
    $content = curl_exec($curl);
    echo $http_status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
    curl_close($curl);
    return $content;
}
echo get_json("users/$user/repos");

function get_latest_repo($user) {
    // Get the json from github for the repos
    $json = json_decode(get_json("users/$user/repos"), true);
    print_r($json);
    // Sort the array returned by pushed_at time
    function compare_pushed_at($b, $a) {
        return strnatcmp($a['pushed_at'], $b['pushed_at']);
    }
    usort($json, 'compare_pushed_at');
    // Now just get the latest repo
    $json = $json[0];
    return $json;
}

function get_commits($repo, $user) {
    // Get the name of the repo that we'll use in the request url
    $repoName = $repo["name"];
    return json_decode(get_json("repos/$user/$repoName/commits"), true);
}
I used your code and it works if you add a user agent to the cURL request:
curl_setopt($ch, CURLOPT_USERAGENT,'YOUR_INVENTED_APP_NAME');
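For reference, here is the question's get_json() with the user agent added (a sketch; the user agent string is just a placeholder, and I also added a trailing slash to $base so that paths like "users/$user/repos" form a valid URL):
function get_json($url) {
    $base = "https://api.github.com/"; // trailing slash so relative paths concatenate correctly
    $curl = curl_init();
    curl_setopt($curl, CURLOPT_URL, $base . $url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
    // GitHub's API rejects requests that do not send a User-Agent header.
    curl_setopt($curl, CURLOPT_USERAGENT, 'YOUR_INVENTED_APP_NAME');
    $content = curl_exec($curl);
    curl_close($curl);
    return $content;
}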

Get more than 10 results by google search API in php

I am trying to get 10 pages of results listed using the code below. When I run the URL directly I get a JSON string, but used in code it does not return anything. Please tell me where I am going wrong.
$url = "http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=CompTIA A+ Complete Study Guide Authorized Courseware site:.edu&start=20";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
curl_close($ch);
$json = json_decode($body,true);
print_r($json);
Now I am using the following code, but it outputs only four entries per page. Please tell me where I am going wrong.
$term = "CompTIA A+ Training Kit Microsoft Press Training Kit";
for($i=0;$i<=90;$i+=10)
{
$term = $val.' site:.edu';
$query = urlencode($term);
$url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=' . $query . '&start='.$i;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$body = curl_exec($ch);
curl_close($ch);
$json = json_decode($body,true);
//print_r($json);
foreach($json['responseData']['results'] as $data)
{
echo '<tr><td>'.$i.'</td><td>'.$url.'</td><td>'.$k.'</td><td>'.$val.'</td><td>'.$data['visibleUrl'].'</td><td>'.$data['unescapedUrl'].'</td><td>'.$data['content'].'</td></tr>';
}
}
Just try it with urlencode:
$query = urlencode('CompTIA A+ Complete Study Guide Authorized Courseware site:.edu');
$url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=' . $query . '&start=20';
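Applied to the paginated loop from the second snippet, that looks roughly like this (a sketch; the deprecated AJAX Search API may no longer return results at all):
$term = 'CompTIA A+ Complete Study Guide Authorized Courseware site:.edu';
for ($start = 0; $start <= 90; $start += 10) {
    $url = 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=' . urlencode($term) . '&start=' . $start;
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $body = curl_exec($ch);
    curl_close($ch);
    $json = json_decode($body, true);
    if (empty($json['responseData']['results'])) {
        continue; // nothing returned for this page
    }
    foreach ($json['responseData']['results'] as $data) {
        echo $data['unescapedUrl'] . "\n";
    }
}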

Get Final URL From Double Shortened URL (t.co -> bit.ly -> final)

I couldn't successfully convert a double-shortened URL to its expanded URL using the function below, which I got from here:
function doShortURLDecode($url) {
    $ch = @curl_init($url);
    @curl_setopt($ch, CURLOPT_HEADER, TRUE);
    @curl_setopt($ch, CURLOPT_NOBODY, TRUE);
    @curl_setopt($ch, CURLOPT_FOLLOWLOCATION, FALSE);
    @curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $response = @curl_exec($ch);
    preg_match('/Location: (.*)\n/', $response, $a);
    if (!isset($a[1])) return $url;
    return $a[1];
}
I ran into trouble when the expanded URL I got back was itself another shortened URL, which has its own expanded URL.
How do I get final expanded URL after it has run through both URL shortening services?
Since t.co uses HTML redirection through JavaScript and/or a <meta> redirect, we need to grab its contents first. Then we extract the bit.ly URL from it and perform an HTTP header request to get the final location. This method does not rely on cURL being enabled on the server and uses only native PHP5 functions:
Tested and working!
function large_url($url)
{
    $data = file_get_contents($url); // t.co uses HTML redirection
    $url = strtok(strstr($data, 'http://bit.ly/'), '"'); // grab the bit.ly URL
    stream_context_set_default(array('http' => array('method' => 'HEAD')));
    $headers = get_headers($url, 1); // get HTTP headers
    return (isset($headers['Location'])) // check if Location header is set
        ? $headers['Location']           // return the Location header value
        : $url;                          // return the bit.ly URL instead
}

// DEMO
$url = 'http://t.co/dd4b3kOz';
echo large_url($url);
I finally found a way to get the final URL of a double-shortened URL. The best way is to use the LongURL API for it.
I am not sure if it is the correct way, but I am at last getting the final URL as output :)
Here's what I did:
<?php
function TextAfterTag($input, $tag)
{
    $result = '';
    $tagPos = strpos($input, $tag);
    if (!($tagPos === false))
    {
        $length = strlen($input);
        $substrLength = $length - $tagPos + 1;
        $result = substr($input, $tagPos + 1, $substrLength);
    }
    return trim($result);
}

function expandUrlLongApi($url)
{
    $format = 'json';
    $api_query = "http://api.longurl.org/v2/expand?" .
        "url={$url}&response-code=1&format={$format}";

    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $api_query);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
    curl_setopt($ch, CURLOPT_HEADER, false);
    $fileContents = curl_exec($ch);
    curl_close($ch);

    // Crude parsing of the JSON response: strip the braces, split on commas
    // and take the value after the first colon (the long-url field).
    $s1 = str_replace("{", " ", "$fileContents");
    $s2 = str_replace("}", " ", "$s1");
    $s2 = trim($s2);
    $s3 = array();
    $s3 = explode(",", $s2);
    $s4 = TextAfterTag($s3[0], (':'));
    $s4 = stripslashes($s4);
    return $s4;
}

echo expandUrlLongApi('http://t.co/dd4b3kOz');
?>
The output I get is:
"http://changeordie.therepublik.net/?p=371#proliferation"
The above code works.
The code that @cryptic shared is also correct, but I could not get the result on my server (maybe because of some configuration issue).
If anyone thinks it could be done in some other way, please feel free to share it.
Perhaps you should just use CURLOPT_FOLLOWLOCATION = true and then determine the final URL you were directed to.
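A minimal sketch of that idea, assuming plain HTTP redirects (it won't help with JavaScript or <meta> redirects) and a hypothetical helper name:
function resolve_final_url($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true); // follow each HTTP redirect
    curl_setopt($ch, CURLOPT_NOBODY, true);         // headers only, no body needed
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $final = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); // URL after the last redirect
    curl_close($ch);
    return $final;
}
echo resolve_final_url('http://t.co/dd4b3kOz');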
In case the problem is not a JavaScript redirect as on t.co, or a <META http-equiv="refresh">..., this resolves Stack Exchange URLs like https://stackoverflow.com/q/62317 fine:
public function doShortURLDecode($url) {
    $ch = @curl_init($url);
    @curl_setopt($ch, CURLOPT_HEADER, TRUE);
    @curl_setopt($ch, CURLOPT_NOBODY, TRUE);
    @curl_setopt($ch, CURLOPT_FOLLOWLOCATION, FALSE);
    @curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    $response = @curl_exec($ch);
    $cleanresponse = preg_replace('/[^A-Za-z0-9\- _,.:\n\/]/', '', $response);
    preg_match('/Location: (.*)[\n\r]/', $cleanresponse, $a);
    if (!isset($a[1])) return $url;
    return parse_url($url, PHP_URL_SCHEME) . '://' . parse_url($url, PHP_URL_HOST) . $a[1];
}
It cleans the response of any special characters that can occur in the cURL output before cutting out the result URL (I ran into this problem on a PHP 7.3 server).

Save facebook profile image using cURL

I'm trying to save a user's Facebook profile image using cURL. When I use the code below, I save a JPEG image, but it has zero bytes in it. If I instead set the URL to https://fbcdn-profile-a.akamaihd.net/hprofile-ak-snc4/211398_812269356_2295463_n.jpg, which is where http://graph.facebook.com/' . $user_id . '/picture?type=large redirects the browser, the image is saved without a problem. What am I doing wrong here?
<?php
$url = 'http://graph.facebook.com/' . $user_id . '/picture?type=large';
$file_handler = fopen('pic_facebook.jpg', 'w');
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_FILE, $file_handler);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_exec($curl);
curl_close($curl);
fclose($file_handler);
?>
There is a redirect, so you have to add this option to cURL:
// if safe mode is off:
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
but if you have safe mode on, then:
// if safe mode is on:
<?php
function curl_redir_exec($ch)
{
    static $curl_loops = 0;
    static $curl_max_loops = 20;
    if ($curl_loops++ >= $curl_max_loops)
    {
        $curl_loops = 0;
        return FALSE;
    }
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($ch);
    @list($header, $data) = @explode("\n\n", $data, 2);
    $http_code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    if ($http_code == 301 || $http_code == 302)
    {
        $matches = array();
        preg_match('/Location:(.*?)\n/', $header, $matches);
        $url = @parse_url(trim(array_pop($matches)));
        if (!$url)
        {
            // couldn't process the url to redirect to
            $curl_loops = 0;
            return $data;
        }
        $last_url = parse_url(curl_getinfo($ch, CURLINFO_EFFECTIVE_URL));
        if (!$url['scheme'])
            $url['scheme'] = $last_url['scheme'];
        if (!$url['host'])
            $url['host'] = $last_url['host'];
        if (!$url['path'])
            $url['path'] = $last_url['path'];
        $new_url = $url['scheme'] . '://' . $url['host'] . $url['path'] . (@$url['query'] ? '?' . $url['query'] : '');
        return $new_url;
    } else {
        $curl_loops = 0;
        return $data;
    }
}

function get_right_url($url) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_HEADER, false);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    return curl_redir_exec($curl);
}

$url = 'http://graph.facebook.com/' . $user_id . '/picture?type=large';
$file_handler = fopen('pic_facebook.jpg', 'w');
$curl = curl_init(get_right_url($url));
curl_setopt($curl, CURLOPT_FILE, $file_handler);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_exec($curl);
curl_close($curl);
fclose($file_handler);
If you can't process the redirect, try this instead:
Make the request to https://graph.facebook.com/<USER ID>?fields=picture and parse the response, which will be in JSON format and look like this - e.g. for Zuck you get this response:
{
"picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/157340_4_3955636_q.jpg"
}
Then make your cURL request directly to retrieve the image from that CDN URL.
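A rough sketch of that two-step flow (the exact JSON structure depends on the Graph API version, so treat the field access as an assumption):
// Step 1: ask the Graph API for the picture URL instead of following the redirect.
$json = file_get_contents('https://graph.facebook.com/' . $user_id . '?fields=picture');
$data = json_decode($json, true);
$picture_url = $data['picture']; // on newer API versions this may be $data['picture']['data']['url']
// Step 2: fetch the image directly from the CDN URL.
$file_handler = fopen('pic_facebook.jpg', 'w');
$curl = curl_init($picture_url);
curl_setopt($curl, CURLOPT_FILE, $file_handler);
curl_setopt($curl, CURLOPT_HEADER, false);
curl_exec($curl);
curl_close($curl);
fclose($file_handler);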
Set CURLOPT_FOLLOWLOCATION to true so that it follows the 301/302 redirect and reads the image file from the final location, i.e.:
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
I managed to do it this way, works perfectly fine:
$data = file_get_contents('https://graph.facebook.com/[App-Scoped-ID]/picture?width=378&height=378&access_token=[Access-Token]');
$file = fopen('fbphoto.jpg', 'w+');
fputs($file, $data);
fclose($file);
You just need an App Access Token (APPID . '|' . APPSECRET), and you can specify width and height.
You can also add "redirect=false" to the URL, to get a JSON object with the URL (For example: https://fbcdn-profile-a.akamaihd.net/hprofile-ak-xpa1...)
CURLOPT_FOLLOWLOCATION has been removed in PHP 5.4, so it's not really an option anymore.
