I'm using the Zoho CRM PHP SDK to pull all Account records from the CRM and manipulate them locally (to run some reports). The basic code looks like this, and it works fine:
$account_module = ZCRMModule::getInstance('Accounts');
$response = $account_module->getRecords();
$records = $response->getData();
foreach ($records as $record) {
    // do stuff
}
The problem is that $records only contains 200 records (out of about 3,000 total). I can't find anything in the (minimal, poorly organized) SDK documentation showing how to paginate or get larger result sets, and the Zoho code samples on the dev site don't seem to use the same SDK for some reason.
Does anyone know how I can paginate through these records?
The getRecords() method seems to accept two parameters; the first is a map of query params, which is used in some of their examples. You should be able to use it to set/control pagination:
$param_map = ["page" => "20", "per_page" => "200"];
$response = $account_module->getRecords($param_map);
@dakdad was right that you can pass the page and per_page values in the param map. You should also use $response->getInfo()->getMoreRecords() to determine whether you need to keep paginating. Something like this seems to work:
$account_module = ZCRMModule::getInstance('Accounts');
$page = 1;
$has_more = true;
while ($has_more) {
    $param_map = ["page" => $page, "per_page" => "200"];
    $response = $account_module->getRecords($param_map);
    $has_more = $response->getInfo()->getMoreRecords();
    $records = $response->getData();
    foreach ($records as $record) {
        // do stuff
    }
    $page++;
}
I have a problem with my results array. What I initially intended to have is something like this:
$promises = [
    '0' => $client->getAsync("www.api.com/opportunities?api=key&page=1&fields=['fields']"),
    '1' => $client->getAsync("www.api.com/opportunities?api=key&page=2&fields=['fields']"),
    '2' => $client->getAsync("www.api.com/opportunities?api=key&page=3&fields=['fields']")
];
An array of request promises; I will use it because I want to retrieve a collection of data from the API that I am using. The first page of the API response reports totalRecordCount = 154 and recordCount = 100, and in my requests I want to get pages 2, 3 and 4 as well.
I made a do-while loop in my PHP script, but it seems to run into an infinite loop.
This is how it should work: first I run the initial request and read totalRecordCount = 154, then subtract recordCount = 100 from it; if the difference is != 0, the loop runs again, increments the page number, and pushes the next request URL to the promises array.
Here is my function code:
function runRequest(){
    $promises = [];
    $client = new \GuzzleHttp\Client();
    $pageCounter = 1;
    $globalCount = 0;
    do {
        // request first and then check if there is a difference
        $url = 'https://api.com/opportunities_dates?key='.$GLOBALS['API_KEY'].'&page='.$pageCounter.'&fields=["fields"]';
        $initialRequest = $client->getAsync($url);
        $initialRequest->then(function ($response) {
            return $response;
        });
        $initialResponse = $initialRequest->wait();
        $initialBody = json_decode($initialResponse->getBody());
        $totalRecordCount = $initialBody->totalRecordCount; // 154
        $recordCount = $initialBody->recordCount;           // 100
        $difference = $totalRecordCount - $recordCount;     // 54
        $pageCounter++;
        $globalCount += $recordCount;
        array_push($promises, $url);
    } while ($totalRecordCount >= $globalCount);
    return $promises;
}
$a = runRequest();
print_r($a); // contains an array of endpoints like in the sample above
There is an endless loop because you keep looping as long as the total record count is greater than or equal to the global count. Page 3 and above return 0 records, so $globalCount never grows past 154 and the condition stays true forever. Replacing the >= with a > will fix the loop.
However, the code will still not work as you expect. For each page you prepare a request with getAsync() and immediately call wait(). The then() call does nothing: it just returns the response, which is what happens by default anyway. So in practice these are all synchronous requests.
Given that the page size is constant, you can calculate the pages you need from the information returned by the first request.
function runRequest() {
    $promises = [];
    $client = new \GuzzleHttp\Client();
    $url = 'https://api.com/opportunities_dates?key='.$GLOBALS['API_KEY'].'&fields=["fields"]';

    // Initial request to get total record count and page count
    $initialRequest = $client->getAsync($url.'&page=1');
    $initialResponse = $initialRequest->wait();
    $initialBody = json_decode($initialResponse->getBody());
    $promises[] = $initialRequest;

    $totalRecordCount = $initialBody->totalRecordCount; // 154
    $pageSize = $initialBody->pageSize;                 // 100
    $nrOfPages = ceil($totalRecordCount / $pageSize);   // 2

    for ($page = 2; $page <= $nrOfPages; $page++) {
        $promises[] = $client->getAsync($url.'&page='.$page);
    }

    return $promises;
}
$promises = runRequest();
$responses = \GuzzleHttp\Promise\unwrap($promises);
Note that the function now returns promises and not URLs as strings.
It doesn't matter that the first promise is already settled. The unwrap function will not cause another GET request for page 1, but return the existing response. For all other pages, the requests are done concurrently.
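Once the promises are unwrapped, the responses can be consumed like any other PSR-7 responses. A minimal sketch, assuming the payload exposes the records under a data property (that property name is an assumption; the question does not show it):

$allRecords = [];
foreach ($responses as $response) {
    $body = json_decode($response->getBody());
    // "data" is an assumed property name; adjust it to the real payload structure
    foreach ($body->data as $record) {
        $allRecords[] = $record;
    }
}
// With totalRecordCount = 154 this should end up holding all 154 records
echo count($allRecords);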
I am trying to update a google spreadsheet using PHP. Currently the code reliably connects and prints values, but when I try to update values, I get:
Fatal error: Uncaught exception 'Google_Exception' with message '(update) missing required param: 'spreadsheetId''
$service = new Google_Service_Sheets($client);
$spreadsheetId = '[MyID]';
$range = 'Sheet1!A2:E';
$response = $service->spreadsheets_values->get($spreadsheetId, $range);
$values = $response->getValues();
if (count($values) == 0) {
    print "No data found.\n";
} else {
    foreach ($values as $row) {
        // Print columns A and E, which correspond to indices 0 and 4.
        printf("%s, %s<br>", $row[0], $row[4]);
    }
}

$range = 'Sheet1!A2:E2';
$values = [1,2,3,4,5];
$body = new Google_Service_Sheets_ValueRange(['values' => $values]);
$service->spreadsheets_values->update($spreadsheetId, 'Sheet1!A2:E', $body, 'raw');
The get() call works perfectly, using the same spreadsheet ID. The update call says that it is missing the spreadsheet ID parameter, but prints the correct spreadsheet ID in the call stack.
Is there an issue with the way I am passing the ID in the update call?
The issue was not actually with the spreadsheet ID, but with the way $values and the value input option were passed in. $values should be a two-dimensional array, and the value input option should be an array, not a string. I've posted the corrected parts of the code below for posterity.
$range = 'Sheet1!A2:E2';
$values = [[1,2,3,4,5]];
$inputoption = ['valueInputOption' => "RAW"];
$body = new Google_Service_Sheets_ValueRange(['values'=>$values]);
$service->spreadsheets_values->update($spreadsheetId,$range,$body,$inputoption);
It looks like you are not passing any OAuth2 credentials, which, if I'm not mistaken, are required to add or update information through a Google API (reading the information does not require this).
https://developers.google.com/sheets/api/quickstart/php
Or you are not passing the correct information in $values when trying to update (it may be expecting a specific spreadsheet ID, not a range).
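If missing write access is indeed the problem, the fix is in the client setup, not in the update() call. A minimal sketch assuming the standard google/apiclient quickstart flow (the credentials file path is a placeholder); note that the quickstart's read-only scope is not enough for update():

$client = new Google_Client();
$client->setApplicationName('Sheets update example');
// The full spreadsheets scope is required for writes;
// Google_Service_Sheets::SPREADSHEETS_READONLY only allows get().
$client->setScopes(Google_Service_Sheets::SPREADSHEETS);
$client->setAuthConfig('credentials.json'); // placeholder path to the OAuth client secrets
$client->setAccessType('offline');

$service = new Google_Service_Sheets($client);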
Hi, is there a way to download the BibTeX entry for something from Google Scholar using PHP without having to download the BibTeX manually one by one? For example, setting a search value like "research" and then downloading the related BibTeX from the links automatically through code.
Any help would be appreciated. I tried fetching the HTML page, but when I read the page contents, the "Import into BibTeX" link is not present in what gets returned.
My code:
<?php
$url = 'http://scholar.google.com/scholar?q=honors+college&hl=en&btnG=Search&as_sdt=1%2C4&as_sdtp=on';
$needle = 'Import into bibtex';
$contents = file_get_contents($url);
echo $contents;
if (strpos($contents, $needle) !== false) {
    echo 'found';
} else {
    echo 'not found';
}
?>
The short answer is: no, you cannot do this.
Google does not provide APIs for Search / Scholar and enforces strict rate limiting. The problem is that each BibTeX entry needs additional requests: one for the query, one for the 'import' link, and a final one to fetch the actual BibTeX entry content.
I wrote a script that scrapes Google Scholar results, finds the BibTeX links, and saves the results. However, due to the rate limit it is not viable and gets blocked almost instantly.
The code can be viewed here: https://gist.github.com/Tessmore/11099509 and is free to use, but at your own risk.
As Tessmore said, you can't. But you can make it work by using the Google Scholar Organic Results API from SerpApi, which bypasses quota limits and blocks from search engines, so you don't have to think about how to reduce the chance of being blocked.
Example:
Install google-search-results-php package first via composer:
$ composer require serpapi/google-search-results-php:2.0
Code to integrate and full example in the online IDE:
<?php
ini_set("display_errors", 1);
ini_set("display_startup_errors", 1);
error_reporting(E_ALL);

require __DIR__ . "/vendor/autoload.php";

function getResultIds () {
    $result_ids = array();

    $params = [
        "engine" => "google_scholar", // parsing engine
        "q" => "biology"              // search query
    ];

    $search = new GoogleSearch(getenv("API_KEY"));
    $response = $search->get_json($params);

    foreach ($response->organic_results as $result) {
        // print_r($result->result_id);
        array_push($result_ids, $result->result_id);
    }

    return $result_ids;
}

function getBibtexData () {
    $bibtex_data = array();

    foreach (getResultIds() as $result_id) {
        $params = [
            "engine" => "google_scholar_cite", // parsing engine
            "q" => $result_id
        ];

        $search = new GoogleSearch(getenv("API_KEY"));
        $response = $search->get_json($params);

        foreach ($response->links as $result) {
            if ($result->name === "BibTeX") {
                array_push($bibtex_data, $result->link);
            }
        }
    }

    return $bibtex_data;
}

print_r(json_encode(getBibtexData(), JSON_PRETTY_PRINT | JSON_UNESCAPED_SLASHES));
?>
Output:
[
"https://scholar.googleusercontent.com/scholar.bib?q=info:KNJ0p4CbwgoJ:scholar.google.com/&output=citation&scisdr=CgXjqB_WGAA:AAGBfm0AAAAAYkm8amenawYn_EBidiCQT5QBh0L1KJEX&scisig=AAGBfm0AAAAAYkm8at9X4P3eIWKUCOc6UriCEDKVsQE0&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:6zRLFbcxtREJ:scholar.google.com/&output=citation&scisdr=CgWhqfi6GAA:AAGBfm0AAAAAYkm8bDoIhTlfTkQFCOzYGax54Bst576o&scisig=AAGBfm0AAAAAYkm8bMe_7Nq4e4pB5lg_eR9jmeGrO8ek&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:6Yb0qOX88FMJ:scholar.google.com/&output=citation&scisdr=CgXn_4MdGAA:AAGBfm0AAAAAYkm8bi8ypCZcFDNEQZYZeoSlvx-U1OSk&scisig=AAGBfm0AAAAAYkm8bnFMnwTWGfkfJDCNEx0C4n-aQwql&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:HFdEElNr3IgJ:scholar.google.com/&output=citation&scisdr=CgXKCFpQGAA:AAGBfm0AAAAAYkm8byukcQCl4WHQx-nSNp2pC1gUFSKG&scisig=AAGBfm0AAAAAYkm8b8EReTVkLwtxfth_pjwMyyY3dqts&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:bs-D_MeC14YJ:scholar.google.com/&output=citation&scisdr=CgXEUXwWGAA:AAGBfm0AAAAAYkm8bwwfMNJrffe16EaGypsem9JlmGTi&scisig=AAGBfm0AAAAAYkm8b6nWlPOQL63fXg6dV2U-JQbpyQyS&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:Rn1qFVLRfKwJ:scholar.google.com/&output=citation&scisdr=CgU-HswkGAA:AAGBfm0AAAAAYkm8cHE1YRK23eHV8nzF89Eem-Bsuz72&scisig=AAGBfm0AAAAAYkm8cDEj8ZrzZjAo2bNX-tjYYYJYQZay&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:d8thHtTwq6YJ:scholar.google.com/&output=citation&scisdr=CgXj7oe9GAA:AAGBfm0AAAAAYkm8cTYamCKGKImjdg5MQdgbxUIIHAEY&scisig=AAGBfm0AAAAAYkm8cTcop1ceKzKYvKAKtvlSQ1EdEtSN&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:IUmhOhGaDaEJ:scholar.google.com/&output=citation&scisdr=CgU0qZ2_GAA:AAGBfm0AAAAAYkm8ctCPwoihZkjbNcdEqSnwa0J3jwDy&scisig=AAGBfm0AAAAAYkm8cingBcYnEp8YRqFDFdN-FAEBgDT7&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:PWsf8O5OMQEJ:scholar.google.com/&output=citation&scisdr=CgVBAJxXGAA:AAGBfm0AAAAAYkm8c3CDKQG0Wh_lWsXU_DZxEJkwZz5y&scisig=AAGBfm0AAAAAYkm8c6I-HjAxD1Gy6FLFDRdxH_qU4OBr&scisf=4&ct=citation&cd=-1&hl=en",
"https://scholar.googleusercontent.com/scholar.bib?q=info:yGvgHH8ROuIJ:scholar.google.com/&output=citation&scisdr=CgXFuhOkGAA:AAGBfm0AAAAAYkm8dD0rcSR4LQF8GgTxx865BADtXNDN&scisig=AAGBfm0AAAAAYkm8dIQhodz3rHF9IUdaCSRlhdudACNQ&scisf=4&ct=citation&cd=-1&hl=en"
]
BibTeX data from the first URL:
@article{woese2004new,
title={A new biology for a new century},
author={Woese, Carl R},
journal={Microbiology and molecular biology reviews},
volume={68},
number={2},
pages={173--186},
year={2004},
publisher={Am Soc Microbiol}
}
Disclaimer: I work for SerpApi.
I am working on a project where I need to implement Sphinx search with CakePHP, so I am trying to use a component and behavior for it. The link to it is:
http://bakery.cakephp.org/articles/eugenioclrc/2010/07/10/sphinx-component-and-behavior
I am calling the Sphinx API like this:
$sphinx = array('matchMode' => SPH_MATCH_ALL, 'sortMode' => array(SPH_SORT_EXTENDED => '@relevance DESC'));
$results = $this->ModelName->find('all', array('search' => 'Search_Query', 'sphinx' => $sphinx));
pr($results);
The above works fine, but when I tried to reduce the response time by querying a particular field of the table (using the extended match mode, i.e. SPH_MATCH_EXTENDED2), Sphinx just fails to output any result. The extended query I used is given below:
$sphinx = array('matchMode' => SPH_MATCH_EXTENDED2, 'sortMode' => array(SPH_SORT_EXTENDED => '@relevance DESC'));
$results = $this->ModelName->find('all', array('search' => '@Field_name Search_Query', 'sphinx' => $sphinx));
pr($results);
Can anyone see where I am going wrong? Any help is appreciated.
Thanks in advance.
By the way, when you use the EXTENDED2 match mode, make sure your ranking mode is set accordingly.
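For illustration, a minimal sketch of setting match and ranking mode on a raw SphinxClient (the component from the linked article wraps one internally, so where exactly you set this depends on that code; the index name below is a placeholder):

$sphinx = new SphinxClient();
$sphinx->SetServer('localhost', 9312);
$sphinx->SetMatchMode(SPH_MATCH_EXTENDED2);
// With EXTENDED2, pick a ranking mode explicitly; proximity + BM25 is the usual choice
$sphinx->SetRankingMode(SPH_RANK_PROXIMITY_BM25);
$sphinx->SetSortMode(SPH_SORT_EXTENDED, '@relevance DESC');
$result = $sphinx->Query('@Field_name Search_Query', 'my_index'); // placeholder index name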
Edit:
Anyway, back to your problem: looking at that component/behavior code, you can see right away that no error checking is done whatsoever. Try changing the code a bit so you can at least see the errors and/or warnings.
Component
if (!isset($query['search'])) {
    $result = self::$sphinx->Query('', $indexes);
} else {
    $result = self::$sphinx->Query($query['search'], $indexes);
}
if ($result === false) {
    // throw new SphinxException();
    die(self::$sphinx->GetLastError());
}
$warn = self::$sphinx->GetLastWarning();
if ($warn) echo $warn;
Behavior
$result = $this->runtime[$model->alias]['sphinx']->search($s);
if ($result === false) {
    die($this->runtime[$model->alias]['sphinx']->GetLastError());
}
$warn = $this->runtime[$model->alias]['sphinx']->GetLastWarning();
if ($warn) echo $warn;
I hope that helps.
As you said,
Sphinx just fails to output any result.
That means there is an error:
Please check whether you have added the specific field to the index via sql_query.
Also check that the field you are searching is not an attribute.
As per the Sphinx documentation:
Attributes, unlike the fields, are not full-text indexed. They are stored in the index, but it is not possible to search them as full-text, and attempting to do so results in an error.
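Purely as an illustration, a stripped-down sphinx.conf source might look like this (table and column names are invented, not taken from the question); only the columns selected in sql_query and not declared as attributes are full-text searchable:

source opportunities_src
{
    type     = mysql
    sql_host = localhost
    sql_user = root
    sql_pass =
    sql_db   = mydb

    # id plus the full-text fields
    sql_query = SELECT id, title, description, created_at FROM opportunities

    # created_at becomes an attribute: filterable/sortable, but NOT full-text searchable
    sql_attr_timestamp = created_at
}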
Okay, normally I'm fine with the Facebook API, but I'm having a problem that keeps me wondering. (I think it's a bug; see ticket http://bugs.developers.facebook.net/show_bug.cgi?id=13694, but I wanted to throw it out here in case somebody has an idea.)
I'm using the Facebook PHP library to count all attendees for a specific event:
$attending = $facebook->api('/'.$fbparams['eventId'].'/attending');
This works without a problem; it correctly returns an array with all attendees...
Now here's the problem:
This event has about 18,000 attendees right now.
The API call returns a maximum of 992 attendees (and not 18,000 as it should).
I tried
$attending = $facebook->api('/'.$fbparams['eventId'].'/attending?limit=20000');
for testing but it doesn't change anything.
So my actual question is:
If I can't get it to work using the Graph API, what would be a good alternative? (Parsing the HTML of the event page, maybe?) Right now I'm changing the value by hand every few hours, which is tedious and unnecessary.
Actually, there are two parameters, limit and offset. I think you will have to play with both and continue making calls until one returns fewer than the maximum limit.
Something like this, but in a recursive approach (I'm writing pseudo-code):
offset = 0;
maxLimit = 992;

// call /attending with the current offset and limit, then:
totalAttendees = count(result);
if (totalAttendees >= maxLimit)
{
    // do your stuff with each attendee
    offset += totalAttendees;
    // make a new call with the updated offset
    // and check again
}
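A rough PHP version of that idea, assuming the old Facebook PHP SDK's $facebook->api() call from the question (an untested sketch; the limit value is arbitrary and the Graph API may still cap what it returns per request):

$limit = 500;
$offset = 0;
$attendees = array();

do {
    $page = $facebook->api(
        '/' . $fbparams['eventId'] . '/attending',
        'GET',
        array('limit' => $limit, 'offset' => $offset)
    );
    $count = count($page['data']);
    $attendees = array_merge($attendees, $page['data']);
    $offset += $count;
} while ($count >= $limit); // a short page means we have reached the end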
I've searched a lot and this is how I fixed it:
Instead of paging through /attending, request the event itself and ask for the attending_count and invited_count fields, as in the URL built in the code below.
Here is the code I used:
function events_get_facebook_data($event_id) {
    if (!$event_id) {
        return false;
    }
    $token = klicango_friends_facebook_token();
    if ($token) {
        $parameters['access_token'] = $token;
        $parameters['fields'] = 'attending_count,invited_count';
        $graph_url = url('https://graph.facebook.com/v2.2/' . $event_id, array('absolute' => TRUE, 'query' => $parameters));
        $graph_result = drupal_http_request($graph_url, array(), 'GET');
        if (is_object($graph_result) && !empty($graph_result->data)) {
            $data = json_decode($graph_result->data);
            $going = $data->attending_count;
            $invited = $data->invited_count;
            return array('going' => $going, 'invited' => $invited);
        }
        return false;
    }
    return false;
}
Try
SELECT eid, attending_count, unsure_count, all_members_count FROM event WHERE eid = "EVENT_ID"
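For example, through the PHP SDK already used in the question, that FQL query could be sent via the /fql Graph endpoint (an untested sketch; FQL is only available on old Graph API versions that still support it):

$fql = 'SELECT eid, attending_count, unsure_count, all_members_count '
     . 'FROM event WHERE eid = ' . $fbparams['eventId'];
$result = $facebook->api('/fql', 'GET', array('q' => $fql));
// attending_count gives the total directly, without paging through /attending
print_r($result['data'][0]);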