Coursera course catalog using REST API - PHP

I am using the open API provided by Coursera.
The problem is that it returns only a list of 100 courses, while the website lists 1,294 courses. Why does it return only 100 courses per request?
My code is:
<?php
// Fetches only the first page of results (the API's default limit)
$url = "https://api.coursera.org/api/courses.v1";
$result = file_get_contents($url);
print_r($result);
?>
What should I do to fetch the whole course catalog and store it in a MySQL database?

From their docs:
https://building.coursera.org/app-platform/catalog/
To paginate through a result set, use integer start and limit query
parameters.
curl "https://api.coursera.org/api/courses.v1?start=300&limit=10"
So just use start and limit for pagination

The API will only return 100 records on each call. In order to get all records, you will need to call the method multiple times and concatenate/store all of the responses.
The response includes the following paging metadata, which can be used to build the subsequent call:
"paging":{"next":"101","total":1948},"linked":{}
Which can then be used accordingly;
https://api.coursera.org/api/courses.v1?start=101
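Putting that together, here is a minimal sketch of a fetch-and-store loop (the table schema, column names, and PDO credentials are assumptions; the 'elements' and 'paging' keys follow the response shape shown above):
<?php
// Page through the full catalog 100 courses at a time and store each course in MySQL
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO courses (coursera_id, name) VALUES (?, ?)'); // hypothetical table
$start = 0;
do {
    $url = "https://api.coursera.org/api/courses.v1?start=$start&limit=100";
    $response = json_decode(file_get_contents($url), true);
    foreach ($response['elements'] as $course) {
        $stmt->execute([$course['id'], $course['name']]);
    }
    // "paging.next" points at the start of the next page; it is absent on the last page
    $start = isset($response['paging']['next']) ? (int) $response['paging']['next'] : null;
} while ($start !== null);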

From the documentation:
curl "https://api.coursera.org/api/courses.v1?start=300&limit=10"
You will need to loop through and increase the start until you do not get any more results.
Documentation


Codeigniter Commands out of sync; you can't run this command now

I've spent hours trying to figure out how I'm supposed to get around this error in my scenario. I'm trying to run a few queries in sequence.
I have read codeigniter : Commands out of sync; you can't run this command now, but I am not able to update /system/database/drivers/mysqli/mysqli_result.php.
I've tried:
$this->db->reset_query();
$this->db->close();
$this->db->initialize();
$this->db->reconnect();
mysqli_next_result( $this->db->conn_id );
$query->free_result();
But they either give me the same error, or different errors which I will detail in the comments of my code.
The way my code is organized: I have a make_query method that takes a bunch of search options and figures out which tables to join and which fields to search based on those. Sometimes I just want to count results, sometimes I want all of the resulting data, and sometimes I just want distinct values for certain fields or to group by certain fields. So I call make_query with my options, and then decide what to select afterwards. I have also been saving my query text to report, so that users can see what query is being run.
One interesting thing I have noted is that when I have no options set (i.e. there are no WHERE clauses in my SQL), I do not get this error and the multiple queries run with no problem!
Here is my code:
//Get rows from tblPlots based on criteria set in options array
public function get_plot_data($options = array()) {
    $this->db->save_queries = TRUE;
    log_message('debug', 'before make query 1 get plot data');
    $this->make_query($options, 'plot');
    log_message('debug', 'after make query 1 get plot data');
    $this->db->distinct();
    //Now, set select to return only requested fields
    if (!empty($options['fields']) and $options['fields'] != "all") {
        //Because other tables could be joined, add table name to each select
        array_walk($options['fields'], function (&$value, $key) { $value = 'tblPlots.' . $value; });
        $this->db->select($options['fields']);
    }
    if (!empty($options['limit']) and $options['limit'] > 0) {
        $this->db->limit($options['limit'], $options['offset']);
    }
    //Get the resulting data
    $result = $this->db->get('tblProgram')->result();
    $query_text = $this->db->last_query(); //tried removing this but didn't help
    log_message('debug', 'query text ' . $query_text);
    $this->db->save_queries = FALSE;
    //get the number of rows
    //$this->db->reset_query();
    //$this->db->close();
    //$this->db->initialize();
    //$this->db->reconnect();
    //mysqli_next_result( $this->db->conn_id );
    //$this->db->free_result(); //Call to undefined method CI_DB_mysqli_driver::free_result()
    //$result->free_result(); //Call to a member function free_result() on array
    log_message('debug', 'before make query 2');
    $this->make_query($options, "plot");
    $this->db->select('pkProgramID');
    log_message('debug', 'before count results');
    //this is where my code errors out trying to do the next step:
    $count = $this->db->count_all_results('tblProgram');
}
I'm not including the code for make_query because it is long and calls other functions, but it runs successfully the first time and outputs the SQL query I would expect. If it would be helpful, I can include that code as well.
I'm not sure how to correctly call free_result() here: the CI documentation only shows it used with a manual query('SQL QUERY'), while I am building my query with Active Record. Maybe that's the issue?
$query2 = $this->db->query('SELECT name FROM some_table');
$query2->free_result();
Thank you for any help!
I ended up posting on a CI forum that suggested https://www.youtube.com/watch?v=yPiBhg6r5B0, which worked! Also I ended up having to join my tables differently to avoid timeouts (I think I was having trouble because I was using AJAX to call many queries asynchronously and one of them was timing out). Hope this helps someone!
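For anyone who lands here with the same free_result() question: the method lives on the CI_DB_result object that Active Record's get() returns, so one option (a minimal sketch, assuming the same tables as above) is to keep that object instead of chaining ->result() directly:
// Keep the CI_DB_result object so the result set can be freed explicitly
$query = $this->db->get('tblProgram');
$result = $query->result(); // fetch the rows you need
$query->free_result();      // release the result set before running the next query

$this->make_query($options, 'plot');
$this->db->select('pkProgramID');
$count = $this->db->count_all_results('tblProgram');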

SugarCRM 6.5.26 CE - contacts export using SugarBean [php]

I've got a SugarCRM plugin which exports data to an external service. I'm using logic hooks for updated/deleted/new Contacts, but I have a problem synchronizing already-existing data. I have to extract all the data from SugarCRM, and there are two SugarBean methods I've tried to use: get_full_list() and get_list(). The first one gives me the full Contact list, but I need to send it in batches of at most 1000 Contacts per JSON; the second method returns only the first page of Contacts (10 to 1000 max entries, depending on config settings).
I'm using this method ATM:
// prepare contacts data from SugarBean
$bean = BeanFactory::getBean($module);
$contactResults = $bean->get_full_list();
Then I foreach over $contactResults, map the data I want into the required format, and send it as JSON via a POST request. I've tried to find a way to split it into batches, but I'm stuck :( Neither get_full_list() nor get_list() seems to work for me.
Any suggestions? Maybe someone solved this issue already?
Thanks in advance!
It sounds to me like your problem is creating batches? If not, please be more specific about what isn't working.
For splitting an array into batches, you may want to have a look at https://php.net/manual/en/function.array-chunk.php, for instance:
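A one-line sketch applied to the get_full_list() results (the payload building is left out):
foreach (array_chunk($contactResults, 1000) as $batch) {
    // build one JSON payload from $batch and POST it to the external service
}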
Also get_list supports retrieving later pages. It is defined like this: function get_list($order_by = "", $where = "", $row_offset = 0, $limit=-1, $max=-1, $show_deleted = 0, $singleSelect=false, $select_fields = array()).
That means for the second page you could specify $row_offset = 1000, for the third page make it 2000, and so on. So basically run a loop that calls get_list() with $limit = 1000 and increases an initial $row_offset of 0 by 1000 after each iteration, until the function returns fewer than 1000 records or null, as sketched below.
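A rough sketch of that loop (untested; get_list() returns its rows under the 'list' key):
$offset = 0;
$limit = 1000;
do {
    // empty order-by and where clauses: fetch everything, page by page
    $response = $bean->get_list('', '', $offset, $limit);
    $batch = $response['list'];
    if (!empty($batch)) {
        // map $batch into your JSON format and POST it here
    }
    $offset += $limit;
} while (count($batch) === $limit);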
Here are some general hints if you run into problems with processing those beans:
If the problem you're having is incomplete data, try loading each bean manually by using its ID. Some Sugar functions don't load all (special) fields by default.
If things seem to just fail for no reason, make sure to check your PHP log for errors. Maybe loading as many beans at once could possibly cause problems with your PHP's max_execution_time or memory_limit.

Nested API calls PHP

I make an API call which returns a list of items. I use a foreach loop to iterate over the JSON data:
foreach ($decoded_results['items'] as $item) {
    $image_link = $item['thumb'];
    $link_url = $item['url'];
    $subject = $item['title'];
}
I need some additional information for each item, and to get it I have to call another endpoint.
Every item in the first response has an ID property that matches an item with the same ID in the second endpoint's response; this ID is needed as a query-string parameter in the second API call.
What would be the best approach to accomplish this? I've looked into curl_multi a little bit, but I'm not sure if it is suitable for the situation.
The best solution would be if your second API has an endpoint that accepts a list of IDs and can return all the information you need in one call. Otherwise you are facing what is known in database management as the N+1 problem; it is the same here and is not an optimal pattern.
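If no batch endpoint exists, curl_multi (which you mentioned) at least lets the per-item requests run in parallel instead of one after another. A minimal sketch, assuming a hypothetical details endpoint that takes the ID as a query parameter:
// Fire all detail requests in parallel with curl_multi
$mh = curl_multi_init();
$handles = [];
foreach ($decoded_results['items'] as $item) {
    $ch = curl_init('https://api.example.com/details?id=' . urlencode($item['id']));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[$item['id']] = $ch;
}
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);
$details = [];
foreach ($handles as $id => $ch) {
    $details[$id] = json_decode(curl_multi_getcontent($ch), true);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);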

Google Big Query + PHP -> How to fetch a large data set without running out of memory

I am trying to run a query in BigQuery/PHP (using the Google PHP SDK) that returns a large dataset (anywhere from 100,000 to 10,000,000 rows).
$bigqueryService = new Google_BigqueryService($client);
$query = new Google_QueryRequest();
$query->setQuery(...);
$jobs = $bigqueryService->jobs;
$response = $jobs->query($project_id, $query);
//query() is a synchronous call that returns the full dataset
The next step is to allow the user to download the result as a CSV file.
The code above will fail when the dataset becomes too large (memory limit).
What are my options for performing this operation with lower memory usage?
(I figured one option is to save the results to another table in BigQuery and then do partial fetches with LIMIT and OFFSET, but a better solution might be available..)
Thanks for the help
You can export your data directly from BigQuery:
https://developers.google.com/bigquery/exporting-data-from-bigquery
You can use PHP to run an API call that does the export (you don't need the bq tool).
You need to set the job's configuration.extract.destinationFormat; see the reference.
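A heavily hedged sketch of such an extract job (the class names follow the generated Google_Service_Bigquery bindings, which may differ from the older Google_BigqueryService client in the question, so verify them against your installed version; the dataset, table, and bucket names are assumptions):
// Configure an extract job that writes the table out to Cloud Storage as CSV
$tableRef = new Google_Service_Bigquery_TableReference();
$tableRef->setProjectId($project_id);
$tableRef->setDatasetId('mydataset');   // hypothetical dataset
$tableRef->setTableId('query_results'); // hypothetical table holding your results

$extract = new Google_Service_Bigquery_JobConfigurationExtract();
$extract->setSourceTable($tableRef);
$extract->setDestinationUris(['gs://my-bucket/export-*.csv']); // hypothetical bucket
$extract->setDestinationFormat('CSV');

$config = new Google_Service_Bigquery_JobConfiguration();
$config->setExtract($extract);
$job = new Google_Service_Bigquery_Job();
$job->setConfiguration($config);
$bigqueryService->jobs->insert($project_id, $job);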
Just to elaborate on Pentium10's answer:
You can export up to a 1 GB file in JSON format.
Then you can read the file line by line, which will minimize the memory used by your application, and json_decode each line.
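Reading a newline-delimited JSON export line by line keeps memory usage flat; a small sketch (the file name is an assumption, and the export must first be downloaded from Cloud Storage):
$fh = fopen('export.json', 'r'); // NEWLINE_DELIMITED_JSON export, downloaded locally
while (($line = fgets($fh)) !== false) {
    $row = json_decode($line, true);
    // write $row out as a CSV line, stream it to the user, etc.
}
fclose($fh);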
The suggestion to export is a good one, I just wanted to mention there is another way.
The query API you are calling (jobs.query()) does not return the full dataset; it just returns a page of data, which is the first 2 MB of the results. You can set the maxResults flag (described here) to limit this to a certain number of rows.
If you get back fewer rows than are in the table, you will get a pageToken field in the response. You can then fetch the remainder with the jobs.getQueryResults() API by providing the job ID (also in the query response) and the page token. This will continue to return new rows and a new page token until you get to the end of your table.
The example here shows code (in Java and in Python) to run a query and fetch the results page by page.
There is also an option in the API to convert directly to CSV by specifying alt='csv' in the URL query string, but I'm not sure how to do this in PHP.
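With the legacy PHP client the question uses, the page-by-page loop might look roughly like this (a sketch only; method names follow the generated Jobs resource, so verify them against your installed client version):
// Run the query, then page through the results with jobs.getQueryResults()
$response = $jobs->query($project_id, $query);
$jobId = $response->getJobReference()->getJobId();
$pageToken = null;
do {
    $results = $jobs->getQueryResults($project_id, $jobId, array(
        'maxResults' => 10000,
        'pageToken' => $pageToken,
    ));
    foreach ($results->getRows() as $row) {
        // stream each row into the CSV output instead of buffering everything
    }
    $pageToken = $results->getPageToken();
} while ($pageToken !== null);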
I am not sure if you are still using PHP, but the answer is:
// Using the newer google/cloud-bigquery client, which pages through results automatically
$options = [
    'maxResults' => 1000,
    'startIndex' => 0
];
$jobConfig = $bigQuery->query($query);
$queryResults = $bigQuery->runQuery($jobConfig, $options);
foreach ($queryResults as $row) {
    // Handle rows
}

Understanding Twitter API's "Cursor" parameter

I don't really get how to use the cursor parameter in Twitter's API, for example here.
Am I supposed to make a new API call for every 100 followers?
I'd love it if someone could provide a PHP example for getting a full list of followers assuming I have more than 100...
Thanks in advance!
You need to pass the cursor value back to the API to get the next "chunk" of followers. Then you take the cursor parameter from that chunk and pass it back to get the next chunk. It is like a "get the next page" mechanism.
Since this question was asked, the Twitter API has changed in many ways.
Cursors are used to paginate API responses with many results. For example, a single API call to obtain followers will retrieve a maximum of 5000 IDs.
If you want to obtain all the followers of a user, you have to make further API calls, each time passing the "next_cursor" number that came in the previous response.
If it's useful, the following Python code will retrieve the followers of a given user.
It will retrieve at most the number of pages set by a constant.
Be careful not to get banned (i.e. don't make more than 150 API calls/hour in anonymous calls).
import requests
import json
import sys

screen_name = sys.argv[1]
max_pages = 5
next_cursor = -1
followers_ids = []

for i in range(0, max_pages):
    url = 'https://api.twitter.com/1/followers/ids.json?screen_name=%s&cursor=%s' % (screen_name, next_cursor)
    content = requests.get(url).content
    data = json.loads(content)
    next_cursor = data['next_cursor']
    followers_ids.extend(data['ids'])

print("%s has %s followers!" % (screen_name, len(followers_ids)))
Although this was asked some time ago, I hope this will be the more accurate answer.
Twitter does not offer info about what next_cursor means, only how cursors work.
It is simply an easy way for Twitter to manage pagination.
«The function is intended to be used linearly, one cursor set at a time per user.» Source
«It's potentially less efficient for you but much more efficient for us.» Twitter Staff
BUT...
Someone asked before, in a now-broken link, «are cursors persistent?», and it seems the answer is "yes".
This means you can save your last cursor before it reaches 0 and continue from it next time.
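Since the question asked for PHP, here is a rough sketch of the same cursor loop (the 1.1 API requires OAuth-signed requests, so treat fetchJson() as a placeholder for your authenticated GET helper):
// Walk the followers/ids cursor until Twitter returns next_cursor = 0
function fetchJson($url) {
    return json_decode(file_get_contents($url), true); // placeholder, no auth
}

$screenName = 'twitterapi'; // hypothetical account
$cursor = -1;               // -1 asks for the first page
$ids = [];
do {
    $url = "https://api.twitter.com/1.1/followers/ids.json?screen_name=$screenName&cursor=$cursor";
    $data = fetchJson($url);
    $ids = array_merge($ids, $data['ids']);
    $cursor = $data['next_cursor']; // becomes 0 once the last page is reached
} while ($cursor != 0);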
Check out http://code.google.com/p/twitter-boot/source/browse/trunk/twitter-bot.php
foreach ($this->twitter->getFollowers(null, 0) as $follower) { // the 0 is the page; the first argument was omitted in the original snippet
    if ($this->twitter->existsFriendship($this->user, $follower['screen_name'])) { // if you already follow this user
        continue; // no need to follow now
    }
    try {
        $this->twitter->createFriendship($follower['screen_name'], true); // if you don't follow yet, follow now
        $this->logger->debug('Following new follower: ' . $follower['screen_name']);
    } catch (Exception $e) {
        $this->logger->debug('Skipping: ' . $follower['screen_name'] . ' ' . $e->getMessage());
    }
}
