Get number of open issues from GitHub using the GitHub API - PHP

I am developing a PHP application where the user inputs a link to any public GitHub repository and the output will be:
(i) Total number of open issues
(ii) Number of open issues opened in the last 24 hours
(iii) Number of open issues opened more than 24 hours ago but less than 7 days ago
(iv) Number of open issues opened more than 7 days ago
Code for printing (i), the total number of open issues, is given below using the GitHub API and PHP cURL, and it is working fine.
But I have no idea how to print the other three values, (ii), (iii) and (iv).
Any help regarding this will be appreciated. Thanks in advance.
<?php
//Test url
$url = "https://api.github.com/repos/anandkgpt03/test";
//Initiate curl
$ch = curl_init();
//Set the url
curl_setopt($ch, CURLOPT_URL,$url);
//Set the User Agent as username
curl_setopt($ch, CURLOPT_USERAGENT, "anandkgpt03");
//Accept the response as json
curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Accept: application/json'));
//Return the response as a string instead of printing it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute
$result=curl_exec($ch);
// Closing
curl_close($ch);
//Decode the json in associative array
$new_result=json_decode($result,true);
echo "Total Number of Open Issues:".$new_result["open_issues_count"];
?>

You can get what you want from the GitHub API.
Follow these steps:
Request the repository's issues endpoint with the state parameter set to open:
https://api.github.com/repos/{owner}/{repo}/issues?state=open
In the JSON results, look at each issue's created_at timestamp.
Compare those values with the current timestamp and group the issues into the three age ranges using any date/time function.
Hope this helps!
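The comparison step described above can be sketched like this (a minimal sketch with a hypothetical bucketIssuesByAge() helper; $issues is assumed to be the decoded JSON array returned by the issues endpoint):

```php
<?php
// Bucket a list of open issues by the age of their created_at timestamp.
// $issues is the decoded JSON array from /repos/{owner}/{repo}/issues;
// $now is the current Unix timestamp, passed in to keep the function testable.
function bucketIssuesByAge(array $issues, int $now): array
{
    $buckets = ['last24h' => 0, 'last7d' => 0, 'older' => 0];
    foreach ($issues as $issue) {
        // created_at is ISO 8601, e.g. "2023-01-15T10:30:00Z"
        $ageSeconds = $now - strtotime($issue['created_at']);
        if ($ageSeconds < 24 * 3600) {
            $buckets['last24h']++;
        } elseif ($ageSeconds < 7 * 24 * 3600) {
            $buckets['last7d']++;
        } else {
            $buckets['older']++;
        }
    }
    return $buckets;
}
```

A single pass over one fetched page of issues is enough to fill all three buckets, which avoids making one API request per time range.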

I used the since parameter available in the GitHub API, which returns only issues updated at or after the given time; that makes it possible to count the open issues opened after any point in time. (Strictly speaking, since filters by update time, so an older issue that was recently updated is also included.)
This page is helpful for reading more about it: https://developer.github.com/v3/issues/
Below is the working code that answers my question.
<html>
<head>
<title></title>
</head>
<body>
<form action="" method="POST">
<input type="text" name="url" placeholder="Full URL of GitHub repository" size="60">
<input type="submit" name="submitButton">
</form>
</body>
</html>
<?php
if(isset($_POST['submitButton']))
{
//Example-> https://github.com/Shippable/support/issues
$input_url = $_POST['url'];
//Break the input url in array format
$input_url_array = explode('/',$input_url);
//Validate the input url
if(strcmp($input_url_array[0],"https:")||strcmp($input_url_array[1],"")||strcmp($input_url_array[2],"github.com")||empty($input_url_array[3])||empty($input_url_array[4]))
{
die("</br>Invalid Url !!! Url should be in format <b>https://github.com/{org_name or username}/{repo_name}/</b><br>");
}
//url for the GitHub API; $input_url_array[3] contains the organisation or username, $input_url_array[4] contains the repository name
$url = "https://api.github.com/repos/".$input_url_array[3]."/".$input_url_array[4];
//call the function and receive the result in associative array format
$result = curlRequestOnGitApi($url);
//Get total no of open issues using the $result array
$total_open_issues = $result["open_issues_count"];
echo "<br>Total Open Issues:<b>".$total_open_issues."</b><br>";
//Date and time 24 hours ago in ISO 8601 format (gmdate with an escaped \Z; an unescaped Z in date() would print the timezone offset in seconds)
$time_last24hr = gmdate('Y-m-d\TH:i:s\Z', strtotime('-1 day', time()));
//url for the GitHub API with the since parameter set to 24 hours ago; returns only issues updated at or after this time (per_page=100 raises the default page size of 30 to the API maximum)
$url = "https://api.github.com/repos/".$input_url_array[3]."/".$input_url_array[4]."/issues?per_page=100&since=".$time_last24hr;
//call the function and receive the result in associative array format
$result = curlRequestOnGitApi($url);
//Get no of open issues that were opened in last 24 hours
$issues_last24hr = count($result);
echo "Number of open issues that were opened in the last 24 hours:<b>".$issues_last24hr."</b><br>";
//Date and time 7 days ago in ISO 8601 format
$time_7daysago = gmdate('Y-m-d\TH:i:s\Z', strtotime('-7 day', time()));
//url for the GitHub API with the since parameter set to 7 days ago; returns only issues updated at or after this time (per_page=100 raises the default page size of 30 to the API maximum)
$url = "https://api.github.com/repos/".$input_url_array[3]."/".$input_url_array[4]."/issues?per_page=100&since=".$time_7daysago;
//call the function and receive the result in associative array format
$result = curlRequestOnGitApi($url);
//Get number of open issues opened in the last 7 days
$issues_last7days = count($result);
echo "Number of open issues that were opened more than 24 hours ago but less than 7 days ago:<b>".($issues_last7days-$issues_last24hr)."</b><br>";
echo "Number of open issues that were opened more than 7 days ago:<b>".($total_open_issues-$issues_last7days)."</b><br>";
}
function curlRequestOnGitApi($url)
{
$ch = curl_init();
//Set the url
curl_setopt($ch, CURLOPT_URL,$url);
//Set the User Agent as username
curl_setopt($ch, CURLOPT_USERAGENT, "anyusername");
//Accept the response as json
curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'Accept: application/json'));
//Return the response as a string instead of printing it
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute
$result=curl_exec($ch);
// Closing
curl_close($ch);
//Decode the json in array
$new_result=json_decode($result,true);
//Return array
return $new_result;
}
?>

Related

Problems to extract data from an external web page in PHP

I have a script that is responsible for extracting names of people from an external web page by passing an ID as a parameter.
Note: the information provided by this external website is publicly accessible; everyone can check this data.
This is the code that I created:
function names($ids)
{
$url = 'https://www.exampledomain.com/es/query_data_example?name=&id='.$ids;
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HTTPHEADER,array("Accept-Lenguage: es-es,es"));
curl_setopt($ch, CURLOPT_TIMEOUT,10);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION,1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$html = curl_exec($ch);
$error = curl_error($ch);
curl_close($ch);
preg_match_all('/<tr class="odd"><td><a href="(.*?)">/',$html ,$matches);
if (count($matches[1]) == 0)
{
$result = "";
}
else if (count($matches[1]) == 1)
{
$result = $matches[1][0];
$result = str_replace('/es/person/','', $result);
$result = substr($result, 0,-12);
$result = str_replace('-', ' ', $result);
$result = ucwords($result);
}
return $result;
}
Note 2: in the variable $url I have placed an example URL; it is not the real one, just a pattern that matches the original URL I use in my code.
I make the call to the function and show the result with an echo:
$info = names('8476756848');
echo $info;
and everything is perfect: I extract the name of the person that ID belongs to.
The problem arises when I try to call that function inside a for (or while) loop, since I have an array with many IDs:
$myids = ["2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252", "2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252", "2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252"];
//Note: These data are for example only, they are not the real ids.
$size = count($myids);
for ($i=0; $i < $size; $i++)
{
//sleep(20);
$data = names($myids[$i]);
echo "ID IS: " . $myids[$i] . "<br> THE NAME IS: " . $data . "<br><br>";
}
The result is something like this:
ID IS: 258971534
THE NAME IS:
ID IS: 2883119946
THE NAME IS:
and so on. That is, it shows me the IDs, but the names function does not extract the names.
It shows the whole list of IDs but no names at all, as if the names function did not work.
If I put only 3 IDs in the array and run the for loop again, it gives me the names of those 3 IDs, because they are few. But when the array contains many IDs, the function returns no names, as if the multiple requests were being rejected or limited; I do not know.
I have placed set_time_limit(0) at the beginning of my PHP file to avoid the 30-second execution time limit, because I thought that was why the function was not working, but that did not fix it. I also tried placing sleep(20) inside the loop, before calling the names function, in case I was making too many requests too quickly to the web page, but that did not work either.
This script is already in production on a server I have hired, and this problem prevents it from working properly.
Note: there may be arrays with more than 2,000 IDs. I am even preparing a script that will read .txt and .csv files containing more than 10,000 IDs, extract each ID from the file, call the names function, and save the IDs and names into a MySQL database table.
Does anyone know why the names are not extracted when there are many IDs, while for a few, for example 1 or 10, the function does work?
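Without seeing the server's responses it is hard to say why the later requests come back empty, but one way to find out is to log the HTTP status and cURL error of each request instead of silently returning an empty name. A diagnostic sketch (the wrapper name is made up for illustration):

```php
<?php
// Diagnostic wrapper: fetch a URL and report the HTTP status and any
// cURL error, so that failing requests (timeouts, rate limiting,
// blocked IPs) become visible instead of just producing empty names.
function fetchWithDiagnostics(string $url): array
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    $body   = curl_exec($ch);                              // false on failure
    $status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);   // 0 if no response
    $error  = curl_error($ch);                             // '' if no error
    curl_close($ch);
    return ['body' => $body, 'status' => $status, 'error' => $error];
}
```

If the status turns out to be 429 or 403 after roughly the same number of requests each run, the remote site is throttling or blocking the scraper, which would explain why small batches work and large ones do not.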

PHP Put request (update ECWID e-commerce order with tracking)

I am working with the Ecwid API and am now moving towards updating my orders from our fulfillment site with tracking info and shipping status.
The fulfillment operation is going to export an XML file of the order update.
I have first created the basic script to update an order, and this works fine.
// Post Tracking number and change Status to shipped
// trackingNumber : ""
// fulfillmentStatus : "SHIPPED"
$storeID = "";
$myToken = "";
$data = array("trackingNumber" => "9405503699300250719362", "fulfillmentStatus" => "SHIPPED", "orderNumber" => "7074");
$data_string = json_encode($data);
$url = "https://app.ecwid.com/api/v3/".urlencode($storeID)."/orders/".$data['orderNumber']."?token=".$myToken;
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json','Content-Length: ' . strlen($data_string)));
$response = curl_exec($ch);
curl_close($ch);
I've also created the script to pull in the xml file and convert to json to 'put' the data over to the shopping cart.
<?php
// The file data.xml contains an XML document with a root element
// and at least an element /[root]/title.
if (file_exists('data.xml')) {
$xml = simplexml_load_file('data.xml');
print_r($xml);
} else {
exit('Failed to open data.xml.');
}
$data_string = json_encode($xml);
echo '<br><br>';
echo "<pre>";
print_r($data_string);
?>
Now this is where I am lost: putting the two parts together so that the script loops through the XML file's (JSON-encoded) content with multiple orderNumber(s) and updates the trackingNumber and fulfillmentStatus of each order.
Vitaly from Ecwid team here.
I see that you want to update orders in your Ecwid store via API from an XML file.
So the whole process is:
get details of XML file
parse data in it, find out the total number of orders there
form a loop for each order in the file
make a request to Ecwid API to update order in each loop
In your second code snippet, I see print_r($data_string); - what does it print to the screen?
I imagine the next steps would be:
Manage to correctly find order details in the XML file (order number, tracking number) while in the loop
Make each loop update a specific order in the store
For step 1, I suggest saving the data from the XML file in a format convenient for you in PHP, e.g. an object or array.
For example, if it was an array, it will be something like this:
Array = [recordArray 1, recordArray 2, recordArray 3]
recordArray = [ orderNumber, trackingNumber ]
For step 2: each loop iteration will go through a recordArray in the Array and get the necessary orderNumber and trackingNumber for the request.
Then the request will use this data to update an order in your Ecwid store, just as shown in the code snippet above. However, the values 9405503699300250719362 and 7074 will be dynamic and different for each loop iteration.
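Sketching those two steps in code (the store ID, token, and the $orders array are placeholders; the endpoint shape follows the PUT request from the question):

```php
<?php
// Hypothetical: $orders has already been parsed from the XML file into
// an array of ['orderNumber' => ..., 'trackingNumber' => ...] records.
function buildOrderUpdateUrl(string $storeId, string $orderNumber, string $token): string
{
    return "https://app.ecwid.com/api/v3/" . urlencode($storeId)
         . "/orders/" . urlencode($orderNumber) . "?token=" . urlencode($token);
}

function updateOrders(array $orders, string $storeId, string $token): void
{
    foreach ($orders as $order) {
        // Each iteration sends one PUT request, mirroring the snippet
        // from the question but with per-order dynamic values.
        $payload = json_encode([
            'trackingNumber'    => $order['trackingNumber'],
            'fulfillmentStatus' => 'SHIPPED',
        ]);
        $ch = curl_init(buildOrderUpdateUrl($storeId, $order['orderNumber'], $token));
        curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
        curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_HTTPHEADER, [
            'Content-Type: application/json',
            'Content-Length: ' . strlen($payload),
        ]);
        curl_exec($ch);
        curl_close($ch);
    }
}
```

This is only a sketch: production code should also check the HTTP status of each response and handle failures per order.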
If you have any questions, please feel free to contact me: http://developers.ecwid.com/contact
Thank you.

php timeout with file_get_html

I have been trying to fetch some data from the Wikia website using the simple_html_dom library for PHP. Basically, I use the Wikia API to render the page as HTML and extract data from there. After extracting, I pump the data into a MySQL database. My problem is that I usually pull 300 records, and I get stuck at 93 records with file_get_html returning null, which causes my find() call to fail. I am not sure why it stops at 93 records, but I have tried various solutions such as:
ini_set('default_socket_timeout', 120);
set_time_limit(120);
Basically I have to access a Wikia page 300 times to get those 300 records, but mostly I only manage 93 before file_get_html returns null. Any idea how I can tackle this issue?
I have tested cURL as well and have the same issue.
function test($url){
$ch=curl_init();
$timeout=5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$result=curl_exec($ch);
curl_close($ch);
return $result;
}
$baseurl = 'http://xxxx.wikia.com/index.php?';
foreach($resultset_wiki as $name){
// Create DOM from URL or file
$options = array("action"=>"render","title"=>$name['name']);
$baseurl .= http_build_query($options,'','&');
$html = file_get_html($baseurl);
if($html === FALSE) {
echo "issue here";
}
// this code for cURL but commented for testing with file_get_html instead
$a = test($baseurl);
$html = new simple_html_dom();
$html->load($a);
// find div stuff here and mysql data pumping here.
}
$resultset_wiki is an array with the list of titles to fetch from Wikia; it is loaded from the database before performing the search.
In practice I get this type of error:
Call to a member function find() on a non-object in
I answered my own issue: it turned out to be the URL I was building, and I switched to cURL with POST, sending the action and title parameters instead.
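For reference, one plausible culprit in the snippet above is the $baseurl .= http_build_query(...) line: .= appends a new query string to the already-extended URL on every iteration, so only the first URL is valid. A sketch that builds the URL fresh for each title (assuming the same action/title parameters):

```php
<?php
// Build the render URL from scratch for each title instead of
// appending to $baseurl with .=, which stacks every previous
// query string onto the URL.
function buildRenderUrl(string $base, string $title): string
{
    return $base . '?' . http_build_query([
        'action' => 'render',
        'title'  => $title,
    ]);
}

// Usage inside the loop (hypothetical, matching the question's code):
// foreach ($resultset_wiki as $name) {
//     $html = file_get_html(buildRenderUrl('http://xxxx.wikia.com/index.php', $name['name']));
//     ...
// }
```

http_build_query also URL-encodes the title, which matters for page names containing spaces or special characters.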

Passing updated value to function (twitter api max_id problems)

I am trying to work with the Twitter search API. I found a PHP library that does authentication with app-only auth, and I added the max_id argument to it. I would like to run 450 queries per 15 minutes (the rate limit), but I am not sure how to pass the max_id along. The idea is to run the search first with the default value of 0, take the max_id from the API's response, and run the function again with the retrieved max_id value, 450 times. I tried a few things, and I can get the max_id result after calling the function, but I don't know how to pass it back and call the function again with the updated value.
<?php
function search_for_a_term($bearer_token, $query, $result_type='mixed', $count='15', $max_id='0'){
$url = "https://api.twitter.com/1.1/search/tweets.json"; // base url
$q = $query; // query term
$formed_url ='?q='.$q; // fully formed url
if($result_type!='mixed'){$formed_url = $formed_url.'&result_type='.$result_type;} // result type - mixed(default), recent, popular
if($count!='15'){$formed_url = $formed_url.'&count='.$count;} // results per page - defaulted to 15
$formed_url = $formed_url.'&include_entities=true'; // makes sure the entities are included
if($max_id!='0'){$formed_url=$formed_url.'&max_id='.$max_id;}
$headers = array(
"GET /1.1/search/tweets.json".$formed_url." HTTP/1.1",
"Host: api.twitter.com",
"User-Agent: jonhurlock Twitter Application-only OAuth App v.1",
"Authorization: Bearer ".$bearer_token."",
);
$ch = curl_init(); // setup a curl
curl_setopt($ch, CURLOPT_URL,$url.$formed_url); // set url to send to
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); // set custom headers
ob_start(); // start ouput buffering
$output = curl_exec ($ch); // execute the curl
$retrievedhtml = ob_get_contents(); // grab the retreived html
ob_end_clean(); //End buffering and clean output
curl_close($ch); // close the curl
$result= json_decode($retrievedhtml, true);
return $result;
}
$results=search_for_a_term("mybearertoken", "mysearchterm");
/* would like to get all kinds of info from here and put it into a mysql database */
$max_id=$results["search_metadata"]["max_id_str"];
print $max_id; //this gives me the max_id for that page
?>
I know there must be existing libraries that do this, but I can't use any of them, since none of them have been updated for app-only auth yet.
EDIT: I put a loop at the beginning of the script to run, e.g., 3 times, with a print statement to see what happens, but it only prints the same max_id each time instead of three different ones.
do{
$result = search_for_a_term("mybearertoken", "searchterm", $max_id);
$max_id = $result["search_metadata"]["max_id_str"];
$i++;
print ' '.$max_id.' ';
}while($i < 3);
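One thing to check in the EDIT snippet: search_for_a_term() declares $max_id as its fifth parameter, so a three-argument call passes the id where $result_type is expected, and the real max_id never reaches the request. A standalone sketch of the loop with the arguments spelled out (the API call is replaced by a stub so the argument order itself can be demonstrated):

```php
<?php
// Stub standing in for the question's search_for_a_term(), which calls
// the Twitter API; here it just returns a fake, increasing max_id so
// the loop below can be run without network access.
function search_for_a_term($bearer_token, $query, $result_type = 'mixed',
                           $count = '15', $max_id = '0')
{
    return ['search_metadata' => ['max_id_str' => strval((int)$max_id + 100)]];
}

// The point: max_id is the FIFTH parameter, so it must be passed in the
// fifth position; also initialize $max_id and the loop counter.
$max_id = '0';
$seen = [];
for ($i = 0; $i < 3; $i++) {
    $result = search_for_a_term('mybearertoken', 'searchterm', 'mixed', '15', $max_id);
    $max_id = $result['search_metadata']['max_id_str'];
    $seen[] = $max_id;
}
```

With the argument in the right position, each iteration picks up the max_id from the previous response instead of repeating the same page.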

Looping through query links via CURL and merging result arrays in PHP

The following code is supposed to search for a term on Twitter, loop through all the result pages, and return one big array with the results from each page appended at each step.
foreach($search_terms as $term){
//populate the obj array by going through all pages
//set up connection
$ch = curl_init();
// go through all pages and save in an object array
for($j=1; $j<16;$j++){
$url ='http://search.twitter.com/search.json?q=' . $term .'&rpp=100&page='.$j.'';
curl_setopt($ch, CURLOPT_URL,$url);
curl_setopt($ch,CURLOPT_RETURNTRANSFER,1);
$var[$j] = curl_exec($ch);
curl_close($ch);
$obj = array_merge((array)$obj,(array)json_decode($var[$j], true));
}
}
It doesn't quite work, though, and I am getting these errors:
curl_setopt(): 3 is not a valid cURL handle resource
curl_exec(): 3 is not a valid cURL handle resource
curl_close(): 3 is not a valid cURL handle resource
...and this is repeated all the way from 3 to 7:
curl_setopt(): 7 is not a valid cURL handle resource
curl_exec(): 7 is not a valid cURL handle resource
curl_close(): 7 is not a valid cURL handle resource
//set up connection
$ch = curl_init();
// go through all pages and save in an object array
for($j=1; $j<16;$j++){
You need the call to curl_init() inside your loop since you close it at the end of each iteration.
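Alternatively, the handle can be opened once and reused across iterations, with curl_close() moved after the loop; a sketch of that approach (the old search.twitter.com endpoint no longer exists, so the URLs here are purely illustrative):

```php
<?php
// Reuse one cURL handle for a list of page URLs: open it once,
// change only CURLOPT_URL per iteration, and close it after the loop.
// This avoids the "not a valid cURL handle resource" errors caused by
// closing the handle inside the loop and also skips repeated setup.
function fetchPages(array $urls): array
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $results = [];
    foreach ($urls as $url) {
        curl_setopt($ch, CURLOPT_URL, $url);
        $results[] = curl_exec($ch);   // handle stays valid: no curl_close() here
    }
    curl_close($ch);
    return $results;
}
```

Reusing the handle also lets cURL keep the underlying connection alive between requests to the same host, which is slightly faster than re-initializing on every iteration.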
