Information
I've started using the Asana API to build our own task overview in our CMS. I found an API wrapper on GitHub which helps me a great deal with this.
As I've mentioned in an earlier question, I wanted to get all tasks for a certain user. I've managed to do this using the code below.
public function user($id)
{
    if (isset($_SERVER['HTTP_X_REQUESTED_WITH']) &&
        ($_SERVER['HTTP_X_REQUESTED_WITH'] == 'XMLHttpRequest')) {
        $this->layout = 'ajax';
    }

    $asana = new Asana(array(
        'apiKey' => 'xxxxxxxxxxxxxxxxxxxx'
    ));

    $results = json_decode($asana->getTasksByFilter(array(
        'assignee' => $id,
        'workspace' => 'xxxxxxxxxx'
    )));

    if ($asana->responseCode != '200' || is_null($results)) {
        throw new \Exception('Error while trying to connect to Asana, response code: ' . $asana->responseCode, 1);
    }

    $tasks = array();
    foreach ($results->data as $task) {
        $result = json_decode($asana->getTaskTags($task->id));
        $task->tags = $result->data;
        $tasks[] = $task;
    }

    $user = json_decode($asana->getUserInfo($id));

    if ($asana->responseCode != '200' || is_null($user)) {
        throw new \Exception('Error while trying to connect to Asana, response code: ' . $asana->responseCode, 1);
    }

    $this->render("tasks", array(
        'tasks' => $tasks,
        'title' => 'Tasks for '.$user->data->name
    ));
}
The problem
The above works fine, except for one thing. It is slower than a booting Windows Vista machine (very slow :) ). If I include the tags, it can take up to 60 seconds before I get all results. If I do not include the tags it takes about 5 seconds which is still way too long. Now, I hope I am not the first one ever to have used the Asana API and that some of you might have experienced the same problem in the past.
The API itself could definitely be faster, and we have some long-term plans around how to improve responsiveness, but in the near-to-mid-term the API is probably going to remain the same basic speed.
The trick to not spending a lot of time accessing the API is generally to reduce the number of requests you make and only request the data you need. Sometimes, API clients don't make this easy, and I'm not familiar with the PHP client specifically, but I can give an example of how this would work in general with just the plain HTTP queries.
So right now you're doing the following in pseudocode:
GET /tasks?assignee=...&workspace=...
foreach task
GET /task/.../tags
GET /users/...
So if the user has 20 tasks (and real users typically have a lot more than 20 tasks; if you only care about incomplete tasks and ones completed in the last, say, week, you could use ?completed_since=<DATE_ONE_WEEK_AGO>), you've made 22 requests. And because it's synchronous, you wait a few seconds for each and every one of those requests before you start the next one.
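In the same pseudocode style, that filter would just be added to the first request (keeping the date placeholder):
GET /tasks?assignee=...&workspace=...&completed_since=<DATE_ONE_WEEK_AGO>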
Fortunately, the API has a parameter called ?opt_fields that allows you to specify the exact data you need. For example: let's suppose that for each task, all you really want to know is the task ID, the task name, the tags it has and their names. You could then request:
GET /tasks?assignee=...&workspace=...&opt_fields=name,tags.name
(Each resource included always brings its id field)
This would allow you to get, in a single HTTP request, all the data you're after. (Well, the user lookup is still separate, but at least that's just 1 extra request instead of N). For more information on opt_fields, check out the documentation on Input/Output Options.
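I don't know how to pass opt_fields through the PHP wrapper you're using, but with plain cURL in PHP that single request could look roughly like this (untested sketch; the workspace id and API key are placeholders, and API-key auth is just HTTP Basic with the key as the username and an empty password):
$url = 'https://app.asana.com/api/1.0/tasks'
     . '?assignee=' . urlencode($id)
     . '&workspace=xxxxxxxxxx'
     . '&opt_fields=name,tags.name';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, 'your-api-key:'); // API key as username, empty password
$response = json_decode(curl_exec($ch));
curl_close($ch);

// $response->data is a list of tasks, each with ->id, ->name and ->tags
// (each tag in turn has ->id and ->name)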
Hope that helps!
Related
I'm building a support chat application. It's built on Laravel Echo through Pusher.js.
There are two sides - support/admin and client. When a client starts a chat, support can accept it and they can chat together. It's working like it should, but there is one thing: when the client goes offline (closes the browser, leaves the site, loses the internet connection...) it should wait a few seconds (to make sure it was not a mistake) and then close the chat. So when the client comes back in about an hour, there would not be any active chat.
I'm checking both sides' online status with a presence channel, using this simple code:
this.presence = Echo.join('chat');

this.presence
    .listen('.pusher:subscription_error', (result) => {
        if(this.debug) {
            console.log(result);
        }
    })
    .listen('.pusher:member_added', (result) => {
        if(!!result.info.is_admin) {
            this.presence_users.push(result.info);
        }
    })
    .listen('.pusher:member_removed', (result) => {
        let found = _.find(this.presence_users, ['id', result.id]);
        let index = this.presence_users.indexOf(found);
        this.presence_users.splice(index, 1);
    })
    .here((result) => {
        this.presence_users = _.filter(result, ['is_admin', true]);
    });
On the support side it's a little different, but still the same logic (also, don't worry - the user's id is not the id from the database, but a unique md5 identifier).
The presence channel is working well. But I can't find anywhere on the internet how to set up a connection_timeout URL. I imagine it as a URL where Pusher.js would post some data - my custom id field, for example - when the user goes offline or the connection is lost. As I noted at the start, it should have some "cooldown" in case the user goes offline by mistake. This would help to close the chat when the user is not available to respond.
Do you have any experience with a similar problem? If so, how did you solve it? Or - is it even possible to solve it with Pusher.js?
Well, 7 days have gone by with no answer here, so I think it's not possible the way I described. But there can be a "hacky" way:
Create a CRON job which runs every 10 minutes
The script will get all chats from the database flagged active or pending
When a chat has no recent messages (nothing in the last 5-10 minutes), check whether the users are online
Get the users from the presence channel:
$response = $pusher->get('/channels/chat/users');
if ($response['status'] == 200) {
    $users = json_decode($response['body'], true)['users'];
}
If at least one of them is online, skip; otherwise wait a short time (5 seconds, just to be sure), check the online status again and, if they are still offline, close the chat.
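A rough, untested sketch of what that script could look like (the Chat model, its columns and the helper are placeholders; the Pusher call is the same one as above):

use Carbon\Carbon;

// True if anybody is currently subscribed to the presence channel
function hasOnlineMembers($pusher)
{
    $response = $pusher->get('/channels/chat/users');
    if ($response['status'] == 200) {
        $users = json_decode($response['body'], true)['users'];
        return count($users) > 0;
    }
    return false;
}

$pusher = new Pusher\Pusher($app_key, $app_secret, $app_id);

// All chats that are still open
$chats = Chat::whereIn('status', ['active', 'pending'])->get();

foreach ($chats as $chat) {
    // Skip chats that still have recent activity
    if ($chat->messages()->where('created_at', '>=', Carbon::now()->subMinutes(10))->exists()) {
        continue;
    }

    if (hasOnlineMembers($pusher)) {
        continue;
    }

    // Short cooldown, then check once more before closing
    sleep(5);
    if (!hasOnlineMembers($pusher)) {
        $chat->update(['status' => 'closed']);
    }
}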
Haven't tested it, since it is not required yet. Maybe someone will find this helpful.
Is there a way to sideload API calls (make multiple API calls at the same time) to lessen the impact on API call limits, using PHP?
For example, we're using the EchoNest API to gather information on musicians. When the artist page on our site is accessed, we run multiple functions which each call a different API method that returns the specific data that we need. Everything works and looks awesome!
Here are a few (abbreviated) methods that we're calling that each count against our call limit:
function artistPageNews() {
    $artist_name = $_GET['artistname'];
    $results = iTunes::search($artist_name, array(
        'entity' => 'musicVideo'
    ))->results;
    $echonest_api_key = "OUR_API_KEY";

    // News Method
    $echonest_news = 'http://developer.echonest.com/api/v4/artist/news?api_key='.$echonest_api_key.'&name='.str_replace(" ", "+", $artist_name).'&format=json&results=2&start=0';
    $echonest_news_json = file_get_contents($echonest_news);
    $news_json = json_decode($echonest_news_json);
    $news_entry = $news_json->response->news;

    foreach ($news_entry as $news) {
        // Do Magic Stuff Here...
    }
}

function artistPageVideos() {
    $artist_name = $_GET['artistname'];
    $results = iTunes::search($artist_name, array(
        'entity' => 'musicVideo'
    ))->results;
    $echonest_api_key = "OUR_API_KEY";

    // Videos Method
    $echonest_videos = 'http://developer.echonest.com/api/v4/artist/video?api_key='.$echonest_api_key.'&name='.str_replace(" ", "+", $artist_name).'&format=json&results=6&start=0';
    $echonest_videos_json = file_get_contents($echonest_videos);
    $videos_json = json_decode($echonest_videos_json);
    $videos_entry = $videos_json->response->video;

    foreach ($videos_entry as $video) {
        // Do More Magic Stuff Here...
    }
}
We have maybe about 7 (or more) of these methods that are called on each Artist page load. Obviously this can mean trouble when lots of people are viewing the artist pages every hour.
I understand that there's a way to store the more static information in a database and use that info instead of calling the API methods on every request. I am currently exploring that option. But I also read here that there may be a way to 'sideload' the API calls so that you can make multiple requests at one time. In that example, they're using cURL. I'm trying to do this with PHP.
curl https://{subdomain}.zendesk.com/api/v2/help_center/fr/articles.json?include=users \
-v -u {email_address}:{password}
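If "multiple requests at one time" simply means firing the HTTP calls in parallel from PHP, I imagine something like curl_multi would do it (untested sketch using two of the EchoNest URLs from above; note it still counts every call against the limit, it just stops them from running one after another):

$urls = array(
    'news'   => 'http://developer.echonest.com/api/v4/artist/news?api_key=' . $echonest_api_key . '&name=' . urlencode($artist_name) . '&format=json&results=2&start=0',
    'videos' => 'http://developer.echonest.com/api/v4/artist/video?api_key=' . $echonest_api_key . '&name=' . urlencode($artist_name) . '&format=json&results=6&start=0',
);

$multi   = curl_multi_init();
$handles = array();
foreach ($urls as $key => $url) {
    $handles[$key] = curl_init($url);
    curl_setopt($handles[$key], CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($multi, $handles[$key]);
}

// Run all handles until every transfer has finished
$running = null;
do {
    curl_multi_exec($multi, $running);
    curl_multi_select($multi);
} while ($running > 0);

// Collect and decode the responses, then clean up
$responses = array();
foreach ($handles as $key => $handle) {
    $responses[$key] = json_decode(curl_multi_getcontent($handle));
    curl_multi_remove_handle($multi, $handle);
    curl_close($handle);
}
curl_multi_close($multi);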
Can anyone help me get started with this or perhaps recommend a better way to do this, such as storing this information into a database or table and pulling from that instead of calling the API every time?
Thanks in advance.
We're interested in using Google Custom Search / Google in our project, mostly because it's amazing at handling conjugation and correcting misspelled words.
We know that it can return data in JSON or XML, and we're fine with that. But we can't find an answer to this question:
Can we use that conjugation and spelling correction to search our own database/API?
If you typed drnks with no alcohol, it would automatically correct it to drinks with no alcohol and then search our database like this:
http://example.com?search=drinks&alcohol=0, and it could respond like this:
{
    "coke": {
        "alcohol": 0,
        "calories": 300,
        "taste": "awesome"
    },
    "pepsi": {
        "alcohol": 0,
        "calories": 300,
        "taste": "meh"
    }
}
And then it would return these two results, in some form.
Solutions using the paid version are fine.
If it's possible to do this, could you provide me with a simple example?
Google provides a REST API for its Custom Search; you can query it from your server to determine whether there is a better spelling for the search terms, and then use that to query your internal database.
In my code I'm using Guzzle, an HTTP client library, to avoid suffering through cURL's ugly and verbose code, but feel free to use cURL if you really need to.
// Composer's autoloader to load the REST client library
require "vendor/autoload.php";
$api_key = "..."; // Google API key, looks like random text
$search_engine = "..."; // search engine ID, looks like "<numbers>:<text>"
$query = "drnks with no alcohol"; // the original search query
// REST client object with some defaults,
// avoids specifying them each time we make a request
$client = new GuzzleHttp\Client([
    "base_url" => "https://www.googleapis.com",
    "defaults" => [
        "query" => [
            "key" => $api_key,
            "cx" => $search_engine,
            "fields" => "spelling(correctedQuery)"
        ]
    ]
]);

try {
    // the actual request, with the search query
    $resp = $client->get("/customsearch/v1", ["query" => ["q" => $query]])->json();

    // whether Google suggests an alternative spelling
    if (isset($resp["spelling"]["correctedQuery"])) {
        $correctedQuery = $resp["spelling"]["correctedQuery"];
        // now use that corrected spelling to query your internal DB
        // or do anything else really, the query is yours now
        echo $correctedQuery;
    } else {
        // Google doesn't have any corrections, use the original query then
        echo "No corrections found";
    }
} catch (GuzzleHttp\Exception\TransferException $e) {
    // Something bad happened, log the exception but act as if
    // nothing is wrong and process the user's original query
    echo "Something bad happened";
}
Here are some instructions to obtain your API key, and the custom search engine ID can be obtained from the control panel.
If you look carefully, you can see I've specified the fields query parameter to request a partial response containing only the eventual spelling suggestions, to (hopefully) get better performance, as we don't need anything else from the response (but feel free to change or remove it if you do need the complete response).
Note that Google has no clue about what's in your database, so the spelling corrections will only be based on the public data Google has about your website. I don't think there is a way to make Google aware of your internal DB, and it probably wouldn't be a good idea anyway.
Finally, make sure to handle rate-limits and API failures gracefully by still giving the user the possibility to search using their original query (just act like nothing wrong happened, and only log the error for later review).
In my web application, many users post data and then request a calculated result. Since the calculation is time-consuming, I want to cache the result of the first user and let other users read the cache.
if (Cache::exist($dataId)) {
    return Cache::read($dataId);
}

$result = getDataFromDatabaseAndCalcResult($dataId);
Cache::write($dataId, $result);

return $result;
However, when I tried this, more than 100 requests came in simultaneously and most of them found no cache, so they all called the getDataFromDatabaseAndCalcResult method. The server ran out of CPU.
Is there a way to make sure only the first user calls getDataFromDatabaseAndCalcResult? Should I implement some job queue or MySQL locking, or something else?
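For example, I wonder whether something as simple as an exclusive lock around the calculation is the right direction (naive, untested sketch reusing the Cache and getDataFromDatabaseAndCalcResult calls from above; the lock file path is made up):

$lockFile = fopen('/tmp/calc_' . $dataId . '.lock', 'c');

if (Cache::exist($dataId)) {
    return Cache::read($dataId);
}

// Only one request at a time gets past this line; the others block here
flock($lockFile, LOCK_EX);

// Re-check: another request may have filled the cache while we were waiting
if (Cache::exist($dataId)) {
    flock($lockFile, LOCK_UN);
    return Cache::read($dataId);
}

$result = getDataFromDatabaseAndCalcResult($dataId);
Cache::write($dataId, $result);

flock($lockFile, LOCK_UN);
return $result;

Although I guess with multiple web servers a file lock would not be enough, and something shared (Redis, or MySQL's GET_LOCK()) would be needed instead.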
Apologies, since I may not know the terminology of the Salesforce API. I just started programming a connector to interact with Salesforce and I am stuck.
I have a requirement where, each time a new entry is added to the Leads section, I have to retrieve a couple of fields (First Name and Product Code) and pass them to a different piece of software that uses PHP.
<?php
require "conf/config_cleverbridge_connector.inc.php";
require "include/lc_connector.inc.php";

// Start of Main program

// Read basic parameters
if ($LC_Username === "")
{
    $LC_Username = readParam("USER");
}
if ($LC_Password === "")
{
    $LC_Password = readParam("PASSWORD");
}

$orderID = "";
$customerID = substr(readParam("PURCHASE_ID"), 0, 10);
$comment = readParam("EMAIL")."-".readParam("PURCHASE_ID");

// Create product array
$products = array();
$itemID = readParam("INTERNAL_PRODUCT_ID");
$quantity = 1;
if (!ONCE_PER_PURCHASED_QUANTITY)
{
    $quantity = readParam("QUANTITY");
}

// Add product to the product array
$products[] = array(
    "itemIdentification" => $itemID,
    "quantity" => $quantity,
);

// Create the order
$order = array(
    "orderIdentification" => $orderID,
    "customerIdentification" => $customerID,
    "comment" => $comment,
    "product" => $products,
);

// Calling webservice
$ticket = doOrder($LC_Username, $LC_Password, $order);
if ($ticket)
{
    Header("HTTP/1.1 200 Ok");
    Header("Content-Type: text/plain");
    print TICKET_URL.$result->order->ticketIdentification;
    exit;
}
else
{
    $error = "No result from WSConnector_doOrder";
    trigger_error($error, E_USER_WARNING);
    printError(500, "Internal Error.");
    exit;
}

// End of Main program
?>
Now this is the code that I got and have to work with. And this is hosted on a different remote server.
I am very, very new to Salesforce and I am not really sure how to trigger a call to this PHP file on a remote site.
The basic idea is:
1. A new entry in Leads is created.
2. Immediately, 2 fields (custID and prodID) are sent to the PHP file I have pasted above (some of the variable names are different).
3. The PHP file does its processing and sends 2 fields back to Salesforce.
Any help or guidance is appreciated. Even links to read up on is okay as I am completely clueless.
PS: I have another example where it makes use of JSON Messages if that may make any difference.
Thanks
I'll repost the links from my comment :)
https://salesforce.stackexchange.com/questions/23977/is-it-possible-to-get-the-record-id
Web hook in salesforce?
If your PHP endpoint is visible on the open web (not part of some intranet or just your own localhost), then the simplest thing to do would be to send an Outbound Message from Salesforce. No coding required on the SF side, just some XML document you'll have to parse on the PHP side. Plus, it will automatically attempt to resend the messages if the host is unreachable...
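The PHP side of such an endpoint could look roughly like this (untested sketch; FirstName and ProductCode__c are just example element names, the actual ones depend on which fields you add to the outbound message definition):

// Read the SOAP notification Salesforce POSTs to your endpoint
$soap = file_get_contents('php://input');

$dom = new DOMDocument();
$dom->loadXML($soap);

// One call can carry several notifications (e.g. after a batch insert)
foreach ($dom->getElementsByTagNameNS('*', 'Notification') as $notification) {
    $firstName   = $notification->getElementsByTagNameNS('*', 'FirstName')->item(0)->nodeValue;
    $productCode = $notification->getElementsByTagNameNS('*', 'ProductCode__c')->item(0)->nodeValue;
    // ...hand the values over to your existing doOrder() logic here...
}

// Salesforce expects an ACK, otherwise it will keep retrying the message
header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound">
      <Ack>true</Ack>
    </notificationsResponse>
  </soapenv:Body>
</soapenv:Envelope>';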
If your app can't be accessed from SF servers, then I think your PHP app will have to be the "actor": querying SF every X minutes for new Leads, or maybe subscribing to the Streaming API... This will mean you'd have to store SF credentials in your PHP app and remember to either change the password periodically or tick the "password never expires" checkbox on the "integration user"'s profile.
So you're getting the notification, you generate your tickets, and it's time to send them back. Will you want to pretend the update of the Lead was done by the person who created it, or will you want to see "last modified by: Integration User"? The outbound message can contain a session id which you can use to act as the person who initiated the action (created the lead and fired the workflow) - at least until they log out or the session times out.
For the message back you can use the SOAP or REST Salesforce APIs - read the docs to figure out how to send an update command (and, if you want to make it clear it was done by a special user associated with this PHP app, how to log in to the APIs). I think the user's profile must have "API Enabled" ticked before you could reuse somebody's session, so maybe it's better to have a dedicated account for integrations like that...
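For the REST flavour, the update itself is just an HTTP PATCH (rough cURL sketch; it assumes you already have an OAuth access token and instance URL for that dedicated user, and Ticket__c is a placeholder for whatever field you send back):

$leadId = '00Q...'; // Lead Id taken from the outbound message or from your query
$url = $instance_url . '/services/data/v39.0/sobjects/Lead/' . $leadId;

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PATCH');
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Authorization: Bearer ' . $access_token,
    'Content-Type: application/json'
));
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array('Ticket__c' => $ticket)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch); // 204 No Content on success
curl_close($ch);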
Another thing to keep in mind, if you go with outbound messages, is to ignore messages sent from sandboxes, so that if somebody makes a test environment you will not call your "production" database of tickets. You could also modify the outbound message and remote site setting every time a sandbox is made, so you'll have "prod talking to prod, test talking to test". I know you can include the user's session id in the OM - so maybe you can also add the organization's id (for production it'll stay the same; every new sandbox will have a new id).
The problem with this approach is that it might not scale. If 1000 leads are inserted in one batch (for example with Data Loader), you'll get spammed with 1000 outbound messages. Your server must be able to handle such a load... and it will also mean you're using 1 API request to send every single update back. You can check the limit of API requests in Setup -> Company Information. Developer Edition will have this limit very low, sandboxes are better, production is best (it also depends on how many user licenses you have bought). That's why I asked about batching them up.
More coding, but also more reliable, would be to ask SF for changes every X minutes (Streaming API? A normal query? Check the "web hook" answer) and send an update of all these records in one go: SELECT Id, Name FROM Lead WHERE Ticket__c = null (note that there's no need for AND LastModifiedDate >= :lastTimeIChecked)...
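That polling query over REST is equally simple (again just a sketch, with the same assumptions about the token as above; push the updates back the same way, ideally batched):

$soql = 'SELECT Id, Name FROM Lead WHERE Ticket__c = null';
$url  = $instance_url . '/services/data/v39.0/query?q=' . rawurlencode($soql);

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $access_token));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);

foreach ($result['records'] as $lead) {
    // generate the ticket for $lead['Id'] / $lead['Name'],
    // then send the updates back in one go
}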