According to the PayPal REST docs, every request to PayPal needs an 'Authorization: Bearer' header containing an access token generated by a previous request. I wasn't able to glean from the docs whether the access_token can be re-used for multiple requests over its lifetime. My thought is to cache the response on my end for 6 hours so I can limit the number of requests to PayPal; that should be safe, because the expiry PayPal reports is 8 hours.
Does anyone know if I can re-use the generated access_token across multiple, separate transactions?
Yes, the access token can be reused over multiple requests for the lifetime of the token.
In fact, I would recommend it; as you said, it cuts down on the number of requests you need to make to us.
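For illustration, here is a minimal sketch of such a cache in PHP, assuming the standard /v1/oauth2/token client_credentials flow; the cache file location and the 6-hour TTL are the question's own choices, not PayPal requirements:

<?php
// Sketch: cache the PayPal bearer token locally and reuse it until
// shortly before it expires. The cache path is illustrative.
function getPayPalAccessToken($clientId, $secret)
{
    $cacheFile = sys_get_temp_dir() . '/paypal_token.json';

    // Reuse the cached token while it is still comfortably within its lifetime.
    if (is_readable($cacheFile)) {
        $cached = json_decode(file_get_contents($cacheFile), true);
        if ($cached && $cached['expires_at'] > time()) {
            return $cached['access_token'];
        }
    }

    // Otherwise request a fresh token with the client_credentials grant.
    $ch = curl_init('https://api.paypal.com/v1/oauth2/token');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_USERPWD        => $clientId . ':' . $secret,
        CURLOPT_POSTFIELDS     => 'grant_type=client_credentials',
        CURLOPT_HTTPHEADER     => ['Accept: application/json'],
    ]);
    $response = json_decode(curl_exec($ch), true);
    curl_close($ch);

    // Cache for 6 hours at most, and always short of the reported expires_in.
    file_put_contents($cacheFile, json_encode([
        'access_token' => $response['access_token'],
        'expires_at'   => time() + min(6 * 3600, $response['expires_in'] - 300),
    ]));

    return $response['access_token'];
}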
I searched for an answer to this as well. It seems obvious that the token should be reusable, but I couldn't find confirmation, or how long it lasts.
Finally, I found a valid resource in the last paragraph here, stating that it is reusable for the duration given in the response's "expires_in" field (or NVP variable).
But note that nowadays the SDK can handle this for you; head straight to this page for more information.
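For instance, with the PayPal-PHP-SDK (paypal/rest-api-sdk-php) the ApiContext fetches and reuses the bearer token behind the scenes; the client id, secret, and payment id below are placeholders:

<?php
// Sketch: the SDK's OAuthTokenCredential handles token generation and
// reuse internally, so you never touch the access_token yourself.
require 'vendor/autoload.php';

$apiContext = new \PayPal\Rest\ApiContext(
    new \PayPal\Auth\OAuthTokenCredential('<client-id>', '<client-secret>')
);

// Any subsequent SDK call reuses the cached token, e.g.:
$payment = \PayPal\Api\Payment::get('<payment-id>', $apiContext);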
I have a PHP application that has been using the API for years.
However, a few weeks ago it started sporadically giving the error:
(400) API key not valid. Please pass a valid API key.
Eventually the error became continuous and the application stopped responding.
It seemed related to: Google Calendar API - no longer authorized for reads?
After several days of inactivity, the application worked again for a few days, but the pattern has repeated: the error occurred more and more frequently until the application stopped working again.
Edited:
The application can be viewed at:
http://intraneteina.unizar.es/intraneteina/index.php?r=calendarioGoo/index
When selecting any option from the dropdown, the application reads information from a google calendar and displays it in html.
It had been working for years, and without any change to the code it now gives the 'API key not valid' error described above.
I had been experiencing a similar issue. I eventually found some documentation which said there is a limit of 50 (or possibly 100) "active" authorisation tokens allowed for an app unless you are running it via a service account. Typically, once the limit is reached, Google just drops authorisations off the end of the "active list" it maintains, and you are unaware of this happening. This is a problem if you rely on the expiry of your access token and don't refresh until you think you need to, i.e. when you generate or refresh your access token you record the expiry date and only refresh when your system tells you that the token has expired.
Because Google may have disabled that token (dropped it off the active list) in the background without your knowledge, you end up using a token that hasn't nominally expired, and the result is a typically uninformative error message with no indication of what has happened. In our case, setting up a service account was too difficult in the short term (it meant implementing RS256 signing for the authorisation process), so we got around it by ignoring the recorded expiry timestamp and refreshing the access token every time we made a call to the API. I'm sorry I can't link you to the documentation, but I believe I found it via an answer in another S.O. post.
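A minimal sketch of that workaround, assuming the google/apiclient PHP library; the credential and token file paths are assumptions about your own storage, not part of the library:

<?php
// Sketch: ignore the recorded expiry and exchange the stored refresh
// token for a fresh access token before every API call.
require 'vendor/autoload.php';

$client = new Google_Client();
$client->setAuthConfig('client_secret.json');                       // assumed path
$storedRefreshToken = trim(file_get_contents('refresh_token.txt')); // assumed storage

// Always refresh, even if the old token has not nominally expired,
// because Google may have silently dropped it from the active list.
$token = $client->fetchAccessTokenWithRefreshToken($storedRefreshToken);
$client->setAccessToken($token);

$service = new Google_Service_Calendar($client);
$events  = $service->events->listEvents('primary'); // 'primary' is just an example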
I am migrating to Xero and want to set up an invoicing process to run once a month at a specific time using a cron job. I can get the cron job to fire, and I have set up a PHP page based on https://github.com/XeroAPI/xero-php-oauth2-app which I can run manually and which works perfectly.
I've also used https://github.com/XeroAPI/xoauth to retrieve the tokens and store them in the keychain, I can see that they are there.
I've got a bit lost where xoauth says "Piping the access_token, id_token and refresh_token to stdout, so you can use them in a script workflow"
I'm hoping someone has done something similar and can point me in the right direction or, even better, give me an example, as I can't find one online.
I assume I am missing a link between the two examples that transfers the token values.
When the cron job runs I get the following error:
'Fatal error: Uncaught BadMethodCallException: Required parameter not passed: "refresh_token" in /Applications/MAMP/htdocs/vendor/league/oauth2-client/src/Tool/RequiredParameterTrait.php:35'
which is not really a surprise, as I'm not giving it a refresh_token as far as I can see.
I am using localhost on a Mac as a development environment.
I have seen a number of questions related to this from more experienced developers but no answers.
Thanks Gordon
Thanks for your question. We have gotten this one a lot, so I used it as the base for a XeroAPI community-corner video, which I will share back here soon, that walks through getting access/refresh tokens from xoauth, making API calls, and refreshing to get a new token set.
Answer
After you generate the access token with the xoauth repo, plug the access_token and xero-tenant-id into your PHP script as two headers on your API call:
Authorization: "Bearer " + access_token
xero-tenant-id: tenantId
Ensure the API call returns your data. Then create a function in your script that does the following before future API calls (a sketch follows the note below):
1. Refresh for a new token_set.
2. Save the new token_set to a DB or static file.
3. Use that token_set's access_token to make your Invoice API call.
Repeat steps 1-3 at least once every 60 days.
NOTE: you will need some kind of persistence to store the continually refreshed token_set.
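Here is a minimal sketch of steps 1-3, assuming league/oauth2-client (which the xero-php-oauth2-app example uses) and a token_set.json file seeded once with the tokens that xoauth printed to stdout; the file name and placeholders are illustrative:

<?php
// Sketch: refresh the token_set, persist it, then call the Invoices endpoint.
require 'vendor/autoload.php';

$provider = new \League\OAuth2\Client\Provider\GenericProvider([
    'clientId'                => '<your-client-id>',
    'clientSecret'            => '<your-client-secret>',
    'redirectUri'             => 'http://localhost:8080/callback',
    'urlAuthorize'            => 'https://login.xero.com/identity/connect/authorize',
    'urlAccessToken'          => 'https://identity.xero.com/connect/token',
    'urlResourceOwnerDetails' => 'https://api.xero.com/api.xro/2.0/Organisation',
]);

// 1. Refresh for a new token_set (this supplies the "refresh_token"
//    parameter that the cron error complains about).
$stored   = json_decode(file_get_contents('token_set.json'), true);
$tokenSet = $provider->getAccessToken('refresh_token', [
    'refresh_token' => $stored['refresh_token'],
]);

// 2. Save the new token_set; Xero refresh tokens are single-use.
file_put_contents('token_set.json', json_encode([
    'access_token'  => $tokenSet->getToken(),
    'refresh_token' => $tokenSet->getRefreshToken(),
]));

// 3. Use the new access_token plus the tenant id for the Invoices call.
$ch = curl_init('https://api.xero.com/api.xro/2.0/Invoices');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => [
        'Authorization: Bearer ' . $tokenSet->getToken(),
        'xero-tenant-id: <your-tenant-id>',
        'Accept: application/json',
    ],
]);
$invoices = json_decode(curl_exec($ch), true);
curl_close($ch);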
Hope this clarifies it for you. I will post the video back here for an in-depth walkthrough ASAP.
OAuth2.0 Background:
Essentially, our move to simplify and standardize our API authentication came with some challenges in how to set up longstanding API connections for use cases that didn't need to onboard an increasing number of new users. For instance, a lot of small businesses and accounting firms set up custom processes to batch import/export invoices.
The use case often did not need an application user interface, so standing one up just to get a valid access token was a lot of extra work if the integration only needed to connect a single 'admin' type user to a specific Xero Organisation.
This is in relation to the online version of QuickBooks, QBO (not the desktop version).
We need our server-side code to be able to log in and query some data from QuickBooks (just like your API provides) and supply this information to our billing system. This would not involve a browser; we would use something like cURL, but that means there is no browser and no human to 'log in' and 'request access' each time. I have not found a way to do this yet. Any ideas?
Your question was already answered over here:
https://intuitpartnerplatform.lc.intuit.com/questions/767273-how-can-i-use-api-to-get-quickbooks-data-without-browser-based-oauth
Alas, for the sake of verbosity:
No matter which API you choose, you can do what you're asking.
Regardless of which API you go with (qbXML or Intuit Anywhere/OAuth), you only need a human to get things connected the very first time.
After that very first time, you can fetch data at any time you want (as you suggest, with CURL) with zero interaction with an actual user. All you have to do is store the OAuth credentials that Intuit gives you. This is how all OAuth implementations work - you store the credentials you get back, so you can request data unattended later.
If that's not the behavior you're seeing, it just means you've implemented something incorrectly (and should probably post your code, so we can help you troubleshoot).
You might want to check out the QuickBooks PHP DevKit, which has examples of doing just what you're asking for:
http://consolibyte.com/quickbooks-open-source/
The best approach is to generate the access token and the refresh token manually via the QuickBooks OAuth playground (https://developer.intuit.com/app/developer/playground), save these values, and then refresh the token every hour.
This process, however, needs to be repeated every 101 days because the refresh token expires.
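A minimal sketch of that hourly refresh, assuming the standard QBO OAuth2 token endpoint; the qbo_tokens.json file (seeded once from the playground) and the placeholders are illustrative:

<?php
// Sketch: exchange the stored refresh token for a new access token and
// persist both, since the refresh token itself can rotate on each call.
$stored = json_decode(file_get_contents('qbo_tokens.json'), true);

$ch = curl_init('https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_USERPWD        => '<client-id>:<client-secret>',
    CURLOPT_POSTFIELDS     => http_build_query([
        'grant_type'    => 'refresh_token',
        'refresh_token' => $stored['refresh_token'],
    ]),
    CURLOPT_HTTPHEADER     => ['Accept: application/json'],
]);
$tokens = json_decode(curl_exec($ch), true);
curl_close($ch);

file_put_contents('qbo_tokens.json', json_encode([
    'access_token'  => $tokens['access_token'],
    'refresh_token' => $tokens['refresh_token'],
]));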
I am having trouble figuring this out. Facebook is implementing a new policy (https://developers.facebook.com/roadmap/offline-access-removal/) that no longer allows the simpler "offline_access" tokens you used to be able to get. I am developing an application that needs to access the Graph API every 3 hours with a cron job, and I am not sure how to set this up so that I don't need to log in to access it, since I can't log in with a cron job if I am redirected to a login page. I am assuming I need some sort of a cURL call within a PHP script to get this working. I don't need to post anything; all I am doing is grabbing posts from a few public pages. Any ideas? I already have a script in place that does what I want, given that I log in first with the login_url. I just need this working with a cron job.
First of all, a reply to the first question: there is nothing to be done on your side. Facebook's change simply means that this kind of application is no longer possible. The best thing you can do is request an extended token, which then lasts around 30 (or 60, I'm not sure) days. To request it you need to call the FB API, as shown here and here (albeit not Python examples, they are useful pointers). The official FB explanation is here.
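For reference, a minimal sketch of that exchange in PHP, assuming the fb_exchange_token flow on the Graph API; the app id/secret placeholders and the short-lived token variable are yours to supply:

<?php
// Sketch: swap a short-lived user token for an extended one.
$params = http_build_query([
    'grant_type'        => 'fb_exchange_token',
    'client_id'         => '<app-id>',
    'client_secret'     => '<app-secret>',
    'fb_exchange_token' => $shortLivedToken, // from a normal interactive login
]);

$response = file_get_contents('https://graph.facebook.com/oauth/access_token?' . $params);

// At the time of writing this endpoint returned a query string, not JSON.
parse_str($response, $result);
$extendedToken = $result['access_token']; // store this for the cron job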
However, this token is going to be invalidated whenever the user changes their password, removes the app, or logs out of Facebook. You would need to look at which requests failed, manually notify those users to renew the token on your side, and store the new one.
To your second question about crawling public posts: do you even need an access token? Try using the Graph API without one and see if you can get to the information you are interested in.
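As a quick test of that suggestion (at the time of writing some public page feeds were readable without any token; 'cocacola' is just an example page name):

<?php
// Sketch: try reading a public page's posts with no access token at all.
$context  = stream_context_create(['http' => ['ignore_errors' => true]]);
$response = file_get_contents('https://graph.facebook.com/cocacola/posts', false, $context);
$posts    = json_decode($response, true);

if (isset($posts['data'])) {
    print_r($posts['data'][0]); // worked without a token
} else {
    print_r($posts); // likely an OAuth error saying a token is required
}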
I am doing some benchmark testing on my web app and notice that the responses from Facebook's API are a lot slower than Twitter's.
For the record, I am using the twitter-async library for Twitter API integration and Facebook's own library here.
With the Twitter library I can save an OAuth token and secret, then use these to create an instance and make calls; simple. For Facebook, unless I ask for offline_access permission, I must store an OAuth code and recreate an OAuth access token each time the user logs into my app.
Given the above I can:
Retrieve a Twitter user's timeline in 0.02 seconds.
Get an FB OAuth access code in 1.16 seconds, then get the user's details in 2.31 seconds, totalling 3.47 seconds.
These statistics come from the functions Facebook provides in their PHP API library. I also tried implementing my own cURL functions to get this information via a request, and the results are not much better.
Are these the kinds of response times others are getting from the Facebook API?
Besides requesting offline permission and storing the permanent access token, how else can I speed up these requests? Is the problem on my end or Facebook's?
Thanks,
Chris
I have also found the Facebook API to be quite slow. I believe the Facebook PHP library does little more than wrap cURL for API calls, so it makes sense that your own cURL implementation didn't improve the speeds.
I work on a canvas page, which means that for existing users I get an access token and fb_UID as they come in. At first, I did a /me Graph call and sometimes /me/friends. The first takes about 0.6 seconds, the second usually a bit more. So to some extent I can confirm your findings.
That's why I've now switched to storing the important data locally and updating it only when needed (via the real-time update API). Basically, I don't need any API calls during 'normal' operation.
I realize you are probably integrating FB on your own page and perhaps use a bit more info than just the name, fb_UID and friends, so this doesn't fully answer your question. But perhaps it can still serve as a small piece of the puzzle ;)
I am looking forward to other perspectives on this as well!
My application calls multiple URL's from Facebook. It does take some time :/
This is why I decided to write a function that stores the results in $_SESSION, along with a timestamp to check whether the data is too old, so I can reuse them later.
This doesn't solve the actual problem; it just saves you from having to keep fetching the data.
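A minimal sketch of such a cache, assuming you wrap it around your existing API calls; the key name and the 10-minute TTL are illustrative:

<?php
// Sketch: keep API results in the session with a timestamp, and only
// refetch once they are older than the allowed age.
session_start();

function cachedFetch($key, callable $fetchFn, $maxAgeSeconds = 600)
{
    if (isset($_SESSION[$key]) && (time() - $_SESSION[$key]['ts']) < $maxAgeSeconds) {
        return $_SESSION[$key]['data']; // still fresh, skip the API call
    }

    $data = $fetchFn(); // e.g. a wrapped Graph API request
    $_SESSION[$key] = ['ts' => time(), 'data' => $data];
    return $data;
}

// Usage with the old Facebook PHP SDK, for example:
// $me = cachedFetch('fb_me', function () use ($facebook) {
//     return $facebook->api('/me');
// });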
What I like to do for the end-user experience is forward them to a page with a loading .gif, then have JavaScript request the page that actually fetches the data. That way, the user stays on a loading page with a nice gif to stare at until the next page is ready.