I have the following dilemma: I have an Android application that POSTs a request to a MySQL database via PHP using an HTTP client, and I'm using JSON to parse the response.
Here's a rough view of the scenario:
I have an ArrayList that gets populated from the response; let's call it the Main Menu.
Upon clicking an item in the Main Menu, I send a POST request and wait for the response from the server.
So here's my question: which of these is advisable and more reliable?
Upon starting the application/activity, download everything and hide the irrelevant items, enabling/disabling them on demand, e.g. initially showing only the first relevant items for the Menu; or
Request the details of the selected Menu Item from the server only on demand? That is, download just the Main Menu, plus the default items for Main Menu id 1.
Are there any other available approaches? What are the pros and cons of each approach, and which one is more reliable and efficient?
Depends on the length of the list but some things to consider...
How long is it going to take to retrieve and populate the menu before the user gets a response?
Latency is the killer in mobile (3G, 2G etc), so one larger request may be quicker than multiple smaller ones.
Multiple smaller requests will consume more battery power, because the phone has to wake the radio from sleep each time, and that consumes more power than waking it once for a single larger response (there was some good research published on this recently).
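To make that trade-off concrete, here is a minimal PHP sketch of the "one larger request" option: a single endpoint that returns the Main Menu and all of its item details in one JSON payload. The table and column names are assumptions for illustration, not the asker's actual schema.

// Single endpoint returning the whole menu tree in one response.
// Table/column names are assumptions for illustration.
header('Content-Type: application/json');

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$menu = array();
foreach ($pdo->query('SELECT id, title FROM menu_items') as $item) {
    $stmt = $pdo->prepare('SELECT id, label FROM menu_details WHERE menu_id = ?');
    $stmt->execute(array($item['id']));
    $item['details'] = $stmt->fetchAll(PDO::FETCH_ASSOC); // nest the details
    $menu[] = $item;
}

// The Android side parses this once and simply shows/hides items locally.
echo json_encode($menu);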
I'm making a call at the beginning of each page of my site to Stripe (a payment/subscription service) using its API in PHP. I need to check if at any point the subscription has failed/changed so they lose access to the current page.
The problem is that it seems to be slowing down my pages a lot, causing noticeable issues with the JavaScript that hides/shows elements. What's the best way to handle this? If I make the call via AJAX after the page has loaded, a user could disable JS in their browser and retain access. Is it unusual for a cURL request to be noticeably slow?
There can be various reasons why pages load slowly and/or the API response time is high. In your use case, it does make sense to retrieve the subscription object beforehand. An alternative approach would be to store the subscription status in your DB so that you don't have to make a network request to look it up every time. You can use the customer.subscription.updated webhook [0] to keep track of subscription statuses.
To verify whether Stripe is responding to your GET requests slowly, profile the request by logging timestamps before and after the GET that retrieves the Subscription. If that duration is not high, the latency likely lies elsewhere in your code. Otherwise, I'd reach out to Stripe Support with specifics, such as the request IDs of the GET requests you made to Stripe's API [1].
[0] https://stripe.com/docs/api/events/types#event_types-customer.subscription.updated
[1] https://dashboard.stripe.com/test/logs
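A minimal sketch of that webhook approach, assuming a hypothetical update_subscription_status() helper that writes to your DB (signature verification is omitted here but should be done in production):

// webhook.php: Stripe POSTs events here; we persist the subscription
// status locally so page loads can read the DB instead of calling Stripe.
$payload = file_get_contents('php://input');
$event   = json_decode($payload, true);
// NOTE: verify the Stripe-Signature header before trusting the payload.

if ($event && $event['type'] === 'customer.subscription.updated') {
    $subscription = $event['data']['object'];
    update_subscription_status(      // hypothetical helper: UPDATE on your users table
        $subscription['customer'],   // Stripe customer id
        $subscription['status']      // e.g. 'active', 'past_due', 'canceled'
    );
}

http_response_code(200); // acknowledge receipt so Stripe stops retrying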
I'm in the process of writing a RESTful API. For the most part everything works out great, but there are a few cases, when I'm not dealing with a resource, where things start to break down. While there are a million ways to solve the problem I'm facing, I'm looking for some feedback as to which would be the most ideal.
For simplicity, we'll say that the API is a timer.
A user can only have 1 active timer at a time.
The API has 2 functional endpoints start and stop.
When the user starts the timer they POST some data related to the timer which creates a new timer as long as they don't already have a timer running.
Calling stop on the timer updates the timer to mark it inactive.
I currently have this setup as follows:
Start Timer:
POST /api/v1/timer
Body: [
'thing1' => 'something',
'thing2' => 'somethingelse'
]
Response: 204
Stop Timer:
PUT /api/v1/timer/stop
Body:
Response: 204
Since a user can only have 1 timer active, it didn't seem to make sense to return the timer id as you would in a more traditional CRUD call.
I've read some posts that suggest using POST method on the stop call to trigger the stop instead of a PUT. I suppose that makes sense too... this just really breaks down when you're not dealing with a traditional resource.
Of course, I could also rewrite it to return a timer resource, but to me that adds the overhead of the client then having to track the timer id when it wants to stop (or delete) the active timer.
Any input here would be greatly appreciated.
Think about how you would implement this requirement on a website.
You would be looking at some web page, specific to the current user. There would be a link labeled start. You would get that link, and it would bring up a form that gives you the ability to override the default parameters associated with starting the timer.
When you submit the form, the browser would construct a request based on the HTML form processing rules. Since this isn't a safe operation, the method would probably be a post, and the form data would be application/x-www-form-urlencoded into the message body.
Since changing the state of the timer would probably change the representation of the original page, that's likely where the form would tell you to submit the POST. A successful response to the POST request would tell the browser to invalidate its cached representation of the original page.
When you reload that page, the "start" link would be gone, and there would instead be a "stop" link. The operation of that link would be much the same: clicking the link takes you to a form, submitting the form posts back to the original page, invalidating the previous representation. When you reload the page, the timer is off and the start link is available again.
GET /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold/start
POST /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold/stop
POST /DarthVaderYellowMerrigold
GET /DarthVaderYellowMerrigold
There are various things you might do to clean this up (returning the new representation in response to a successful POST, for example, with the appropriate Content-Location headers, so that the client doesn't need to fetch the data), but the basic idea is sound.
Do that in a machine readable way, and you have yourself a REST API.
Doing that mostly means documenting how the machine is supposed to understand what each link is for. "To go to the start timer form, look for this link; to go to the stop timer form, look for that link".
You'll probably leave HTTP and URI alone, but it's reasonable to replace HTML with, for example, one of the hypermedia JSON types. Or to put the links into the HTML headers, rather than in the representation.
Of course, HTML has the immediate advantage that you can just walk the API "by hand" and make sure that everything works using your favorite desktop browser. Trade-offs abound.
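As a rough illustration only (the route, the session-based storage, and the link names are all my assumptions, not a prescribed design), a single PHP script could implement the whole flow: POST toggles the timer, and the representation advertises whichever action is currently available.

session_start(); // stand-in for per-user server-side state

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    if (empty($_SESSION['timer_started'])) {
        $_SESSION['timer_started'] = time(); // start the timer
    } else {
        unset($_SESSION['timer_started']);   // stop the timer
    }
}

// GET (and the POST response) both describe the current state plus the
// one link that makes sense right now: the machine-readable "form".
header('Content-Type: application/json');
echo json_encode(array(
    'timer_running' => !empty($_SESSION['timer_started']),
    'links' => empty($_SESSION['timer_started'])
        ? array('start' => '/DarthVaderYellowMerrigold/start')
        : array('stop'  => '/DarthVaderYellowMerrigold/stop'),
));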
I've run into a problem while developing a WordPress plug-in. Basically, the API I'm building the plug-in for limits me to 6 requests per minute; however, when the plug-in activates I need to make more than 6 requests to download the API data the plug-in needs.
The API is the LimelightCRM API (http://help.limelightcrm.com/entries/317874-Membership-API-Documentation). I'm using the campaign_view method of the API, and what I'm looking to do is potentially make the requests in batches, but I'm not quite sure how to approach the problem.
Idea 1:
Just off the top of my head, I'm thinking I'll need to count the number of requests to be made on plug-in activation by using campaign_find_active, divide that count by the request limit (6), and make 6 campaign_view requests per minute until I have all of the data I require, storing the responses in WordPress transients. However, say I need to make 30 requests: the user can't just sit around waiting 5 minutes for the data to download. Even if I manage to come up with a solution for that, it might require me to set the expiry times of the WordPress transients in such a way that the plug-in never needs to make more than 6 requests. So my next thought is: can I use a WordPress hook to make the requests every so often, checking when the last batch of requests was made? It's already getting very tricky, so I wonder if you might be able to point me in the right direction. Do you have any ideas on how I might be able to beat this rate limit?
Idea 2:
Cron jobs that store the values in a database?
// Fetch campaign IDs (limelight_cart_campaign_find_active() makes one API
// request and caches the result in the 'campaign_find_active' transient)
$llc_cids = get_transient('campaign_find_active');
if (false === $llc_cids) {
    limelight_cart_campaign_find_active();
    $llc_cids = get_transient('campaign_find_active');
}

// Fetch campaign information for each campaign ID
$llc_cnames = array();
foreach ($llc_cids as $id) {
    if (false === get_transient('campaign_view_' . $id)) {
        limelight_cart_campaign_view($id); // one API request, cached in a transient
    }
    // Always record the value, so both arrays stay the same length below
    $llc_cnames[$id] = get_transient('campaign_view_' . $id);
}

// Merge campaign IDs and campaign info into a key => value array
$limelight_campaigns = array_combine($llc_cids, $llc_cnames);
Note: the functions limelight_cart_campaign_find_active() and limelight_cart_campaign_view() are not included because they simply make a single API request, store the response in a WordPress transient, and return it. I can include that code if needed, but for the purposes of this example that part of the plug-in is working, so I left it out.
I've come up with a solution for this, and I should have thought of it before. I've arrived at the conclusion that downloading all of the API data on activation is simply impossible with the current rate limit. Most people who might use the plug-in would have far too many campaigns to download all of their data at once, and the rate limit would inevitably be exhausted most of the time if I kept the code the way it is. So rather than constantly having all of that API data ready right after activation, I'm going to give the user the ability to make the API calls on demand, as needed, using AJAX. Let me explain how it will work.
Firstly, on plug-in activation no data is downloaded initially; the user needs to enter their API credentials, and the plug-in validates them and shows a check mark if the credentials are valid and the API log-in was successful. That uses one API request.
Now, rather than having a pre-populated list of campaigns on the "Add Product" admin page, the user simply clicks a button on that page to make the AJAX campaign_find_active request, which fetches the campaign IDs and returns a drop-down menu of campaign IDs and names. That uses only one request.
After that drop-down data is fetched, they choose the campaign they want to use; upon choosing a campaign ID, the plug-in displays another button that makes a campaign_view request to fetch the campaign data associated with that ID. This returns another drop-down menu, from which they choose the product. It also requires a little CSS and jQuery to show/hide the AJAX buttons depending on the drop-down values. This uses only one API request, and because the request is not made automatically and requires a button click, the user won't fire off several API requests while browsing campaign IDs in the first drop-down menu.
The user would then click publish and have a WordPress-indexed product with all of the necessary Limelight data attached and cached. All API responses are stored in transients with a 1-hour expiry; the reason for the hour is so users don't have to wait 24 hours when they make updates. I will also include a button on the settings page to clear the transients so they can re-download on demand if necessary. That could also get a bit tricky, but for the purposes of this question it's not a problem.
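For anyone curious, a minimal sketch of one of those on-demand handlers using the WordPress AJAX API might look like this (the action name and wrapper function are hypothetical, not the plug-in's actual code):

// Registers a handler for AJAX requests from the admin screen.
add_action('wp_ajax_llc_find_active', 'llc_ajax_find_active');

function llc_ajax_find_active() {
    $campaigns = get_transient('campaign_find_active');
    if (false === $campaigns) {
        limelight_cart_campaign_find_active(); // one API request, cached for 1 hour
        $campaigns = get_transient('campaign_find_active');
    }
    wp_send_json_success($campaigns); // the button's jQuery builds the drop-down
}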
In total, I'm only using 3-4 API requests. I might also build a counter into it so I can display an error message to the user if they use too many requests at once. Something along the lines of "The API's limit of 10 requests per minute has been reached, please wait 60 seconds and try again."
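And a rough sketch of that counter, assuming a fixed one-minute window tracked in a transient (the names and the limit are placeholders):

// Call this before each API request; bails out if the window is exhausted.
function llc_check_rate_limit($limit = 10) {
    $window = get_transient('llc_request_window');
    if (false === $window || time() - $window['start'] >= 60) {
        $window = array('start' => time(), 'count' => 0); // new one-minute window
    }
    if ($window['count'] >= $limit) {
        wp_send_json_error("The API's limit of $limit requests per minute has been reached, please wait 60 seconds and try again.");
    }
    $window['count']++;
    set_transient('llc_request_window', $window, MINUTE_IN_SECONDS);
}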
I welcome any comments, suggestions or critiques. Hope this helps someone struggling with API request limits, AJAX is a great way to work around that if you don't mind giving the user a little more control.
I just made 40 API accounts and randomly choose one for each request. Works well:
// Map of account usernames to passwords (only 3 of the 40 shown here)
$api_accounts = array(
    "account1" => "asdfasdfdsaf",
    "account2" => "asaasdfasdf",
    "account3" => "asdfasdf",
);
$rand = rand(1, count($api_accounts));
$username = "account" . $rand;
$password = $api_accounts['account' . $rand];
Is there a way to pass results generated within a PHP page (called into action by an AJAX POST request) back to the document in bits / intervals?
Here is an example...
I am making an AJAX POST to a PHP script with keywords supplied by the user; the script scans a few sites to determine whether they have resources for the search. If a site does, the PHP file returns a link to it and continues on to the next one; if not, it just continues on to the next one.
With AJAX (I use jQuery) I can make this request, wait for the page to load, and then show all the links together easily, but I'm wondering whether I can display the links one by one as they arrive from the PHP file, so that I don't have to wait for every site to be checked.
Thank you for your input.
You can implement this by having the client send a request for the first X (5 or whatever) results, display those, and then immediately send the request for the next X records. Your client will simply continue making requests and displaying records until it gets an empty response, at which point retrieval is complete.
To make this work you either need to maintain state on the server so that you know "where" in the search to pick up searching, or the client needs to include sufficient information in each AJAX request for the server to know how to continue processing.
By the way, this seems more like a GET operation than a POST.
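As a sketch of the stateless variant (the client sends its offset along with each request), the PHP side might look something like this; $sites, check_site_for_keywords(), and the parameter names are placeholders:

// Checks the next batch of sites and returns any that match the keywords.
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$batch  = 5;
$sites  = array('https://example-a.com', 'https://example-b.com' /* ... */);

$links = array();
foreach (array_slice($sites, $offset, $batch) as $site) {
    if (check_site_for_keywords($site, $_GET['keywords'])) { // placeholder helper
        $links[] = $site;
    }
}

header('Content-Type: application/json');
echo json_encode(array(
    'links' => $links,
    // null tells the client there is nothing left to request
    'next'  => ($offset + $batch < count($sites)) ? $offset + $batch : null,
));

The client displays each batch of links as it arrives and keeps requesting with the returned 'next' offset until it gets null.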
I am curious as to how Facebook pushes data to the browser, as in the news feed, where new data shows up at the top without reloading the page or clicking a button.
Does Facebook achieve this by polling their server through AJAX at a set interval, or do they somehow push new data from the server to the client unprovoked?
If so, what language or API do they use to do this?
It's actually called 'long polling', or 'comet'. There are different ways to perform server push, but the most common is to keep the connection open until data arrives (this has drawbacks, as browsers limit the number of open connections to a host). Facebook open-sourced the Tornado web server, which can handle a lot of open connections (holding many open connections can be a problem if you have a lot of users but are using Apache, for example). The moment you receive the AJAX response, you simply perform a new request and wait for the next response.
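A minimal long-polling endpoint in PHP might look like the sketch below; fetch_new_items() is a placeholder for whatever query returns items newer than the client's last-seen marker:

set_time_limit(35);       // hard ceiling slightly above our polling window
$deadline = time() + 30;  // hold the request open for up to 30 seconds
$since    = isset($_GET['since']) ? (int) $_GET['since'] : 0;

while (time() < $deadline) {
    $items = fetch_new_items($since); // placeholder: rows newer than $since
    if (!empty($items)) {
        header('Content-Type: application/json');
        echo json_encode($items);
        exit; // the client renders these and immediately re-requests
    }
    usleep(500000); // wait half a second before checking again
}

// Timed out with nothing new: return an empty array; the client re-requests.
header('Content-Type: application/json');
echo json_encode(array());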
Essentially the code makes an AJAX call to their servers and either waits for a response that triggers another request, polls on a timer, or opens a websocket to receive data as soon as it's pushed. This, of course, is for "new" data showing up at the top of the feed. When you reach the bottom of the page, they just make another AJAX call to get the next n items.
They fetch it with AJAX, and they use (at least they used to use) infinite scrolling.
So you'd load your page, and they'd make an initial call to the server to load some messages based on who is logged in, say with a framework like jQuery:
http://api.jquery.com/jQuery.ajax/
And then as you scroll down, the page notes when you're close to the bottom and needs to load more so you're not left without data, and then it makes another call automatically. This is called infinite scrolling, and it keeps track of where you are in the DOM:
Just one example: http://ajaxian.com/archives/implementing-infinite-scrolling-with-jquery
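On the server side, the "next n items" endpoint behind such a scroll handler can be as simple as an offset/limit query; a hedged PHP sketch, with table and parameter names assumed:

// Returns the next page of feed items as JSON for the infinite scroll.
$offset = isset($_GET['offset']) ? (int) $_GET['offset'] : 0;
$limit  = 20;

$pdo  = new PDO('mysql:host=localhost;dbname=feed', 'user', 'pass');
$stmt = $pdo->prepare(
    'SELECT id, body, created_at FROM posts
     ORDER BY created_at DESC LIMIT ? OFFSET ?'
);
$stmt->bindValue(1, $limit, PDO::PARAM_INT);
$stmt->bindValue(2, $offset, PDO::PARAM_INT);
$stmt->execute();

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));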