I am working with the SoundCloud API in my application and have created some API calls for it. It was working fine yesterday, but now it shows this error:
error: string(47) "The requested URL responded with HTTP code 429.
I checked the SoundCloud documentation and found that HTTP code 429 means "Too Many Requests".
My concern is: how can I find out how many requests I have made and how many remain?
Effective July 1, all requests that result in access to a playable stream are subject to a limit of 15,000 requests per any 24-hour time window. Ref
NOTE
There is no way to count how many requests you have used or how many remain.
Solution
Check how many API requests each of your pages makes, and reduce them as much as you can.
You can create multiple API keys and pick one at random for each request.
You can cache the results of your queries, as in the sketch below.
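A minimal sketch of the last two suggestions combined, with placeholder client IDs and an illustrative /tracks request (not taken from the question):

<?php
// Sketch only: pick one of several (placeholder) client IDs at random and
// cache the JSON response on disk, so repeated page loads don't hit the
// SoundCloud API again within the cache window.
$clientIds = ['CLIENT_ID_ONE', 'CLIENT_ID_TWO', 'CLIENT_ID_THREE']; // placeholders
$clientId  = $clientIds[array_rand($clientIds)];

$trackUrl  = 'https://api.soundcloud.com/tracks/123456?client_id=' . $clientId; // illustrative request
$cacheFile = __DIR__ . '/cache/track_123456.json';
$cacheTtl  = 600; // seconds

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    $json = file_get_contents($cacheFile);    // serve from cache
} else {
    $json = file_get_contents($trackUrl);     // one real API request
    if ($json !== false) {
        file_put_contents($cacheFile, $json); // refresh the cache
    }
}

$track = json_decode($json, true);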
Related
I am using an API which is free.
I am using a PHP script which uses fopen to download JSON from the API.
When I make too many requests (e.g. 2 requests every minute), the API blocks my PHP server's IP.
Is there a way to solve this and make more requests (I don't want to launch a DDoS attack)?
Is there a better solution than using many PHP servers with different IPs?
This is a rather abstract question, as we don't know which API you are actually talking about.
Usually, though, if an API implements a rate limit, it exposes this kind of header in its responses:
X-Rate-Limit-Limit: the rate limit ceiling for that given request
X-Rate-Limit-Remaining: the number of requests left for the 15 minute window
X-Rate-Limit-Reset: the remaining window before the rate limit resets in UTC epoch seconds
Please check the docs (these examples are from Twitter: https://dev.twitter.com/rest/public/rate-limiting).
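As a rough sketch (not specific to any one API), you could read such headers with PHP's cURL functions; the URL and header prefix below are placeholders:

<?php
// Sketch: fetch a resource and print any rate-limit headers in the response.
// The header names (X-Rate-Limit-*) differ between APIs; check your API's docs.
$ch = curl_init('https://api.example.com/resource'); // placeholder URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);               // include headers in the output
$response   = curl_exec($ch);
$headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
curl_close($ch);

$rawHeaders = substr($response, 0, $headerSize);
foreach (explode("\r\n", $rawHeaders) as $line) {
    if (stripos($line, 'X-Rate-Limit') === 0) {
        echo $line . PHP_EOL; // e.g. "X-Rate-Limit-Remaining: 42"
    }
}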
I am using the Geocoding API and am receiving OVER_QUERY_LIMIT, even though I have enabled my billing account, which should allow over 100k queries; I am doing about 2,500 or fewer. It seems to happen when I am processing many items in a PHP loop, but not for every item. For example:
OK
OK
OVER_QUERY_LIMIT
OK
OK
So it doesn't appear I am actually over the limit, but that's the XML returned for those transactions. If I run the same request directly as a URL, it works with no issue.
Ideas?
Pace your application or submit the requests in smaller groups. Solutions include using a cron job to distribute the requests throughout the day, or adding a small delay between requests (see the sketch after the quote below).
From: https://developers.google.com/maps/documentation/business/articles/usage_limits
If you exceed the usage limits you will get an OVER_QUERY_LIMIT status code as a response.
This means that the web service will stop providing normal responses and switch to returning only status code OVER_QUERY_LIMIT until more usage is allowed again. This can happen:
Within a few seconds, if the error was received because your application sent too many requests per second.
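As a rough illustration (not an official snippet), pacing a PHP geocoding loop with a small delay and a single retry on OVER_QUERY_LIMIT could look like this; the addresses and API key are placeholders:

<?php
// Sketch: geocode a list of addresses with a short pause between requests
// and a brief back-off when OVER_QUERY_LIMIT is returned.
$addresses = ['1600 Amphitheatre Parkway, Mountain View, CA']; // placeholder list
$apiKey    = 'YOUR_API_KEY';

foreach ($addresses as $address) {
    $url = 'https://maps.googleapis.com/maps/api/geocode/json'
         . '?address=' . urlencode($address) . '&key=' . $apiKey;

    $result = json_decode(file_get_contents($url), true);

    if (isset($result['status']) && $result['status'] === 'OVER_QUERY_LIMIT') {
        sleep(2); // back off, then retry once
        $result = json_decode(file_get_contents($url), true);
    }

    // ... handle $result ...

    usleep(200000); // 0.2 s pause between requests to stay under the per-second cap
}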
I'm writing an application for processing large amounts of Google Analytics data at once, and I keep bumping into the
(403) Quota Error: User Rate Limit Exceeded
error.
I have done some research and found that while the limit is 10 queries/second/user, it defaults to 1. So I adjusted it to 10 in the Google console, but without any luck.
I've also added sleep(0.5) between every other call I make, which should make it impossible to send 10 requests in one second, but also without any luck.
This seems very weird to me, and that's why I'm wondering: could one call with multiple dimensions/metrics/sort filters be treated as multiple requests?
Edit: I've also looked into the userIp and quotaUser standard query parameters, but I'm unsure how to add them to my request (I'm using the API client to make the calls:
$analytics->data_ga->get($query);
). If I understand correctly, these parameters can be used to split your quota across the users you're querying data for. In my case that won't help at all (correct me if I'm wrong), because the problem is that I'm hitting the per-second cap, and I'm not querying for more than one user in the same second.
Any help would be greatly appreciated
You are correct that quotaUser/userIp won't affect the 10-requests-per-second quota. In my earlier work I tried adding a sleep between calls, but learned the hard way that that isn't the right solution. The correct approach is to look for the quota error and, when it appears, add a sleep call; if you still get a quota error, sleep longer. I try three times in my code. There are other protocol errors, "backend error" I think, where Google's advice is also to simply try again.
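A sketch of that pattern, wrapped around the $analytics->data_ga->get($query) call from the question; matching on the error message is an assumption and may need adjusting to what your version of the client library actually throws:

<?php
// Sketch: retry an Analytics API call, sleeping only after a quota/backend
// error and sleeping longer on each attempt (1 s, then 2 s, then 3 s).
function callWithBackoff(callable $apiCall, $maxAttempts = 3)
{
    for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
        try {
            return $apiCall();
        } catch (Exception $e) {
            $msg = $e->getMessage();
            // Assumption: quota/backend failures are detectable from the message.
            if (stripos($msg, 'Rate Limit Exceeded') === false
                && stripos($msg, 'backendError') === false) {
                throw $e; // some other error: don't retry
            }
            sleep($attempt); // back off before the next attempt
        }
    }
    throw new RuntimeException("Still rate limited after $maxAttempts attempts");
}

// Usage with the call from the question:
$results = callWithBackoff(function () use ($analytics, $query) {
    return $analytics->data_ga->get($query);
});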
Does anyone know how many URLs I can check with the Safe Browsing API, and what delay I need to leave between requests? I use it with PHP, but after checking 2k URLs I got:
Sorry but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.
The limit is supposed to be 10,000 requests per day, with both the Safe Browsing Lookup API
https://developers.google.com/safe-browsing/lookup_guide#UsageRestrictions
and the Safe Browsing API v2
https://developers.google.com/safe-browsing/developers_guide_v2#Overview
but they say you can ask for more, and it's free.
I understand that they allow you to make 10k requests per day. In each request you can query up to 500 URLs, so in total they let you look up 5M URLs daily. Not bad.
I currently use the Google Safe Browsing API, and these are the limitations in the API:
A single API key can make requests for up to 10,000 clients per 24-hour period.
You can query up to 500 URLs in a single POST request.
I previously made one request per URL and ended up exceeding the quota defined by the API. Now I put up to 500 URLs in each request (see the sketch below); that keeps me under the API's limit and it is very fast too.
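A rough sketch of that batching, assuming the Lookup API's plain-text POST format (first line is the URL count, then one URL per line); the endpoint parameters are illustrative and should be checked against the current docs:

<?php
// Sketch: send URLs to the Safe Browsing Lookup API in batches of up to 500
// per POST instead of making one request per URL.
$apiKey = 'YOUR_API_KEY'; // placeholder
$urls   = ['http://example.com/', 'http://example.org/']; // ... your full URL list

$endpoint = 'https://sb-ssl.google.com/safebrowsing/api/lookup'
          . '?client=myapp&key=' . $apiKey . '&appver=1.0&pver=3.1'; // illustrative parameters

foreach (array_chunk($urls, 500) as $batch) {
    // Body format: URL count on the first line, then one URL per line.
    $body = count($batch) . "\n" . implode("\n", $batch);

    $ch = curl_init($endpoint);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
    $response = curl_exec($ch);
    curl_close($ch);

    // ... parse $response: one verdict per line, in the same order as $batch ...
}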
I run a website for my dad which pulls tweets from his Twitter feed and displays them in an alternative format. Currently the tweets are pulled using JavaScript, so entirely client-side. Is this the most efficient way of doing things? The website has next to no hit rate, but I'm just interested in what would be the best way to scale it. Any advice would be great. I'm also thinking of including articles in the stream at some point. What would be the best way to implement that?
Twitter API requests are rate limited to 150 an hour. If your page is requested more than that, you will get an error from the Twitter API (an HTTP 400 error). Therefore, it is probably a better idea to request the tweets on the server and cache the response for a certain period of time. You could request the latest tweets up to 150 times an hour, and any time your page is requested it receives the cached tweets from your server side script, rather than calling the API directly.
From the Twitter docs:
Unauthenticated calls are permitted 150 requests per hour.
Unauthenticated calls are measured against the public facing IP of the server or device making the request.
I recently did some work integrating with the Twitter API in exactly the same way you have. We ended up hitting the rate limit very quickly, even just while testing the app. That app does now cache tweets at the server, and updates the cache a few times every hour.
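A minimal sketch of that server-side cache, assuming a hypothetical fetchTweets() helper in place of the actual Twitter API call:

<?php
// Sketch: serve cached tweets and refresh them at most once every 10 minutes,
// which keeps the server comfortably under 150 requests per hour.
function fetchTweets()
{
    // Placeholder: call the Twitter API here (e.g. with cURL) and return an
    // array of tweets. Omitted because it depends on the endpoint and auth used.
    return [];
}

$cacheFile = __DIR__ . '/cache/tweets.json';
$cacheTtl  = 600; // seconds

if (is_file($cacheFile) && (time() - filemtime($cacheFile)) < $cacheTtl) {
    $tweets = json_decode(file_get_contents($cacheFile), true); // cached copy
} else {
    $tweets = fetchTweets();                                    // one real API call
    file_put_contents($cacheFile, json_encode($tweets));        // refresh the cache
}

// ... render $tweets ...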
I would recommend using client-side JS to call the Twitter API and avoiding calls to your own server. The only downside to client-side JS is that you cannot control whether the viewer has JS disabled.
What kind of articles did you want to include in the stream? Blog posts directly on your website, or external articles?
By pulling the tweets server-side, you're routing all tweet traffic through your server, so all of that traffic comes from your server, potentially degrading your website's performance.
If you don't do anything with those tweets that isn't possible client-side, I would stick with your current solution. Nothing wrong with it, and it scales tremendously (assuming you don't outperform Twitter's servers, of course ;)).
Pulling your tweets from the client side is definitely better in terms of scalability. I don't understand what you are looking for in your second question about adding articles.
I think if you can do it client-side, go for it! It pushes the bandwidth usage to the browser and puts less load on your server. I think it scales well, too: as long as clients can make a web request, they can display your site. It doesn't get any easier than that, and your server will never be a bottleneck for them.
If you can get the articles through an API, I would stick with the current setup and keep everything client-side.
For really low-demand stuff like this, it's not going to matter a whole lot. If you have a large number of tasks per user, then you might want to consider server-side. If you have a large number of users and only a few tasks (tweets to be pulled in, or whatever) per user, client-side AJAX is probably the way to go. As for including articles, I'd probably go server-side there because of the size of the data you'll be working with.