Rate Limiting
The API is rate limited per partner entity to ensure that no single partner can consume an excessive share of server resources.
If the limit is reached, additional requests are not processed and a response with HTTP status code 429 is returned.
Over time, requests are allowed again (as indicated by the Retry-After header) and the rejected request can be repeated.
The API uses a 'leaky bucket' algorithm for rate limiting. It allows a sustained rate of 1 request per second, i.e. the bucket 'leaks' one request per second. The bucket has a size of 61, so a division can execute a burst of 61 requests at once (within one second) before the limit is reached, i.e. the bucket is full. The next request is allowed after a second, when one request has 'leaked' from the bucket. Each second, another request 'leaks' from the bucket, until after 61 seconds without requests the bucket is empty again.
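As an illustration, a client-side model of this algorithm could look like the following minimal sketch. The class and method names are placeholders of ours and are not part of the API; only the capacity (61) and leak rate (1 request per second) come from the description above.

```python
import time


class LeakyBucket:
    """Client-side model of the server's leaky bucket: capacity 61, leaks 1 request per second."""

    def __init__(self, capacity: int = 61, leak_per_second: float = 1.0):
        self.capacity = capacity
        self.leak_per_second = leak_per_second
        self.level = 0.0  # number of requests currently 'in' the bucket
        self.last_update = time.monotonic()

    def _leak(self) -> None:
        # Drain the bucket according to the time elapsed since the last update.
        now = time.monotonic()
        elapsed = now - self.last_update
        self.level = max(0.0, self.level - elapsed * self.leak_per_second)
        self.last_update = now

    def try_acquire(self) -> bool:
        """Return True if a request may be sent now, otherwise False."""
        self._leak()
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False

    def wait_time(self) -> float:
        """Seconds to wait until the next request would fit into the bucket."""
        self._leak()
        overflow = self.level + 1 - self.capacity
        return max(0.0, overflow / self.leak_per_second)
```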
If you expect to reach these limits, you should implement client-side rate limiting as well as handle such errors; a sketch of the error handling follows below.
Handling rate limit errors means not retrying a request immediately, but waiting at least the time provided by the Retry-After header.
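A minimal sketch of such error handling, assuming a generic Python HTTP client (requests); the function name, endpoint URL, and retry count are placeholders, not part of the API:

```python
import time

import requests


def request_with_retry(url: str, max_attempts: int = 5, **kwargs) -> requests.Response:
    """Send a GET request, waiting for Retry-After and retrying when a 429 is returned."""
    for attempt in range(max_attempts):
        response = requests.get(url, **kwargs)
        if response.status_code != 429:
            return response
        # Retry-After is documented above as a duration in seconds;
        # wait at least that long before retrying.
        retry_after = int(response.headers.get("Retry-After", "1"))
        time.sleep(retry_after)
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts")
```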
The following rate limiting-related headers are returned:
- Ratelimit-Limit: Total number of requests generally allowed at once.
- Ratelimit-Remaining: Number of requests remaining.
- Ratelimit-Reset-After: Duration in seconds until the limit will be reset to the maximum if no additional requests are executed.
- Retry-After: Returned only when the rate limit was exceeded; contains the duration in seconds until the next request is allowed.
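For example, these headers can be inspected on any response to monitor how close the client is to the limit. The helper below is a hypothetical sketch; only the header names are taken from the list above.

```python
def log_rate_limit_state(response) -> None:
    """Print the rate limiting headers of a response (header names as documented above)."""
    print("Limit:      ", response.headers.get("Ratelimit-Limit"))
    print("Remaining:  ", response.headers.get("Ratelimit-Remaining"))
    print("Reset after:", response.headers.get("Ratelimit-Reset-After"))
    if response.status_code == 429:
        print("Retry after:", response.headers.get("Retry-After"))
```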