Rate limiting

Our primary focus is to provide the highest availability, stability, and security for all merchants. We constantly evaluate traffic as it surges and subsides to inform our policies. As part of this effort, we use two primary methods to control the number of incoming requests: rate limiting and request-type prioritization.

IP- and account-based rate limiting: We limit the rate of requests based on IP address and account. We temporarily allow short bursts and spikes; after a certain window, however, the limit is reduced to our baseline according to system capacity.

Note: token creation on our Vault has a significantly lower rate limit than our main API.

Request-type prioritization: All requests will be processed using the following prioritization:

  1. POST/PUT API calls in live mode (e.g. charge creation, charge capture, refund creation)
  2. GET API calls in live mode (e.g. listing transactions, retrieving a single charge)
  3. All API calls in test mode

Errors

If you are receiving an HTTP 429 Too Many Requests response from the API, you are sending too many requests and have reached our rate limit.
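One common way to handle a 429 response is to retry with exponential backoff and jitter rather than immediately resending. The sketch below is illustrative, not part of any official SDK: `send_request` stands in for whatever callable performs your API request, and the retry budget and delay constants are assumptions you should tune to your integration.

```python
import random
import time


def request_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a request when the API answers HTTP 429, backing off exponentially.

    `send_request` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a `requests.Response`). The retry count and
    base delay are illustrative defaults, not official limits.
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Exponential backoff with jitter: roughly 1s, 2s, 4s, ... scaled
        # by a random factor so concurrent clients don't retry in lockstep.
        delay = base_delay * (2 ** attempt) * (1 + random.random())
        time.sleep(delay)
    raise RuntimeError(f"Still rate limited after {max_retries} attempts")
```

If the response carries a Retry-After header, honoring that value instead of the computed delay is usually preferable.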

If this policy negatively affects your integration, please get in touch with us.

Load testing

Based on the prioritization listed above, API requests in test mode will have significantly different performance characteristics from live mode. Moreover, test mode requests don't need to interact with upstream payment gateways, further altering performance.

For these reasons, we strongly discourage running load tests or benchmarks against our API (for example, before a big sales event), as you will likely get spurious results.

We highly recommend building your integration in such a way that it can mock requests to our API. If necessary, consider simulating the observed latency of live API requests.

Tips to avoid being rate limited

  • Avoid polling and instead use webhooks.
  • Avoid sending too many requests in parallel; instead, spread them over a wider period of time. For example, instead of sending 100 requests in 1 second, consider sending them over 5 seconds at 20 requests per second.
  • Avoid using more than 10 parallel workers to send requests to our API.
  • If you are expecting a large event with a higher number of requests, please get in touch with us.
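The pacing and concurrency tips above can be sketched with a thread pool capped at 10 workers and submissions spaced to a target request rate. The function below is a minimal sketch under those assumptions; `worker` stands in for whatever function performs a single API call, and the defaults simply mirror the limits suggested above.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def send_paced(jobs, worker, max_workers=10, requests_per_second=20):
    """Run `worker(job)` for each job with bounded concurrency and pacing.

    At most `max_workers` requests run in parallel, and submissions are
    spaced so they average `requests_per_second` rather than bursting.
    Both limits mirror the tips above; tune them for your integration.
    """
    interval = 1.0 / requests_per_second
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = []
        for job in jobs:
            futures.append(pool.submit(worker, job))
            time.sleep(interval)  # spread submissions over time
        return [f.result() for f in futures]
```

For sustained high-volume workloads, a token-bucket rate limiter gives smoother pacing than fixed sleeps, but the idea is the same: cap parallelism and spread requests over time.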