This page explains how Socket API rate limiting works.

To promote the stability of Socket's systems, the API endpoints are rate limited. Sending too many requests to an endpoint will result in a 429 Too Many Requests error being returned.

Socket's API limits the number of requests to 600 per minute.

How to handle rate limits

When you call the Socket API repeatedly, you may encounter error messages that say 429: 'Too Many Requests' or RateLimitError. These errors occur when you exceed the API's rate limits.
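
As a concrete illustration, here is a minimal sketch in Python that makes a request and checks for the 429 status code. The endpoint path and authentication header are placeholder assumptions rather than documented values; substitute the endpoint and credentials you actually use.

    import requests

    # Placeholder values: substitute the real endpoint and API token you use.
    API_URL = "https://api.socket.dev/v0/example-endpoint"  # hypothetical path
    API_TOKEN = "your-api-token"

    response = requests.get(API_URL, headers={"Authorization": f"Bearer {API_TOKEN}"})

    if response.status_code == 429:
        # The request was rate limited; wait before retrying (see the backoff section below).
        print("Rate limit hit: 429 Too Many Requests")
    else:
        response.raise_for_status()
        print(response.json())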

Rate limits are a common practice for APIs, and they're put in place for a few different reasons.

  • First, they help protect against abuse or misuse of the API. For example, a malicious actor could flood the API with requests in an attempt to overload it or cause disruptions in service. By setting rate limits, Socket can prevent this kind of activity.

  • Second, rate limits help ensure that everyone has fair access to the API. If one person or organization makes an excessive number of requests, it could bog down the API for everyone else. By throttling the number of requests that a single user can make, Socket ensures that everyone has an opportunity to use the API without experiencing slowdowns.

  • Lastly, rate limits can help Socket manage the aggregate load on its infrastructure. If requests to the API increase dramatically, it could tax the servers and cause performance issues. By setting rate limits, Socket can help maintain a smooth and consistent experience for all users.

Although hitting rate limits can be frustrating, rate limits exist to protect the reliable operation of the API for its users.

In this guide, we'll share some tips for avoiding and handling rate limit errors.

Requesting a rate limit increase

If you'd like your organization's rate limit increased, please reach out to support with the following information:

  • The estimated rate of requests
  • The reason for the increase

How to avoid rate limit errors

Retrying with exponential backoff

One easy way to avoid rate limit errors is to automatically retry requests with a random exponential backoff. Retrying with exponential backoff means performing a short sleep when a rate limit error is hit, then retrying the unsuccessful request. If the request is still unsuccessful, the sleep length is increased and the process is repeated. This continues until the request is successful or until a maximum number of retries is reached.

This approach has many benefits:

  • Automatic retries mean you can recover from rate limit errors without crashes or missing data
  • Exponential backoff means that your first retry can happen quickly, while you still benefit from longer delays if your first few retries fail
  • Adding random jitter to the delay helps prevent retries from all hitting at the same time

Note that unsuccessful requests contribute to your per-minute limit, so continuously resending a request won’t work.
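
The sketch below shows one way to implement this pattern in Python with the requests library. The endpoint, authentication header, and retry parameters are illustrative assumptions rather than documented values; adjust them for your own integration.

    import random
    import time

    import requests

    def request_with_backoff(url, headers=None, max_retries=5, base_delay=1.0, max_delay=60.0):
        """Send a GET request, retrying on 429 with exponential backoff and random jitter."""
        for attempt in range(max_retries + 1):
            response = requests.get(url, headers=headers)
            if response.status_code != 429:
                response.raise_for_status()
                return response
            if attempt == max_retries:
                break
            # Exponential backoff: base_delay, then 2x, 4x, ... capped at max_delay,
            # plus random jitter so concurrent clients don't all retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay))
        raise RuntimeError(f"Still rate limited after {max_retries} retries")

    # Example usage with placeholder values:
    # response = request_with_backoff(
    #     "https://api.socket.dev/v0/example-endpoint",
    #     headers={"Authorization": "Bearer your-api-token"},
    # )

Because unsuccessful requests still count against the per-minute limit, capping the number of retries and letting the delay grow keeps the retry traffic itself from pushing you back over the limit.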