Matthew C.
When making HTTP requests to the OpenAI API, you may get a “Too Many Requests” or “Rate limit reached” 429 error, with a message such as the following:
Rate limit reached for gpt-4-0613 in organization org-exampleorgid on tokens per min. Limit: 10000.000000 / min. Current: 10020.000000 / min.
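If you call the API through the official OpenAI Node library (version 4 or later), the same failure surfaces as a thrown APIError with a 429 status. Here is a minimal sketch of catching it, assuming the OPENAI_API_KEY environment variable is set and your project uses ES modules:

```javascript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

try {
  const completion = await client.chat.completions.create({
    model: "gpt-4-0613",
    messages: [{ role: "user", content: "Hello!" }],
  });
  console.log(completion.choices[0].message.content);
} catch (error) {
  // A 429 (rate limit or quota) error arrives as an APIError with status 429.
  if (error instanceof OpenAI.APIError && error.status === 429) {
    console.error("Rate limited:", error.message);
  } else {
    throw error;
  }
}
```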
These errors occur when you exceed the rate limit of your project, organization, or model(s). Your rate limit is the maximum number of requests or tokens that can be submitted per minute or per day. You may also get a 429 error if you reach your usage limit.
To fix these errors, you need to control the rate at which you send requests, buy more credits, or increase your project or organization’s monthly spending limits.
A “Rate limit reached” error means that you are sending requests too quickly. You may have hit the rate limit set for your project or organization. You may also be rate limited by the AI model you’re using, as models have their own rate limits. You can learn more about rate limits in the OpenAI platform rate limit guide.
You can view the rate limits for your organization under the limits section of your account settings. You can avoid rate limit errors by retrying requests that get a rate limit error response with an exponential backoff: when retrying a request, add a short wait period before resending it; if the request fails again, increase the wait period; repeat until the request succeeds or the maximum number of retries is reached. This technique is known as automatic retries with exponential backoff.
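A minimal sketch of this technique in JavaScript (the function name withExponentialBackoff and its parameters are illustrative, not part of any library):

```javascript
// Sketch of automatic retries with exponential backoff (illustrative only).
// `makeRequest` is any function that performs the API call and throws an
// error with a numeric `status` property when the request fails.
async function withExponentialBackoff(makeRequest, maxRetries = 5, baseDelayMs = 500) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await makeRequest();
    } catch (error) {
      const rateLimited = error.status === 429;
      if (!rateLimited || attempt === maxRetries) throw error;
      // Wait 0.5 s, 1 s, 2 s, 4 s, ... plus a little random jitter.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage: wrap any call that might hit the rate limit.
// const completion = await withExponentialBackoff(() =>
//   client.chat.completions.create({ model: "gpt-4-0613", messages })
// );
```

The random jitter spreads out retries from concurrent clients so they don’t all hit the rate limit again at the same moment.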
From version 4 onward, the OpenAI Node API library automatically retries requests that fail with a 429 error, using a short exponential backoff. You can use the maxRetries option to set the number of retries:
```javascript
// Configure the default for all requests
const client = new OpenAI({
  maxRetries: 5, // default is 2
});

// Configure per request
await client.chat.completions.create(
  {
    messages: [
      {
        role: "user",
        content: "How can I fix an OpenAI API error: 429 Too Many Requests?",
      },
    ],
    model: "gpt-4-0613",
  },
  {
    maxRetries: 7,
  }
);
```
Requests time out after 10 minutes by default. You can configure this with the timeout option:
```javascript
// Configure the default for all requests
const client = new OpenAI({
  timeout: 20 * 1000, // 20 seconds (default is 10 minutes)
});

// Configure per request
await client.chat.completions.create(
  {
    messages: [
      {
        role: "user",
        content: "How can I fix an OpenAI API error: 429 Too Many Requests?",
      },
    ],
    model: "gpt-4-0613",
  },
  {
    timeout: 60 * 1000, // 60 seconds
  }
);
```
When you’ve exceeded your current quota, it means that you’ve run out of credits or you’ve reached your maximum monthly spending limit.
If you’ve used up all of your credits, you must buy more credits. If you’ve reached your maximum monthly spending limit, you must increase your usage limits. Your usage limit depends on your usage tier. You can view the usage limits for your organization under the limits section of your account settings.