OpenAI Errors
OpenAI API errors: insufficient_quota (429), rate_limit_exceeded (429), invalid_request_error (400), model_not_found (404), context_length_exceeded (400). Most production pain comes from token limits and tier-based RPM caps.
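Of these, only rate-limit errors are worth retrying. A minimal retry-with-backoff sketch using only the standard library; `RateLimitExceeded` is a hypothetical stand-in for the SDK's own exception (the official openai-python package raises `openai.RateLimitError` on HTTP 429):

```python
import random
import time

# Hypothetical error type standing in for the SDK's rate-limit exception.
class RateLimitExceeded(Exception):
    pass

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` on rate-limit errors with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitExceeded:
            if attempt == max_retries - 1:
                raise
            # Exponential backoff: 1s, 2s, 4s, ... plus up to 1s of jitter.
            sleep(base_delay * (2 ** attempt) + random.random())
```

In production, also honor a `Retry-After` header when the API returns one, and do not apply this to `insufficient_quota`: it is the same HTTP 429 status but retrying cannot fix a billing problem.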
OpenAI rate limits are tier-based: free, tier 1, tier 2, etc. Limits apply per model to RPM (requests/min), TPM (tokens/min), and RPD (requests/day). `context_length_exceeded` triggers when prompt + max_tokens > the model's context window — count tokens with tiktoken before sending. `insufficient_quota` is a billing issue, not a rate limit: retrying won't help; it clears only when credit is added.
Official docs: https://platform.openai.com/docs/guides/error-codes
Status page: https://status.openai.com/
Support: https://help.openai.com/