Next.js Error: FUNCTION_INVOCATION_TIMEOUT — Timed Out
HTTP/2 504 Gateway Timeout
x-vercel-error: FUNCTION_INVOCATION_TIMEOUT
x-vercel-id: lhr1::iad1::abc123-1714138800000-xyz
server: Vercel
An error occurred with your deployment
FUNCTION_INVOCATION_TIMEOUT
FUNCTION_INVOCATION_TIMEOUT is Vercel’s signal that your serverless function ran past its allotted budget and the platform killed it. The HTTP response is 504, the x-vercel-error header carries the code, and the x-vercel-id lets you pull the exact invocation log from the dashboard. The fix isn’t usually to extend the timeout — it’s to find the slow downstream and either speed it up, stream around it, or move it off the request path entirely.
Most production timeouts trace to one of three things: an upstream API call without its own abort signal, a missing database index, or batch work running inside a request handler when it should be in a queue. All three are diagnosable in minutes if you have p95 latency dashboards and an x-vercel-id to look up.
Why this happens
- Slow downstream API call without a timeout of its own. Your function calls Stripe, OpenAI, or your own backend without an explicit timeout. The downstream hangs at 30+ seconds and your fetch doesn't abort. Vercel kills the function before the upstream replies. Most production timeouts are this case.
- Database query without an index. A `SELECT … WHERE col = ?` against a table with 5M rows and no index on `col` runs for the full execution budget. Postgres `EXPLAIN ANALYZE` reveals a Seq Scan; the fix is an index, not a longer timeout.
- Doing batch/CPU work in a request handler. Generating PDFs, processing CSVs, calling an LLM with a 30s reasoning budget, or running ETL inside an API route. Serverless functions are for fast request/response, not for queue workers. The right fix is offloading.
- Streaming response not properly flushed. Returning an LLM streaming response from a function that doesn't actually stream — the platform buffers and counts the full duration as one synchronous request. Use the Web Streams API and return a `ReadableStream` so chunks ship as they arrive.
- Cold start + slow init compounding. Cold start adds 1–3s. If your function then opens a fresh DB connection, loads a 50MB ML model, or hits a slow downstream, total time can blow the budget. Especially common on functions that aren't called often enough to stay warm.
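The cold-start case is often mitigated by caching the connection at module scope so only the first (cold) invocation pays the setup cost. A minimal sketch, where `createClient` is a hypothetical stand-in for your real driver (a pg `Pool`, Prisma client, etc.):

```typescript
// Hypothetical client type and factory; swap in your real driver.
type Client = { connectedAt: number };

let connections = 0;

function createClient(): Client {
  connections += 1; // in production: open the TCP connection here
  return { connectedAt: Date.now() };
}

// Module scope survives across warm invocations, so the client is
// created once on cold start and reused afterwards.
let cached: Client | undefined;

export function getClient(): Client {
  if (!cached) cached = createClient();
  return cached;
}
```

Every warm invocation then skips connection setup entirely, which keeps cold-start overhead from stacking on top of an already slow downstream.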
How to fix it
Fixes are ordered by likelihood. Start with the first one that matches your context.
1. Set explicit fetch timeouts shorter than the platform timeout
Never let an external call run as long as the function budget — wrap it in `AbortSignal.timeout()` so a slow upstream returns a clean error inside your function instead of getting killed by the platform.
export async function GET(req: Request) {
  try {
    const res = await fetch('https://slow-upstream.example/api', {
      signal: AbortSignal.timeout(8000), // 8s, leaves headroom on a 10s plan
    });
    const data = await res.json();
    return Response.json(data);
  } catch (err) {
    if (err instanceof DOMException && err.name === 'TimeoutError') {
      return Response.json(
        { error: 'upstream_timeout' },
        { status: 504 },
      );
    }
    throw err;
  }
}
2. Stream long responses instead of buffering
For LLMs and any response over a couple of seconds, return a `ReadableStream`. Vercel's timeout starts ticking but the response begins flowing immediately, the user sees progress, and you stay well clear of the budget.
import OpenAI from 'openai';

export const runtime = 'edge'; // edge has its own timeout rules
export const dynamic = 'force-dynamic';

const client = new OpenAI();
const encoder = new TextEncoder(); // reuse one encoder across chunks

export async function POST(req: Request) {
  const { messages } = await req.json();
  const completion = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages,
    stream: true,
  });
  const stream = new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of completion) {
          const text = chunk.choices[0]?.delta?.content ?? '';
          controller.enqueue(encoder.encode(text));
        }
        controller.close();
      } catch (err) {
        controller.error(err); // surface upstream failures instead of hanging the stream
      }
    },
  });
  return new Response(stream, {
    headers: { 'Content-Type': 'text/plain; charset=utf-8' },
  });
}
3. Move slow work to a queue or cron
For anything that's legitimately slow (PDF generation, CSV exports, batch emails, deep reports), accept the request, write a job row, return 202, and process it in a background worker. Inngest, Trigger.dev, QStash, or a Vercel Cron + DB job-queue table all work.
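A minimal sketch of that accept-then-defer shape, assuming an in-memory array in place of a real jobs table or queue (Inngest, Trigger.dev, QStash); all names here are illustrative:

```typescript
import { randomUUID } from 'node:crypto';

// In production this would be a jobs table or a managed queue.
type Job = { id: string; payload: unknown; status: 'pending' | 'done' };
const jobs: Job[] = [];

export function enqueue(payload: unknown): Job {
  const job: Job = { id: randomUUID(), payload, status: 'pending' };
  jobs.push(job); // in production: INSERT a row instead
  return job;
}

export async function POST(req: Request) {
  const payload = await req.json();
  const job = enqueue(payload);
  // Return immediately; a worker or cron processes the job off the request path.
  return Response.json({ jobId: job.id }, { status: 202 });
}
```

The handler finishes in milliseconds regardless of how long the job itself takes, so the function never approaches its budget.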
4. Configure the function's `maxDuration` (Pro/Enterprise)
On Pro you can lift any single function up to 60s. On Enterprise up to 900s. Don't apply globally — only to the routes that genuinely need it, since longer timeouts cost more and hide latency bugs.
export const maxDuration = 60; // seconds. Pro plan max.

export async function POST() {
  // Heavy work...
}
5. Add a database index, or move the slow query off the request path
Run `EXPLAIN ANALYZE` on the suspect query. Seq Scans on big tables, missing composite indexes, and full-table sorts are all fixable with a single `CREATE INDEX`. If indexing isn't enough, materialize the result into a summary table that you query in O(1).
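As a sketch of that workflow, assuming a hypothetical `orders` table with an unindexed `customer_id` column:

```sql
-- Hypothetical table and column names; adapt to your schema.
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
-- "Seq Scan on orders" in the plan means every row is being read.

-- One index turns the scan into an index lookup:
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);
-- CONCURRENTLY builds the index without blocking writes to the table.
```

Re-run `EXPLAIN ANALYZE` afterwards and confirm the plan shows an Index Scan before assuming the timeout is fixed.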
Detection and monitoring in production
Track 504s in your error monitor with the `x-vercel-id` value attached so you can pull the exact invocation log. Add p95 + p99 latency dashboards per route — a function that took 1s last week and 9s today is about to start timing out, which is much easier to fix proactively than reactively. Alert when p95 exceeds 70% of your `maxDuration`.
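A small helper for that reporting path, using the response headers described above; the function name is illustrative, not a library API:

```typescript
// Detect a Vercel function timeout on a client-side fetch response and
// return the x-vercel-id your error monitor should attach, else null.
export function vercelTimeoutId(res: Response): string | null {
  const timedOut =
    res.status === 504 &&
    res.headers.get('x-vercel-error') === 'FUNCTION_INVOCATION_TIMEOUT';
  return timedOut ? res.headers.get('x-vercel-id') : null;
}
```

Call it on every failed fetch and log the returned id alongside the route, so a dashboard 504 can be matched to its exact invocation log.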
Related errors
- Next.js hydration_failed: The HTML React rendered on the server doesn't match what React tries to render on the client during hydration. The two trees disagree about a node, so React throws and falls back to a full client re-render.
- Next.js module_not_found: The Next.js build (webpack/Turbopack) tried to resolve an import path and couldn't find it. Either the package isn't installed, the relative path is wrong, a TypeScript path alias isn't mirrored in `tsconfig.json` and `next.config.js`, or the file's case differs between disk and import (Linux is case-sensitive, macOS isn't).
- Postgres ECONNREFUSED: Your application tried to open a TCP connection to Postgres and the OS rejected it — Postgres isn't listening on the host:port you specified, or a firewall blocked the connection.
- Node.js heap_out_of_memory: V8's old-generation heap filled up and the garbage collector couldn't free enough space, so V8 aborts the process with a fatal allocation failure. Default heap is ~4GB on 64-bit; long-lived references (caches, listeners, closures, big arrays) prevent reclamation.
- OpenAI rate_limit_exceeded: Your account has exceeded its per-minute request (RPM) or per-minute token (TPM) limit for the model you're calling. Limits are tier-based and per-model.
Frequently asked questions
What's the actual timeout on Vercel for each plan?
Why does my function still time out at 10s after setting `maxDuration = 60`?
Does streaming a response prevent the timeout from firing?
How is `FUNCTION_INVOCATION_TIMEOUT` different from a regular HTTP 504?
Will increasing `maxDuration` fix the underlying slowness?
Why does my function time out only sometimes?
Can I retry a 504 from FUNCTION_INVOCATION_TIMEOUT?
Does the timeout count cold-start time?
When to escalate to Next.js support
Open a Vercel support ticket with the `x-vercel-id` header value and the deployment URL when (a) the same function consistently times out at well under its configured `maxDuration`, (b) the dashboard shows function duration much shorter than the 504 you're seeing client-side, or (c) you suspect a Vercel-region-specific issue. For "my function is slow" — that's an application performance problem, not a platform bug.