
Node.js Error: heap_out_of_memory — Heap Out of Memory

stderr text
<--- Last few GCs --->
[12345:0x130008000]   65432 ms: Mark-sweep 4083.5 (4127.4) -> 4079.2 (4127.4) MB, 2087.4 / 0.0 ms  (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 2088 ms) (average mu = 0.106, current mu = 0.020) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0x100b95c4c node::Abort() [/usr/local/bin/node]
 2: 0x100b95dd8 node::OnFatalError(char const*, char const*) [/usr/local/bin/node]
V8 logs the last few GC cycles before it gives up. The "(4083.5 → 4079.2)" tells you GC freed almost nothing — a leak.

JavaScript heap out of memory is V8 throwing in the towel. Garbage collection ran over and over, couldn’t free enough space, and the engine aborts rather than leave you in a half-corrupted state. The error always means one of two things: either your real working set is bigger than the configured heap, or you have a leak.

The default reflex — bump --max-old-space-size — only helps the first case. For the second, it just delays the crash. The reliable path is a heap snapshot: see what’s retained, fix the retention, watch the sawtooth come back.

Why this happens

  • Large data loaded fully into memory. Reading a 2GB JSON file with `JSON.parse(fs.readFileSync(...))`, loading every row of a query into a single array, or buffering an entire HTTP response body. Stream the data instead — Node's stream APIs were built for exactly this.
  • Event listener leak. Adding listeners to an `EventEmitter` (or DOM-style emitter) without ever removing them. Node warns first (`MaxListenersExceededWarning: Possible EventEmitter memory leak detected`); if you ignore the warning, you eventually run out of heap. Common around `process.on('uncaughtException')` in test suites.
  • Cache without bounds. An in-memory cache (`Map` or plain object) that grows unbounded over the process lifetime. The classic production case: a `Map` keyed by user ID with no eviction, holding response data, growing forever. Replace it with `lru-cache`.
  • Closure holding more than it needs. A function returned from another function captures its entire parent scope. If the parent scope held a 200MB buffer, the closure keeps it alive. Visible in heap snapshots as 'closure → context → buffer'; see the sketch after this list.
  • Worker spawn / require explosion. Forking workers in a tight loop, or `require()`ing modules dynamically without limit. Each worker has its own heap; spawning 1000 workers exhausts memory at the OS level too. Monorepos that import every workspace at startup also hit this.
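
The closure case is easiest to see in code. A minimal sketch with hypothetical names (`makeHandlerLeaky`, `makeHandlerFixed`), relying on V8's usual behavior of giving all closures in a scope one shared context object:

closure-context.ts typescript
// Leaky: because `dump` references `raw`, the shared context
// retains the 200MB buffer, and the returned handler keeps that
// context alive for as long as it stays registered.
function makeHandlerLeaky(): () => void {
  const raw = Buffer.alloc(200 * 1024 * 1024);
  const summary = raw.subarray(0, 16).toString('hex');
  const dump = () => raw.length; // any closure touching `raw` pins it
  void dump; // pretend this is used for debugging elsewhere
  return () => console.log(summary);
}

// Fixed: derive the small value in a helper so the buffer goes
// out of scope the moment the helper returns.
function summarize(): string {
  const raw = Buffer.alloc(200 * 1024 * 1024);
  return raw.subarray(0, 16).toString('hex');
}

function makeHandlerFixed(): () => void {
  const summary = summarize(); // only a short string is retained
  return () => console.log(summary);
}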

How to fix it

Fixes are ordered by likelihood. Start with the first one that matches your context.

1. Take a heap snapshot and find what's retaining memory

Don't guess. `node --inspect` your process, attach Chrome DevTools (`chrome://inspect`), and take three heap snapshots: one cold, one mid-load, one after the leak should have cleared. Compare with the Comparison view; the largest "Delta" is your leak.

debug-heap.sh bash
# Run with inspector + auto-snapshot on near-heap-limit:
node --inspect \
     --heapsnapshot-near-heap-limit=3 \
     --max-old-space-size=4096 \
     server.js

# Or programmatic snapshot (good for long-running services):
# require('v8').writeHeapSnapshot('/tmp/heap.heapsnapshot');
# then load in Chrome DevTools → Memory tab.
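
For a service you can't attach a debugger to, here is a sketch of the programmatic route. `v8.writeHeapSnapshot()` is real Node API; the SIGUSR2 wiring and the `/tmp` path are just one reasonable choice:

snapshot-on-signal.ts typescript
import { writeHeapSnapshot } from 'node:v8';

// `kill -USR2 <pid>` dumps a snapshot you can load in
// Chrome DevTools → Memory. Writing the snapshot pauses the
// event loop and the file can be hundreds of MB, so use sparingly.
process.on('SIGUSR2', () => {
  const file = writeHeapSnapshot(`/tmp/heap-${Date.now()}.heapsnapshot`);
  console.error(`heap snapshot written to ${file}`);
});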

2. Stream large data, never buffer

For files, queries, and HTTP bodies over a few MB, use streams. Node's `stream/promises` + `pipeline()` is the safe pattern — backpressure and error handling are correct by default.

stream-csv.ts typescript
import { createReadStream, createWriteStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
import { parse } from 'csv-parse';
import { stringify } from 'csv-stringify';

// Process a 5GB CSV row-by-row with O(1) memory.
await pipeline(
  createReadStream('huge.csv'),
  parse({ columns: true }),
  async function* (rows) {
    for await (const row of rows) {
      yield { ...row, processed: true };
    }
  },
  stringify({ header: true }),
  createWriteStream('out.csv'),
);

3. Bound your caches

Replace plain `Map`/object caches with `lru-cache`, which evicts by size, by count, or by TTL. Without bounds, a cache is a leak with extra steps.

cache.ts typescript
import { LRUCache } from 'lru-cache';

// Shape of the cached value; substitute your own type.
type User = { id: string; name: string };

export const userCache = new LRUCache<string, User>({
  max: 10_000,                       // count
  maxSize: 50 * 1024 * 1024,         // 50MB
  sizeCalculation: (v) => JSON.stringify(v).length,
  ttl: 5 * 60 * 1000,                // 5min
  updateAgeOnGet: true,
});

4. Remove event listeners when you're done with them

Always `.off()` (or `removeListener`) what you `.on()`. For `process.on('uncaughtException')` in tests, prefer `process.once()`. For long-lived emitters in handlers, use `AbortController` to clean up on request abort.
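
A sketch of both patterns, assuming a hypothetical `bus` emitter: pair every `.on()` with an `.off()`, and let an `AbortSignal` drive the cleanup for request-scoped listeners.

listener-cleanup.ts typescript
import { EventEmitter } from 'node:events';

// One-shot: `once` removes itself after the first call.
process.once('SIGTERM', () => shutdown());

// Request-scoped: remove the listener when the request aborts,
// so the closure (and everything it captures) can be collected.
function subscribe(bus: EventEmitter, signal: AbortSignal) {
  const onMessage = (msg: unknown) => handle(msg);
  bus.on('message', onMessage);
  signal.addEventListener('abort', () => bus.off('message', onMessage), {
    once: true,
  });
}

// Hypothetical stand-ins so the sketch compiles.
function shutdown() {}
function handle(_msg: unknown) {}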

5. Bump heap size as a temporary mitigation

`--max-old-space-size=8192` (8GB) gives you headroom while you find the leak. Don't ship this as a fix — your container will OOM at the OS level next, which is worse (no JS error, just a SIGKILL). Use it to keep production alive while you investigate.

Detection and monitoring in production

Track Node's `process.memoryUsage().heapUsed` and `heapTotal` as gauges in your APM. Healthy services have a sawtooth pattern (GC reclaims regularly); leaking services trend upward without dropping back. Alert on heapUsed > 80% of max_old_space_size for over 5 minutes. Use `--heapsnapshot-near-heap-limit=N` in production to capture N snapshots automatically before the crash — invaluable for post-mortems.
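
A minimal sketch of that gauge loop. `report` is a stand-in for your APM client; `v8.getHeapStatistics().heap_size_limit` returns the actual configured ceiling, so the 80% threshold doesn't need hard-coding:

heap-gauges.ts typescript
import { getHeapStatistics } from 'node:v8';

// Stand-in for a real APM client (StatsD, Prometheus, etc.).
const report = (name: string, value: number) => console.log(name, value);

const heapLimit = getHeapStatistics().heap_size_limit; // bytes

setInterval(() => {
  const { heapUsed, heapTotal, rss } = process.memoryUsage();
  report('node.heap_used_bytes', heapUsed);
  report('node.heap_total_bytes', heapTotal);
  report('node.rss_bytes', rss);
  // Alert when this stays above 80 for ~5 minutes.
  report('node.heap_used_pct_of_limit', (heapUsed / heapLimit) * 100);
}, 15_000).unref(); // unref: don't keep the process alive just for metrics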


Frequently asked questions

Will increasing `--max-old-space-size` actually fix the error?
Only if your real working set is bigger than the default heap (about 1.5–4GB depending on Node version) and you genuinely need it. If your app has a leak, bumping the heap delays the crash by hours or days; it doesn't eliminate it. Run a heap snapshot to confirm whether you have a leak (heap grows monotonically) or a sizing issue (heap plateaus high but stable).
What's the default Node heap size?
Roughly 4GB on 64-bit Node 18+ with ample physical RAM; since Node 12 the limit scales with available memory rather than being fixed. Older versions used a fixed default of roughly 1.5GB regardless of host RAM. Override with `--max-old-space-size=<MB>` or the env var `NODE_OPTIONS=--max-old-space-size=8192`. The new-generation heap (where short-lived objects live) is much smaller and tuned separately with `--max-semi-space-size`.
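If you're unsure what limit your process is actually running with, it's queryable at runtime; a one-liner, no flags assumed:

print-heap-limit.ts typescript
import { getHeapStatistics } from 'node:v8';

// heap_size_limit is the ceiling V8 will abort at, in bytes.
const mb = getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`heap limit: ${mb.toFixed(0)} MB`);
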
How do I take a heap snapshot in production without a debugger?
Two options: (1) `require('v8').writeHeapSnapshot('/tmp/heap.heapsnapshot')` from inside your code (e.g., on a SIGUSR2 handler), then load in Chrome DevTools → Memory. (2) Run with `--heapsnapshot-near-heap-limit=3` so Node auto-dumps three snapshots as it approaches the limit. Snapshots are big (often hundreds of MB) — save to a volume with space.
Why does my service leak in production when it's fine locally?
Two common reasons: (1) traffic patterns — production sees thousands of unique users; dev sees one. A cache keyed by user ID grows in production but never in dev. (2) long uptime — leaks accumulate over hours, and you restart your local server every few minutes. Run a soak test (sustained traffic for 30+ minutes) locally to reproduce.
Does using TypeScript or async/await cause heap issues?
No. The compiled output runs on the same V8 — TypeScript is just types. Async/await uses Promises, and unresolved Promises (held by closures, stored in arrays) are the actual leak source, not async itself. If you await everything correctly, async/await has no memory penalty.
Are WeakMaps a fix for unbounded caches?
Sometimes. WeakMaps don't prevent garbage collection of their keys, so if your key (object) becomes unreachable, the WeakMap entry goes away. They're great for metadata-keyed-by-object patterns. They are NOT a fix for caches keyed by string IDs — those need explicit eviction (lru-cache).
My Lambda / Vercel function runs out of memory. Same fix?
Same diagnosis (find the retainer), different mitigation. Serverless platforms have hard memory limits — you can't `--max-old-space-size` past the function's allocated memory. Trim large allocations, avoid loading datasets into memory, and use streaming responses. On Lambda, allocate more memory (which also gives you proportionally more CPU).
How is heap out-of-memory different from container OOMKilled?
Heap out-of-memory is V8 hitting its own configured ceiling: Node prints a JS stack and exits cleanly(ish). OOMKilled is the OS/container runtime killing the process because total RSS exceeded the cgroup limit, so you get no JS stack, just a SIGKILL. If `--max-old-space-size` is higher than your container's memory limit, you'll be OOMKilled before V8 ever reaches its heap limit; set the JS heap to ~80% of the container memory.

When to escalate to Node.js support

Heap out-of-memory is virtually always an application bug or config mismatch, not a V8 bug. Before filing upstream, confirm with a heap snapshot that the retainers are your code (or a library's code), not internal V8 structures. If a snapshot shows runaway growth in V8 internals (`(system)` entries, compiled code, native handles) with no userland references, that's an upstream issue worth filing with the package maintainer or the Node project.