How does garbage collection work internally?
V8 uses a generational, mostly-concurrent GC: young objects are collected by a fast copying scavenger, and survivors are promoted to the old generation, which is collected by mark-sweep-compact.
JavaScript is a garbage-collected language: you allocate freely with new, {}, [], etc., and the engine is responsible for reclaiming memory that's no longer reachable. The classic algorithm — naïve mark-and-sweep over the whole heap — would freeze the page for hundreds of milliseconds every collection, so modern engines (V8, JSC, SpiderMonkey) combine many techniques to keep pauses sub-millisecond.
The foundational idea: reachability. An object is live if there is a chain of references from a GC root to it. Roots include the current call-stack frames, registers, the global object, the JS module map, and engine-internal handles. Anything not reachable from a root is, by definition, garbage. Reference counting is not used directly because it can't reclaim cycles (a.next = b; b.next = a).
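The cycle from the text can be made concrete. Under reference counting, each object keeps the other's count above zero forever; a tracing collector instead asks only whether any GC root still reaches the pair (a minimal sketch, variable names are illustrative):

```javascript
// Two objects referencing each other form a cycle: a -> b -> a.
let a = { name: "a", next: null };
let b = { name: "b", next: null };
a.next = b;
b.next = a; // refcounts would never drop to 0 from here on

const isCycle = a.next.next === a; // true: cycle is reachable via root `a`

// Drop the only root references. No chain from any GC root reaches the
// pair anymore, so a tracing collector can reclaim both, cycle and all.
a = null;
b = null;
```

The engine never "breaks" the cycle; it simply stops being reachable, which is all that matters to a mark-based collector.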
Generational GC is the workhorse, motivated by the weak generational hypothesis: most objects die young (loop locals, intermediate string concatenations, JSX elements created during render). It pays to collect young objects often and cheaply.
V8's heap layout:
- Young generation (~1–16MB, split into "from-space" and "to-space" semi-spaces). Allocation is a bump-pointer: just increment a pointer; no free-list lookup. When from-space fills, the Scavenger runs Cheney's semi-space copying algorithm: walk roots, copy each reachable object from from-space to to-space, then swap. Anything not copied is dead — there's no per-dead-object work at all. The scavenger is typically <1ms for a few MB.
- Old generation (megabytes to gigabytes). Objects that survive ~2 scavenges are promoted here. Collected by Mark-Sweep-Compact (Major GC):
  - Mark: depth-first traversal from roots, set a mark bit on every reachable object.
  - Sweep: free runs of unmarked memory back to the free list.
  - Compact: occasionally slide live objects together to defragment, which prevents pathological "many small holes" allocation failures later.
- Large-object space: objects above a threshold (>~½ page) are allocated separately and never moved.
- Code-space, Map-space, Read-only space — segregated by type.
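The Scavenger's semi-space copy can be sketched as a toy model in plain JavaScript (not V8 internals; `scavenge`, `fromSpace`-style naming, and the object shape are invented for illustration, and recursion stands in for Cheney's scan pointer):

```javascript
// Toy Cheney-style copy: everything reachable from `roots` is evacuated
// into a fresh to-space; anything never copied is dead, with zero
// per-dead-object work.
function scavenge(roots) {
  const toSpace = [];
  const forwarded = new Map(); // old object -> its copy (forwarding pointer)

  function copy(obj) {
    if (obj === null) return null;
    if (forwarded.has(obj)) return forwarded.get(obj); // already evacuated
    const clone = { value: obj.value, next: null };
    forwarded.set(obj, clone);
    toSpace.push(clone);         // bump-pointer allocation in to-space
    clone.next = copy(obj.next); // evacuate transitively reachable objects
    return clone;
  }

  return { newRoots: roots.map(copy), toSpace };
}
```

Note the asymmetry the text describes: live objects cost a copy each, while garbage costs nothing at all, which is why scavenging a mostly-dead young generation is so cheap.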
Why pauses are tiny on modern engines:
- Incremental marking — the mark phase is sliced into ~5ms steps interleaved with JS, using write barriers to track mutations so the marker doesn't miss objects.
- Concurrent marking — marking runs on a background thread; the main thread only pauses briefly at start and finish ("STW" remarking).
- Concurrent sweeping — likewise.
- Lazy sweeping — pages are swept only when an allocation needs that page.
- Parallel scavenging — multiple helper threads scavenge in parallel.
- Idle-time GC — Chromium tells V8 about idle gaps (e.g. `requestIdleCallback` windows, frame slack) and GC runs preferentially in those.
- Black allocation during marking — newly allocated objects are pre-marked live, so the marker doesn't have to chase them.
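The write barrier that makes incremental marking safe can be sketched with tri-color marking: if the mutator stores an unmarked (white) object into an already-scanned (black) one between marking steps, the barrier re-greys it so the marker doesn't miss it (a toy model, not V8's actual barrier; all names are invented):

```javascript
// Tri-color invariant: a fully scanned (black) object must never point
// at an unmarked (white) one. `marked` holds black+grey, `worklist` grey.
function createMarker(roots) {
  const marked = new Set(roots);
  const worklist = [...roots];
  return {
    // One incremental step: scan a bounded number of grey objects,
    // so marking can be interleaved with running JS.
    step(budget = 10) {
      while (budget-- > 0 && worklist.length > 0) {
        const obj = worklist.pop();
        for (const child of obj.refs) {
          if (!marked.has(child)) { marked.add(child); worklist.push(child); }
        }
      }
      return worklist.length === 0; // true when marking is complete
    },
    // Write barrier: must run on every pointer store while marking is on.
    writeBarrier(target, child) {
      target.refs.push(child);
      if (marked.has(target) && !marked.has(child)) {
        marked.add(child);    // re-grey so the new edge isn't missed
        worklist.push(child);
      }
    },
    isLive: (obj) => marked.has(obj),
  };
}
```

Without the barrier, a store into an already-black object between two `step()` calls would hide a live object from the marker, and it would be freed while still in use.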
Observable consequences:
- The "GC pause" you see in DevTools' Performance panel as a yellow bar is usually a Minor GC and is <2ms; Major GC is a few tens of ms but rare in healthy code.
- `global.gc()` (with `--expose-gc`) and Chrome's trash-can icon force a major GC for tests/repro.
- You can't reliably "free" an object — only make it unreachable. Setting variables to `null` doesn't free immediately, just makes the object eligible. `WeakRef` and `FinalizationRegistry` (ES2021) let you observe collection, but the spec deliberately doesn't guarantee when.
- High-frequency allocation in hot paths still costs: even at <1ms per scavenge, doing it 60×/s eats budget. Object pooling and avoiding closures-in-render are real wins for game loops.
Code
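A sketch of observing (but never controlling) collection with `WeakRef` and `FinalizationRegistry`. Run under `node --expose-gc` to be able to force a major GC; the spec gives no guarantee about when the callback fires:

```javascript
let cache = { big: new Array(1e6).fill(0) };
const ref = new WeakRef(cache);

const registry = new FinalizationRegistry((label) => {
  // Fires at some unspecified time after collection — never rely on when.
  console.log(`${label} was collected`);
});
registry.register(cache, "cache");

const stillHeld = ref.deref() === cache; // true while strongly referenced

cache = null; // now only weakly reachable: eligible, not yet freed
// At some point after a later GC, ref.deref() may return undefined.
// With node --expose-gc, globalThis.gc() forces a major collection.
```

The only deterministic part is the first half: while a strong reference exists, `deref()` returns the target. Everything after `cache = null` is at the engine's discretion.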
Follow-up questions
- What is a write barrier, and why does concurrent GC need one?
- How does WeakRef interact with GC?
- Why can't JS implement deterministic destructors?
Common mistakes
- Believing `null`-ing a variable forces GC — it only removes the reference.
- Assuming GC is the cause of jank without measuring; main-thread JS is more often the culprit.
Performance considerations
- Allocate less, reuse more (object pools for hot loops).
- Avoid creating objects/closures inside render or animation frames.
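A minimal object pool for a hot loop (an illustrative sketch; the `Pool` API and the particle shape are invented):

```javascript
// Reusing objects keeps allocation — and therefore the scavenger —
// out of the per-frame hot path.
class Pool {
  constructor(factory, reset) {
    this.factory = factory; // creates a fresh object on a pool miss
    this.reset = reset;     // scrubs state before an object is reused
    this.free = [];
  }
  acquire() {
    return this.free.length > 0 ? this.free.pop() : this.factory();
  }
  release(obj) {
    this.reset(obj); // stale data must never leak between uses
    this.free.push(obj);
  }
}

const particles = new Pool(
  () => ({ x: 0, y: 0, vx: 0, vy: 0 }),
  (p) => { p.x = p.y = p.vx = p.vy = 0; }
);
```

In a 60fps loop, `acquire()`/`release()` replace per-frame `{}` allocations, so the young generation stops filling and minor GCs stop firing during animation.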
Edge cases
- WeakMap/WeakSet keys can be GC'd, so iteration is deliberately not exposed — enumerating keys would let code observe collection timing.
- FinalizationRegistry callbacks fire **without timing guarantees** — never use them for resource lifecycle.
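The WeakMap point in code: keys are held weakly, and the API exposes lookup but no enumeration, because any key may be collected at any moment (sketch; the metadata shape is illustrative):

```javascript
const meta = new WeakMap();
let el = { tag: "div" };     // stand-in for a DOM node
meta.set(el, { clicks: 3 }); // key is held weakly: no leak if `el` goes away

const hasEntry = meta.has(el);      // true
const clicks = meta.get(el).clicks; // 3

// No .size, no .keys(), no .forEach — enumeration would reveal which
// keys the GC has or hasn't collected yet.
const iterable = typeof meta[Symbol.iterator] === "function"; // false

el = null; // the entry is now eligible along with its key
```

This is why WeakMap is the right tool for attaching metadata to objects you don't own: the map never extends their lifetime.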
Real-world examples
- React's freelist for fiber nodes is an explicit GC-pressure optimization.