Code splitting and lazy loading — how they improve performance
Ship less JS upfront so the user can interact sooner. Split by route, by interaction (modals, editors), and by visibility (below-fold). In React, use `React.lazy` + `Suspense` for components and dynamic `import()` for libraries. Preload the next likely chunk on hover/idle to hide the network cost.
Every byte of JS the browser has to download, parse, and execute before the page is interactive is a byte the user is waiting on. Code splitting trades a single big bundle for several smaller ones loaded on demand — so the initial render gets only what it needs.
What gets faster, mechanically.
- Network — smaller initial bundle = fewer bytes over the wire.
- Parse + compile — parse time scales roughly linearly with code size, so 200KB takes ~5x less time than 1MB.
- Execution — modules that aren't on the critical path don't run until needed.
- Cache — when you ship a fix to one route, only that route's chunk invalidates. Vendor chunks (React, lodash) stay cached across deploys.
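The vendor-chunk point can be made explicit in the bundler config. A minimal Vite/Rollup sketch — the chunk name and package list here are illustrative, not prescriptive:

```javascript
// vite.config.js — pin React into its own "vendor" chunk so its hash
// stays stable across app-only deploys and returning users hit the
// HTTP cache instead of re-downloading it.
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ["react", "react-dom"],
        },
      },
    },
  },
});
```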
The three split axes.
| Axis | What | Tool |
| ----------------- | --------------------------------- | ---------------------- |
| Route-based | /dashboard, /settings, /admin | React.lazy + Suspense |
| Interaction-based | modals, editors, charts, video | dynamic import on click|
| Visibility-based  | below-fold widgets                | IntersectionObserver   |

Route-based (the high-leverage one).
```jsx
const Settings = lazy(() => import("./routes/Settings"));

<Suspense fallback={<Skeleton />}>
  <Routes>
    <Route path="/settings" element={<Settings />} />
  </Routes>
</Suspense>
```

Webpack/Vite see the dynamic import() and emit Settings as its own chunk. The first visit to /settings fetches it; subsequent visits use the HTTP cache.
Interaction-based.
```jsx
async function openEditor() {
  const { default: RichEditor } = await import("./RichEditor");
  setEditor(<RichEditor />);
}
```

Heavy editors (TipTap, CodeMirror, Monaco), date pickers, charts, PDF renderers — none of these should be in the initial bundle of a page where most users won't use them.
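The third axis from the table — visibility — can be sketched with IntersectionObserver. A minimal sketch; the widget path, `mount()`, and the selector are hypothetical names, and the memoization guards against repeated intersection events:

```javascript
// Visibility-based splitting: start the dynamic import only when the
// placeholder element scrolls into view.
let widgetPromise = null;

function loadWidgetOnce(importer) {
  // Memoize the in-flight promise so repeated intersection events
  // (or re-observes) trigger at most one network fetch.
  if (!widgetPromise) widgetPromise = importer();
  return widgetPromise;
}

function observeAndLoad(el, importer) {
  const observer = new IntersectionObserver(
    (entries) => {
      if (entries.some((entry) => entry.isIntersecting)) {
        loadWidgetOnce(importer).then((mod) => mod.mount(el));
        observer.disconnect(); // one-shot: stop observing once the load starts
      }
    },
    { rootMargin: "200px" } // begin fetching just before it becomes visible
  );
  observer.observe(el);
}

// Usage (browser):
// observeAndLoad(
//   document.querySelector("#comments"),
//   () => import("./CommentsWidget")
// );
```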
The senior detail: prefetch + preload.
A naive split makes the first interaction slower — user clicks, then waits for the chunk. Hide the network cost:
```jsx
// On hover / focus / idle — start fetching before the click.
<Link
  to="/settings"
  onMouseEnter={() => import("./routes/Settings")}
  onFocus={() => import("./routes/Settings")}
>
  Settings
</Link>
```

Or use `<link rel="prefetch">` / `<link rel="modulepreload">` injected from the SSR layer. Frameworks like Next.js and Remix do route prefetching automatically.
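The "idle" leg can be sketched as a small helper. A sketch under assumptions: `scheduleIdlePrefetch` is not a library API, the route importers in the usage note are hypothetical, and `idleFn` is injectable because requestIdleCallback is missing in Safari:

```javascript
// Idle-time prefetch: warm the HTTP cache for likely-next chunks when
// the main thread is free, falling back to setTimeout where
// requestIdleCallback is unavailable.
function scheduleIdlePrefetch(importers, idleFn) {
  const schedule =
    idleFn ??
    (typeof requestIdleCallback === "function"
      ? requestIdleCallback
      : (cb) => setTimeout(cb, 200)); // rough stand-in for "idle"
  schedule(() => {
    // Fire-and-forget: we only care that the chunks land in the cache.
    for (const load of importers) load().catch(() => {});
  });
}

// Usage (browser):
// scheduleIdlePrefetch([
//   () => import("./routes/Settings"),
//   () => import("./routes/Dashboard"),
// ]);
```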
Webpack magic comments give you finer control.
```jsx
const Charts = lazy(() => import(
  /* webpackChunkName: "charts" */
  /* webpackPrefetch: true */
  "./Charts"
));
```

`webpackPrefetch` injects a `<link rel="prefetch">` so the browser fetches the chunk during idle time. `webpackPreload` is for chunks needed right now (fetched in parallel with the parent chunk).
Failure modes to plan for.
- Chunk-load failure after deploy. A user with an old HTML file requests `Settings.abc123.js` after you deployed a build with hash `def456`. They get a 404. Catch it: `import().catch(() => window.location.reload())`. Long-term fix: `assetPrefix`/versioned routes + soft-reload on chunk errors.
- Waterfall. Lazy-loading a component that lazy-loads its children that lazy-load theirs. The user sees three spinners back-to-back. Flatten by prefetching the next level on entry.
- CLS / layout shift. The `<Suspense>` fallback should reserve the same dimensions as the lazy content, otherwise the page jumps.
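The catch-and-reload idea above can be split into retry-then-give-up. A hedged sketch: `importWithRetry` is not a library API, just a wrapper around an importer like `() => import("./routes/Settings")`:

```javascript
// Retry a failed dynamic import a couple of times before giving up.
// Chunk requests can 404 right after a deploy, when stale HTML still
// points at old hashed filenames.
async function importWithRetry(importer, retries = 2) {
  let lastError;
  for (let attempt = 0; attempt <= retries; attempt += 1) {
    try {
      return await importer();
    } catch (err) {
      lastError = err; // e.g. a ChunkLoadError / failed fetch
    }
  }
  throw lastError; // caller decides: retry UI, or soft reload
}

// Usage with React.lazy (browser): reload once if retries are exhausted,
// on the theory that fresh HTML will reference the new chunk hashes.
// const Settings = lazy(() =>
//   importWithRetry(() => import("./routes/Settings")).catch(() => {
//     window.location.reload();
//     return new Promise(() => {}); // never resolves; page is reloading
//   })
// );
```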
When NOT to split.
- Component is tiny (<5KB) — overhead of an HTTP request + JS module init outweighs savings.
- Component is always rendered — splitting just adds a Suspense boundary for no benefit.
- Critical-path content — splitting can push Largest Contentful Paint later if the lazy bit is what LCP measures.
Beyond JS: lazy assets.
- `<img loading="lazy">` for below-fold images (built-in, no JS needed).
- `<iframe loading="lazy">` for embeds.
- `fetchpriority="high"` for the LCP image.
Measure. The Coverage tab in DevTools shows what percentage of each loaded file actually executed during the recorded page load. Anything below ~50% used is a candidate for splitting.
Follow-up questions
- What's the difference between prefetch and preload?
- How would you handle stale chunk errors after a new deploy?
- When does code splitting hurt performance instead of helping?
- How does Suspense streaming affect lazy loading on the server?
Common mistakes
- Splitting components that always render — extra boundary, no payoff.
- Forgetting to prefetch on hover, creating a click → spinner experience.
- No `<Suspense>` boundary above the lazy component → app crashes.
- Suspense fallback different size than the lazy content → CLS.
Performance considerations
- Vendor chunk caching — split React/lodash separately so app updates don't invalidate them.
- Use HTTP/2 multiplexing; many small chunks are fine on H2.
- Avoid splitting below ~5KB — request overhead dominates.
Edge cases
- Chunk-load failure on old HTML after deploy — show retry, then reload.
- Slow 3G — Suspense fallbacks may show for seconds; design skeletons accordingly.
- SSR + lazy — must render the fallback or pre-load the chunk server-side.
Real-world examples
- Next.js auto-splits per route + per `next/dynamic`. Vite uses native dynamic imports as split points.
- Notion lazy-loads its block-editor variants per page open.