Why is code splitting better than loading everything at once?
Smaller initial bundle → faster parse/execute → better LCP/INP. Users only download code for the route they hit; the rest loads on demand or is prefetched on hover. Typically cuts first-paint JS bytes by 30–80%. Tradeoff: a cold-load delay when navigating to a non-prefetched route, so prefetch likely-next routes.
Loading 2MB of JS upfront makes the page wait while it parses, compiles, and runs code the user may never touch.
The cost of "load everything"
- Download time on slow networks.
- Parse + compile time on phones (often the bottleneck, not network).
- Execute time on initial render.
- Memory holding unused code.
On a low-end Android, parsing 1MB of JS can take 1+ seconds before anything renders.
Code splitting
Ship the bare minimum for the current route; lazy-load the rest:
```jsx
// Static — always loaded
import { Home } from "./Home";

// Lazy — fetched when needed
const Settings = React.lazy(() => import("./Settings"));
```

Webpack, Rollup, and Vite all emit a separate chunk for the dynamic import; the bundler injects the fetch when `Settings` first renders.
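Under the hood, a split point boils down to a memoized loader: the chunk is fetched at most once, and every later call reuses the same promise. A minimal sketch of that caching (`lazyOnce` and the simulated chunk are hypothetical names, not React or bundler APIs):

```js
// Memoize a dynamic-import-style loader so the chunk is fetched at most once,
// mirroring the caching React.lazy and bundler runtimes rely on.
function lazyOnce(loader) {
  let cached;
  return () => {
    if (!cached) cached = loader(); // first call kicks off the fetch
    return cached;                  // later calls reuse the same promise
  };
}

// Simulated "chunk"; in a real app the loader would be `() => import("./Settings")`.
let loads = 0;
const loadSettings = lazyOnce(async () => {
  loads += 1;
  return { default: "SettingsComponent" };
});

async function demo() {
  await loadSettings();
  await loadSettings(); // second call hits the cache, no second fetch
  return loads;
}
```

This is why calling the same `import()` from two places is safe: the network cost is paid once.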
Granularity levels
- Per-route. `/dashboard` and `/settings` each get their own chunk. The biggest win.
- Per-feature. Heavy editors, charts, and modals lazy-loaded inside a route.
- Vendor split. `react` and `react-dom` in a long-lived vendor chunk that stays cached on the CDN.
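A vendor split is usually a one-liner of bundler config. A minimal sketch for Vite, which exposes Rollup's `output.manualChunks` option (the `vendor` chunk name here is our choice, not a convention the tool enforces):

```js
// vite.config.js — minimal vendor-split sketch.
// `manualChunks` maps a chunk name to the modules it should contain.
export default {
  build: {
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ["react", "react-dom"], // hash only changes when these upgrade
        },
      },
    },
  },
};
```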
Prefetching
The cost of code splitting is a fetch delay the first time a chunked route is visited. Mitigate it with prefetching:

```html
<link rel="prefetch" href="/settings.[hash].js">
```

Next.js's `<Link>` prefetches on hover, so most routes feel instant.
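Hover prefetching can also be done imperatively with the same dynamic `import()` used at the split point: the module record is cached, so the render-time call resolves instantly. A sketch with a hypothetical helper (not a framework API):

```js
// Warm the module cache on hover so a later render-time import() is instant.
// `registerHoverPrefetch` is a hypothetical helper name.
function registerHoverPrefetch(element, loader) {
  // { once: true } — fire the fetch on the first hover only
  element.addEventListener("mouseenter", () => loader(), { once: true });
}

// Usage sketch:
//   registerHoverPrefetch(settingsLink, () => import("./Settings"));
```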
Tradeoffs
| Pro | Con |
|---|---|
| Faster LCP / INP | First-visit delay per chunk |
| Less memory | More HTTP requests (mitigated by HTTP/2 multiplex) |
| Better caching (vendor unchanged) | Risk of waterfall (chunk → another chunk) |
Avoid common mistakes
- Splitting too granularly — every tiny module becomes a request. Aim for ~30-100kb chunks.
- Chunk waterfalls — chunk A imports chunk B sync; load A, then discover B. Use modulepreload or eager loading for chained deps.
- Splitting code that's needed immediately — moving the LCP image's component to a lazy chunk costs more than it saves.
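The waterfall in the second bullet can be sketched with two loaders standing in for `() => import(...)` calls (all names here are hypothetical):

```js
// Chunk waterfall: B is only discovered after A arrives, costing a second
// round trip. Starting both loads up front — what modulepreload achieves
// declaratively — flattens the chain.
async function loadSequential(loadA, loadB) {
  const a = await loadA(); // round trip 1...
  const b = await loadB(); // ...round trip 2, discovered late
  return [a, b];
}

async function loadParallel(loadA, loadB) {
  return Promise.all([loadA(), loadB()]); // both fetches start immediately
}
```

Same modules, same bytes; only the shape of the timeline changes.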
Tree shaking is not code splitting
Tree shaking removes unused exports at build time. Code splitting defers used code to load later. Both matter.
Measurement
- Bundle analyzer (`webpack-bundle-analyzer`, `source-map-explorer`, `vite-plugin-visualizer`).
- Lighthouse JS bytes report.
- RUM: time-to-interactive per route.
Interview framing
"Don't make users download code they don't run. Per-route splitting is the biggest win — Settings code doesn't ship until they visit Settings. Lazy-load heavy features inside routes (modals, charts, editors). Vendor split for long-term caching. Prefetch likely-next routes on hover so the first visit still feels instant. The classic mistakes are over-splitting (request fan-out), under-splitting (5MB initial), and chunk waterfalls. Tree shaking + code splitting are complementary, not the same."
Follow-up questions
- How does prefetching avoid the first-visit penalty?
- When can splitting *hurt* perf?
- What's the difference between dynamic import and React.lazy?
Common mistakes
- Over-splitting into dozens of tiny chunks.
- Splitting LCP-critical code.
- Chunk waterfalls.
- Confusing tree shaking with code splitting.
Performance considerations
- Parse + compile dominates on phones. Each MB saved upfront is multi-second on low-end devices. Prefetch hides the cost of split routes.
Edge cases
- Server-rendered apps need to know which chunks were used per route.
- Suspense boundary placement matters.
- Webpack magic comments for naming chunks.
Real-world examples
- Next.js per-route chunks, Vite dynamic imports, Webpack 5 module federation.