Big list rendering: windowing and virtualization
Category: Bundle size, Core Web Vitals, virtualization, caching. 60 questions.
Build a reusable UI component with a focus on modularity, performance, and design tradeoffs
Ship less JS upfront so the user can interact sooner. Split by route, by interaction (modals, editors), and by visibility (below-fold). In React, use `React.lazy` + `Suspense` for components and dynamic `import()` for libraries. Preload the next likely chunk on hover/idle to hide the network cost.
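A minimal sketch of interaction-driven loading with dynamic `import()`. The name `loadHeavyLib` is illustrative, and `node:path` merely stands in for a heavy dependency; in a real bundler, the `import()` line becomes its own chunk:

```javascript
// Cache the import promise so the "chunk" is fetched at most once,
// no matter how many times the triggering interaction fires.
let heavyLibPromise = null;

function loadHeavyLib() {
  if (!heavyLibPromise) {
    // A bundler splits this into a separate chunk; `node:path` is
    // just a runnable stand-in for a heavy third-party library.
    heavyLibPromise = import('node:path');
  }
  return heavyLibPromise;
}

// Preload on hover/idle to hide the network cost, then use on click:
// button.addEventListener('mouseenter', loadHeavyLib);
// button.addEventListener('click', async () => (await loadHeavyLib()).doWork());
```

Because the promise itself is cached, hover-preloading and the click handler share one request.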
Three axes: route-level (each page its own chunk), component-level (heavy widgets behind dynamic import), and vendor (long-lived deps in their own chunk for cache reuse). Combine all three; default to route-level first.
Debounce = wait until the burst stops, then fire once (e.g., search-as-you-type). Throttle = fire at most once per N ms during a burst (e.g., scroll, drag, resize). For visual updates tied to the screen, prefer `requestAnimationFrame` over a fixed throttle interval.
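The two patterns above can be sketched in a few lines each — a simplified version (no leading/trailing options, no `this` forwarding) just to show the core difference:

```javascript
// Debounce: reset the timer on every call; fire once after the burst ends.
function debounce(fn, waitMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Throttle: fire immediately, then ignore calls until the interval elapses.
function throttle(fn, intervalMs) {
  let last = 0;
  return (...args) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn(...args);
    }
  };
}
```

Production utilities (e.g. lodash's) add leading/trailing edges and cancellation, but the timing model is the same.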
Difference between SSR and CSR
Lazy: load when needed. Preload: load now, high priority, for the current page. Prefetch: load during idle time, low priority, for a future navigation.
Explain Web Performance Metrics
Frontend system design fundamentals (component structure, state, performance)
Frontend system design: Performance, SSR, and SEO
How can you do caching on a website
How do you explain frontend performance metrics (like TTI, FCP, CLS) to non-technical stakeholders
How do you handle bundle analysis and optimization
How do you measure and quantify the impact of a performance fix
How do you measure performance in real-world projects
How do you optimise bundle splitting
LCP: ship the hero image fast (CDN, format, priority). INP: keep main-thread tasks short. CLS: reserve space for everything that loads later.
How do you optimize web performance and reduce load times
How do you prevent XSS and CSRF attacks
Measure first, then: tree-shake, route-split, dynamic import heavy widgets, swap heavy deps, ship modern syntax, and budget aggressively.
TTI is the moment the page is reliably responsive. The killer is JS — download, parse, execute, hydrate. Reduce by shipping less JS (RSC, code splitting, tree shaking), avoiding long tasks, deferring non-critical work, and hydrating selectively.
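One way to avoid long tasks is to chunk large loops and yield to the event loop between slices. A hedged sketch (the helper name and chunk size are arbitrary; in the browser, `scheduler.yield()` or `requestIdleCallback` are alternatives to the `setTimeout` yield used here):

```javascript
// Process a large array without blocking the main thread: handle a
// slice of items, then yield so input events can run in between.
async function processInChunks(items, handle, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(handle(item));
    }
    // Yield to the event loop between chunks to break up the long task.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```

Each chunk stays well under the 50 ms long-task threshold, so a click arriving mid-way is handled between slices instead of queuing behind one giant task.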
How thunk-based API calls work and how caching behaves
How to cancel previous API requests
How unnecessary re-renders happen and how to avoid them
How would you build maps.google.com (tile loading, performance, UI responsiveness)
How would you design a system to handle client-side caching, API retries, and error boundaries gracefully
How would you implement dynamic theming (light/dark mode) in a large web application without performance issues
How would you manage performance if the undo stack gets large
HTTP caching strategies
Don't render 5000 DOM nodes. Combine: server-side search/pagination, async incremental load, virtualization (react-window / TanStack Virtual), and a debounced filter input. Most apps need only the last three; large lists need all four.
Serve modern formats (AVIF/WebP), correct sizes via srcset, lazy-load below-the-fold, eager + fetchpriority='high' for the LCP image, reserve dimensions to prevent CLS, and use a CDN with on-the-fly resizing.
Infinite scroll, virtual list — what's the performance model
Use `<img loading="lazy">` for below-fold, `fetchpriority="high"` for the LCP image, modern formats (AVIF/WebP) via `<picture>` with fallbacks, correctly-sized `srcset` + `sizes`, explicit `width`/`height` (or aspect-ratio) to prevent CLS, and an image CDN that serves the right variant per device. Prefer `<img>` over CSS `background-image` for content imagery.
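Putting those attributes together — a sketch with hypothetical file names, showing the eager LCP hero versus a lazy below-the-fold image:

```html
<!-- LCP hero: eager + high priority; width/height reserve space (no CLS). -->
<picture>
  <source type="image/avif" srcset="hero-800.avif 800w, hero-1600.avif 1600w" sizes="100vw">
  <source type="image/webp" srcset="hero-800.webp 800w, hero-1600.webp 1600w" sizes="100vw">
  <img src="hero-1600.jpg" width="1600" height="900" alt="Hero"
       fetchpriority="high" decoding="async">
</picture>

<!-- Below-the-fold image: browser-native lazy loading. -->
<img src="chart.webp" width="800" height="450" loading="lazy" alt="Monthly chart">
```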
Memoization stops re-renders by giving React's diff stable references and stable child props. `React.memo` skips child renders when props are shallow-equal; `useMemo` caches expensive values; `useCallback` caches function identity. The React Compiler (promoted alongside React 19) automates most of this — manual memoization is becoming a fallback, not the default.
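The mechanism can be shown without React at all. A hedged sketch (`shallowEqual` mirrors `React.memo`'s default comparison; `memoLast` is a hypothetical helper that skips re-running a render function when props are shallow-equal, the way a memoized child skips a render):

```javascript
// Shallow prop comparison — the same default check React.memo performs.
function shallowEqual(a, b) {
  const aKeys = Object.keys(a);
  const bKeys = Object.keys(b);
  if (aKeys.length !== bKeys.length) return false;
  return aKeys.every((k) => Object.is(a[k], b[k]));
}

// Re-run `render` only when props fail the shallow check.
function memoLast(render) {
  let lastProps = null;
  let lastResult = null;
  let calls = 0;
  const wrapped = (props) => {
    if (lastProps && shallowEqual(lastProps, props)) return lastResult;
    lastProps = props;
    lastResult = render(props);
    calls += 1;
    return lastResult;
  };
  wrapped.renderCount = () => calls;
  return wrapped;
}
```

This also shows why unstable references defeat memoization: a parent passing a fresh `{label: 'a'}` object literal still passes the shallow check, but a fresh inline arrow function would not — hence `useCallback`.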
Optimize performance for scale
Don't render 10,000 nodes. Virtualize: render only the slice visible in the viewport (plus a small overscan). Use `@tanstack/react-virtual` or `react-window`. Stable keys, memoized row components, fixed or pre-measured heights, and CSS containment to keep paint cheap. Cursor-based pagination on the server side if data is unbounded.
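The core of fixed-height virtualization is one piece of arithmetic — which slice of rows intersects the viewport. A minimal sketch of that calculation (the function name and parameter shape are illustrative; libraries like `@tanstack/react-virtual` generalize it to measured/variable heights):

```javascript
// Given scroll position and a fixed row height, compute which rows to mount.
// Only this slice (plus overscan) exists in the DOM at any time.
function visibleRange({ scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3 }) {
  const first = Math.floor(scrollTop / rowHeight);
  const visibleCount = Math.ceil(viewportHeight / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount - 1, first + visibleCount + overscan),
    // Spacer height keeps the scrollbar sized for the full list.
    totalHeight: rowCount * rowHeight,
  };
}
```

Each mounted row is absolutely positioned at `index * rowHeight` inside a spacer of `totalHeight`, so scrolling feels identical to a fully rendered list.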
Third-party JS (analytics, ads, chat widgets, A/B tools) is usually the worst offender on real-user TTI. Defer everything by default with `async` or `defer`, load post-interaction or on `requestIdleCallback`, sandbox in a Web Worker (Partytown) when feasible, and budget the total third-party weight. Self-host the loader where the vendor allows.
Performance topic: CDN + asset optimization
Performance topic: Code splitting and dynamic imports
Performance topic: Debounced inputs
Performance topic: Virtualization
Performance, SSR, and SEO — what matters and how they interact
Preload, prefetch, and lazy loading
preload = high-priority current-page resource. prefetch = low-priority future-navigation resource. preconnect = warm up TCP+TLS to an origin. dns-prefetch = resolve DNS only. Use the right one or you waste bandwidth.
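The four hints side by side, as a sketch with hypothetical URLs:

```html
<!-- Current page, needed soon, high priority (e.g. the LCP font/image). -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>

<!-- Likely next navigation, fetched at idle, low priority. -->
<link rel="prefetch" href="/checkout.js">

<!-- Warm up DNS + TCP + TLS to a third-party origin before any request. -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>

<!-- Cheapest hint: resolve DNS only. -->
<link rel="dns-prefetch" href="https://analytics.example.com">
```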
Reflow = browser recomputes geometry; repaint = re-rasterizes pixels. Batch DOM writes, separate reads from writes (avoid layout thrashing), animate `transform`/`opacity` (composite-only), and use `will-change` / `contain` sparingly to isolate work. Use rAF for visual updates, and batch layout reads like `getBoundingClientRect` to once per frame.
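The read/write separation can be sketched as a tiny batching queue (the FastDOM pattern, simplified — names like `createFrameScheduler` are illustrative, and in the browser you would call `flush()` inside `requestAnimationFrame`):

```javascript
// Queue layout reads and DOM writes separately, then flush all reads
// before all writes so layout is recalculated at most once per frame.
function createFrameScheduler() {
  const reads = [];
  const writes = [];
  return {
    measure: (fn) => reads.push(fn),   // e.g. () => el.getBoundingClientRect()
    mutate: (fn) => writes.push(fn),   // e.g. () => { el.style.width = '10px'; }
    flush: () => {
      reads.splice(0).forEach((fn) => fn());   // all reads first...
      writes.splice(0).forEach((fn) => fn());  // ...then all writes
    },
  };
}
```

Interleaving read → write → read → write forces a synchronous layout on every read; flushing reads first means the second read hits a clean layout instead of a dirty one.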
The browser pipeline: JS → Style → Layout (reflow) → Paint (repaint) → Composite. Layout is the most expensive; transform/opacity skip layout AND paint and run on the GPU. Avoid layout-thrashing read/write loops.
CSR ships JS and renders in the browser (best for app-like, auth'd UI). SSR renders per request on the server (best for personalized, fresh content). SSG pre-renders at build time (best for marketing/blogs). ISR adds incremental revalidation on top of SSG.
Tree shaking is dead-code elimination over ES modules. Static `import`/`export` syntax lets bundlers analyze the dependency graph and drop exports nothing imports. Side effects (or `sideEffects: false`) decide what's safely removable.
LCP measures loading (largest paint), INP measures interaction responsiveness (replaced FID in 2024), CLS measures layout stability. Optimize each with different levers: LCP via image/critical-resource pipeline, INP via task scheduling, CLS via reserving space.
What are Preload, Preconnect, Prefetch, and Prerender
What are resource hints (preload, prefetch, dns-prefetch, preconnect)
What challenges have you faced when building for large user traffic
`next/image` gives you on-the-fly resizing, modern formats (AVIF/WebP), responsive srcset, lazy loading, and reserved layout space (no CLS). A plain `<img>` needs you to wire all of that up yourself.
What is lazy loading and how to implement it
What is Service Worker
What performance metrics do you track in production
What techniques would you use to ensure performance and responsiveness
What would be your approach for handling the drag events, and how would you optimize performance
When function declarations vs arrow functions impact performance
When the list is long enough (≈hundreds of rows) that DOM node count alone hurts — measure first. Virtualization renders only the visible rows plus overscan, trading extra complexity for lower memory use and render time.