You need to render a large dataset without blocking the main thread — how do you approach it?
Don't render it all: virtualize the visible window, paginate/infinite-scroll the data. For heavy computation, move it off the main thread (Web Worker) or chunk it across frames; use React's startTransition/useDeferredValue to keep input responsive. Stream and process incrementally.
"Large dataset" + "don't block the main thread" splits into two problems: don't render too many DOM nodes, and don't run heavy JS synchronously. Different fixes for each.
Problem 1 — too much to render → virtualize
The DOM and React reconciliation can't handle tens of thousands of nodes.
- Windowing / virtualization — render only the visible rows + overscan (`@tanstack/react-virtual`, `react-window`). The DOM stays at ~30 nodes regardless of dataset size.
- Pagination / infinite scroll — don't even fetch everything; load pages/cursors on demand.
- `content-visibility: auto` — lets the browser skip rendering work for off-screen sections of long heterogeneous content.
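Windowing is mostly index arithmetic; a minimal sketch of the visible-window calculation (the `visibleRange` helper and its parameters are illustrative, not a library API):

```javascript
// Compute which rows to render for a scrolled viewport.
// Everything outside [start, end] stays unmounted.
function visibleRange(scrollTop, rowHeight, viewportHeight, overscan, total) {
  const first = Math.floor(scrollTop / rowHeight); // first visible row
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1; // last visible row
  return {
    start: Math.max(0, first - overscan), // pad above for smooth scrolling
    end: Math.min(total - 1, last + overscan), // pad below
  };
}

// 100,000 rows, 30px each, 600px viewport: only ~26 rows ever exist in the DOM.
visibleRange(3000, 30, 600, 3, 100000); // → { start: 97, end: 122 }
```

Each rendered row is then absolutely positioned at `index * rowHeight` inside a container whose height is `total * rowHeight`, so the scrollbar behaves as if everything were rendered.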
This handles the rendering side. But if you also need to process the data, that's problem 2.
Problem 2 — heavy computation → get it off the main thread
Sorting/filtering/transforming/parsing a huge dataset synchronously freezes the UI (no input, no scroll, no paint).
a) Web Worker — move the heavy work to a background thread. The main thread stays free; post the data in, post results out. Best for genuinely CPU-heavy work (parsing big JSON/CSV, sorting/aggregating 100k+ rows, image processing). Comlink makes the messaging ergonomic.
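A minimal sketch of the pattern, with illustrative names (`sortRows` and `worker.js` are placeholders, not a specific API). The heavy function is plain, thread-agnostic code; the main thread only posts messages:

```javascript
// worker.js — runs on the background thread (browser Web Worker):
//   self.onmessage = (e) => self.postMessage(sortRows(e.data.rows, e.data.key));

// The CPU-heavy part is ordinary code with no thread awareness:
function sortRows(rows, key) {
  return [...rows].sort((a, b) => (a[key] < b[key] ? -1 : a[key] > b[key] ? 1 : 0));
}

// Main thread: post the data in, resolve when the result comes back.
// The UI never blocks while the worker sorts.
function sortInWorker(rows, key) {
  return new Promise((resolve) => {
    const worker = new Worker('worker.js');
    worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
    worker.postMessage({ rows, key });
  });
}
```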
b) Chunk the work across frames — split the job into batches and yield between them so the browser can paint and handle input:
- `requestIdleCallback` for low-priority work in idle time.
- A batch loop with `setTimeout(0)` / `await scheduler.yield()` between chunks.
- Process N items, yield, repeat — the UI stays responsive; the work just takes a bit longer in wall-clock time.
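The batch loop is small enough to sketch directly (the batch size is a tunable guess, not a fixed rule):

```javascript
// Process items in batches, yielding to the event loop between batches
// so the browser can paint and handle input between chunks.
async function processInChunks(items, fn, batchSize = 500) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    for (const item of items.slice(i, i + batchSize)) results.push(fn(item));
    // Yield a macrotask: setTimeout(0) works everywhere;
    // `await scheduler.yield()` is the newer API where supported.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
  return results;
}
```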
c) React concurrent features — startTransition / useDeferredValue mark the expensive render as non-urgent so typing/clicking stays responsive while the big list updates in the background. (This keeps React's own work interruptible — it doesn't move JS off-thread; combine with virtualization.)
Problem 3 — large data arriving → stream it
- Stream and process incrementally — don't wait for a 50MB response; process chunks as they arrive (streaming fetch, server-sent chunks).
- Render progressively as data lands.
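A sketch of incremental processing over a streamed body, assuming newline-delimited records (`onLine` is a placeholder for whatever per-record work you do):

```javascript
// Consume a ReadableStream of bytes line by line as chunks arrive,
// instead of buffering the whole response first.
async function streamLines(stream, onLine) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) onLine(line);
  }
  if (buffer) onLine(buffer); // flush the final unterminated line
}

// Usage with fetch:
//   const res = await fetch(url);
//   await streamLines(res.body, renderRow);
```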
Putting it together
For a typical "100k-row table that must stay smooth":
- Paginate/stream the fetch — don't pull all 100k at once.
- Virtualize the table — render only the visible window.
- Web Worker for sort/filter/aggregate over the full set.
- `useDeferredValue` on the filter input so typing stays responsive.
- Memoize derived data; debounce expensive inputs.
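Debounce itself is a few lines; a generic sketch, not tied to any library:

```javascript
// Delay `fn` until `ms` milliseconds have passed without another call,
// so rapid keystrokes trigger one expensive recompute, not one per key.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```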
The framing
"Two separate concerns. Rendering: virtualize so DOM size is constant, and paginate/stream so I'm not even holding it all. Computation: Web Worker for genuinely heavy CPU work, or chunk-and-yield across frames, plus startTransition/useDeferredValue to keep input responsive. The principle: never do a large amount of synchronous work — render or compute — on the main thread at once."
Follow-up questions
- When do you reach for a Web Worker vs chunking work across frames?
- What does startTransition/useDeferredValue actually do — and not do?
- How does virtualization differ from pagination here?
- How would you process a huge response without waiting for it all?
Common mistakes
- Rendering all rows and freezing the browser.
- Sorting/filtering a huge array synchronously on the main thread.
- Thinking startTransition moves work off-thread (it doesn't — it just deprioritizes React work).
- Fetching the entire dataset at once instead of paginating/streaming.
- No debounce/memoization on expensive derived computations.
Performance considerations
- Virtualization caps DOM and reconciliation cost.
- Web Workers free the main thread entirely but cost message serialization.
- Chunk-and-yield keeps the UI responsive at the price of longer wall-clock time.
- startTransition keeps input responsive by making React's render interruptible.
Edge cases
- Worker serialization cost for very large payloads (structured clone overhead).
- Sorting/filtering must be reflected consistently in the virtualized view.
- Scroll-to-item targeting rows the virtualizer hasn't rendered yet.
- Streaming data that reorders as more arrives.
Real-world examples
- A 100k-row data grid: virtualized rows + Web Worker sort/filter + useDeferredValue on the search box.
- Parsing a large uploaded CSV in a worker so the UI never freezes.