A complex dashboard is lagging when users apply filters — how do you profile and optimize it?
Measure first (React DevTools Profiler + Chrome Performance) — don't guess. Common culprits: derived data recomputed on every render, large lists not virtualized, charts re-rendering on unrelated state changes, expensive layout on every filter change, fetching too eagerly. Fixes: memoize derived data, move heavy computation off the render path (worker / server), virtualize lists, debounce filter input, scope React keys/state to avoid wholesale re-renders.
"My dashboard lags on filter change" is the classic perf question. The right answer is measure, then fix the specific bottleneck — not a list of generic optimizations. Walk through the diagnostic process.
Step 1 — reproduce and define "lag"
- Where exactly? Filter input feels slow? Filter applies but UI doesn't respond? Charts re-render slow? Scrolling slow after?
- Quantify — record on a slow CPU (Chrome DevTools throttling) and look at INP, frame times, total scripting.
Step 2 — profile with the right tools
React DevTools Profiler
- Record one filter interaction.
- Look at the flame chart: which components rendered, how often, how long.
- "Why did this render?" — DevTools shows whether props/state/parent triggered it.
Common findings:
- Many components re-rendering when only one needed to.
- A heavy memoized component re-rendering because its prop is a fresh function/object each time.
- The whole dashboard re-rendering for a filter change.
Chrome DevTools Performance panel
- Record a filter interaction.
- Look at scripting (yellow) vs rendering/painting (purple) vs idle.
- Find long tasks (> 50ms blocks).
- Layout thrash — forced synchronous layouts show up as purple Layout blocks flagged with a red warning triangle.
Step 3 — common culprits and fixes
Culprit: derived data recomputed every render
// BAD: filtered on every render
const filtered = data.filter(matches(filters));
Fix: memoize.
const filtered = useMemo(() => data.filter(matches(filters)), [data, filters]);
Culprit: child components re-rendering on parent state change
Memoize children with React.memo; stabilize callback props with useCallback; stabilize object props with useMemo.
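Why referential stability matters: React.memo bails out only when every prop is shallow-equal to the previous render's props. A plain-JS sketch of that comparison (shallowEqual here approximates React's internal check; the prop names are illustrative):

```javascript
// Approximation of React.memo's default comparison: every prop key
// must be Object.is-equal to the previous render's value.
const shallowEqual = (prev, next) => {
  const keys = Object.keys(next);
  return (
    keys.length === Object.keys(prev).length &&
    keys.every((k) => Object.is(prev[k], next[k]))
  );
};

// A fresh inline callback/object is a new reference every render,
// so a memoized child re-renders anyway:
const render1 = { onSelect: () => {}, style: { width: 100 } };
const render2 = { onSelect: () => {}, style: { width: 100 } };
console.log(shallowEqual(render1, render2)); // false — memo defeated

// Stable references (what useCallback/useMemo give you) pass the check:
const onSelect = () => {};
const style = { width: 100 };
console.log(shallowEqual({ onSelect, style }, { onSelect, style })); // true
```

This is why wrapping a child in React.memo does nothing until its function and object props are stabilized too.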
Culprit: huge list re-rendered
If 5000 rows re-render on every filter change, virtualize so only the visible rows mount (react-window / react-virtual).
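The core of virtualization is pure window math — a simplified sketch of what a library like react-window computes (parameter names are illustrative):

```javascript
// Given scroll position, return the index range of rows intersecting
// the viewport, plus a small overscan so fast scrolling doesn't flash
// blank rows. Only these rows get rendered.
function visibleRange(scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3) {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight) - 1;
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount - 1, last + overscan),
  };
}

// 5000 rows, 30px each, 600px viewport, scrolled to 3000px:
const { start, end } = visibleRange(3000, 600, 30, 5000);
// start = 97, end = 122 — ~26 rows rendered instead of 5000
```

Filter changes then re-render a couple of dozen rows, not thousands.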
Culprit: heavy computation on the main thread
Move to a Web Worker — JSON parsing, aggregation, chart-data prep. The main thread stays free for input/render.
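A worker-side sketch to pair with the main-thread wiring below (the aggregate.js filename, row fields, and flat equality-filter shape are all assumptions):

```javascript
// aggregate.js — worker-side sketch. The aggregation is a pure function,
// so it can be unit-tested without spinning up a worker.
function aggregate(data, filters) {
  const matching = data.filter((row) =>
    Object.entries(filters).every(([key, val]) => row[key] === val)
  );
  const sum = matching.reduce((acc, row) => acc + row.value, 0);
  return { count: matching.length, sum };
}

// Worker wiring: only runs inside an actual Worker context.
if (typeof self !== "undefined" && typeof importScripts === "function") {
  self.onmessage = (e) => {
    const { data, filters } = e.data;
    self.postMessage(aggregate(data, filters));
  };
}
```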
const worker = new Worker("aggregate.js");
worker.postMessage({ data, filters });
worker.onmessage = (e) => setResults(e.data);
Culprit: chart libraries
Many chart libraries re-render slowly on big datasets. Options:
- Aggregate / downsample data before passing in.
- Use canvas-based libs (uPlot, ECharts canvas mode) instead of SVG for large series.
- Memoize chart props; tell the chart to update incrementally if it supports it.
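One way to downsample before handing data to a chart — a simplified min/max bucketing sketch (not LTTB, but unlike naive every-nth sampling it preserves spikes; the point shape is an assumption):

```javascript
// Collapse each bucket of points to its min and max, so a 100k-point
// series becomes ~2 * targetBuckets points while visible extremes survive.
function downsample(points, targetBuckets) {
  if (points.length <= targetBuckets * 2) return points.slice();
  const bucketSize = Math.ceil(points.length / targetBuckets);
  const out = [];
  for (let i = 0; i < points.length; i += bucketSize) {
    const bucket = points.slice(i, i + bucketSize);
    let min = bucket[0], max = bucket[0];
    for (const p of bucket) {
      if (p.y < min.y) min = p;
      if (p.y > max.y) max = p;
    }
    // emit in x order so the rendered line doesn't zigzag backwards
    if (min.x <= max.x) out.push(min, max);
    else out.push(max, min);
  }
  return out;
}
```

Run this in the worker from the previous step and the chart only ever sees a few hundred points per series.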
Culprit: fetching too eagerly
Each filter change refetches → network and parse latency cascade. Fixes:
- Debounce the filter input (~250ms).
- Cache by filter combination (React Query keyed on filters).
- Server-side filter; don't fetch the entire dataset.
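A minimal sketch of the debounce-plus-cache idea (filterKey and the promise cache are illustrative — the same concept as React Query's queryKey):

```javascript
// Collapse bursts of filter changes into one trailing call.
function debounce(fn, ms) {
  let timer;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

// Stable cache key: sort the keys so { a, b } and { b, a } hit the
// same cache entry.
function filterKey(filters) {
  return JSON.stringify(
    Object.keys(filters).sort().map((k) => [k, filters[k]])
  );
}

// Cache the promise itself so concurrent identical requests dedupe.
const cache = new Map();
function fetchFiltered(filters, fetcher) {
  const key = filterKey(filters);
  if (!cache.has(key)) cache.set(key, fetcher(filters));
  return cache.get(key);
}

// Usage: debounce(f => fetchFiltered(f, api).then(setRows), 250)
```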
Culprit: layout thrash
Calling getBoundingClientRect/offsetTop in a loop that also writes styles forces a synchronous layout on every iteration. Batch all the reads first, then all the writes.
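The batching pattern can be sketched as a tiny read/write scheduler (the same idea as the fastdom library; names are illustrative):

```javascript
// Queue DOM reads and writes separately, then flush every read before
// any write — layout is invalidated at most once per flush instead of
// once per loop iteration.
const reads = [];
const writes = [];
const schedule = {
  read(fn) { reads.push(fn); },
  write(fn) { writes.push(fn); },
  flush() {
    reads.splice(0).forEach((fn) => fn());   // measure phase
    writes.splice(0).forEach((fn) => fn());  // mutate phase
  },
};

// Usage sketch: for each row, schedule.read(() => h.push(row.offsetHeight));
// then schedule.write(() => applyHeights(h)); then schedule.flush().
```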
Culprit: filter input itself
A controlled input bound to global state re-renders the whole tree on every keystroke. Keep the input's value in local state and sync it to the global filter state on a debounce.
Step 4 — fix from the top of the cost stack
Profile shows where the time is. Fix the biggest line first — premature micro-optimizations are wasted effort.
Step 5 — verify
Re-profile after each change. Don't ship a "performance fix" without numbers proving it helped.
Step 6 — long-term hygiene
- Budget perf in CI — Lighthouse CI / size-limit.
- Track INP in RUM — surfaces real-user slowness.
- Document the dashboard's load-time budgets so future features know the constraints.
Interview framing
"First, I don't guess — I profile. React DevTools Profiler shows which components rendered and why; Chrome Performance shows main-thread time and layout. From there the culprit is usually one of: derived data recomputed every render (fix with useMemo), memoized children re-rendering because props aren't referentially stable, heavy compute on the main thread (move to a worker), a huge list rendered without virtualization, or refetching the entire dataset per filter change. I'd debounce the filter input, cache by filter combination, and prefer server-side filtering. Then fix the biggest item, re-profile, repeat — and protect the gain with a perf budget so it doesn't regress."
Follow-up questions
- Walk through using the React DevTools Profiler on this.
- When do you move compute to a worker vs memoize it?
- Why might React.memo not help on a child?
- How do you protect a perf fix from regressing?
Common mistakes
- Guessing the bottleneck and adding useMemo everywhere.
- Memoizing children whose props are fresh references every render.
- Not virtualizing 5000-row tables.
- Heavy compute on the main thread blocking input.
- No perf budget — gains regress within a quarter.
Performance considerations
- The whole topic. INP / TBT / main-thread time are the metrics. Cache, memoize, virtualize, worker — in that order of usual impact.
Edge cases
- Slow on low-end mobile only.
- Slow only after some interaction (memory leak?).
- Slow only with specific filter combinations (specific data shape).
Real-world examples
- Analytics dashboards (Mixpanel, Amplitude, Datadog) — heavy lists + charts + filters.
- Admin panels with large data tables.