Tell me about a React performance optimization you implemented that failed
Behavioral: pick a real case where the optimization had no measurable effect or backfired. Common stories: over-memoization (useMemo/useCallback adding cost > benefit), virtualization on a too-small list, premature code splitting causing chunk waterfalls, debouncing the wrong handler. The key signal is showing you measured, learned, and reverted.
Interviewers ask this to see if you measure or vibe. Strong answers follow the situation → action → result → learning arc.
Pick a real failure
Avoid generic "tried memo, didn't help" — pick a story with detail.
Strong example: over-memoized a small tree
"We had a card list with maybe 50 items. INP scores looked fine but I'd seen blog posts about useMemo + React.memo for lists, so I memoized every card and useCallback'd every handler. Result: profiling showed each card was now slower on initial render — memo allocates an object per render, useCallback memoization comparison cost was non-zero, and memo's identity check on props had to traverse every prop reference. Initial render TTI got worse by 8%. What I learned: useMemo / React.memo / useCallback have non-zero cost. They only help when (1) the wrapped work is significant, and (2) re-renders are actually happening with stable inputs. For 50 cards re-rendering in < 1ms, the gate adds overhead with no benefit. I reverted and instead profiled to find the actual slow path — turned out to be a chart sibling, not the cards. Takeaway: profile before optimizing. React DevTools Profiler + the highlight-renders setting tell you what's actually re-rendering."
Other valid stories
Virtualization on a small list
"Virtualized a list that maxed at 200 rows. Scroll perf was fine before; now we had scroll restoration bugs, sticky-header layering issues, and Ctrl-F broke. Net negative."
Premature code splitting
"Split the route into 8 chunks. Initial bundle dropped, but chunk waterfalls during navigation made first-interaction slower. Coalesced back into 3 chunks."
Debouncing the wrong handler
"Debounced the form change handler thinking it would smooth typing — but the actual bottleneck was a synchronous validate() that I should have moved to a Worker. Debounce hid the symptom while users typed but kept the jank on submit."
CDN edge runtime regression
"Moved an SSR route to edge runtime. TTFB improved on average, but Node-specific code paths broke + cold-start variance got worse on uncommon regions."
What strong answers include
- Measurement — you knew it was wrong because of numbers, not vibes.
- Specific metric that worsened (LCP, INP, TTI, bundle size, error rate).
- Hypothesis vs reality — what you expected and what happened.
- Revert / next step — you didn't double down.
- Generalizable lesson — what you do differently now.
What weak answers look like
- "I tried memoization, it didn't help, so I removed it." (No detail.)
- "We optimized something that broke the build." (Build break != perf optimization.)
- Defensive answers blaming someone else.
Interview framing
"Situation–action–result–learning. Pick a real case where measurement showed the optimization didn't help or backfired — over-memoization on a small tree, premature code splitting causing waterfalls, virtualizing a list too small to need it. Include numbers (8% worse TTI, 200ms LCP regression). Show you reverted and found the actual hotspot. The signal interviewers want: you measure, you don't pattern-match optimizations from blog posts onto problems you haven't profiled."
Follow-up questions
- How did you measure the regression?
- What's your default profiling toolkit?
- When is memoization the right answer? (see the sketch below)
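On that last one, a hedged sketch of when useMemo pays for itself: the wrapped work is genuinely expensive and the inputs are referentially stable. `buildChartSeries` and its cost are illustrative assumptions:

```tsx
import React, { useMemo } from "react";

// Hypothetical expensive derivation: imagine tens of milliseconds of
// aggregation rather than this placeholder map.
function buildChartSeries(points: number[]): { x: number; y: number }[] {
  return points.map((y, x) => ({ x, y }));
}

function ChartPanel({ points }: { points: number[] }) {
  // Worth memoizing only because recomputation is expensive and `points`
  // keeps a stable reference across unrelated re-renders. Here the cache
  // check is cheaper than the work: the opposite of the 50-card story.
  const series = useMemo(() => buildChartSeries(points), [points]);
  return <pre>{JSON.stringify(series)}</pre>;
}
```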
Common mistakes
- Vague story without numbers.
- Blaming colleagues.
- Hiding that you didn't measure.
Performance considerations
- Measurement-first: profile, change one thing, re-measure before and after.
Edge cases
- Interviewer probes for what you do differently now.
- Be ready for follow-ups on profiler tools.
Real-world examples
- Personal experience; blog posts on 'when memoization hurts'; React docs on profiling.