Walk me through a performance optimization project where you significantly improved load time
Use STAR. Lead with the metric (LCP went from 4.8s → 1.6s p75). Walk the bottleneck identification, the trade-offs, the rollout, and what you'd do differently.
Performance-project deep dives are the single most common senior-frontend interview prompt — and the most commonly bungled. Candidates either rattle off generic optimizations ("we lazy-loaded things") with no metrics, or dive straight into low-level fixes without framing why the work mattered. The right answer threads a STAR scaffold (Situation, Task, Action, Result) with numbers, trade-offs, and an honest reflection. The numbers prove you measured; the trade-offs prove you understood costs; the reflection proves you can self-critique.
Situation — frame the surface and the stakes. 1–2 sentences setting the business context, with a quantified problem. Generic bad: "Our app was slow." Specific good: "Our checkout LCP at p75 was 4.8s on slow 4G. Funnel analytics showed conversion fell off a cliff above 3s — correlated with ~12% revenue impact on the cohort. The CEO had named it a top-3 quarterly priority."
Task — your specific scope. Make your ownership unambiguous. "I owned the perf workstream for the checkout team — three engineers and a designer. Target: LCP <2.5s p75 within one quarter, measured by Chrome User Experience Report (CrUX) field data."
Action — bottleneck identification. Show measurement before fixes:
- Chrome DevTools / Lighthouse on representative devices, throttled.
- WebPageTest filmstrip + waterfall on real network profiles (3G slow, 4G fast).
- Field data via the web-vitals JS library reported into our analytics (a minimal reporting sketch follows this list) — RUM is what Google's ranking uses; lab is what you optimize against.
- Findings: the LCP element was the hero image, fetched only after the main JS bundle (≈600KB gzipped) had parsed. Fonts blocked text render via FOIT (flash of invisible text). A 200KB analytics SDK was on the critical path.
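A minimal sketch of that field-data pipeline using the web-vitals library; the `/rum` endpoint and payload shape are assumptions, not a prescribed setup:

```ts
// rum.ts: report Core Web Vitals from real user sessions (RUM).
// The "/rum" endpoint and payload fields are hypothetical placeholders.
import { onLCP, onINP, onCLS, type Metric } from "web-vitals";

function report(metric: Metric) {
  const body = JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // ms for LCP/INP, unitless for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    page: location.pathname,
  });
  // sendBeacon survives page unload; fall back to fetch with keepalive.
  if (!navigator.sendBeacon("/rum", body)) {
    fetch("/rum", { method: "POST", body, keepalive: true });
  }
}

onLCP(report);
onINP(report);
onCLS(report);
```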
Action — fixes, prioritized by impact/effort. Show you sequenced by ROI, not by tech excitement:
- Switched the hero to AVIF + `<Image priority>` + a responsive `srcset` → LCP dropped 1.2s (highest-leverage single change; sketched after this list).
- Replaced `moment` with the tree-shakeable `date-fns` (-80KB) and code-split the analytics SDK to load post-LCP → another 800ms.
- Inlined critical CSS, deferred the third-party tag manager (consent-aware) → 400ms.
- `font-display: swap` + preloaded the primary font weight in WOFF2 → CLS down to 0.04, no FOIT.
- Promoted three above-the-fold components from client to server components → ~100KB of JS off the critical path.
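A sketch of the hero change, assuming a Next.js app (implied by `<Image priority>` and the server-component move); the path, dimensions, and alt text are illustrative:

```tsx
// Hero.tsx: make the LCP image discoverable immediately instead of after JS parse.
// Path, dimensions, and alt text are illustrative.
import Image from "next/image";

export function Hero() {
  return (
    <Image
      src="/hero-checkout.avif"
      alt="Checkout summary"
      width={1200}
      height={600}
      // priority preloads the image and opts out of lazy-loading,
      // so the fetch no longer waits on the main bundle.
      priority
      // sizes lets the browser pick a candidate from the generated srcset.
      sizes="(max-width: 768px) 100vw, 1200px"
    />
  );
}
```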
Trade-offs you explicitly named (this is the senior signal):
- AVIF support gap on older iOS — kept a WebP fallback via `<picture>` (sketched after this list). Cost: complexity, ~10 extra lines per image.
- Deferring analytics meant losing some early-funnel events — accepted, and replaced them with server-side tracking to keep the funnel intact.
- Critical-CSS extraction added 30s to the build — accepted, ran async; gated on prod only.
- The hero AVIF was 40% smaller but slower to decode on low-end Android — measured INP, found no regression, but flagged for monitoring.
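The `<picture>` fallback named above, as a plain JSX sketch with illustrative paths:

```tsx
// HeroPicture.tsx: serve AVIF where supported, WebP everywhere else.
// Image paths and dimensions are illustrative.
export function HeroPicture() {
  return (
    <picture>
      {/* The browser takes the first source whose type it can decode. */}
      <source type="image/avif" srcSet="/hero-checkout.avif" />
      <source type="image/webp" srcSet="/hero-checkout.webp" />
      {/* <img> is the final fallback and carries the layout attributes;
          explicit width/height reserve space and keep CLS near zero. */}
      <img src="/hero-checkout.webp" alt="Checkout summary" width={1200} height={600} />
    </picture>
  );
}
```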
Result — with numbers, statistically valid, time-bounded:
- LCP 1.6s p75, INP <120ms, CLS 0.04 (CrUX, four-week window).
- Conversion +6.4% (statistically significant, two-sided test, four weeks).
- Bundle 600KB → 290KB gzipped on critical path.
- Annualized revenue impact ≈$X (per finance's model — quote ranges, don't invent precision).
Reflection — what you'd do differently. This is what separates senior from staff signal:
- "I'd ship Web Vitals to our error-monitoring SLO board next time so regressions page us within hours, not Tuesday's report."
- "I'd push for a CI bundle budget before the project, not after — a budget would have caught the analytics SDK regression that originally introduced 200KB."
- "I'd have onboarded the design team to AVIF earlier; the back-and-forth on rendering parity cost two weeks."
Bonus signals interviewers listen for:
- You measured before optimizing. Anyone can list optimizations from a blog post; the question is which one mattered.
- You named trade-offs, not just wins. Every change costs something. If you can't say what, the interviewer assumes you didn't think about it.
- You collaborated. Performance is rarely a solo project — name the PM call on AVIF, the designer on the font choice, the SRE on CDN headers.
- You quantified the business outcome, not just the technical one. "LCP improved" is engineering; "conversion +6.4%" is the reason engineering was funded.
- You can answer ambush follow-ups ("what if the analytics SDK was contractual?", "what would you do for the tablet cohort?", "what's the cheapest thing to do next?") — readiness here signals depth of ownership.
Pre-interview prep: write down 2–3 STAR stories (perf, reliability, scope-reduction, team-conflict) with hard numbers and trade-offs, before you walk in. You'll forget under pressure if you didn't.
Follow-up questions
- How did you decide which optimization to do first?
- How did you convince the PM to deprioritize a feature for perf work?
- How did you measure the conversion impact?
- What regressed afterwards, and how did you catch it?
Common mistakes
- Vague metrics ('it got faster') — interviewers mark this down hard.
- Skipping trade-offs — every choice has cost.
- Taking sole credit when it was a team effort.
Performance considerations
- Frame answers with p50 vs p75 vs p99 — shows you think in distributions.
Edge cases
- If you don't have a perf project, talk about a systematic improvement (test reliability, build time, error rate) and use the same scaffold.
Real-world examples
- BBC News, Pinterest, Walmart all have public case studies — read them for vocabulary.