Your app feels slow, but your Lighthouse score is good. Why?
Lighthouse is a synthetic lab test of initial page load. It misses runtime/interaction performance, real-device and real-network variance, post-load JS jank, slow API responses, and the specific user flows that feel slow. Measure with field data (RUM) and profile the actual interactions.
This is a great question because it exposes whether you understand what Lighthouse actually measures — and what it doesn't.
What Lighthouse measures
A synthetic lab audit of initial page load on a simulated mid-tier device and network, for a single URL, usually the landing state. Great for catching load-time regressions. But that's a narrow slice of "feels slow."
Why a good score can still feel slow
1. It doesn't measure runtime / interaction performance. Lighthouse looks at load. If your app janks after load — slow list rendering, laggy typing, a 300ms dropdown, scroll jank — Lighthouse never sees it. (INP partly addresses this in field data, but lab Lighthouse still won't catch your specific slow interaction.)
2. Lab ≠ field. Lighthouse runs once, on a controlled machine, with a simulated connection, no extensions, and a predictable cache state. Real users have low-end phones, flaky networks, browser extensions, full or empty caches, and background tabs. RUM (Real User Monitoring) tells the real story.
3. It tests one URL, often the simplest state. The slow part might be the dashboard with 500 rows, the search results page, a deep authenticated flow — not the marketing home page Lighthouse scored.
4. Slow APIs / data fetching. A fast shell that then sits on spinners because the backend is slow, or there's a request waterfall. Lighthouse's load metrics may look fine while the useful content is late.
5. Post-load JavaScript cost. Hydration, heavy re-renders, unoptimized state updates, expensive effects — the page "loaded" fast but is busy and unresponsive.
6. Perceived performance. No skeletons, janky transitions, layout shifts during interaction, no optimistic UI — it feels slow even when metrics are ok.
7. Metric gaming. You can optimize for the score (lazy-load everything below the fold, defer all JS) in ways that make the score green but the experience worse.
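The request-waterfall problem from point 4 is easiest to see in code. A minimal sketch, where `fetchUser` and `fetchOrders` are hypothetical stand-ins for real API calls with simulated latency:

```javascript
// Hypothetical API helpers: each "request" takes ~100ms.
const delay = (ms, value) => new Promise((res) => setTimeout(() => res(value), ms));
const fetchUser = () => delay(100, { id: 1, name: "Ada" });
const fetchOrders = () => delay(100, [{ orderId: 7 }]);

// Waterfall: the second request waits on the first (~200ms total),
// even though it doesn't depend on the user's data.
async function loadDashboardWaterfall() {
  const user = await fetchUser();
  const orders = await fetchOrders();
  return { user, orders };
}

// Parallel: both requests start immediately (~100ms total).
async function loadDashboardParallel() {
  const [user, orders] = await Promise.all([fetchUser(), fetchOrders()]);
  return { user, orders };
}

async function main() {
  const t0 = Date.now();
  await loadDashboardWaterfall();
  const waterfallMs = Date.now() - t0;

  const t1 = Date.now();
  await loadDashboardParallel();
  const parallelMs = Date.now() - t1;

  // Parallel should take roughly half the time of the waterfall.
  console.log({ waterfallMs, parallelMs });
}

main();
```

Lighthouse's load metrics can look fine either way; the waterfall only shows up when you trace the real flow in the Network panel.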
How to actually diagnose it
- Get field data — RUM (web-vitals library, Sentry, SpeedCurve, CrUX). Look at INP, not just LCP. p75/p95, not averages.
- Profile the specific slow flow — Chrome DevTools Performance panel during the interaction users complain about, with CPU/network throttling. React DevTools Profiler for re-renders.
- Check the network — slow APIs? Request waterfalls? Walk through the real flow in the Network panel.
- Test on a real low-end device.
- Watch session replays — see what users actually experience.
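Why p75/p95 instead of averages? A small sketch with made-up INP samples (ms): most interactions are fine, but a tail of users sees real jank, and the average smears that into a number no user actually experienced.

```javascript
// Made-up field samples: 18 snappy interactions, 2 terrible ones.
const samples = [...Array(18).fill(100), 2000, 2000];

// Nearest-rank percentile.
const percentile = (values, p) => {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
};

const mean = (values) => values.reduce((s, v) => s + v, 0) / values.length;

console.log(mean(samples));           // → 290 — a value nobody experienced
console.log(percentile(samples, 75)); // → 100 — most users are fine
console.log(percentile(samples, 95)); // → 2000 — 1 in 10 sees 2s of jank
```

This is why RUM dashboards report p75/p95: the tail is where "feels slow" lives, and the mean hides it.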
The one-liner
"Lighthouse is a synthetic lab test of initial load on one URL. 'Feels slow' is usually runtime interaction performance, real-device/real-network variance, or slow data — none of which Lighthouse fully captures. I'd pull RUM (especially INP) and profile the actual slow flow."
Follow-up questions
- What's the difference between lab data and field data?
- What is INP and why does it matter more than LCP for 'feels slow'?
- How would you profile a specific slow interaction?
- How can you game a Lighthouse score while making the app worse?
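For the profiling follow-up, one lightweight approach is the User Timing API (`performance.mark`/`performance.measure`). A minimal sketch — `onFilterChange`, `applyFilters`, and the mark names are hypothetical stand-ins for your slow flow:

```javascript
// Hypothetical expensive work triggered by a user interaction.
const applyFilters = (rows) => rows.filter((r) => r.active);

function onFilterChange(rows) {
  performance.mark("filter-start");
  const result = applyFilters(rows);
  performance.mark("filter-end");
  // measure() returns a PerformanceMeasure entry with a duration in ms.
  const m = performance.measure("filter-change", "filter-start", "filter-end");
  // In a real app, ship m.duration to your RUM backend and watch p75/p95.
  return { result, duration: m.duration };
}

console.log(onFilterChange([{ active: true }, { active: false }]).result.length); // → 1
```

These measures also show up as labeled spans in the DevTools Performance panel, so the same instrumentation serves both local profiling and field data.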
Common mistakes
- Treating the Lighthouse score as a complete performance picture.
- Optimizing the score instead of the actual user experience.
- Only ever auditing the home page / simplest state.
- Ignoring field data and runtime/interaction performance.
Performance considerations
- Lab tools (Lighthouse) catch load regressions; field tools (RUM, CrUX, web-vitals) catch real experience. INP captures interaction responsiveness Lighthouse's lab run misses. Always profile the actual complained-about flow on throttled/real hardware.
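A common fix for poor INP is splitting one long task so the main thread can handle input between chunks. A minimal sketch, where `processItem` and the yield helper are hypothetical; in newer browsers `scheduler.yield()` expresses the same idea more directly:

```javascript
// Hypothetical per-item work — pretend this is expensive.
const processItem = (item) => item * 2;

// One long task: blocks the main thread for the whole loop, hurting INP.
function processAllBlocking(items) {
  return items.map(processItem);
}

// Yield back to the event loop so clicks and keystrokes queued during
// processing get handled between chunks.
const yieldToEventLoop = () => new Promise((res) => setTimeout(res, 0));

async function processAllChunked(items, chunkSize = 100) {
  const out = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) out.push(processItem(item));
    await yieldToEventLoop(); // main thread is free here
  }
  return out;
}
```

Both versions produce the same result; the chunked one just never holds the main thread long enough to make an interaction feel stuck.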
Edge cases
- Authenticated or data-heavy pages Lighthouse never audits.
- Performance that degrades only on low-end devices.
- Jank that appears only after specific interactions.
- Slow third-party scripts that vary by run.
Real-world examples
- A 95 Lighthouse score on the marketing page while the logged-in dashboard janks on every filter change.
- Fast LCP but poor INP because hydration and heavy re-renders block interaction.