How I set up core web vitals monitoring with Vercel Analytics and Next.js
May 17, 2026 · 7 min read
Core web vitals monitoring on Vercel Analytics gives you real-user measurement (RUM) data tied directly to your Next.js deployments — no separate analytics account, no third-party script bloat. This post covers the exact setup I use, which percentiles to watch, and the workflow I follow when a metric regresses after a deploy.
Why Vercel Analytics over Lighthouse or PageSpeed Insights
Lighthouse runs in a controlled lab environment on a single machine. It's useful for catching obvious problems during development, but it doesn't tell you what actual users on their actual devices and connections are experiencing. Speed Insights instruments the browser directly and sends field data back — LCP, INP, CLS, FCP, and TTFB — keyed to the deployment SHA that served the page.
That deployment linkage is the part I find most useful. When a metric spikes, I can immediately correlate it with the commit that went out, rather than digging through log timestamps.
Installing the packages
You need two separate packages. @vercel/analytics handles page-view and custom-event tracking for the Web Analytics dashboard. @vercel/speed-insights collects the Core Web Vitals field data for the Speed Insights product (the one with the p75 breakdown per route). They're complementary, and they report into different dashboards.
```bash
npm install @vercel/analytics @vercel/speed-insights
```

In the App Router, mount both in your root layout:
```tsx
// app/layout.tsx
import { Analytics } from '@vercel/analytics/react';
import { SpeedInsights } from '@vercel/speed-insights/next';
import type { ReactNode } from 'react';

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en">
      <body>
        {children}
        <Analytics />
        <SpeedInsights />
      </body>
    </html>
  );
}
```

Both components inject a small script that defers until after the page is interactive — they won't affect your LCP or INP scores themselves. The SpeedInsights component from the /next subpath automatically handles route-change tracking for the App Router without any extra configuration.
Deploy to a Vercel project and data starts appearing within a few minutes of real traffic hitting the site. There is no data in preview deployments unless you explicitly enable it in your project settings — which I leave off to avoid polluting production metrics with internal QA traffic.
Reading the dashboards: which numbers actually matter
The Speed Insights dashboard shows each metric broken down by route and by percentile. The percentile that matters for Google's CrUX-based ranking signal is p75 — the 75th percentile of all field observations. That means 75% of your users experienced a time at or below that value. If your p75 LCP is 3.1 s, you're in the "needs improvement" band, and roughly one in four users is seeing something worse than that.
I ignore the median (p50) for ranking purposes. It flatters the numbers. Median LCP of 1.8 s sounds great until you notice p75 is 3.4 s and p95 is 7.2 s — a long tail of users on slow connections or low-end Android devices who are having a genuinely bad experience.
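To make the p50/p75/p95 distinction concrete, here's a small sketch of my own (not part of any Vercel SDK) of a nearest-rank percentile calculation over a batch of field LCP samples, using the same illustrative numbers as above:

```typescript
// Nearest-rank percentile over raw field samples (values in ms).
// Illustration only — the Speed Insights dashboard computes this for you.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank: the smallest value such that p% of samples are <= it.
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// A skewed distribution: most users are fast, the tail is very slow.
const lcpSamples = [1200, 1400, 1500, 1600, 1800, 2000, 2400, 3400, 5100, 7200];

console.log(percentile(lcpSamples, 50)); // 1800 — the flattering median
console.log(percentile(lcpSamples, 75)); // 3400 — the ranking signal
console.log(percentile(lcpSamples, 95)); // 7200 — the long tail
```

Same dataset, three very different stories — which is exactly why only the p75 figure goes on the fix list.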
The thresholds I keep visible on a sticky note:
- LCP: good < 2.5 s, needs improvement 2.5–4.0 s, poor > 4.0 s
- INP: good < 200 ms, needs improvement 200–500 ms, poor > 500 ms
- CLS: good < 0.1, needs improvement 0.1–0.25, poor > 0.25
All at p75. If any route tips into "needs improvement" at p75, it goes onto the fix list.
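Those bands translate directly into a tiny classifier — a sketch of my own that mirrors the thresholds on the sticky note (milliseconds for LCP and INP, unitless for CLS):

```typescript
type Rating = 'good' | 'needs-improvement' | 'poor';

// [upper bound of "good", upper bound of "needs improvement"]
const THRESHOLDS: Record<'LCP' | 'INP' | 'CLS', [number, number]> = {
  LCP: [2500, 4000], // ms
  INP: [200, 500],   // ms
  CLS: [0.1, 0.25],  // unitless
};

function rate(metric: 'LCP' | 'INP' | 'CLS', p75Value: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (p75Value <= good) return 'good';
  if (p75Value <= poor) return 'needs-improvement';
  return 'poor';
}

console.log(rate('LCP', 3100)); // "needs-improvement" — onto the fix list
```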
My regression workflow
I treat Core Web Vitals like test coverage: a regression in production is a bug, not a cosmetic issue. Here's the process I follow.
Step 1 — Baseline before a major feature ships. Before merging a branch that touches rendering (new image carousels, new third-party embeds, layout changes), I note the current p75 values for the affected routes from the Speed Insights dashboard. Screenshot or paste into the PR description.
Step 2 — Check 24 hours after deploy. Vercel needs real traffic to accumulate enough samples for the percentile to stabilise. I wait at least 24 hours — 48 for lower-traffic routes — before comparing post-deploy numbers to the baseline.
Step 3 — Filter by route, not site average. The site-wide average hides route-specific regressions. A new landing page with a slow hero image won't move the aggregate much if your blog has 200 routes. Always drill into the specific route in Speed Insights.
Step 4 — Correlate with deployment SHA. Speed Insights tags each data point with the deployment. If the p75 LCP on /products/[slug] jumped from 2.1 s to 3.8 s, I can see exactly which deployment introduced the change and diff the commit.
Step 5 — Reproduce in DevTools before fixing. I open the affected route in Chrome with CPU 4× throttle and a Fast 3G network profile applied in the Network tab. This simulates roughly the p75 user environment. I run a Lighthouse trace to find which element is the LCP candidate, then check the filmstrip for CLS shifts.
For INP regressions specifically, I use the Chrome DevTools Performance panel with "Enable advanced paint instrumentation" checked. Long tasks during interaction — anything over 50 ms on the main thread — are the first thing I look for.
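You can also catch those long tasks programmatically rather than eyeballing the flame chart, by registering a PerformanceObserver for the longtask entry type. This is a browser-only helper of my own, not part of either Vercel package:

```typescript
// Hypothetical dev helper: warn about main-thread tasks over 50 ms —
// the threshold that defines a "long task" and a common INP culprit.
// The 'longtask' entry type exists only in browsers; elsewhere (e.g.
// Node) this returns undefined and does nothing.
function observeLongTasks(thresholdMs = 50): PerformanceObserver | undefined {
  if (
    typeof PerformanceObserver === 'undefined' ||
    !PerformanceObserver.supportedEntryTypes.includes('longtask')
  ) {
    return undefined;
  }
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      if (entry.duration > thresholdMs) {
        console.warn(
          `Long task: ${Math.round(entry.duration)} ms at ${Math.round(entry.startTime)} ms`
        );
      }
    }
  });
  observer.observe({ type: 'longtask', buffered: true });
  return observer;
}
```

Call it once on the client during development and disconnect the returned observer when you're done; it won't tell you which interaction was slow the way the Performance panel does, but it flags the offending tasks in the console as they happen.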
A note on sampling and traffic volume
Speed Insights samples traffic; it does not record every page view. On sites with fewer than ~5,000 monthly visits to a single route, the p75 figure can swing significantly between days purely due to sample size. For low-traffic sites, I watch the trend over a week rather than reacting to a single day's numbers.
For high-traffic routes (tens of thousands of visits per day), the data is stable enough to act on within a few hours of a deploy going live. That's where the real-time alerting value kicks in.
Vercel doesn't yet expose webhook alerts for metric threshold breaches, so for client projects I set a calendar reminder to check Speed Insights 48 hours after every significant deploy. It takes two minutes and has caught three regressions in production over the past year before the client noticed anything.
Sending vitals to a custom endpoint
If you need to pipe the raw events to your own data warehouse or a tool like Datadog, you can use the useReportWebVitals hook from next/web-vitals, which the App Router supports:
```ts
// app/_components/web-vitals-reporter.ts
'use client';

import { useReportWebVitals } from 'next/web-vitals';

export function WebVitalsReporter() {
  useReportWebVitals((metric) => {
    // metric.name: 'LCP' | 'INP' | 'CLS' | 'FCP' | 'TTFB'
    // metric.value: number
    // metric.rating: 'good' | 'needs-improvement' | 'poor'
    if (process.env.NODE_ENV !== 'production') return;
    void fetch('/api/vitals', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({
        name: metric.name,
        value: metric.value,
        rating: metric.rating,
        path: window.location.pathname,
      }),
      // Use keepalive so the request survives page navigation
      keepalive: true,
    });
  });
  return null;
}
```

Mount <WebVitalsReporter /> in your root layout alongside the Vercel components. The route handler at app/api/vitals/route.ts can then forward to whatever backend you prefer. I use this pattern on projects where the client's data team wants vitals in the same warehouse as their conversion events.
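For completeness, here's a minimal sketch of what the receiving route handler could look like. The /api/vitals path comes from the reporter snippet above; the payload shape, validation, and logging are my own illustration — swap the console.log for your warehouse client:

```typescript
// app/api/vitals/route.ts — a minimal sketch, assuming the payload
// shape sent by the WebVitalsReporter component above.
type VitalsPayload = {
  name: string;
  value: number;
  rating: 'good' | 'needs-improvement' | 'poor';
  path: string;
};

export async function POST(request: Request): Promise<Response> {
  let payload: VitalsPayload;
  try {
    payload = (await request.json()) as VitalsPayload;
  } catch {
    return new Response('Invalid JSON', { status: 400 });
  }
  if (typeof payload.name !== 'string' || typeof payload.value !== 'number') {
    return new Response('Bad payload', { status: 400 });
  }
  // Forward to your warehouse or Datadog here; this sketch just logs.
  console.log(`[vitals] ${payload.path} ${payload.name}=${payload.value} (${payload.rating})`);
  // 204: received, nothing for the browser to do with the response.
  return new Response(null, { status: 204 });
}
```

App Router route handlers take standard Request objects and return standard Response objects, so this runs anywhere the Fetch API globals exist.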
The part most teams skip
Installing the packages takes ten minutes. The discipline that actually moves metrics is reviewing the data after every meaningful deploy, not just when something visibly breaks. Set the reminder, check the percentiles, and tie every CWV regression directly to a commit. That's the workflow that makes monitoring useful rather than decorative.
Related posts
How to Fix LCP on Image-Heavy Pages (Next.js Patterns That Work)
Apr 24, 2026 · 4 min read
LCP is usually one big image. Here’s how to identify the true LCP element, reduce TTFB, ship the right image bytes, and consistently hit <2.5s on real devices.
How I Eliminated Sanity Image Hot-Spot Reflows by Pre-Calculating Focal Crops
May 08, 2026 · 5 min read
How I pre-calculate Sanity image hot-spot crops at build time to eliminate layout shift and guarantee stable LCP under 2.5s on editorial pages.
How I Handle Conditional GROQ Projections to Cut Query Payload by 60%
May 02, 2026 · 5 min read
A pattern for projecting only the fields your Next.js components actually render, using GROQ coalesce and select operators to prune unused blocks.