CWV Superpowers: AI That Diagnoses and Fixes Core Web Vitals With Real User Data
The open-source Claude skill that connects AI to your real user data and turns Core Web Vitals diagnosis into a conversation

I wrote about why AI agents need real user data to do meaningful Core Web Vitals work. That article explained the problem: most AI performance tools optimize for Lighthouse scores that have nothing to do with your Google rankings. CWV Superpowers is the solution. It is a free, open-source Claude Code skill that connects to your CoreDash field data, identifies your worst bottleneck across millions of real page loads, traces the root cause in Chrome, and generates the code fix. Not a report with generic suggestions. The element, the file, the line of code, backed by evidence from real users and a targeted Chrome trace.
Last reviewed by Arjen Karel in March 2026
What CWV Superpowers Does
CWV Superpowers combines two data sources that together tell you exactly what is slow and why:
CoreDash Real User Monitoring tells you what is actually slow. Real users, real devices, real networks. CoreDash tracks every page load without sampling or caps and attributes every metric to the exact element causing the issue. When CoreDash says your LCP is 4.2 seconds and the bottleneck element is div.hero > img.main, that is what your actual users experience.
Chrome browser tracing tells you why it is slow. The skill visits the page with mobile emulation (Fast 3G, 4x CPU throttling), records the network waterfall, captures filmstrip screenshots, and traces the exact bottleneck phase that RUM identified. Not all phases. Just the one that matters.
Neither source alone is enough. RUM data tells you what is slow but not why. A Chrome trace gives you the why, but without field data you are probably investigating the wrong page. CWV Superpowers combines both automatically.
It diagnoses all three Core Web Vitals:
| Metric | Breakdown | What it identifies |
|---|---|---|
| LCP | TTFB / Load Delay / Load Time / Render Delay | The element, the bottleneck phase, the priority state, the 7-day trend |
| INP | Input Delay / Processing / Presentation | The interaction element, the responsible script (LOAF), the page load state |
| CLS | 5 cause patterns | The shifting element, mobile vs desktop split, new vs repeat visitors, network speed |
You pick the output: apply the code fix, generate an HTML report, or both.
Quick Start
Here is how to get running in two minutes.
Step 1: Add the CoreDash MCP server (skip if you already have it):
```shell
claude mcp add --transport http coredash https://app.coredash.app/api/mcp \
  --header "Authorization: Bearer cdk_YOUR_API_KEY"
```

Get your API key from CoreDash → Project Settings → API Keys (MCP). The free tier works.
Step 2: Install CWV Superpowers from the Claude marketplace:
```shell
/install corewebvitals/cwv-superpowers
```
Step 3: Start Claude Code with Chrome for the full experience:
```shell
claude --chrome
```
Step 4: Ask it anything:
Find my biggest CWV issue and fix it.
That is it. The skill handles capability detection, data collection, diagnosis, tracing and fix generation automatically.
Tip: Chrome is optional. Without it you still get full RUM diagnosis, bottleneck identification, and code fixes. You lose the filmstrip and waterfall visuals, but the diagnosis quality is the same because real user data is the primary source of truth.
How It Works
CWV Superpowers runs a five-step investigation. Here is what happens under the hood.
Step 1: Intent. You either tell it what to look at ("LCP on my product pages is bad") or ask it to find the problem ("What should I fix first?"). If you name something, it clarifies: which page, which metric, which device. If you want it to find the problem, it moves to automated discovery.
Step 2: Discovery. The skill scans your site through CoreDash. It pulls overall health, mobile health, the worst 5 URLs by LCP, and the worst 5 URLs by INP. Then it picks the biggest problem using a clear priority: poor ratings over needs improvement, mobile over desktop, pages with more than 15% of page loads in poor territory even if the p75 passes, and higher traffic volume. A page that "passes" Core Web Vitals with a p75 LCP of 2.4 seconds but has 18% of users in poor territory is still a problem. I see this all the time. CWV Superpowers catches it.
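Those priority rules can be sketched as a scoring function. This is a hypothetical illustration, not the skill's actual implementation; the field names (`rating`, `device`, `poorShare`, `pageLoads`) are made up for the example:

```javascript
// Hypothetical ranking of candidate pages using the discovery rules:
// poor > needs improvement, mobile > desktop, hidden-poor pages, then traffic.
const RATING_RANK = { poor: 2, "needs improvement": 1, good: 0 };

function score(page) {
  return (
    RATING_RANK[page.rating] * 1000 +       // poor ratings dominate
    (page.device === "mobile" ? 100 : 0) +  // mobile over desktop
    (page.poorShare > 0.15 ? 50 : 0) +      // >15% poor loads counts even if p75 passes
    Math.min(page.pageLoads / 10000, 10)    // traffic volume breaks ties
  );
}

function biggestProblem(pages) {
  return [...pages].sort((a, b) => score(b) - score(a))[0];
}
```

Note how the third term is what catches the "passing" page with 18% of users in poor territory: it still accumulates a penalty even when its rating is good.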
Step 3: Diagnosis. This is where it gets specific. For LCP, the skill makes 5-7 CoreDash MCP calls: the LCP element selector, the element type (image, text, background image, video), the priority state (does it have fetchpriority? is it lazily loaded?), the four-phase breakdown (TTFB / Load Delay / Load Time / Render Delay), and the 7-day trend. It identifies the bottleneck as the phase consuming the largest share of total LCP time. Not the phase exceeding an absolute threshold. Proportional reasoning. More on that below.
For INP, it pulls the slow interaction element, the LOAF scripts (Long Animation Frames, the JavaScript files causing the delay), the page load state when the interaction happened, and the three-phase breakdown. For CLS, it matches the shifting element against five known cause patterns and cross-references mobile vs desktop, new vs repeat visitors, and network speed.
Step 4: Chrome trace. If Chrome is available, the skill visits the page with mobile emulation and traces only the bottleneck phase from Step 3. If Load Delay is the bottleneck, it focuses on the network waterfall, looking for the gap between HTML and the LCP resource request. If Render Delay, it looks for blocking scripts. This targeted approach is deliberate. A full trace of everything generates noise. A focused trace of the bottleneck generates evidence.
Step 5: Output. You choose: apply the code fix, generate an HTML report, or both. The code fix names the file, the line, the element, and shows before and after. The report is a self-contained HTML file with metrics cards, phase breakdown charts, and when Chrome was used, filmstrip screenshots and a network waterfall.
LCP: From Symptom to Fix
LCP diagnosis follows the four-phase model that Google defines: TTFB, Load Delay, Load Time, Render Delay. Most developers know these phases in theory. The problem is figuring out which one is your bottleneck, on your pages, with your users.
CWV Superpowers gets the phase breakdown from CoreDash and interprets it proportionally. Here is what that means in practice:
Your LCP is 3,800ms. The breakdown: TTFB 600ms (16%), Load Delay 1,900ms (50%), Load Time 800ms (21%), Render Delay 500ms (13%). The bottleneck is Load Delay. The hero image was discovered late. Maybe it is a CSS background image invisible to the preload scanner. Maybe it is loaded via JavaScript. Maybe there is no preload hint.
The skill checks the LCP element type and the priority state. Combined with the phase breakdown, it constructs the root cause. Then Chrome traces the waterfall to confirm: is there really a gap between the HTML response and the image request? Is a render-blocking script delaying discovery?
The output is a root cause statement like this:
Root cause: The LCP image div.hero-banner > img.product-main on /product/running-shoes-42 is discovered 1,980ms late because it lacks a preload hint and has no fetchpriority="high". CoreDash data: LCP is 3,820ms (poor) on mobile, p75. Load Delay is the bottleneck at 52% of total. Chrome trace confirms: 1,940ms gap between HTML first byte and image request in the network waterfall.
The fix follows the diagnosis. Load Delay? Add a preload hint. Load Time? Optimize the image format or add responsive srcset. Render Delay? Fix the render-blocking resource. Specific to the element, the file, the line. Not a list of Lighthouse suggestions.
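As an illustration, a Load Delay fix for the background-image hero described above might look like this. The paths, class names, and dimensions are placeholders, not output from the skill:

```html
<!-- Before: hero as a CSS background image, invisible to the preload scanner.
     After: a real <img> the parser discovers early, plus a preload hint. -->
<head>
  <link rel="preload" as="image" href="/img/hero.webp" fetchpriority="high">
</head>
<body>
  <div class="hero-banner">
    <img class="product-main" src="/img/hero.webp" fetchpriority="high"
         width="1200" height="630" alt="Running shoes hero">
  </div>
</body>
```

The preload hint closes the discovery gap; `fetchpriority="high"` tells the browser this image matters more than the default priority for images implies.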
INP: Finding the Slow Interaction
INP is the metric AI agents struggle with most. It measures responsiveness during real user sessions. These are interactions that cannot be simulated in a lab. Lighthouse uses Total Blocking Time as a proxy, but the correlation is loose at best.
CWV Superpowers skips the proxy entirely. CoreDash includes the actual INP interaction: which element was clicked (the CSS selector), which scripts were running (Long Animation Frames attribution), and what the page load state was when it happened.
The three-phase breakdown tells the rest:
Input Delay means the main thread was busy when the user interacted. If the page was still loading, the cause is likely large JavaScript bundles, analytics scripts, or framework hydration. If the page was complete, something else is running: a timer, a background sync, a third-party script.
Processing means the event handler itself is too slow. Layout thrashing (read-write-read DOM cycles), expensive re-renders, or synchronous work that should be asynchronous.
Presentation means the browser takes too long to paint after the handler finishes. Large DOM trees, complex CSS selectors, or forced style recalculations.
The skill reads the LOAF data to name the exact script. Not "you have too much JavaScript." The file, the function, the duration. Then it proposes a specific fix: yield to the main thread with scheduler.yield(), defer evaluation with requestIdleCallback, break up the handler, or apply content-visibility: auto for large DOMs.
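A common shape for the "break up the handler" fix is to process work in chunks and yield between them. A minimal sketch, using `scheduler.yield()` where the browser supports it and a `setTimeout` fallback elsewhere (the handler and chunk size are hypothetical):

```javascript
// Yield control back to the main thread so input and paint can run
// between chunks. scheduler.yield() is used where available.
function yieldToMain() {
  if (typeof scheduler !== "undefined" && scheduler.yield) {
    return scheduler.yield();
  }
  return new Promise((resolve) => setTimeout(resolve, 0));
}

// Hypothetical handler: one long task becomes many short ones.
async function processInChunks(items, work, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(work);
    await yieldToMain(); // let a pending click or paint through
  }
}
```

The total work is the same; what changes is that no single main-thread task blocks long enough to delay the next interaction.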
CLS: Pattern Matching
CLS works differently from LCP and INP. There are no phases to break down. Instead, CWV Superpowers matches the shifting element against five known cause patterns:
1. Images or video without dimensions. The browser does not know the size until the resource loads, causing a reflow. Add width and height attributes or CSS aspect-ratio.
2. Web font swap (FOUT). The fallback font renders at one size, the web font loads at a different size, and everything shifts. The fix is `font-display: optional` or font metric overrides with size-adjust.
3. Dynamically injected content. A chat widget, cookie banner, or ad slot appears above the fold after render and pushes everything down. Reserve the space with min-height or use position: fixed.
4. Late-loading resources above the fold. An image or embed loads after the initial layout. Give it explicit dimensions and a preload hint.
5. CSS animations on layout properties. Animating top, left, width, or height triggers layout recalculation on every frame. Use transform instead. Always.
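A few of the fixes above in markup and CSS, as an illustration. Selectors, values, and file paths are placeholders:

```html
<!-- Patterns 1 & 4: explicit dimensions so the browser reserves space before load -->
<img src="/img/banner.webp" width="800" height="450" alt="Banner">

<style>
  /* Pattern 2: a size-adjusted fallback face so the font swap does not shift text */
  @font-face {
    font-family: "Brand Fallback";
    src: local("Arial");
    size-adjust: 105%; /* tune until the fallback's metrics match the web font */
  }

  /* Pattern 3: reserve space for an injected widget before it appears */
  .cookie-banner-slot { min-height: 64px; }

  /* Pattern 5: animate transform instead of top/left */
  .promo { transition: transform 200ms; }
  .promo:hover { transform: translateY(-8px); }
</style>
```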
To narrow the cause, the skill compares CLS across dimensions. Worse on mobile than desktop? Likely dimension-related because images scale differently. Worse for new visitors than repeat? A cookie banner or onboarding modal. Worse on slow networks? Late-loading resources above the fold.
Why Proportional Reasoning Matters
This is the design principle that separates CWV Superpowers from a Lighthouse audit.
Most performance tools use absolute thresholds. Lighthouse tells you "Render Delay is 350ms" and you have no idea if that is the problem. CWV Superpowers uses proportional reasoning: a phase is the bottleneck if it consumes the largest percentage of total metric time.
Example: INP is 350ms. Input Delay 70ms (20%), Processing 80ms (23%), Presentation 200ms (57%). Presentation is the bottleneck even though 200ms is not alarming in absolute terms. Fixing Presentation moves the needle. Optimizing Input Delay barely registers.
Another example: LCP is 2,600ms. TTFB is 1,400ms (54%). No amount of image optimization will fix an LCP that spends more than half its time waiting for the server. Fix TTFB first. Then worry about the payload.
This prevents the most common mistake in performance work: fixing the wrong thing. It does not matter that your image is 2MB if the browser does not request it for two seconds. Fix the discovery delay first.
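The proportional rule from both examples fits in a few lines. A hypothetical sketch (phase names are placeholders, not CoreDash field names):

```javascript
// Proportional bottleneck: the phase consuming the largest share of total time.
function bottleneck(phases) {
  const total = Object.values(phases).reduce((a, b) => a + b, 0);
  const [phase, ms] = Object.entries(phases).sort((a, b) => b[1] - a[1])[0];
  return { phase, ms, share: ms / total };
}

// The INP example from the text: Presentation wins at ~57% despite 200ms
// being unremarkable in absolute terms.
const inp = bottleneck({ inputDelay: 70, processing: 80, presentation: 200 });

// The LCP example: TTFB-style analysis works the same way.
const lcp = bottleneck({ ttfb: 600, loadDelay: 1900, loadTime: 800, renderDelay: 500 });
```

No absolute threshold appears anywhere: the only question is which phase owns the biggest slice.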
The Reports
Sometimes you just need the fix. Other times you need to show stakeholders why the fix matters. CWV Superpowers generates two types of self-contained HTML reports. No external dependencies, no build step. Just open the file.
Full report (when Chrome was used): metrics cards with color-coded ratings, phase breakdown charts, filmstrip screenshots at key moments (first paint, LCP, fully loaded), network waterfall SVG, root cause analysis, and the recommended fix with before/after code.
RUM-only report (CoreDash only): same metrics cards and phase breakdown, plus element attribution and root cause. No filmstrip or waterfall, but the diagnosis quality is the same because field data is the source of truth.
Both are designed to be shared. Drop the HTML in a Slack thread, attach it to a Jira ticket, or include it in a performance review.
Getting Started
CWV Superpowers requires a CoreDash account with real user data flowing. The free tier works. You need an API key from Project Settings → API Keys (MCP). The key is shown once and stored as a SHA-256 hash. Read-only. No write access.
Claude Code (recommended)
```shell
# Add CoreDash MCP server
claude mcp add --transport http coredash https://app.coredash.app/api/mcp \
  --header "Authorization: Bearer cdk_YOUR_API_KEY"

# Install the skill
/install corewebvitals/cwv-superpowers

# Start with Chrome for full analysis
claude --chrome
```

Cursor

```shell
/plugin-add cwv-superpowers
```

Add CoreDash to .cursor/mcp.json:

```json
{
  "mcpServers": {
    "coredash": {
      "url": "https://app.coredash.app/api/mcp",
      "headers": {
        "Authorization": "Bearer cdk_YOUR_API_KEY"
      }
    }
  }
}
```

VS Code, Windsurf, Gemini CLI
The CoreDash MCP server works with any client that supports HTTP MCP. The endpoint is https://app.coredash.app/api/mcp with a Bearer token header. Check the CoreDash setup guide for per-client config details.
Verify it works: Ask your agent "What are my Core Web Vitals?" If CoreDash is connected, it returns your real LCP, INP, CLS, FCP, and TTFB data immediately.
Open Source and Why I Built This
CWV Superpowers is MIT licensed and open source. The entire skill (the orchestrator, the LCP/INP/CLS diagnosis modules, the Chrome tracing logic, and the report templates) is on GitHub. Read how it works, extend it, or contribute.
I built it because I am going all in on AI and Core Web Vitals. Not the "AI-powered" marketing that slaps a chatbot on a dashboard. The real thing: AI agents that understand your field data deeply enough to trace a 4-second LCP from a real user session to a missing preload hint on line 47 of your template.
I have been doing web performance consulting for 17 years. The investigation loop (segment the data, hypothesize, trace, confirm, draft the fix) used to take me 2 to 4 hours per issue. CWV Superpowers does the same thing in minutes. That is insane when you think about it. Not because the AI is smarter than a performance engineer. Because it has access to the same data the engineer would use and can process it in seconds.
The expertise still matters. An AI agent does not understand your business logic or the politics around that third-party analytics script your marketing team refuses to remove. You review every fix. You make the call on what ships. But the hours of manual investigation between "something is slow" and "here is exactly what to change," those are gone.
That is the shift I am betting on. Not "AI fixes your Core Web Vitals." It is "AI does the investigation in minutes so you can spend your time on the decisions that actually require a human." The tool is free. The data requires CoreDash. Fire it up and let me know what it finds.
Ask AI why your INP spiked.
CoreDash is the only RUM tool with MCP support. Connect it to your AI agent and query your Core Web Vitals data in natural language. No more clicking through dashboards.
