Is your feature request related to a problem? Please describe.
Currently, performance analysis via LLMs can be inconsistent. When a model is asked to measure performance, it often generates code on the fly, leading to:
- Non-deterministic results: Different code for the same metric (e.g., LCP or CLS).
- High token consumption: Generating and sending large blocks of JavaScript via `Runtime.evaluate`.
- Hallucinations: Inaccurate ways of accessing the Performance API or observing entries.
Following the pattern of Addy Osmani's `web-quality-skills`, I propose a dedicated `performance` skill set for this MCP.
Describe the solution you'd like
The approach uses curated scripts instead of raw code generation:
- Deterministic Output: The model calls a skill (e.g., `get_core_web_vitals`) which executes a known, reliable script.
- Token Optimization: Only the results are passed back to the model, saving the cost of generating the code itself.
- CDP Integration: Leveraging `Runtime.evaluate` to execute these snippets and return structured JSON.
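To make the idea concrete, here is a minimal sketch of what the server side could look like, assuming a hypothetical `SKILLS` registry and `getSkillScript` helper (names are illustrative, not an existing API). Each skill maps to a fixed, pre-tested script string that the MCP server would send to Chrome via `Runtime.evaluate`; the model only ever sees the structured result.

```javascript
// Hypothetical skill registry: each skill name maps to a curated,
// pre-tested script string destined for Runtime.evaluate.
const SKILLS = {
  get_core_web_vitals: `
    new Promise((resolve) => {
      const metrics = {};
      // Observe LCP entries; the last buffered entry is the final candidate.
      new PerformanceObserver((list) => {
        const entries = list.getEntries();
        metrics.lcp = entries[entries.length - 1].startTime;
      }).observe({ type: 'largest-contentful-paint', buffered: true });
      // Sum layout-shift entries without recent input (per the CLS definition).
      new PerformanceObserver((list) => {
        metrics.cls = (metrics.cls ?? 0) + list.getEntries()
          .filter((e) => !e.hadRecentInput)
          .reduce((sum, e) => sum + e.value, 0);
      }).observe({ type: 'layout-shift', buffered: true });
      // Give observers a tick to flush buffered entries, then resolve.
      setTimeout(() => resolve(JSON.stringify(metrics)), 100);
    })
  `,
};

// The server resolves the skill name to its curated script; the model
// never generates or transmits the script body, only receives results.
function getSkillScript(name) {
  const script = SKILLS[name];
  if (!script) throw new Error(`Unknown skill: ${name}`);
  return script;
}
```

Because the script body is fixed, the same skill call always measures the same thing the same way, which is the determinism and token-saving argument above.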
User Stories / Examples
Scenario 1: General performance audit
- User Prompt: "Analyze the performance of the current page."
- MCP Action: The model calls the `performance` meta-skill, which triggers `core-web-vitals` and `loading`.
- Result: The model receives a structured JSON with precise metrics and provides a grounded recommendation.
Scenario 2: Debugging interactions
- User Prompt: "Why is this page feeling laggy when I click buttons?"
- MCP Action: The model calls `performance:interaction` to check for Long Animation Frames or high INP.
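For Scenario 2, a sketch of a hypothetical server-side helper that could summarize the JSON a `performance:interaction` script returns from the page. The entry shape mirrors the Long Animation Frames API (`duration`, `blockingDuration`); the helper name and the 50 ms / 100 ms thresholds are assumptions for illustration.

```javascript
// Hypothetical summarizer for Long Animation Frame entries returned by
// the in-page script. Each entry has a duration and a blockingDuration
// (the portion of the frame beyond the 50 ms responsiveness threshold).
function summarizeLoAF(entries) {
  const slow = entries.filter((e) => e.duration > 50);
  const totalBlocking = entries.reduce(
    (sum, e) => sum + (e.blockingDuration ?? 0), 0);
  return {
    frameCount: entries.length,
    slowFrameCount: slow.length,
    totalBlockingMs: totalBlocking,
    // Simple heuristic verdict the model can ground its answer on.
    laggy: slow.length > 0 && totalBlocking > 100,
  };
}
```

Passing back a compact summary like this, rather than raw entry lists, keeps the token cost of the result low while still giving the model precise numbers to reason about.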
Describe alternatives you've considered
The status quo: letting the model generate measurement code ad hoc on each request, which is exactly the inconsistency this proposal aims to replace.
Additional context
I have already migrated these snippets to a skill-based architecture in my project. I believe adapting and optimizing these for the Chrome DevTools MCP would significantly enhance its capabilities for web performance engineers.
I'm happy to provide a PR with the initial folder structure and the first set of core scripts if the maintainers agree with this direction.