Feat SEO Section (Overview, Heading Structure, Links & JSON LD Preview)#413
abedshaaban wants to merge 33 commits into TanStack:main from
Conversation
…functionality This commit introduces a new README.md file for the SEO tab in the devtools package. It outlines the purpose of the SEO tab, including its major features such as Social Previews, SERP Previews, JSON-LD Previews, and more. Each section provides an overview of functionality, data sources, and how the previews are rendered, enhancing the documentation for better user understanding.
…structure, and links preview This commit introduces several new sections to the SEO tab in the devtools package, enhancing its functionality. The new features include: - **JSON-LD Preview**: Parses and validates JSON-LD scripts on the page, providing detailed feedback on required and recommended attributes. - **Heading Structure Preview**: Analyzes heading tags (`h1` to `h6`) for hierarchy and common issues, ensuring proper SEO practices. - **Links Preview**: Scans all links on the page, classifying them as internal, external, or invalid, and reports on accessibility and SEO-related issues. Additionally, the SEO tab navigation has been updated to include these new sections, improving user experience and accessibility of SEO insights.
This commit refactors the SEO tab components to standardize the handling of severity levels for issues. The `Severity` type has been replaced with `SeoSeverity`, and the `severityColor` function has been removed in favor of a centralized `seoSeverityColor` function. This change improves code consistency and maintainability across the `canonical-url-preview`, `heading-structure-preview`, `json-ld-preview`, and `links-preview` components, ensuring a unified approach to displaying issue severity in the SEO analysis features.
This commit adds a canonical link and robots meta tag to the basic example's HTML file, improving SEO capabilities. Additionally, it refactors the SEO tab components to utilize the `Show` component for conditional rendering of issues, enhancing the user experience by only displaying relevant information when applicable. This change streamlines the presentation of SEO analysis results across the canonical URL, heading structure, and links preview sections.
…lysis This commit adds a new SEO overview section to the devtools package, aggregating insights from various SEO components including canonical URLs, social previews, SERP previews, JSON-LD, heading structure, and links. It implements a health scoring system to provide a quick assessment of SEO status, highlighting issues and offering hints for improvement. Additionally, it refactors existing components to enhance data handling and presentation, improving the overall user experience in the SEO tab.
…reporting This commit introduces new styles for the SEO tab components, improving the visual presentation of SEO analysis results. It adds structured issue reporting for SEO elements, including headings, JSON-LD, and links, utilizing a consistent design for severity indicators. Additionally, it refactors existing components to enhance readability and maintainability, ensuring a cohesive user experience across the SEO tab.
This commit introduces new styles for the SEO tab components, including enhanced visual presentation for SEO analysis results. It refactors the handling of severity indicators across various sections, such as headings, JSON-LD, and links, utilizing a consistent design approach. Additionally, it improves the structure and readability of the code, ensuring a cohesive user experience throughout the SEO tab.
…ization This commit enhances the SEO tab by updating styles for the health score indicators, including a new design for the health track and fill elements. It refactors the health score rendering logic to utilize a more consistent approach across components, improving accessibility with ARIA attributes. Additionally, it introduces a sorting function for links in the report, ensuring a clearer display order based on link types. These changes aim to provide a more cohesive and visually appealing user experience in the SEO analysis features.
This commit enhances the LinksPreviewSection by introducing an accordion-style layout for displaying links, allowing users to expand and collapse groups of links categorized by type (internal, external, non-web, invalid). It adds new styles for the accordion components, improving the visual organization of link reports. Additionally, it refactors the existing link rendering logic to accommodate the new structure, enhancing user experience and accessibility in the SEO analysis features.
…on features This commit introduces new styles for the JSON-LD preview component, improving the visual presentation of structured data. It adds functionality for validating supported schema types and enhances the display of entity previews, including detailed rows for required and recommended fields. Additionally, it refactors the health scoring system to account for missing schema attributes, providing clearer insights into SEO performance. These changes aim to improve user experience and accessibility in the SEO tab.
…tures This commit introduces a comprehensive update to the SEO overview section, adding a scoring system for subsections based on issue severity. It includes new styles for the score ring visualization, improving the presentation of SEO health metrics. Additionally, it refactors the issue reporting logic to provide clearer insights into the status of SEO elements, enhancing user experience and accessibility in the SEO tab.
…links preview in SEO tab This commit enhances the SEO tab by introducing new navigation buttons for 'Heading Structure' and 'Links Preview', allowing users to easily switch between these views. It also updates the display logic to show the corresponding sections when selected, improving the overall user experience and accessibility of SEO insights. The SEO overview section has been adjusted to maintain a cohesive structure.
…and scrollbar customization This commit updates the styles for the seoSubNav component, adding responsive design features for smaller screens, including horizontal scrolling and custom scrollbar styles. It also ensures that the seoSubNavLabel maintains proper layout with flex properties, enhancing the overall user experience in the SEO tab.
…inks preview functionality This commit modifies the package.json to improve testing scripts by adding a command to clear the NX daemon and updating the size limit for the devtools package. Additionally, it refactors the JSON-LD and links preview components to enhance readability and maintainability, including changes to function declarations and formatting for better code clarity. These updates aim to improve the overall user experience and accessibility in the SEO tab.
… tab components This commit refactors the SEO tab components by cleaning up imports related to severity handling and ensuring consistent text handling by removing unnecessary nullish coalescing and optional chaining. These changes enhance code readability and maintainability across the heading structure, JSON-LD, and links preview components.
…ew component This commit refactors the classifyLink function in the links preview component by removing unnecessary checks for non-web links and the 'nofollow' issue reporting. It enhances the handling of relative paths and same-document fragments to align with browser behavior, improving code clarity and maintainability in the SEO tab.
…README This commit removes the unused 'seoOverviewFootnote' style and its corresponding JSX element from the SEO overview section. Additionally, it updates the README to streamline the description of checks included in the SEO tab, enhancing clarity and conciseness. These changes improve code maintainability and documentation accuracy.
This commit modifies the size limit for the devtools package in package.json, increasing the limit from 60 KB to 69 KB. This change reflects adjustments in the package's size requirements, ensuring accurate size tracking for future development.
… in SEO tab components This commit updates the SEO tab components by standardizing the capitalization of section titles and improving code formatting for better readability. Changes include updating button labels to 'SEO Overview' and 'Social Previews', as well as enhancing the structure of JSX elements for consistency. These adjustments aim to enhance the overall clarity and maintainability of the code.
This commit modifies the titles of the 'Links' and 'JSON-LD' sections in the SEO overview to 'Links Preview' and 'JSON-LD Preview', respectively. These changes aim to enhance clarity and consistency in the presentation of SEO insights, aligning with previous updates to standardize capitalization and improve formatting across the SEO tab components.
…ed data analysis This commit adds a new SEO tab in the devtools, featuring live head-driven social and SERP previews, structured data (JSON-LD) analysis, heading and link assessments, and an overview that scores and links to each section. This enhancement aims to provide users with comprehensive SEO insights and improve the overall functionality of the devtools.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the settings.
📝 Walkthrough

Adds a new SEO tab to TanStack Devtools with live head-driven social and SERP previews, JSON-LD validation, canonical/robots analysis, heading and link diagnostics, an overview scoring UI, a location-change hook, and supporting styles/utilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant DevtoolsUI as DevTools UI
    participant Page as Page Document
    participant LocationHook as useLocationChanges
    User->>Page: Navigate / update URL
    Page->>LocationHook: history.pushState / replaceState / popstate / hashchange
    LocationHook->>DevtoolsUI: emit location change (if href changed)
    DevtoolsUI->>Page: read head/body (meta, links, scripts, headings, anchors)
    DevtoolsUI->>DevtoolsUI: run analyzers (canonical, headings, links, JSON-LD, social, SERP)
    DevtoolsUI->>DevtoolsUI: aggregate scores -> update Overview UI
    DevtoolsUI->>User: render updated SEO tab / subsections
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
| Command | Status | Duration | Result |
|---|---|---|---|
| `nx affected --targets=test:eslint,test:sherif,t...` | ❌ Failed | 1m 37s | View ↗ |
| `nx run-many --targets=build --exclude=examples/**` | ✅ Succeeded | 36s | View ↗ |

☁️ Nx Cloud last updated this comment at 2026-04-05 15:44:41 UTC
@tanstack/devtools
@tanstack/devtools-a11y
@tanstack/devtools-client
@tanstack/devtools-ui
@tanstack/devtools-utils
@tanstack/devtools-vite
@tanstack/devtools-event-bus
@tanstack/devtools-event-client
@tanstack/preact-devtools
@tanstack/react-devtools
@tanstack/solid-devtools
@tanstack/vue-devtools
commit:
…nonicalPageData This commit modifies the export statements for the CanonicalPageIssue and CanonicalPageData types in the SEO tab components, changing them from 'export type' to 'type'. This adjustment aims to streamline the code structure and improve consistency in type declarations across the module.
Actionable comments posted: 10
🧹 Nitpick comments (1)
packages/devtools/src/tabs/seo-tab/serp-preview.tsx (1)
126-189: Derive the overview summary from the shared SERP checks.

`getSerpPreviewSummary()` repeats the same predicates and messages already defined in `COMMON_CHECKS` and `SERP_PREVIEWS`, so the overview can drift from the detail panel the next time one list changes. Consider storing severity on the shared descriptors and building both views from that single source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/devtools/src/tabs/seo-tab/serp-preview.tsx` around lines 126 - 189, getSerpPreviewSummary duplicates predicates and messages that already live in COMMON_CHECKS and SERP_PREVIEWS; refactor it to build its issues/hint from those shared descriptors instead of repeating logic. Update getSerpPreviewSummary to import COMMON_CHECKS and/or SERP_PREVIEWS, iterate over the shared descriptors, evaluate each descriptor's predicate against getSerpFromHead() (or use provided evaluation helpers), and push issues using the descriptor's severity and message; derive the hint by checking the descriptors for title/description presence rather than using separate trim checks. Ensure you reference the existing symbols COMMON_CHECKS, SERP_PREVIEWS, getSerpPreviewSummary, and getSerpFromHead when implementing this single-source-of-truth approach.
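Such a single source of truth could be shaped roughly like this (a hypothetical sketch; the names `CHECKS`, `SerpData`, and `summarize` are illustrative, and the PR's actual `COMMON_CHECKS`/`SERP_PREVIEWS` shapes may differ):

```typescript
// Hypothetical sketch: each check carries its own predicate, message, and
// severity, so the overview summary and the detail panel cannot drift apart.
type Severity = 'error' | 'warning' | 'info'

interface SerpData {
  title: string
  description: string
}

interface CheckDescriptor {
  severity: Severity
  message: string
  // Returns true when the check passes for the given SERP data.
  passes: (data: SerpData) => boolean
}

const CHECKS: Array<CheckDescriptor> = [
  {
    severity: 'error',
    message: 'Missing <title>.',
    passes: (d) => d.title.trim().length > 0,
  },
  {
    severity: 'warning',
    message: 'Missing meta description.',
    passes: (d) => d.description.trim().length > 0,
  },
]

// Both views derive their issue lists from the same descriptors.
function summarize(data: SerpData) {
  return CHECKS.filter((c) => !c.passes(data)).map((c) => ({
    severity: c.severity,
    message: c.message,
  }))
}
```

A change to any descriptor's message or severity then flows to both views automatically.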
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/react/basic/index.html`:
- Around line 36-38: The canonical link currently points to a localhost URL;
update the <link rel="canonical"> tag so it doesn't reference
http://localhost:3005/—make it match the site's public URL used by the og:url
and twitter:url meta tags (or use a relative canonical like "/" if this is an
example), ensuring the <link rel="canonical"> value is consistent with the
og:url/twitter:url values in the file.
In `@packages/devtools/src/tabs/seo-tab/canonical-url-data.ts`:
- Around line 82-92: The robots token parsing currently only checks for
'noindex' and 'nofollow' but must treat the 'none' directive as equivalent to
both; update the logic that computes indexable and follow (derived from robots
and robotsMetas) so that if robots includes 'none' it is treated the same as
including both 'noindex' and 'nofollow' (e.g., compute noIndex =
robots.includes('noindex') || robots.includes('none') and noFollow =
robots.includes('nofollow') || robots.includes('none') and then set indexable =
!noIndex and follow = !noFollow), ensuring the variables robots, robotsMetas,
indexable and follow are the ones adjusted.
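The suggested robots handling can be sketched as a small pure function (hypothetical helper name; the PR derives these flags inline from `robots`/`robotsMetas`):

```typescript
// Hypothetical sketch: 'none' is shorthand for 'noindex, nofollow', so both
// flags must treat it as equivalent. Directive matching is case-insensitive.
function parseRobots(content: string): { indexable: boolean; follow: boolean } {
  const tokens = content
    .toLowerCase()
    .split(',')
    .map((t) => t.trim())
  const noIndex = tokens.includes('noindex') || tokens.includes('none')
  const noFollow = tokens.includes('nofollow') || tokens.includes('none')
  return { indexable: !noIndex, follow: !noFollow }
}
```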
In `@packages/devtools/src/tabs/seo-tab/heading-structure-preview.tsx`:
- Around line 47-80: The current logic pushes several non-fatal heading issues
as 'error' (see h1Count checks, the first-heading check referencing headings[0],
and the loop handling empty headings and skipped levels) which should be
downgraded to 'warning'; update the severity values in the issues.push calls for
"Multiple H1 headings detected", "First heading is ... instead of H1", the
`${current.tag.toUpperCase()} is empty.` case and the skipped-level message in
the for loop from 'error' to 'warning' while leaving the "No H1 heading found on
this page." case as 'error'.
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx`:
- Around line 127-145: The validation in validateContext incorrectly accepts any
string that merely contains "schema.org"; update it to only accept exact
schema.org contexts by parsing the string as a URL (in validateContext) and
ensuring the URL.hostname === 'schema.org' (or accept the literal 'schema.org'
if you want non-URL form), falling back to an error on parse failure; replace
the current context.includes('schema.org') condition with this hostname check so
values like "https://example.com/schema.org" are rejected while valid
"https://schema.org" and "http://schema.org" remain allowed.
- Around line 201-213: The allowlist built in allowedSet (using rules.required,
rules.recommended, rules.optional and RESERVED_KEYS) is too narrow and causes
unknownKeys to include valid schema.org properties, triggering issues.push
warnings; change the validation in json-ld-preview.tsx to stop treating any
property outside SUPPORTED_RULES as necessarily invalid by either (a) expanding
rules for the specific type to include full schema.org properties, or (b)
switching the unknownKeys check to a looser heuristic (e.g., only warn for truly
invalid/reserved keys from RESERVED_KEYS or when a property clearly conflicts
with required types) so that allowedSet no longer emits false-positive warnings
for legitimate keys like author, datePublished, contentLocation, etc. Ensure you
update the logic that computes allowedSet and the subsequent unknownKeys filter
accordingly and keep issues.push only for genuine invalid/reserved attribute
cases.
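Heuristic (b) could be sketched as follows (hypothetical helper; the actual `RESERVED_KEYS` set in the PR may contain more entries):

```typescript
// Hypothetical sketch: only flag '@'-prefixed keys outside the reserved
// JSON-LD set, instead of warning on every property the UI does not enumerate.
const RESERVED_KEYS = new Set(['@context', '@type', '@id', '@graph'])

function findSuspectKeys(entity: Record<string, unknown>): Array<string> {
  return Object.keys(entity).filter(
    (key) => key.startsWith('@') && !RESERVED_KEYS.has(key),
  )
}
```

Legitimate schema.org properties like `author` or `datePublished` then pass silently, while typos such as `@tpye` are still surfaced.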
In `@packages/devtools/src/tabs/seo-tab/links-preview.tsx`:
- Around line 72-85: The _blank external-link check in links-preview.tsx
currently treats only rel="noopener" as acceptable; update the logic where
isExternal is computed (use of resolved, anchor, relTokens) so that the
relTokens check treats either "noopener" OR "noreferrer" as valid (i.e., do not
push the warning if relTokens.includes('noopener') ||
relTokens.includes('noreferrer')). Keep the existing target === '_blank' check
and the issues.push call (severity/message) unchanged, only broaden the accepted
rel tokens to include "noreferrer".
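The broadened check can be sketched as a standalone predicate (hypothetical helper name; the PR performs this inline on `anchor`/`relTokens`):

```typescript
// Hypothetical sketch: for target="_blank" links, either rel token prevents
// window.opener access (noreferrer implies noopener), so accept both.
function needsOpenerWarning(target: string | null, rel: string | null): boolean {
  if (target !== '_blank') return false
  const relTokens = (rel ?? '').toLowerCase().split(/\s+/)
  return !relTokens.includes('noopener') && !relTokens.includes('noreferrer')
}
```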
In `@packages/devtools/src/tabs/seo-tab/README.md`:
- Around line 5-15: Rewrite the README intro to remove the contradiction (choose
either "complement to" or "replacement for" — here use "complement to the
Inspect Elements / Lighthouse tabs, not a replacement"), correct typos and
grammar across the "SEO tabs" bullets (e.g., "thier" → "their",
"sepcific" → "specific", "informations" → "information", "indexible" →
"indexable"), standardize bullet phrasing and capitalization (e.g., "Social
Previews", "SERP Previews", "JSON-LD Previews", "Heading Structure Visualizer",
"Links Preview", "Canonical & URL & indexability"), and make the overview bullet
concise and clear about the SEO score linking to specific tabs for details.
In `@packages/devtools/src/tabs/seo-tab/seo-overview.tsx`:
- Around line 124-166: The memo only invalidates on head mutations but also
reads window.location.href via getCanonicalPageData() and
getSerpPreviewSummary(), so SPA navigations that don't touch the head leave the
overview stale; add a URL-change trigger that calls setTick when the location
changes (e.g., listen for popstate and override history.pushState/replaceState
to dispatch a custom 'locationchange' event) and call setTick((t)=>t+1) in that
listener (the same signal used by bundle's createMemo); update the
useHeadChanges block or add a new effect that registers/removes these listeners
so createSignal tick, setTick, and the bundle memo (which reads
getCanonicalPageData and getSerpPreviewSummary) are invalidated on route changes
as well.
In `@packages/devtools/src/tabs/seo-tab/seo-section-summary.ts`:
- Around line 13-17: The current SeoSectionSummary type (properties issues and
issueCount) hides truncated findings' severities so scoring helpers still
compute penalties from the capped issues array; update SeoSectionSummary to
include full severity totals (e.g., severityCounts or totalsBySeverity)
alongside issues and issueCount, change getLinksPreviewSummary() (and the other
affected summary producers) to populate those totals from the uncapped analysis
before capping the issues array, and modify scoring helpers to use the new
severity totals rather than relying only on the truncated issues array to
compute health and severity counts.
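The suggested shape can be sketched as follows (hypothetical helper; the PR's `SeoSectionSummary` field names may differ):

```typescript
// Hypothetical sketch: tally severities over the full issue list first, then
// cap the array kept for display, so scoring still sees every finding.
type Severity = 'error' | 'warning' | 'info'

interface Issue {
  severity: Severity
  message: string
}

function summarizeIssues(all: Array<Issue>, cap = 10) {
  const severityCounts = { error: 0, warning: 0, info: 0 }
  for (const issue of all) severityCounts[issue.severity] += 1
  return {
    issues: all.slice(0, cap), // truncated for display only
    issueCount: all.length,
    severityCounts, // uncapped totals for scoring
  }
}
```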
---
Nitpick comments:
In `@packages/devtools/src/tabs/seo-tab/serp-preview.tsx`:
- Around line 126-189: getSerpPreviewSummary duplicates predicates and messages
that already live in COMMON_CHECKS and SERP_PREVIEWS; refactor it to build its
issues/hint from those shared descriptors instead of repeating logic. Update
getSerpPreviewSummary to import COMMON_CHECKS and/or SERP_PREVIEWS, iterate over
the shared descriptors, evaluate each descriptor's predicate against
getSerpFromHead() (or use provided evaluation helpers), and push issues using
the descriptor's severity and message; derive the hint by checking the
descriptors for title/description presence rather than using separate trim
checks. Ensure you reference the existing symbols COMMON_CHECKS, SERP_PREVIEWS,
getSerpPreviewSummary, and getSerpFromHead when implementing this
single-source-of-truth approach.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3448dacf-b048-497f-8c1e-c8258986564c
📒 Files selected for processing (15)
- .changeset/puny-games-bow.md
- examples/react/basic/index.html
- package.json
- packages/devtools/src/styles/use-styles.ts
- packages/devtools/src/tabs/seo-tab/README.md
- packages/devtools/src/tabs/seo-tab/canonical-url-data.ts
- packages/devtools/src/tabs/seo-tab/heading-structure-preview.tsx
- packages/devtools/src/tabs/seo-tab/index.tsx
- packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx
- packages/devtools/src/tabs/seo-tab/links-preview.tsx
- packages/devtools/src/tabs/seo-tab/seo-overview.tsx
- packages/devtools/src/tabs/seo-tab/seo-section-summary.ts
- packages/devtools/src/tabs/seo-tab/seo-severity.ts
- packages/devtools/src/tabs/seo-tab/serp-preview.tsx
- packages/devtools/src/tabs/seo-tab/social-previews.tsx
…link and improving robots handling This commit removes the canonical link from the basic example HTML file and updates the robots handling logic in the canonical URL data module. The changes include refining the conditions for indexability and follow directives, ensuring more accurate SEO assessments. Additionally, the links preview component is updated to enforce the inclusion of both 'noopener' and 'noreferrer' for external links with target='_blank'. These adjustments aim to improve the overall functionality and security of the SEO tab.
…tion This commit introduces a new hook, useLocationChanges, that allows components to react to changes in the browser's location. The hook sets up listeners for pushState, replaceState, and popstate events, enabling efficient updates when the URL changes. Additionally, it integrates with the SEO tab components to enhance responsiveness to location changes, improving user experience and functionality.
This commit refactors the links-preview component by consolidating import statements for better clarity and organization. The countBySeverity function and SeoSectionSummary type are now imported separately, enhancing code readability and maintainability.
Actionable comments posted: 2
🧹 Nitpick comments (1)
packages/devtools/src/tabs/seo-tab/seo-section-summary.ts (1)
108-110: Consider whether label should reflect score tier, not just severity presence.

With the current logic, a page with 50+ info issues would have `score = 0` but `label = 'Good'` (since no errors or warnings exist). If this is intentional—info issues indicate optimization opportunities rather than health problems—the code is fine. Otherwise, consider aligning the label with the computed score tier.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/devtools/src/tabs/seo-tab/seo-section-summary.ts` around lines 108 - 110, The label currently derives only from counts.error/counts.warning which can conflict with the computed score; change label assignment so it reflects the score tier instead (use the existing score value computed earlier in this scope), e.g., map score ranges to 'Good'/'Fair'/'Poor' (or, if you prefer, incorporate counts.info into thresholds) and replace the current counts-based ternary that assigns label to ensure label = tierFromScore(score) rather than depending solely on counts.error/counts.warning.
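A score-tier mapping like the one suggested might look like this (illustrative thresholds, not taken from the PR):

```typescript
// Hypothetical sketch: derive the label from the score tier so it can never
// contradict the number displayed next to it. Thresholds are illustrative.
function tierFromScore(score: number): 'Good' | 'Fair' | 'Poor' {
  if (score >= 80) return 'Good'
  if (score >= 50) return 'Fair'
  return 'Poor'
}
```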
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx`:
- Around line 332-342: The map over scripts is calling script.textContent.trim()
which can throw if textContent is null; update the scripts.map handler (the
block that defines raw/parsed/types/issues) to safely handle null textContent by
first checking script.textContent (or using a safe default) before calling trim,
e.g., compute raw from a guarded value so empty or null textContent yields an
empty string and then proceed to return the same shape (id, raw, parsed, types,
issues) for empty content; locate this logic in the scripts.map callback and
replace the direct .trim() call with a null-safe approach.
In `@packages/devtools/src/tabs/seo-tab/links-preview.tsx`:
- Around line 26-30: The code uses anchor.textContent.trim() which can throw if
textContent is null; update the text computation for the text variable to guard
against null by using a null-safe access or coalescing before trimming (e.g.,
use anchor.textContent?.trim() or (anchor.textContent ?? '').trim()), and keep
the existing fallbacks anchor.getAttribute('aria-label')?.trim() and
anchor.getAttribute('title')?.trim() unchanged so the text variable never calls
.trim() on null.
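The null-safe fallback chain reads roughly like this (a sketch with a minimal `AnchorLike` stand-in for `HTMLAnchorElement`):

```typescript
// Hypothetical sketch: textContent is string | null, so coalesce before
// trimming; fall back to aria-label and title as the review describes.
interface AnchorLike {
  textContent: string | null
  getAttribute: (name: string) => string | null
}

function linkText(anchor: AnchorLike): string {
  return (
    (anchor.textContent ?? '').trim() ||
    anchor.getAttribute('aria-label')?.trim() ||
    anchor.getAttribute('title')?.trim() ||
    ''
  )
}
```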
---
Nitpick comments:
In `@packages/devtools/src/tabs/seo-tab/seo-section-summary.ts`:
- Around line 108-110: The label currently derives only from
counts.error/counts.warning which can conflict with the computed score; change
label assignment so it reflects the score tier instead (use the existing score
value computed earlier in this scope), e.g., map score ranges to
'Good'/'Fair'/'Poor' (or, if you prefer, incorporate counts.info into
thresholds) and replace the current counts-based ternary that assigns label to
ensure label = tierFromScore(score) rather than depending solely on
counts.error/counts.warning.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d226ef28-a68a-4d3d-b2a5-7c8974dc83e2
📒 Files selected for processing (7)
- examples/react/basic/index.html
- packages/devtools/src/hooks/use-location-changes.ts
- packages/devtools/src/tabs/seo-tab/canonical-url-data.ts
- packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx
- packages/devtools/src/tabs/seo-tab/links-preview.tsx
- packages/devtools/src/tabs/seo-tab/seo-overview.tsx
- packages/devtools/src/tabs/seo-tab/seo-section-summary.ts
✅ Files skipped from review due to trivial changes (1)
- examples/react/basic/index.html
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/devtools/src/tabs/seo-tab/canonical-url-data.ts
This commit updates the JSON-LD analysis function to ensure that it handles cases where the script content is null or empty. By using optional chaining and providing a default empty string, the function now avoids potential errors and improves robustness in processing JSON-LD scripts.
…mponents This commit updates the json-ld-preview and links-preview components by removing optional chaining from the textContent property. This change ensures that the textContent is always trimmed, improving the handling of empty strings and enhancing the robustness of the SEO tab components.
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx (1)
204-215: ⚠️ Potential issue | 🟠 Major

The unknown-attribute warning is still treating partial rules as exhaustive.

`SUPPORTED_RULES` only lists the fields this UI knows how to highlight. Using that subset as a full allowlist means any legitimate schema.org property that's not enumerated here still lands in `unknownKeys` and lowers the health score.

💡 One looser heuristic

```diff
-  const allowedSet = new Set([
-    ...rules.required,
-    ...rules.recommended,
-    ...rules.optional,
-    ...RESERVED_KEYS,
-  ])
-  const unknownKeys = Object.keys(entity).filter((key) => !allowedSet.has(key))
+  const unknownKeys = Object.keys(entity).filter(
+    (key) => key.startsWith('@') && !RESERVED_KEYS.has(key),
+  )
   if (unknownKeys.length > 0) {
     issues.push({
       severity: 'warning',
-      message: `Possible invalid attributes for ${typeName}: ${unknownKeys.join(', ')}`,
+      message: `Unsupported JSON-LD control attributes for ${typeName}: ${unknownKeys.join(', ')}`,
     })
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx` around lines 204 - 215, The current unknown-attribute warning uses allowedSet built from rules.* (and RESERVED_KEYS) which treats the UI's SUPPORTED_RULES subset as an exhaustive allowlist; instead, change the logic in the block that builds allowedSet / unknownKeys so we do NOT warn simply because a key isn't in rules.required|recommended|optional. Only emit warnings for keys that are clearly invalid or reserved: keep RESERVED_KEYS checks, and additionally flag keys that match obvious-mistake patterns (e.g., contain spaces or special characters or start with '_' or '@') or that appear in a curated DISALLOWED_KEYS list; stop treating absence from rules (SUPPORTED_RULES) as a reason to push a warning for entity keys, and update the unknownKeys calculation accordingly (references: allowedSet, rules, RESERVED_KEYS, unknownKeys, entity, typeName, issues).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx`:
- Around line 101-112: getEntities currently drops a root container's "@context"
when returning entries from a "@graph", causing downstream validation to think
nodes lack context; update getEntities to preserve the root "@context" from the
payload when flattening: if payload is a record with an "@graph" array and
payload["@context"] exists, attach that context to each graph entity that is a
record and does not already have its own "@context" (or otherwise return the
container as-is if you prefer), referencing the getEntities function and the
payload['@context'] and payload['@graph'] symbols so the change is made exactly
where the flattening occurs.
- Around line 180-189: The code is double-counting reserved keys (`@context` and
`@type`) because validateContext() and validateTypes() already report those errors
while rules.required (and recommended/optional) still include them; update the
logic around hasMissingKeys(entity, rules.*) or the resulting arrays
(missingRequired, missingRecommended, missingOptional) to filter out reserved
JSON-LD keys ('@context' and '@type') before pushing issues so that
validateContext() and validateTypes() remain the single source of truth and
getJsonLdScore() isn't over-penalized; adjust the block that builds issues (uses
hasMissingKeys, missingRequired, missingRecommended, missingOptional, and
issues.push) to ignore those reserved keys.
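Filtering validator-owned keys before reporting could be sketched as follows (hypothetical helper; the PR's `hasMissingKeys` signature may differ):

```typescript
// Hypothetical sketch: strip '@context'/'@type' from a rule's key list before
// reporting missing attributes, leaving those errors to the dedicated
// validateContext()/validateTypes() checks so they are not double-counted.
const VALIDATOR_OWNED_KEYS = new Set(['@context', '@type'])

function missingKeys(
  entity: Record<string, unknown>,
  required: Array<string>,
): Array<string> {
  return required.filter(
    (key) => !VALIDATOR_OWNED_KEYS.has(key) && !(key in entity),
  )
}
```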
---
Duplicate comments:
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx`:
- Around line 204-215: The current unknown-attribute warning uses allowedSet
built from rules.* (and RESERVED_KEYS) which treats the UI's SUPPORTED_RULES
subset as an exhaustive allowlist; instead, change the logic in the block that
builds allowedSet / unknownKeys so we do NOT warn simply because a key isn't in
rules.required|recommended|optional. Only emit warnings for keys that are
clearly invalid or reserved: keep RESERVED_KEYS checks, and additionally flag
keys that match obvious-mistake patterns (e.g., contain spaces or special
characters or start with '_' or '@') or that appear in a curated DISALLOWED_KEYS
list; stop treating absence from rules (SUPPORTED_RULES) as a reason to push a
warning for entity keys, and update the unknownKeys calculation accordingly
(references: allowedSet, rules, RESERVED_KEYS, unknownKeys, entity, typeName,
issues).
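The heuristic the reviewer describes (warn only on clearly invalid keys, not on keys absent from SUPPORTED_RULES) can be sketched as below. `isSuspiciousKey` and the contents of `DISALLOWED_KEYS` are illustrative assumptions, not the actual devtools implementation; only the `RESERVED_KEYS` name comes from the review.

```typescript
// Sketch of the suggested unknown-key check: absence from the rules subset is
// no longer a warning; only keys that look clearly wrong are flagged.
const RESERVED_KEYS = new Set(['@context', '@type'])
// Hypothetical curated list of keys that are never valid schema.org terms.
const DISALLOWED_KEYS = new Set(['class', 'style'])

function isSuspiciousKey(key: string): boolean {
  if (DISALLOWED_KEYS.has(key)) return true
  // Whitespace or special characters are almost certainly typos.
  if (/\s/.test(key) || /[^\w@.:-]/.test(key)) return true
  // Leading underscore suggests a private/internal field leaked into JSON-LD.
  if (key.startsWith('_')) return true
  // '@'-prefixed keys are fine only when they are known JSON-LD keywords.
  if (
    key.startsWith('@') &&
    !RESERVED_KEYS.has(key) &&
    key !== '@id' &&
    key !== '@graph'
  ) {
    return true
  }
  return false
}
```

With this predicate, an entity key like `author` no longer triggers a warning just because the UI's rules subset omits it, while `head line` or `_internal` still would.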
📒 Files selected for processing (1)
packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx
function getEntities(payload: unknown): Array<JsonLdValue> {
  if (Array.isArray(payload)) {
    return payload.filter(isRecord)
  }
  if (!isRecord(payload)) return []
  const graph = payload['@graph']
  if (Array.isArray(graph)) {
    const graphEntities = graph.filter(isRecord)
    if (graphEntities.length > 0) return graphEntities
  }
  return [payload]
}
Preserve the root @context when flattening @graph.
A block shaped like { "@context": "https://schema.org", "@graph": [...] } is valid, but getEntities() drops the container and returns only the graph nodes. That strips the inherited context before validation, so each node is incorrectly reported as missing @context.
💡 Proposed fix
function getEntities(payload: unknown): Array<JsonLdValue> {
if (Array.isArray(payload)) {
return payload.filter(isRecord)
}
if (!isRecord(payload)) return []
const graph = payload['@graph']
if (Array.isArray(graph)) {
- const graphEntities = graph.filter(isRecord)
+ const inheritedContext = payload['@context']
+ const graphEntities = graph.filter(isRecord).map((entity) =>
+ entity['@context'] === undefined && inheritedContext !== undefined
+ ? { '@context': inheritedContext, ...entity }
+ : entity,
+ )
if (graphEntities.length > 0) return graphEntities
}
return [payload]
}
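The context-inheritance behavior the proposed fix describes can be exercised end to end. The sketch below inlines a plausible `isRecord` helper and narrows `JsonLdValue` to a plain record; both are assumptions made so the example runs standalone.

```typescript
// Self-contained sketch of getEntities with the @graph fix applied.
type JsonLdValue = Record<string, unknown>

function isRecord(value: unknown): value is JsonLdValue {
  return typeof value === 'object' && value !== null && !Array.isArray(value)
}

function getEntities(payload: unknown): Array<JsonLdValue> {
  if (Array.isArray(payload)) return payload.filter(isRecord)
  if (!isRecord(payload)) return []
  const graph = payload['@graph']
  if (Array.isArray(graph)) {
    const inheritedContext = payload['@context']
    // Attach the container's @context to graph nodes that lack their own.
    const graphEntities = graph.filter(isRecord).map((entity) =>
      entity['@context'] === undefined && inheritedContext !== undefined
        ? { '@context': inheritedContext, ...entity }
        : entity,
    )
    if (graphEntities.length > 0) return graphEntities
  }
  return [payload]
}

// Each graph node now inherits the root context instead of being reported
// as missing @context:
const entities = getEntities({
  '@context': 'https://schema.org',
  '@graph': [{ '@type': 'Organization', name: 'Acme' }],
})
```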
const issues: Array<ValidationIssue> = []
const missingRequired = hasMissingKeys(entity, rules.required)
const missingRecommended = hasMissingKeys(entity, rules.recommended)
const missingOptional = hasMissingKeys(entity, rules.optional)

if (missingRequired.length > 0) {
  issues.push({
    severity: 'error',
    message: `Missing required attributes: ${missingRequired.join(', ')}`,
  })
Avoid double-counting reserved JSON-LD keys.
validateContext() and validateTypes() already emit the canonical errors for @context and @type, but rules.required includes those same keys again. That turns one missing control field into multiple issues and over-penalizes both the gap summary and getJsonLdScore().
💡 Proposed fix
- const missingRequired = hasMissingKeys(entity, rules.required)
+ const missingRequired = hasMissingKeys(
+ entity,
+ rules.required.filter((key) => !RESERVED_KEYS.has(key)),
+ )
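The effect of filtering reserved keys can be verified with a minimal sketch. `hasMissingKeys` below is a plausible shape for the helper, not the actual implementation; the entity and rule list are made up for illustration.

```typescript
// With reserved JSON-LD keys stripped from the rule list, a missing @context
// is left to validateContext() to report, so it is counted only once.
const RESERVED_KEYS = new Set(['@context', '@type'])

function hasMissingKeys(
  entity: Record<string, unknown>,
  keys: Array<string>,
): Array<string> {
  return keys.filter((key) => !(key in entity))
}

const entity = { '@type': 'Article', headline: 'Hello' }
const required = ['@context', '@type', 'headline', 'author']

const missingRequired = hasMissingKeys(
  entity,
  required.filter((key) => !RESERVED_KEYS.has(key)),
)
// missingRequired contains only 'author'; '@context' no longer double-counts.
```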
… preview text handling

This commit updates the max-width values for certain styles in the use-styles.ts file, increasing the desktop max-width to 620px and decreasing the mobile max-width to 328px. Additionally, it introduces new functions in serp-preview.tsx for measuring text width and truncating text based on width and line limits, improving the handling of SERP previews for better SEO representation.
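The width-based truncation the commit describes can be sketched as below. The measurer is injected here so the example runs without a DOM; in the devtools it would presumably wrap canvas `measureText`. The function name `truncateToWidth` mirrors the commit message, but the body is an illustrative binary search, not the actual implementation.

```typescript
// Truncate text so that measure(result) <= maxWidth, appending an ellipsis.
type Measure = (text: string) => number

function truncateToWidth(
  text: string,
  maxWidth: number,
  measure: Measure,
  ellipsis = '…',
): string {
  if (measure(text) <= maxWidth) return text
  // Binary search for the longest prefix whose truncated form still fits.
  let low = 0
  let high = text.length
  while (low < high) {
    const mid = Math.ceil((low + high) / 2)
    const candidate = text.slice(0, mid).trimEnd() + ellipsis
    if (measure(candidate) <= maxWidth) {
      low = mid
    } else {
      high = mid - 1
    }
  }
  return text.slice(0, low).trimEnd() + ellipsis
}

// With a fake 10px-per-character font, a 60px budget keeps "Hello" plus '…':
const perChar: Measure = (t) => t.length * 10
const out = truncateToWidth('Hello world', 60, perChar)
// out === 'Hello…'
```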
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/devtools/src/tabs/seo-tab/serp-preview.tsx (1)
65-68: ⚠️ Potential issue | 🟡 Minor
Message text doesn't match actual threshold.
The warning message states "wider than 600px" but DESKTOP_TITLE_MAX_WIDTH_PX at line 9 is 620. This inconsistency also appears at line 410 in getSerpPreviewSummary().
Proposed fix
{
  message:
-   'The title is wider than 600px and it may not be displayed in full length.',
+   'The title is wider than 620px and it may not be displayed in full length.',
  hasIssue: (_, overflow) => overflow.titleOverflow,
},
And at line 410:
  message:
-   'The title is wider than 600px and it may not be displayed in full length.',
+   'The title is wider than 620px and it may not be displayed in full length.',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/serp-preview.tsx` around lines 65-68: The message text is inconsistent with the actual threshold constant DESKTOP_TITLE_MAX_WIDTH_PX (620); update the user-facing warning in the object with hasIssue (and the duplicate message in getSerpPreviewSummary) to reference the correct value (620) or, better, interpolate using DESKTOP_TITLE_MAX_WIDTH_PX so the message always matches the constant; ensure both the message in the object where hasIssue: (_, overflow) => overflow.titleOverflow and the message returned from getSerpPreviewSummary() are changed accordingly.
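The drift the reviewer flags disappears if the message is derived from the constant rather than hard-coding the pixel value; a minimal sketch of the interpolation approach:

```typescript
// Deriving the warning from the constant keeps message and threshold in sync.
const DESKTOP_TITLE_MAX_WIDTH_PX = 620

const titleOverflowMessage = `The title is wider than ${DESKTOP_TITLE_MAX_WIDTH_PX}px and it may not be displayed in full length.`
```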
🧹 Nitpick comments (1)
packages/devtools/src/tabs/seo-tab/serp-preview.tsx (1)
200-251: Consider extracting repeated binary search pattern.
The binary search logic is duplicated across truncateToWidth, truncateToLines, and truncateToTotalWrappedWidth (lines 200-293). A generic helper could reduce repetition.
Example abstraction
function binarySearchTruncate(
  text: string,
  fits: (candidate: string) => boolean,
): string {
  const chars = Array.from(text)
  let low = 0
  let high = chars.length
  while (low < high) {
    const mid = Math.ceil((low + high) / 2)
    const candidate = chars.slice(0, mid).join('').trimEnd() + ELLIPSIS
    if (fits(candidate)) {
      low = mid
    } else {
      high = mid - 1
    }
  }
  return chars.slice(0, low).join('').trimEnd() + ELLIPSIS
}
Each truncation function could then pass its specific fits predicate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/serp-preview.tsx` around lines 200-251: Both truncateToWidth and truncateToLines (and truncateToTotalWrappedWidth) duplicate the same binary-search truncation; extract that pattern into a single helper (e.g., binarySearchTruncate) that accepts the original text and a fits(candidate: string) => boolean predicate; replace the binary search blocks in all three functions with calls to this helper, where truncateToWidth's fits uses measureTextWidth(candidate, font) <= maxWidth and truncateToLines' fits uses wrapTextByWidth(candidate, maxWidth, font).length <= maxLines (and similarly for total-wrapped-width), ensuring the helper returns the trimmed + ELLIPSIS result.
📒 Files selected for processing (2)
packages/devtools/src/styles/use-styles.ts
packages/devtools/src/tabs/seo-tab/serp-preview.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/devtools/src/styles/use-styles.ts

🎯 Changes
Introduced new tabs in the SEO section:
(Sorry for the low quality video but GitHub didn't let me upload the high quality one)
SEO-tab-pr.mp4
✅ Checklist
pnpm test:pr.
🚀 Release Impact
Summary by CodeRabbit
New Features
Chores