Feat SEO Section (Overview, Heading Structure, Links & JSON LD Preview) #413
abedshaaban wants to merge 25 commits into TanStack:main from
…functionality This commit introduces a new README.md file for the SEO tab in the devtools package. It outlines the purpose of the SEO tab, including its major features such as Social Previews, SERP Previews, JSON-LD Previews, and more. Each section provides an overview of functionality, data sources, and how the previews are rendered, enhancing the documentation for better user understanding.
…structure, and links preview This commit introduces several new sections to the SEO tab in the devtools package, enhancing its functionality. The new features include: - **JSON-LD Preview**: Parses and validates JSON-LD scripts on the page, providing detailed feedback on required and recommended attributes. - **Heading Structure Preview**: Analyzes heading tags (`h1` to `h6`) for hierarchy and common issues, ensuring proper SEO practices. - **Links Preview**: Scans all links on the page, classifying them as internal, external, or invalid, and reports on accessibility and SEO-related issues. Additionally, the SEO tab navigation has been updated to include these new sections, improving user experience and accessibility of SEO insights.
This commit refactors the SEO tab components to standardize the handling of severity levels for issues. The `Severity` type has been replaced with `SeoSeverity`, and the `severityColor` function has been removed in favor of a centralized `seoSeverityColor` function. This change improves code consistency and maintainability across the `canonical-url-preview`, `heading-structure-preview`, `json-ld-preview`, and `links-preview` components, ensuring a unified approach to displaying issue severity in the SEO analysis features.
This commit adds a canonical link and robots meta tag to the basic example's HTML file, improving SEO capabilities. Additionally, it refactors the SEO tab components to utilize the `Show` component for conditional rendering of issues, enhancing the user experience by only displaying relevant information when applicable. This change streamlines the presentation of SEO analysis results across the canonical URL, heading structure, and links preview sections.
…lysis This commit adds a new SEO overview section to the devtools package, aggregating insights from various SEO components including canonical URLs, social previews, SERP previews, JSON-LD, heading structure, and links. It implements a health scoring system to provide a quick assessment of SEO status, highlighting issues and offering hints for improvement. Additionally, it refactors existing components to enhance data handling and presentation, improving the overall user experience in the SEO tab.
…reporting This commit introduces new styles for the SEO tab components, improving the visual presentation of SEO analysis results. It adds structured issue reporting for SEO elements, including headings, JSON-LD, and links, utilizing a consistent design for severity indicators. Additionally, it refactors existing components to enhance readability and maintainability, ensuring a cohesive user experience across the SEO tab.
This commit introduces new styles for the SEO tab components, including enhanced visual presentation for SEO analysis results. It refactors the handling of severity indicators across various sections, such as headings, JSON-LD, and links, utilizing a consistent design approach. Additionally, it improves the structure and readability of the code, ensuring a cohesive user experience throughout the SEO tab.
…ization This commit enhances the SEO tab by updating styles for the health score indicators, including a new design for the health track and fill elements. It refactors the health score rendering logic to utilize a more consistent approach across components, improving accessibility with ARIA attributes. Additionally, it introduces a sorting function for links in the report, ensuring a clearer display order based on link types. These changes aim to provide a more cohesive and visually appealing user experience in the SEO analysis features.
This commit enhances the LinksPreviewSection by introducing an accordion-style layout for displaying links, allowing users to expand and collapse groups of links categorized by type (internal, external, non-web, invalid). It adds new styles for the accordion components, improving the visual organization of link reports. Additionally, it refactors the existing link rendering logic to accommodate the new structure, enhancing user experience and accessibility in the SEO analysis features.
…on features This commit introduces new styles for the JSON-LD preview component, improving the visual presentation of structured data. It adds functionality for validating supported schema types and enhances the display of entity previews, including detailed rows for required and recommended fields. Additionally, it refactors the health scoring system to account for missing schema attributes, providing clearer insights into SEO performance. These changes aim to improve user experience and accessibility in the SEO tab.
…tures This commit introduces a comprehensive update to the SEO overview section, adding a scoring system for subsections based on issue severity. It includes new styles for the score ring visualization, improving the presentation of SEO health metrics. Additionally, it refactors the issue reporting logic to provide clearer insights into the status of SEO elements, enhancing user experience and accessibility in the SEO tab.
…links preview in SEO tab This commit enhances the SEO tab by introducing new navigation buttons for 'Heading Structure' and 'Links Preview', allowing users to easily switch between these views. It also updates the display logic to show the corresponding sections when selected, improving the overall user experience and accessibility of SEO insights. The SEO overview section has been adjusted to maintain a cohesive structure.
…and scrollbar customization This commit updates the styles for the seoSubNav component, adding responsive design features for smaller screens, including horizontal scrolling and custom scrollbar styles. It also ensures that the seoSubNavLabel maintains proper layout with flex properties, enhancing the overall user experience in the SEO tab.
…inks preview functionality This commit modifies the package.json to improve testing scripts by adding a command to clear the NX daemon and updating the size limit for the devtools package. Additionally, it refactors the JSON-LD and links preview components to enhance readability and maintainability, including changes to function declarations and formatting for better code clarity. These updates aim to improve the overall user experience and accessibility in the SEO tab.
… tab components This commit refactors the SEO tab components by cleaning up imports related to severity handling and ensuring consistent text handling by removing unnecessary nullish coalescing and optional chaining. These changes enhance code readability and maintainability across the heading structure, JSON-LD, and links preview components.
…ew component This commit refactors the classifyLink function in the links preview component by removing unnecessary checks for non-web links and the 'nofollow' issue reporting. It enhances the handling of relative paths and same-document fragments to align with browser behavior, improving code clarity and maintainability in the SEO tab.
…README This commit removes the unused 'seoOverviewFootnote' style and its corresponding JSX element from the SEO overview section. Additionally, it updates the README to streamline the description of checks included in the SEO tab, enhancing clarity and conciseness. These changes improve code maintainability and documentation accuracy.
This commit modifies the size limit for the devtools package in package.json, increasing the limit from 60 KB to 69 KB. This change reflects adjustments in the package's size requirements, ensuring accurate size tracking for future development.
… in SEO tab components This commit updates the SEO tab components by standardizing the capitalization of section titles and improving code formatting for better readability. Changes include updating button labels to 'SEO Overview' and 'Social Previews', as well as enhancing the structure of JSX elements for consistency. These adjustments aim to enhance the overall clarity and maintainability of the code.
This commit modifies the titles of the 'Links' and 'JSON-LD' sections in the SEO overview to 'Links Preview' and 'JSON-LD Preview', respectively. These changes aim to enhance clarity and consistency in the presentation of SEO insights, aligning with previous updates to standardize capitalization and improve formatting across the SEO tab components.
…ed data analysis This commit adds a new SEO tab in the devtools, featuring live head-driven social and SERP previews, structured data (JSON-LD) analysis, heading and link assessments, and an overview that scores and links to each section. This enhancement aims to provide users with comprehensive SEO insights and improve the overall functionality of the devtools.
📝 Walkthrough
Adds a new SEO tab to the devtools with live social and SERP previews, JSON-LD parsing/validation, heading-structure analysis, link scanning, canonical/robots indexing checks, and an aggregated overview that scores and links into detailed subsections.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User
    participant SeoTab as SEO Tab Router
    participant Overview as Overview Component
    participant Analyzer as DOM Analyzers
    participant Detail as Detail Sections
    participant DOM as Page DOM
    User->>SeoTab: Open SEO Tab
    SeoTab->>Overview: Render overview (default)
    Overview->>Analyzer: Request all section summaries
    Analyzer->>DOM: Scan head & body (meta, JSON-LD, headings, links)
    DOM-->>Analyzer: Return raw data
    Analyzer->>Analyzer: Validate & compute issues/scores
    Analyzer-->>Overview: Summaries + scores
    Overview->>User: Display aggregated health + subsection cards
    User->>SeoTab: Click subsection
    SeoTab->>Detail: Navigate to detail view
    Detail->>Analyzer: Request detailed analysis (rescan if reactive)
    Analyzer->>DOM: Fetch fresh metadata
    Analyzer-->>Detail: Detailed issues + preview data
    Detail-->>User: Render detailed preview and issues
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
View your CI Pipeline Execution ↗ for commit d73db33
☁️ Nx Cloud last updated this comment
…nonicalPageData This commit modifies the export statements for the CanonicalPageIssue and CanonicalPageData types in the SEO tab components, changing them from 'export type' to 'type'. This adjustment aims to streamline the code structure and improve consistency in type declarations across the module.
Actionable comments posted: 10
🧹 Nitpick comments (1)
packages/devtools/src/tabs/seo-tab/serp-preview.tsx (1)
126-189: Derive the overview summary from the shared SERP checks.
`getSerpPreviewSummary()` repeats the same predicates and messages already defined in `COMMON_CHECKS` and `SERP_PREVIEWS`, so the overview can drift from the detail panel the next time one list changes. Consider storing severity on the shared descriptors and building both views from that single source.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/devtools/src/tabs/seo-tab/serp-preview.tsx` around lines 126 - 189, getSerpPreviewSummary duplicates predicates and messages that already live in COMMON_CHECKS and SERP_PREVIEWS; refactor it to build its issues/hint from those shared descriptors instead of repeating logic. Update getSerpPreviewSummary to import COMMON_CHECKS and/or SERP_PREVIEWS, iterate over the shared descriptors, evaluate each descriptor's predicate against getSerpFromHead() (or use provided evaluation helpers), and push issues using the descriptor's severity and message; derive the hint by checking the descriptors for title/description presence rather than using separate trim checks. Ensure you reference the existing symbols COMMON_CHECKS, SERP_PREVIEWS, getSerpPreviewSummary, and getSerpFromHead when implementing this single-source-of-truth approach.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@examples/react/basic/index.html`:
- Around line 36-38: The canonical link currently points to a localhost URL;
update the <link rel="canonical"> tag so it doesn't reference
http://localhost:3005/—make it match the site's public URL used by the og:url
and twitter:url meta tags (or use a relative canonical like "/" if this is an
example), ensuring the <link rel="canonical"> value is consistent with the
og:url/twitter:url values in the file.
In `@packages/devtools/src/tabs/seo-tab/canonical-url-data.ts`:
- Around line 82-92: The robots token parsing currently only checks for
'noindex' and 'nofollow' but must treat the 'none' directive as equivalent to
both; update the logic that computes indexable and follow (derived from robots
and robotsMetas) so that if robots includes 'none' it is treated the same as
including both 'noindex' and 'nofollow' (e.g., compute noIndex =
robots.includes('noindex') || robots.includes('none') and noFollow =
robots.includes('nofollow') || robots.includes('none') and then set indexable =
!noIndex and follow = !noFollow), ensuring the variables robots, robotsMetas,
indexable and follow are the ones adjusted.
In `@packages/devtools/src/tabs/seo-tab/heading-structure-preview.tsx`:
- Around line 47-80: The current logic pushes several non-fatal heading issues
as 'error' (see h1Count checks, the first-heading check referencing headings[0],
and the loop handling empty headings and skipped levels) which should be
downgraded to 'warning'; update the severity values in the issues.push calls for
"Multiple H1 headings detected", "First heading is ... instead of H1", the
`${current.tag.toUpperCase()} is empty.` case and the skipped-level message in
the for loop from 'error' to 'warning' while leaving the "No H1 heading found on
this page." case as 'error'.
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx`:
- Around line 127-145: The validation in validateContext incorrectly accepts any
string that merely contains "schema.org"; update it to only accept exact
schema.org contexts by parsing the string as a URL (in validateContext) and
ensuring the URL.hostname === 'schema.org' (or accept the literal 'schema.org'
if you want non-URL form), falling back to an error on parse failure; replace
the current context.includes('schema.org') condition with this hostname check so
values like "https://example.com/schema.org" are rejected while valid
"https://schema.org" and "http://schema.org" remain allowed.
- Around line 201-213: The allowlist built in allowedSet (using rules.required,
rules.recommended, rules.optional and RESERVED_KEYS) is too narrow and causes
unknownKeys to include valid schema.org properties, triggering issues.push
warnings; change the validation in json-ld-preview.tsx to stop treating any
property outside SUPPORTED_RULES as necessarily invalid by either (a) expanding
rules for the specific type to include full schema.org properties, or (b)
switching the unknownKeys check to a looser heuristic (e.g., only warn for truly
invalid/reserved keys from RESERVED_KEYS or when a property clearly conflicts
with required types) so that allowedSet no longer emits false-positive warnings
for legitimate keys like author, datePublished, contentLocation, etc. Ensure you
update the logic that computes allowedSet and the subsequent unknownKeys filter
accordingly and keep issues.push only for genuine invalid/reserved attribute
cases.
In `@packages/devtools/src/tabs/seo-tab/links-preview.tsx`:
- Around line 72-85: The _blank external-link check in links-preview.tsx
currently treats only rel="noopener" as acceptable; update the logic where
isExternal is computed (use of resolved, anchor, relTokens) so that the
relTokens check treats either "noopener" OR "noreferrer" as valid (i.e., do not
push the warning if relTokens.includes('noopener') ||
relTokens.includes('noreferrer')). Keep the existing target === '_blank' check
and the issues.push call (severity/message) unchanged, only broaden the accepted
rel tokens to include "noreferrer".
In `@packages/devtools/src/tabs/seo-tab/README.md`:
- Around line 5-15: Rewrite the README intro to remove the contradiction (choose
either "complement to" or "replacement for" — here use "complement to the
Inspect Elements / Lighthouse tabs, not a replacement"), correct typos and
grammar across the "SEO tabs" bullets (e.g., "your" → "your", "thier" → "their",
"sepcific" → "specific", "informations" → "information", "indexible" →
"indexable"), standardize bullet phrasing and capitalization (e.g., "Social
Previews", "SERP Previews", "JSON-LD Previews", "Heading Structure Visualizer",
"Links Preview", "Canonical & URL & indexability"), and make the overview bullet
concise and clear about the SEO score linking to specific tabs for details.
In `@packages/devtools/src/tabs/seo-tab/seo-overview.tsx`:
- Around line 124-166: The memo only invalidates on head mutations but also
reads window.location.href via getCanonicalPageData() and
getSerpPreviewSummary(), so SPA navigations that don't touch the head leave the
overview stale; add a URL-change trigger that calls setTick when the location
changes (e.g., listen for popstate and override history.pushState/replaceState
to dispatch a custom 'locationchange' event) and call setTick((t)=>t+1) in that
listener (the same signal used by bundle's createMemo); update the
useHeadChanges block or add a new effect that registers/removes these listeners
so createSignal tick, setTick, and the bundle memo (which reads
getCanonicalPageData and getSerpPreviewSummary) are invalidated on route changes
as well.
In `@packages/devtools/src/tabs/seo-tab/seo-section-summary.ts`:
- Around line 13-17: The current SeoSectionSummary type (properties issues and
issueCount) hides truncated findings' severities so scoring helpers still
compute penalties from the capped issues array; update SeoSectionSummary to
include full severity totals (e.g., severityCounts or totalsBySeverity)
alongside issues and issueCount, change getLinksPreviewSummary() (and the other
affected summary producers) to populate those totals from the uncapped analysis
before capping the issues array, and modify scoring helpers to use the new
severity totals rather than relying only on the truncated issues array to
compute health and severity counts.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3448dacf-b048-497f-8c1e-c8258986564c
📒 Files selected for processing (15)
- .changeset/puny-games-bow.md
- examples/react/basic/index.html
- package.json
- packages/devtools/src/styles/use-styles.ts
- packages/devtools/src/tabs/seo-tab/README.md
- packages/devtools/src/tabs/seo-tab/canonical-url-data.ts
- packages/devtools/src/tabs/seo-tab/heading-structure-preview.tsx
- packages/devtools/src/tabs/seo-tab/index.tsx
- packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx
- packages/devtools/src/tabs/seo-tab/links-preview.tsx
- packages/devtools/src/tabs/seo-tab/seo-overview.tsx
- packages/devtools/src/tabs/seo-tab/seo-section-summary.ts
- packages/devtools/src/tabs/seo-tab/seo-severity.ts
- packages/devtools/src/tabs/seo-tab/serp-preview.tsx
- packages/devtools/src/tabs/seo-tab/social-previews.tsx
```html
<link rel="canonical" href="http://localhost:3005/" />
<meta name="robots" content="index, follow" />
```
Don't point the canonical tag at a localhost URL.
This now disagrees with the `og:url` / `twitter:url` values in the same file and will make the example look broken anywhere except that one local port.
```ts
const robots = robotsMetas
  .map((meta) => meta.getAttribute('content') || '')
  .flatMap((content) =>
    content
      .split(',')
      .map((token) => token.trim().toLowerCase())
      .filter(Boolean),
  )

const indexable = !robots.includes('noindex')
const follow = !robots.includes('nofollow')
```
🧩 Analysis chain — verification scripts executed against TanStack/devtools:

```shell
# Locate the file mentioned in the review
fd -t f "canonical-url-data.ts"

# Read the specific lines mentioned in the review
cat -n "packages/devtools/src/tabs/seo-tab/canonical-url-data.ts" | sed -n '75,100p'

# Check the full context around the function
cat -n "packages/devtools/src/tabs/seo-tab/canonical-url-data.ts" | head -120

# Look for tests covering this behavior
find packages/devtools -name "*test*" -o -name "*spec*" | grep -i seo

# Search for existing handling of 'none' and other robots directives
rg "none" packages/devtools/src/tabs/seo-tab/ -A 2 -B 2
rg -i "robots|noindex|nofollow" packages/devtools/src/tabs/seo-tab/ -A 3 -B 3 | head -100

# Check how the computed values are used
grep -r "indexable\|follow" packages/devtools/src/tabs/seo-tab/ -B 2 -A 2 | head -60

# Check the tab's README for documented robots behavior
cat packages/devtools/src/tabs/seo-tab/README.md
```
Handle the robots `none` directive as equivalent to both `noindex` and `nofollow`.
The `none` directive is a valid robots meta value that means both `noindex` and `nofollow`. Currently the code only checks for the literal `noindex` and `nofollow` tokens, so a page with `<meta name="robots" content="none">` is incorrectly reported as indexable and follow-enabled, flipping the SEO status display.
Suggested fix

```diff
- const indexable = !robots.includes('noindex')
- const follow = !robots.includes('nofollow')
+ const hasNoindex = robots.includes('noindex') || robots.includes('none')
+ const hasNofollow = robots.includes('nofollow') || robots.includes('none')
+ const indexable = !hasNoindex
+ const follow = !hasNofollow
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const robots = robotsMetas
  .map((meta) => meta.getAttribute('content') || '')
  .flatMap((content) =>
    content
      .split(',')
      .map((token) => token.trim().toLowerCase())
      .filter(Boolean),
  )

const hasNoindex = robots.includes('noindex') || robots.includes('none')
const hasNofollow = robots.includes('nofollow') || robots.includes('none')
const indexable = !hasNoindex
const follow = !hasNofollow
```
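Outside the DOM, the suggested token handling can be checked with plain strings. The sketch below is a hypothetical stand-in: `parseRobots` takes raw `content` values directly, whereas the real code reads them from `robotsMetas` elements.

```typescript
// Hypothetical stand-in for the suggested logic: an array of raw meta
// content strings replaces the DOM query so the sketch runs anywhere.
function parseRobots(contents: Array<string>): {
  indexable: boolean
  follow: boolean
} {
  const robots = contents.flatMap((content) =>
    content
      .split(',')
      .map((token) => token.trim().toLowerCase())
      .filter(Boolean),
  )
  // 'none' is shorthand for both 'noindex' and 'nofollow'.
  const hasNoindex = robots.includes('noindex') || robots.includes('none')
  const hasNofollow = robots.includes('nofollow') || robots.includes('none')
  return { indexable: !hasNoindex, follow: !hasNofollow }
}

console.log(parseRobots(['index, follow'])) // { indexable: true, follow: true }
console.log(parseRobots(['none'])) // { indexable: false, follow: false }
console.log(parseRobots([' NOINDEX '])) // { indexable: false, follow: true }
```

The trim/lowercase step mirrors the original parsing, so mixed-case and padded tokens behave the same as in the devtools code.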
```ts
if (h1Count === 0) {
  issues.push({
    severity: 'error',
    message: 'No H1 heading found on this page.',
  })
} else if (h1Count > 1) {
  issues.push({
    severity: 'error',
    message: `Multiple H1 headings detected (${h1Count}).`,
  })
}

if (headings[0] && headings[0].level !== 1) {
  issues.push({
    severity: 'error',
    message: `First heading is ${headings[0].tag.toUpperCase()} instead of H1.`,
  })
}

for (let index = 0; index < headings.length; index++) {
  const current = headings[index]!
  if (!current.text) {
    issues.push({
      severity: 'error',
      message: `${current.tag.toUpperCase()} is empty.`,
    })
  }
  if (index > 0) {
    const previous = headings[index - 1]!
    if (current.level - previous.level > 1) {
      issues.push({
        severity: 'error',
        message: `Skipped heading level from ${previous.tag.toUpperCase()} to ${current.tag.toUpperCase()}.`,
      })
    }
  }
}
```
Downgrade the non-fatal heading findings to warnings.
Multiple H1s, a first heading other than H1, skipped levels, and empty headings are all emitted as `error` here. That makes the overview score penalize ordinary outline-quality issues the same way it penalizes "no headings found", and it also disagrees with the README for this tab.
Suggested severity adjustment

```diff
 } else if (h1Count > 1) {
   issues.push({
-    severity: 'error',
+    severity: 'warning',
     message: `Multiple H1 headings detected (${h1Count}).`,
   })
 }
 if (headings[0] && headings[0].level !== 1) {
   issues.push({
-    severity: 'error',
+    severity: 'warning',
     message: `First heading is ${headings[0].tag.toUpperCase()} instead of H1.`,
   })
 }
@@
 if (!current.text) {
   issues.push({
-    severity: 'error',
+    severity: 'warning',
     message: `${current.tag.toUpperCase()} is empty.`,
   })
 }
@@
 if (current.level - previous.level > 1) {
   issues.push({
-    severity: 'error',
+    severity: 'warning',
     message: `Skipped heading level from ${previous.tag.toUpperCase()} to ${current.tag.toUpperCase()}.`,
   })
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (h1Count === 0) {
  issues.push({
    severity: 'error',
    message: 'No H1 heading found on this page.',
  })
} else if (h1Count > 1) {
  issues.push({
    severity: 'warning',
    message: `Multiple H1 headings detected (${h1Count}).`,
  })
}
if (headings[0] && headings[0].level !== 1) {
  issues.push({
    severity: 'warning',
    message: `First heading is ${headings[0].tag.toUpperCase()} instead of H1.`,
  })
}
for (let index = 0; index < headings.length; index++) {
  const current = headings[index]!
  if (!current.text) {
    issues.push({
      severity: 'warning',
      message: `${current.tag.toUpperCase()} is empty.`,
    })
  }
  if (index > 0) {
    const previous = headings[index - 1]!
    if (current.level - previous.level > 1) {
      issues.push({
        severity: 'warning',
        message: `Skipped heading level from ${previous.tag.toUpperCase()} to ${current.tag.toUpperCase()}.`,
      })
    }
  }
}
```
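These heading checks can be exercised outside the devtools with a small harness. The sketch below is illustrative only — `analyzeHeadings` and its types are hypothetical names, not part of this PR — and applies the suggested severities to a plain array standing in for the scanned DOM:

```typescript
// Hypothetical harness: same checks as above, suggested severities,
// no DOM dependency.
type Heading = { tag: string; level: number; text: string }
type Issue = { severity: 'error' | 'warning'; message: string }

function analyzeHeadings(headings: Array<Heading>): Array<Issue> {
  const issues: Array<Issue> = []
  const h1Count = headings.filter((h) => h.level === 1).length
  if (h1Count === 0) {
    issues.push({ severity: 'error', message: 'No H1 heading found on this page.' })
  } else if (h1Count > 1) {
    issues.push({ severity: 'warning', message: `Multiple H1 headings detected (${h1Count}).` })
  }
  if (headings[0] && headings[0].level !== 1) {
    issues.push({
      severity: 'warning',
      message: `First heading is ${headings[0].tag.toUpperCase()} instead of H1.`,
    })
  }
  for (let index = 0; index < headings.length; index++) {
    const current = headings[index]!
    if (!current.text) {
      issues.push({ severity: 'warning', message: `${current.tag.toUpperCase()} is empty.` })
    }
    if (index > 0) {
      const previous = headings[index - 1]!
      if (current.level - previous.level > 1) {
        issues.push({
          severity: 'warning',
          message: `Skipped heading level from ${previous.tag.toUpperCase()} to ${current.tag.toUpperCase()}.`,
        })
      }
    }
  }
  return issues
}

// Only the H2→H3 level skip is flagged, and as a warning rather than an error.
console.log(analyzeHeadings([
  { tag: 'h1', level: 1, text: 'Title' },
  { tag: 'h3', level: 3, text: 'Jumped past H2' },
]))
```

With this split, only a page with no H1 at all drags the overview score down as an error; outline-quality findings surface as warnings.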
```tsx
function validateContext(entity: JsonLdValue): Array<ValidationIssue> {
  const context = entity['@context']
  if (!context) {
    return [{ severity: 'error', message: 'Missing @context attribute.' }]
  }
  if (typeof context === 'string') {
    if (
      !context.includes('schema.org') &&
      context !== 'https://schema.org' &&
      context !== 'http://schema.org'
    ) {
      return [
        {
          severity: 'error',
          message: `Invalid @context value "${context}". Expected schema.org context.`,
        },
      ]
    }
    return []
```
@context validation is too permissive to catch bad values.
Any string containing schema.org passes this check, so values like https://example.com/schema.org are accepted even though the message says only schema.org contexts are valid. That lets malformed blocks show as healthy.
Suggested tightening

```diff
+const VALID_SCHEMA_CONTEXTS = new Set([
+  'https://schema.org',
+  'http://schema.org',
+  'https://schema.org/',
+  'http://schema.org/',
+])
+
 function validateContext(entity: JsonLdValue): Array<ValidationIssue> {
   const context = entity['@context']
   if (!context) {
     return [{ severity: 'error', message: 'Missing @context attribute.' }]
   }
   if (typeof context === 'string') {
-    if (
-      !context.includes('schema.org') &&
-      context !== 'https://schema.org' &&
-      context !== 'http://schema.org'
-    ) {
+    if (!VALID_SCHEMA_CONTEXTS.has(context)) {
       return [
         {
           severity: 'error',
           message: `Invalid @context value "${context}". Expected schema.org context.`,
         },
       ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx` around lines 127 -
145, The validation in validateContext incorrectly accepts any string that
merely contains "schema.org"; update it to only accept exact schema.org contexts
by parsing the string as a URL (in validateContext) and ensuring the
URL.hostname === 'schema.org' (or accept the literal 'schema.org' if you want
non-URL form), falling back to an error on parse failure; replace the current
context.includes('schema.org') condition with this hostname check so values like
"https://example.com/schema.org" are rejected while valid "https://schema.org"
and "http://schema.org" remain allowed.
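A hostname-based check along the lines the prompt describes might look like this. This is a sketch, and the helper name `isSchemaOrgContext` is hypothetical, not code from the PR:

```typescript
// Hypothetical helper: accept only contexts whose host is exactly schema.org.
// The bare literal 'schema.org' is allowed for the non-URL form; anything
// that fails to parse as a URL is treated as invalid.
function isSchemaOrgContext(context: string): boolean {
  if (context === 'schema.org') return true
  try {
    return new URL(context).hostname === 'schema.org'
  } catch {
    // Not a parseable URL -> reject.
    return false
  }
}
```

Unlike a substring check, this rejects lookalikes such as `https://example.com/schema.org` while still accepting both the `http` and `https` forms with or without a trailing slash.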
```tsx
const allowedSet = new Set([
  ...rules.required,
  ...rules.recommended,
  ...rules.optional,
  ...RESERVED_KEYS,
])
const unknownKeys = Object.keys(entity).filter((key) => !allowedSet.has(key))
if (unknownKeys.length > 0) {
  issues.push({
    severity: 'warning',
    message: `Possible invalid attributes for ${typeName}: ${unknownKeys.join(', ')}`,
  })
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Locate and read the file in question
fd "json-ld-preview.tsx" packages/devtools/src/tabs/seo-tab/
```

Repository: TanStack/devtools

🏁 Script executed:

```shell
#!/bin/bash
# Read the full file to understand the context
cat -n packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx | head -250
```

🏁 Script executed:

```shell
#!/bin/bash
# Search for SUPPORTED_RULES and RESERVED_KEYS definitions
rg "SUPPORTED_RULES|RESERVED_KEYS" packages/devtools/src/tabs/seo-tab/ -A 5
```
The allowlist is too restrictive and will emit warnings for valid schema.org properties.
allowedSet is constructed only from fields explicitly defined in SUPPORTED_RULES, which covers just a subset of each schema type's valid properties. Any legitimate property outside this list—like author, datePublished, contentLocation etc. for Article—will be flagged as "Possible invalid attributes" and lower the health score despite being valid schema.org properties. Either expand the allowed properties to match schema.org's full spec for each type, or use a looser heuristic that doesn't warn on unknown properties for types without exhaustive coverage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/json-ld-preview.tsx` around lines 201 -
213, The allowlist built in allowedSet (using rules.required, rules.recommended,
rules.optional and RESERVED_KEYS) is too narrow and causes unknownKeys to
include valid schema.org properties, triggering issues.push warnings; change the
validation in json-ld-preview.tsx to stop treating any property outside
SUPPORTED_RULES as necessarily invalid by either (a) expanding rules for the
specific type to include full schema.org properties, or (b) switching the
unknownKeys check to a looser heuristic (e.g., only warn for truly
invalid/reserved keys from RESERVED_KEYS or when a property clearly conflicts
with required types) so that allowedSet no longer emits false-positive warnings
for legitimate keys like author, datePublished, contentLocation, etc. Ensure you
update the logic that computes allowedSet and the subsequent unknownKeys filter
accordingly and keep issues.push only for genuine invalid/reserved attribute
cases.
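Option (b) from the prompt could be sketched as follows. This is an illustration, not the PR's code: `suspiciousKeys` is a hypothetical helper, and `RESERVED_KEYS` here is a stand-in for the package's constant. It only flags keys that look structural (start with `@`) but are not JSON-LD reserved keys, so ordinary schema.org properties like `author` or `datePublished` pass through without a warning:

```typescript
// Stand-in for the package's reserved-key list.
const RESERVED_KEYS = ['@context', '@type', '@id', '@graph']

// Hypothetical looser heuristic: only '@'-prefixed keys that are not
// JSON-LD reserved keywords are reported; plain property names are
// assumed to be legitimate schema.org vocabulary.
function suspiciousKeys(entity: Record<string, unknown>): Array<string> {
  return Object.keys(entity).filter(
    (key) => key.startsWith('@') && !RESERVED_KEYS.includes(key),
  )
}
```

This trades completeness for precision: it can no longer catch typos in ordinary property names, but it stops penalizing valid markup for types the rule tables do not exhaustively cover.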
```tsx
const isExternal = resolved.origin !== window.location.origin
if (isExternal) {
  const target = (anchor.getAttribute('target') || '').toLowerCase()
  const relTokens = (anchor.getAttribute('rel') || '')
    .toLowerCase()
    .split(/\s+/)
    .filter(Boolean)

  if (target === '_blank' && !relTokens.includes('noopener')) {
    issues.push({
      severity: 'warning',
      message: 'External _blank link should include rel="noopener".',
    })
  }
```
🧩 Analysis chain

🏁 Script executed:

```shell
cat -n packages/devtools/src/tabs/seo-tab/links-preview.tsx | sed -n '72,85p'
```
🌐 Web query:
HTML rel noopener noreferrer _blank link security semantics
💡 Result:
The HTML attributes rel="noopener" and rel="noreferrer" are used together with target="_blank" on `<a>` (and `<area>`, `<form>`) elements to improve security and privacy when opening links in new tabs or windows. rel="noopener" instructs the browser not to set the Window.opener property on the new browsing context, preventing the target page from accessing or manipulating the original page via JavaScript (e.g., redirecting it to a phishing site); this mitigates the key vulnerability of target="_blank". rel="noreferrer" omits the Referer header (protecting privacy by not leaking the origin URL) and additionally implies noopener behavior (no Window.opener access). Using both together (rel="noopener noreferrer") provides full protection, security from opener attacks plus privacy from referrer leakage, and is a best practice for external/untrusted links. Modern browsers (Chrome 88+, Firefox 52+, Safari since 2016) implicitly apply noopener behavior to target="_blank" without an explicit rel, but setting rel="noopener noreferrer" explicitly ensures compatibility, adds referrer protection, and follows standards. These are link type keywords defined in the HTML Standard (WHATWG/W3C) that specify the relationship between documents; they do not affect SEO (unlike nofollow).

Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/HTML/Link_types/noopener
- 2: https://html.spec.whatwg.org/C/links.html
- 3: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Attributes/rel/noopener
- 4: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Attributes/rel
- 5: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/a
- 6: https://elementor.com/blog/noopener-noreferrer/
- 7: https://respona.com/blog/noopener-noreferrer/
- 8: https://stackoverflow.com/questions/50709625/link-with-target-blank-and-rel-noopener-noreferrer-still-vulnerable
- 9: https://w3things.com/blog/rel-noopener-noreferrer/
Accept links with rel="noreferrer" in the _blank security check.
The current code only accepts rel="noopener", but rel="noreferrer" provides the same opener prevention plus privacy protection (as it implies noopener behavior per HTML spec). Links using only rel="noreferrer" are incorrectly flagged as insecure, creating false positives in the SEO report.
Suggested fix

```diff
-      if (target === '_blank' && !relTokens.includes('noopener')) {
+      if (
+        target === '_blank' &&
+        !relTokens.includes('noopener') &&
+        !relTokens.includes('noreferrer')
+      ) {
         issues.push({
           severity: 'warning',
           message: 'External _blank link should include rel="noopener".',
         })
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/links-preview.tsx` around lines 72 - 85,
The _blank external-link check in links-preview.tsx currently treats only
rel="noopener" as acceptable; update the logic where isExternal is computed (use
of resolved, anchor, relTokens) so that the relTokens check treats either
"noopener" OR "noreferrer" as valid (i.e., do not push the warning if
relTokens.includes('noopener') || relTokens.includes('noreferrer')). Keep the
existing target === '_blank' check and the issues.push call (severity/message)
unchanged, only broaden the accepted rel tokens to include "noreferrer".
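Extracted as a small pure function, the broadened check can be sketched like this (the name `needsOpenerWarning` is hypothetical; the PR keeps this logic inline in the link scanner):

```typescript
// Returns true when an external _blank link lacks opener protection.
// Either token is sufficient: per the HTML spec, `noreferrer` implies
// `noopener` behavior, so it must not trigger the warning.
function needsOpenerWarning(
  target: string | null,
  rel: string | null,
): boolean {
  const relTokens = (rel ?? '').toLowerCase().split(/\s+/).filter(Boolean)
  return (
    (target ?? '').toLowerCase() === '_blank' &&
    !relTokens.includes('noopener') &&
    !relTokens.includes('noreferrer')
  )
}
```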
```markdown
The seo tab contains major tabs that are complement to the inspect elements light house tab and not a replacement for them. It is a replacement for the extensions and simple tools you use to check and discover things by simply digging deeper in the html section, network or other pages in your site.

SEO tabs:

- Social Previews: shows open graph and twitter previews for you page when shared across social media apps.
- SERP Previews: shows you a similar preview of how your page will be displayed in search engine results page.
- JSON-LD Previews: shows you all the json ld detected in the page.
- Heading Structure Visualizer: preview your layout in heading tags.
- Links preview: check all page links and thier details like internal/external, text, ...
- Canonical & URL & if page is indexible and follow
- overview tab for SEO Score / Report: that contains a percentage of how everything is going in the other tabs and a small icon/link that will redirect them to the sepcific tab for more informations and details.
```
Please proofread the overview section.
The intro currently says the tab is both “not a replacement” and “a replacement”, and the bullets have several obvious typos (your, their, specific, information). Cleaning that up will make the new feature read much more polished.
🧰 Tools
🪛 LanguageTool
[style] ~5-~5: You have already used this phrasing in nearby sentences. Consider using an alternative to add variety to your writing.
Context: ...and not a replacement for them. It is a replacement for the extensions and simple tools you use...
(REP_REPLACEMENT)
[grammar] ~9-~9: Ensure spelling is correct
Context: ...ows open graph and twitter previews for you page when shared across social media ap...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~13-~13: Ensure spelling is correct
Context: ...Links preview: check all page links and thier details like internal/external, text, ....
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
[grammar] ~15-~15: Ensure spelling is correct
Context: ...con/link that will redirect them to the sepcific tab for more informations and details. ...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/README.md` around lines 5 - 15, Rewrite
the README intro to remove the contradiction (choose either "complement to" or
"replacement for" — here use "complement to the Inspect Elements / Lighthouse
tabs, not a replacement"), correct typos and grammar across the "SEO tabs"
bullets (e.g., "your" → "your", "thier" → "their", "sepcific" → "specific",
"informations" → "information", "indexible" → "indexable"), standardize bullet
phrasing and capitalization (e.g., "Social Previews", "SERP Previews", "JSON-LD
Previews", "Heading Structure Visualizer", "Links Preview", "Canonical & URL &
indexability"), and make the overview bullet concise and clear about the SEO
score linking to specific tabs for details.
```tsx
const [tick, setTick] = createSignal(0)

useHeadChanges(() => {
  setTick((t) => t + 1)
})

const bundle = createMemo(() => {
  void tick()
  const canonical = getCanonicalPageData()
  const social = getSocialPreviewsSummary()
  const serp = getSerpPreviewSummary()
  const jsonLd = getJsonLdPreviewSummary()
  const headings = getHeadingStructureSummary()
  const links = getLinksPreviewSummary()

  const rows: Array<OverviewRow> = [
    {
      id: 'heading-structure',
      title: 'Heading Structure',
      summary: headings,
    },
    { id: 'links-preview', title: 'Links Preview', summary: links },
    { id: 'social-previews', title: 'Social Previews', summary: social },
    { id: 'serp-preview', title: 'SERP Preview', summary: serp },
    { id: 'json-ld-preview', title: 'JSON-LD Preview', summary: jsonLd },
  ]

  const canonicalSummary: SeoSectionSummary = {
    issues: canonical.issues,
    hint: canonical.indexable ? 'Indexable' : 'Noindex',
  }

  const health = aggregateSeoHealth([
    canonicalSummary,
    social,
    serp,
    jsonLd,
    headings,
    links,
  ])

  return { canonical, rows, health }
})
```
The overview stays stale on URL-only navigations.
useHeadChanges() only invalidates this memo on <head> mutations, but the bundle also reads window.location.href through getCanonicalPageData() and getSerpPreviewSummary(). On SPA route changes that do not touch <head>, the overview keeps showing the previous URL/canonical state until some unrelated head update happens.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/seo-overview.tsx` around lines 124 - 166,
The memo only invalidates on head mutations but also reads window.location.href
via getCanonicalPageData() and getSerpPreviewSummary(), so SPA navigations that
don't touch the head leave the overview stale; add a URL-change trigger that
calls setTick when the location changes (e.g., listen for popstate and override
history.pushState/replaceState to dispatch a custom 'locationchange' event) and
call setTick((t)=>t+1) in that listener (the same signal used by bundle's
createMemo); update the useHeadChanges block or add a new effect that
registers/removes these listeners so createSignal tick, setTick, and the bundle
memo (which reads getCanonicalPageData and getSerpPreviewSummary) are
invalidated on route changes as well.
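The history-patching pattern the prompt suggests might look like the sketch below. It is parameterized over `history`/`window` stand-ins so it can run outside a browser; the names and shapes are assumptions for illustration, not code from this PR (in the devtools it would be called with the real `history` and `window`):

```typescript
type HistoryLike = {
  pushState: (...args: Array<unknown>) => void
  replaceState: (...args: Array<unknown>) => void
}
type WindowLike = {
  dispatchEvent: (event: { type: string }) => void
  addEventListener: (type: string, listener: () => void) => void
}

// Patch pushState/replaceState so URL-only navigations emit a synthetic
// 'locationchange' notification that can invalidate the overview memo
// alongside <head> mutations.
function installLocationChangeEvent(hist: HistoryLike, win: WindowLike): void {
  for (const method of ['pushState', 'replaceState'] as const) {
    const original = hist[method].bind(hist)
    hist[method] = (...args: Array<unknown>) => {
      original(...args)
      // The URL just changed without any <head> mutation; notify listeners.
      win.dispatchEvent({ type: 'locationchange' })
    }
  }
  // Back/forward navigation fires popstate; forward it to the same event.
  win.addEventListener('popstate', () => {
    win.dispatchEvent({ type: 'locationchange' })
  })
}
```

A `locationchange` listener would then call `setTick((t) => t + 1)` exactly as the head-mutation callback does, and the patch should be undone when the tab unmounts.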
```tsx
<p class={styles().seoOverviewCheckListCaption}>
  Ring score matches overall SEO math (errors / warnings / info). Counts
  on the right are raw totals for that panel.
</p>
<div class={styles().seoOverviewCheckList}>
  <For each={bundle().rows}>
    {(row) => {
      const sev = worstSeverity(row.summary.issues)
      const c = countBySeverity(row.summary.issues)
      const subsectionScore = sectionHealthScore(row.summary.issues)
      const totalIssues =
        row.summary.issueCount ?? row.summary.issues.length
      const cappedSuffix =
        row.summary.issueCount != null &&
        row.summary.issueCount > row.summary.issues.length
          ? ` (${row.summary.issues.length} of ${row.summary.issueCount} listed)`
          : ''
      const issueLine =
        totalIssues === 0
          ? 'No issues'
          : `${totalIssues} issue${totalIssues === 1 ? '' : 's'}${cappedSuffix}`
      const metaLine = row.summary.hint
        ? `${row.summary.hint} · ${issueLine}`
        : issueLine
      const ariaBits = [
        `${row.title}. Score ${Math.round(subsectionScore)} out of 100.`,
        sectionStatusPhrase(sev) + '.',
        metaLine,
        `${c.error} errors, ${c.warning} warnings, ${c.info} info.`,
```
Capped summaries will understate both score and severity counts.
When a section sets issueCount, this code only uses it in the label text. The ring score, per-severity totals, and ARIA label still come from the truncated row.summary.issues, so a heavily capped panel can look much healthier than it really is.
```ts
export type SeoSectionSummary = {
  issues: Array<SeoIssue>
  hint?: string
  /** When `issues` is capped, total issues before capping. */
  issueCount?: number
```
Capped sections lose health accuracy with the current summary shape.
issueCount tells us how many findings were hidden, but not their severities, and both scoring helpers still compute penalties from the truncated issues array. getLinksPreviewSummary() already caps at 32, so subsection/overall health and severity counts drift on pages with lots of findings.
Also applies to: 50-77
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/devtools/src/tabs/seo-tab/seo-section-summary.ts` around lines 13 -
17, The current SeoSectionSummary type (properties issues and issueCount) hides
truncated findings' severities so scoring helpers still compute penalties from
the capped issues array; update SeoSectionSummary to include full severity
totals (e.g., severityCounts or totalsBySeverity) alongside issues and
issueCount, change getLinksPreviewSummary() (and the other affected summary
producers) to populate those totals from the uncapped analysis before capping
the issues array, and modify scoring helpers to use the new severity totals
rather than relying only on the truncated issues array to compute health and
severity counts.
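A summary shape that keeps scoring accurate under capping could be sketched like this. The type name `CappedSummary`, the field `severityCounts`, and the helper `summarize` are all hypothetical illustrations of the prompt's suggestion, not the PR's actual API: the severity totals are computed from the uncapped findings, and only the display list is truncated.

```typescript
type SeoSeverity = 'error' | 'warning' | 'info'
type SeoIssue = { severity: SeoSeverity; message: string }

// Hypothetical summary shape: totals survive capping.
type CappedSummary = {
  issues: Array<SeoIssue>           // truncated for display only
  issueCount: number                // total findings before capping
  severityCounts: Record<SeoSeverity, number> // computed pre-cap
}

function summarize(all: Array<SeoIssue>, cap = 32): CappedSummary {
  const severityCounts: Record<SeoSeverity, number> = {
    error: 0,
    warning: 0,
    info: 0,
  }
  // Tally severities over the *uncapped* list so score math stays honest.
  for (const issue of all) severityCounts[issue.severity] += 1
  return {
    issues: all.slice(0, cap),
    issueCount: all.length,
    severityCounts,
  }
}
```

Scoring helpers would then take `severityCounts` (falling back to counting `issues` for uncapped sections), so the ring score, per-severity totals, and ARIA label no longer drift on pages with many findings.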
🎯 Changes
Introduced new tabs in the SEO section:
(Sorry for the low quality video but GitHub didn't let me upload the high quality one)
SEO-tab-pr.mp4
✅ Checklist
pnpm test:pr.
🚀 Release Impact