- Broken Link Finder: Site crawls reveal 404 errors and server issues with exportable source URLs.
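A minimal sketch of how broken-link reporting with exportable source URLs might look, assuming crawl results arrive as hypothetical `(source_url, target_url, status_code)` tuples:

```python
from collections import defaultdict

def group_broken_links(crawl_results):
    """Map each broken target URL to the source pages that link to it.

    crawl_results: iterable of (source_url, target_url, status_code) tuples
    (a hypothetical shape for illustration). Flags 404s and 5xx server errors.
    """
    broken = defaultdict(list)
    for source, target, status in crawl_results:
        if status == 404 or status >= 500:
            broken[target].append(source)
    return dict(broken)
```

Grouping by target rather than by source is what makes the export useful: each dead URL comes with every page that needs fixing.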
- Redirect Audit: Detection of 301s, 302s, loops, and chains. Essential for clean SEO and smooth migrations.
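Chain and loop detection can be sketched as following each URL through a redirect map until it stops, loops back on itself, or exceeds a hop limit (the map shape here is a hypothetical `{url: target}` dict):

```python
def trace_redirects(redirect_map, start, max_hops=10):
    """Follow a URL through {url: redirect_target} and return (chain, is_loop).

    A chain longer than two entries is a redirect chain worth flattening;
    is_loop=True means the path revisits a URL it has already seen.
    """
    chain = [start]
    seen = {start}
    url = start
    while url in redirect_map and len(chain) <= max_hops:
        url = redirect_map[url]
        if url in seen:
            return chain + [url], True  # loop detected
        chain.append(url)
        seen.add(url)
    return chain, False
```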
- Page Title & Metadata Analysis: Titles and meta descriptions flagged when too long, too short, missing, or duplicated.
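The flagging logic amounts to length bounds plus a duplicate count. A sketch, assuming titles are available as a `{url: title_or_None}` dict and using illustrative length thresholds:

```python
from collections import Counter

def flag_titles(titles, min_len=30, max_len=60):
    """titles: {url: title or None}. Returns {url: [issues]} for pages
    whose title is missing, too short, too long, or duplicated.
    Thresholds are illustrative, not a fixed SEO rule."""
    counts = Counter(t for t in titles.values() if t)
    report = {}
    for url, title in titles.items():
        issues = []
        if not title:
            issues.append("missing")
        else:
            if len(title) < min_len:
                issues.append("too short")
            if len(title) > max_len:
                issues.append("too long")
            if counts[title] > 1:
                issues.append("duplicate")
        if issues:
            report[url] = issues
    return report
```

The same pattern applies unchanged to meta descriptions, just with wider length bounds.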
- Duplicate Content Discovery: Exact duplicates, partial duplicates, and thin pages identified as SEO risks.
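Exact-duplicate and thin-page detection can be sketched by hashing whitespace-normalized body text and counting words (the `{url: text}` input shape and the 200-word threshold are assumptions for illustration; partial duplicates need fuzzier techniques such as shingling, not shown here):

```python
import hashlib

def find_duplicates(pages, thin_word_count=200):
    """pages: {url: body_text}. Returns (duplicate_groups, thin_pages).

    Pages whose normalized text hashes identically are grouped as exact
    duplicates; pages under thin_word_count words are flagged as thin.
    """
    by_hash = {}
    thin = []
    for url, text in pages.items():
        words = text.split()
        digest = hashlib.md5(" ".join(words).encode()).hexdigest()
        by_hash.setdefault(digest, []).append(url)
        if len(words) < thin_word_count:
            thin.append(url)
    duplicate_groups = [urls for urls in by_hash.values() if len(urls) > 1]
    return duplicate_groups, thin
```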
- Data Extraction with XPath: Custom data such as meta tags or SKUs pulled with CSS Path, XPath, or regex.
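To make the XPath-vs-regex distinction concrete, here is a sketch using the standard library's limited XPath subset (`xml.etree`, which only handles well-formed markup; a real crawler would use a full HTML parser) alongside a regex pull with a hypothetical SKU pattern:

```python
import re
import xml.etree.ElementTree as ET

def extract_meta(markup, name):
    """Pull a <meta name=...> content attribute via an XPath predicate.
    Requires well-formed markup, a simplification for this sketch."""
    root = ET.fromstring(markup)
    node = root.find(f".//meta[@name='{name}']")
    return node.get("content") if node is not None else None

def extract_skus(text, pattern=r"SKU-\d+"):
    """Regex-based extraction; the SKU-#### pattern is a made-up example."""
    return re.findall(pattern, text)
```

Structured lookups (meta tags, prices, headings) suit XPath or CSS paths; free-text patterns like product codes are where regex earns its place.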
- Robots & Directive Review: Blocked URLs, canonicals, and crawl directives displayed for correct indexing.
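The robots side of this review can be sketched with the standard library's `urllib.robotparser`, evaluating example rules against candidate URLs (the rules and URLs below are hypothetical):

```python
from urllib import robotparser

def check_directives(robots_lines, urls, agent="*"):
    """Evaluate which URLs a robots.txt body allows for a given user agent.

    robots_lines: the robots.txt content as a list of lines.
    Returns {url: allowed_bool}.
    """
    rp = robotparser.RobotFileParser()
    rp.parse(robots_lines)
    return {url: rp.can_fetch(agent, url) for url in urls}
```

Canonical and meta-robots directives live in page markup rather than robots.txt, so a full audit pairs this check with on-page extraction.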
- XML Sitemap Generator: Customizable XML and image sitemaps produced for better indexing.
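A minimal sitemap is just a `urlset` of `url`/`loc` entries in the sitemaps.org namespace; a sketch of generating one from a URL list:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Produce a minimal XML sitemap string for the given URLs.

    Real sitemaps can also carry lastmod, changefreq, and image entries;
    this sketch emits only the required loc elements.
    """
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = loc
    return ET.tostring(urlset, encoding="unicode")
```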
- Integrations: Connections to Google Analytics, Search Console, and PageSpeed Insights add performance and user data to crawls.
- JavaScript Crawl Support: Full rendering of Angular, React, and other JS-heavy sites ensures complete content capture.
- Site Architecture Visualization: Crawl diagrams and tree graphs expose linking patterns and overall structure.
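The data behind a crawl tree is a breadth-first traversal of the internal link graph: each page's depth is its click distance from the start URL. A sketch, assuming links are available as a hypothetical `{url: [outlinks]}` dict:

```python
from collections import deque

def crawl_depths(links, root):
    """links: {url: [outlink, ...]}. BFS click depth from root.

    The resulting {url: depth} map is the skeleton of a crawl-tree
    visualization; deep pages (high depth) signal weak internal linking.
    """
    depth = {root: 0}
    queue = deque([root])
    while queue:
        url = queue.popleft()
        for target in links.get(url, []):
            if target not in depth:
                depth[target] = depth[url] + 1
                queue.append(target)
    return depth
```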
- Audit Scheduling: Crawls automated at chosen times with exports to Google Sheets and other tools.
- Crawl & Staging Comparison: Changes highlighted between crawls or between staging and production environments.
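Comparing two crawls reduces to set arithmetic over URL snapshots. A sketch, assuming each snapshot is a hypothetical `{url: status_code}` dict (real comparisons would also diff titles, directives, and content hashes):

```python
def compare_crawls(old, new):
    """old/new: {url: status_code}. Report added, removed, and changed URLs."""
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(u for u in set(old) & set(new) if old[u] != new[u])
    return {"added": added, "removed": removed, "changed": changed}
```

Run the same diff between a staging crawl and production to catch unintended changes before a migration goes live.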
- Structured Data Validation: Schema.org structured data extracted and validated for SERP enhancements.
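Schema.org markup most often ships as JSON-LD script blocks, so the extraction step can be sketched with a regex pull plus a JSON parse (a simplification; microdata and RDFa need proper DOM parsing, and full validation checks required properties per type):

```python
import json
import re

def extract_json_ld(html):
    """Pull and parse application/ld+json blocks from page markup.

    Regex-based extraction is a sketch-level shortcut; a production
    validator would parse the DOM and verify required schema.org fields.
    """
    blocks = re.findall(
        r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
        html,
        re.DOTALL,
    )
    return [json.loads(block) for block in blocks]
```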
- Spelling & Grammar Review: Site text scanned for errors across 25+ languages to improve professionalism.