The author recently deployed a self-developed AI SEO agent on a real website, naija-vpn.com, and it uncovered critical issues that weeks of manual audits had missed. The agent went far beyond standard on-page checks, identifying problems such as a mismatched title producing a zero click-through rate (CTR) on a well-ranking page. For instance, a page about "does twitch pay nigerians" held an average position of 9.5 in Google Search Console with 29 impressions but 0 clicks, largely because the title Google displayed didn't match the user's query. This highlighted a core challenge: human content creators know what they intended, but an AI agent objectively sees what's actually published and how it performs.
The agent, developed over several months, evolved beyond a basic on-page auditor that merely checked title length, meta descriptions, H1 matches, keyword density, Core Web Vitals, and schema markup. While useful, that initial version only reported what was present, not what was actively holding rankings back. The current iteration adds four intelligent modules designed to pinpoint performance bottlenecks (minimal sketches of each follow the list):
- Backlink Qualifier: This module evaluates referring domains on niche relevance (0–100), traffic quality (0–100), and spam score (0–100, which counts against the total). It computes a weighted score (niche * 0.50 + traffic * 0.30 - spam * 0.20) and sorts backlinks into tiers: Insert Worthy (≥80), Good (≥60), Review (≥40), and Avoid (<40). It fetches each backlink URL via a real browser and sends summaries to Claude Haiku for scoring. The process is resumable, caching state to a flat JSON file.
- GSC Insights: By parsing Google Search Console export CSVs, this module deterministically flags "quick wins" (pages ranking in positions 4–20 with ≥50 impressions and <5% CTR). It then sends the top 50 relevant rows to Claude Haiku for deeper analysis, specifically detecting content cannibalization and identifying cluster gaps, which are difficult to spot by manually sorting a spreadsheet.
- Relevance Scorer: Given a target page and a list of candidate pages, this module scores each candidate as a potential internal link source based on topical alignment (0–100), anchor opportunity (0–100), and link equity (0–100). Using a similar weighted approach, it categorizes potential links as Strong Link (≥75), Good Link (≥55), Weak Link (≥35), or Skip (<35). Crucially, it first checks if a candidate already links to the target, preventing redundant recommendations.
- Cluster Audit: This module constructs a comprehensive internal link graph across the entire site, calculates incoming link counts per page, and then leverages Claude Haiku to map topic clusters, identify orphan pages, flag missing hub pages, and suggest beneficial cross-cluster internal links.
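The Backlink Qualifier's weighted scoring and tiering reduce to a few lines. A minimal Python sketch follows; the 0.50/0.30/0.20 weights and the tier cutoffs are taken from the description above, but the function and field names are illustrative, not the author's actual code.

```python
from dataclasses import dataclass

# Weights and tier cutoffs as described above; names are illustrative.
NICHE_WEIGHT, TRAFFIC_WEIGHT, SPAM_WEIGHT = 0.50, 0.30, 0.20

@dataclass
class BacklinkScores:
    niche_relevance: float   # 0-100, from Claude Haiku
    traffic_quality: float   # 0-100
    spam_score: float        # 0-100, higher = spammier, subtracted below

def qualify_backlink(s: BacklinkScores) -> tuple[float, str]:
    """Combine the three sub-scores and map the result to a tier."""
    score = (s.niche_relevance * NICHE_WEIGHT
             + s.traffic_quality * TRAFFIC_WEIGHT
             - s.spam_score * SPAM_WEIGHT)
    if score >= 80:
        tier = "Insert Worthy"
    elif score >= 60:
        tier = "Good"
    elif score >= 40:
        tier = "Review"
    else:
        tier = "Avoid"
    return score, tier

# Example: a highly relevant, mostly clean referring domain
print(qualify_backlink(BacklinkScores(90, 75, 10)))  # (65.5, 'Good')
```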
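The deterministic "quick win" rule in GSC Insights is likewise just a filter over the export. This sketch assumes standard GSC performance-export column headers (Position, Impressions, CTR), which vary by export type, so the names may need adjusting for a real file.

```python
import csv

def find_quick_wins(gsc_csv_path: str) -> list[dict]:
    """Flag 'quick win' rows from a GSC performance export:
    average position 4-20, at least 50 impressions, CTR below 5%."""
    wins = []
    with open(gsc_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            position = float(row["Position"])
            impressions = int(row["Impressions"].replace(",", ""))
            ctr = float(row["CTR"].rstrip("%"))  # exports often format CTR as "3.2%"
            if 4 <= position <= 20 and impressions >= 50 and ctr < 5:
                wins.append(row)
    return wins
```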
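The Relevance Scorer follows the same weighted pattern, with the already-linked check applied first. In the sketch below the 0.5/0.3/0.2 weights are placeholders (only the tier thresholds are given above), and the substring check stands in for a proper anchor-tag parse.

```python
def categorize_link_candidate(source_html: str, target_url: str,
                              topical: float, anchor: float, equity: float) -> str:
    """Tier a candidate page as an internal-link source for target_url."""
    if target_url in source_html:   # already links to the target: nothing to recommend
        return "Already Linked"
    score = topical * 0.5 + anchor * 0.3 + equity * 0.2   # placeholder weights
    if score >= 75:
        return "Strong Link"
    if score >= 55:
        return "Good Link"
    if score >= 35:
        return "Weak Link"
    return "Skip"
```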
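The deterministic half of the Cluster Audit, building the internal link graph, counting incoming links, and surfacing orphan pages, could look like the following; cluster mapping, hub detection, and cross-cluster suggestions are then handed to the LLM. The structure here is an assumption, not the author's implementation.

```python
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

def build_link_graph(pages: dict[str, str]) -> dict[str, set[str]]:
    """pages maps URL -> HTML; returns internal outgoing links per page."""
    graph = {}
    for url, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        links = {urljoin(url, h) for h in parser.hrefs}
        graph[url] = {l for l in links if l in pages and l != url}
    return graph

def incoming_counts(graph: dict[str, set[str]]) -> dict[str, int]:
    counts = defaultdict(int, {url: 0 for url in graph})
    for targets in graph.values():
        for t in targets:
            counts[t] += 1
    return dict(counts)

def orphan_pages(graph: dict[str, set[str]]) -> list[str]:
    """Pages with no incoming internal links."""
    return [url for url, n in incoming_counts(graph).items() if n == 0]
```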
All four modules use Claude Haiku for cost efficiency, store resumable state in flat JSON files, and write markdown reports into the project directory. The agent's code, including prompts and module definitions, is publicly available on GitHub. The author audited naija-vpn.com, a site serving Nigerian creators who need international payment options; the audit covered pages such as the homepage, the Cleva vs Geegpay comparison, the Twitch payments Nigeria guide, the Carter Efe case study, and the Data Saver app landing page. The agent's findings on these pages validated its advanced analytical capabilities.
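The resumable flat-JSON state is simple to picture: score items one at a time and persist after each, so an interrupted run picks up where it left off. The filename and dictionary layout below are hypothetical, not taken from the repository.

```python
import json
import os

STATE_FILE = "qualifier_state.json"   # hypothetical filename

def load_state() -> dict:
    """Resume from the flat JSON state file if a previous run left one behind."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE, encoding="utf-8") as f:
            return json.load(f)
    return {"scored": {}}

def save_state(state: dict) -> None:
    with open(STATE_FILE, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2)

def score_all(urls: list[str], score_fn) -> dict:
    state = load_state()
    for url in urls:
        if url in state["scored"]:
            continue                      # already scored in an earlier run
        state["scored"][url] = score_fn(url)
        save_state(state)                 # persist after every URL so a crash loses nothing
    return state
```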