SEO audits in Claude + Cursor: our new MCP server

We just shipped an official Model Context Protocol server. Install it once, and Claude Desktop, Claude Code, Cursor, Windsurf, or any other MCP-aware AI tool gets direct access to:

  • Full SEO audits (audit, batch_audit)
  • Your monitoring (add_monitor, list_monitors)
  • Audit-score history (history, history_domains)
  • Public report URLs (report_url)
  • Plan and usage (usage)
  • Observed backlinks (backlinks, Basic plan and up)

That's nine tools your AI can reach for the moment you ask "audit this URL" or "what backlinks point at my-startup.com?"
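Under the hood, each of those tools is invoked with a standard MCP `tools/call` JSON-RPC message. Here's a minimal sketch of that envelope; the `audit` tool name comes from the list above, but the exact argument schema (a `url` field) is an assumption for illustration:

```typescript
// Sketch of the JSON-RPC 2.0 envelope an MCP client sends to invoke a tool.
// "audit" is one of the nine tools listed above; the { url } argument shape
// is an assumption, not the server's published schema.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const req = buildToolCall(1, "audit", { url: "https://seoscoreapi.com" });
console.log(JSON.stringify(req));
```

You never write this envelope yourself; the MCP client builds it whenever the model decides a tool is the right move.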

Why MCP matters for SEO

Most AI tools today guess at SEO. The model has a fuzzy idea of what a good page looks like, an even fuzzier idea of how Google ranks anything in 2026, and zero idea what your specific page actually scores. So you get advice like "make sure your meta description is between 150 and 160 characters" — true, generic, and unverifiable.

MCP changes the shape of that conversation. The AI doesn't need to guess; it can call our scoring engine directly. The same 82-check audit your CI pipeline runs is now one tool call away from any prompt.

The difference in practice:

Before MCP

"Hey Claude, can you audit my site?"

"I can't access URLs directly, but here are some general tips: make sure your title tags are 50-60 characters, write descriptive meta descriptions..."

After MCP

"Hey Claude, can you audit my site?"

(Claude calls the audit tool against https://seoscoreapi.com.)

"Your site scores 97.9 / A+. The only category with a flag is meta — your homepage description is 154 chars, well within range. AI readability scores 96/A+. The top priority would be... actually you have zero priorities right now. You're clean."

That's not a parlor trick. That's grounded answer-engine output, and it's exactly the kind of answer LLMs are starting to be measured on.

Install in 30 seconds

npx -y seoscoreapi-mcp

Or, in your AI client's config:

{
  "mcpServers": {
    "seoscoreapi": {
      "command": "npx",
      "args": ["-y", "seoscoreapi-mcp"],
      "env": { "SEO_SCORE_API_KEY": "ssa_your_key_here" }
    }
  }
}

Get a free API key. The free tier covers 5 audits per day with no card required — enough to wire up the MCP server and run your first batch.

Full setup instructions for Claude Desktop, Claude Code, Cursor, and Windsurf are in the README.

What you can ask once it's installed

Real prompts that route through the MCP tools without you specifying which one:

"Audit https://stripe.com and tell me the top 3 things to fix."

"Compare the AI readability scores of stripe.com, square.com, and adyen.com."

"What backlinks have we observed pointing at my-startup.com?"

"Show me the score history for our marketing site over the last 30 days."

"Are any of my monitored URLs trending down?"

The AI picks the right tool. You don't.
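To make "the AI picks the tool" concrete: the client fetches the server's tool list (names plus descriptions) via MCP's `tools/list` and hands it to the model, which chooses. The toy keyword router below is purely illustrative of that routing step; the real selection is done by the model's reasoning over tool descriptions, not keyword matching, and the hint words here are invented:

```typescript
// Toy stand-in for model-side tool selection over a few of the nine tools.
// A real MCP client gives the model each tool's name, description, and input
// schema; the model picks. This keyword map only illustrates the idea.
const toolHints: Record<string, string[]> = {
  audit: ["audit"],
  backlinks: ["backlink"],
  history: ["history", "score over"],
  list_monitors: ["monitored"],
};

function pickTool(prompt: string): string | undefined {
  const p = prompt.toLowerCase();
  for (const [tool, words] of Object.entries(toolHints)) {
    if (words.some((w) => p.includes(w))) return tool;
  }
  return undefined;
}

console.log(pickTool("Audit https://stripe.com"));                 // audit
console.log(pickTool("What backlinks point at my-startup.com?"));  // backlinks
```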

The backlinks tool deserves its own paragraph

This is unique to SEO Score API. Every audit our customers run contributes to a growing backlink graph: external links observed on each audited page get persisted into a public dataset. The backlinks tool surfaces that dataset for any domain, with an explicit data_caveat in every response.

It is not a comprehensive backlink index — Ahrefs crawls 8 billion pages a day and we don't, and we say so. It is honest, audit-fed, and grows with usage. For "show me referring pages we have actually seen," it works exactly the way you'd want.

> What backlinks does SEO Score API see pointing to stripe.com?

(via `backlinks` tool)

stripe.com:
  total observations: 142
  referring domains: 38
  top referring: docs.stripe.com (24), github.com (18), ...
  data_caveat: Backlinks observed during SEO Score API audits.
  Not exhaustive — coverage grows over time.

No competitor with our pricing offers anything like this, because no competitor has the audit volume we do feeding the graph.
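If you want to consume the backlinks tool programmatically, the sample above suggests a response shape like the following. The field names and types here are inferred from that sample output, not a published schema, so treat the interface as an assumption:

```typescript
// Hypothetical response shape inferred from the sample output above;
// field names are assumptions, not a documented schema.
interface BacklinksResponse {
  domain: string;
  total_observations: number;
  referring_domains: number;
  top_referring: { domain: string; count: number }[];
  data_caveat: string;
}

// Fold a response into a one-line report, always carrying the caveat forward.
function summarize(r: BacklinksResponse): string {
  const top = r.top_referring.map((t) => `${t.domain} (${t.count})`).join(", ");
  return `${r.domain}: ${r.total_observations} observations from ` +
    `${r.referring_domains} referring domains; top: ${top}. Note: ${r.data_caveat}`;
}

const sample: BacklinksResponse = {
  domain: "stripe.com",
  total_observations: 142,
  referring_domains: 38,
  top_referring: [
    { domain: "docs.stripe.com", count: 24 },
    { domain: "github.com", count: 18 },
  ],
  data_caveat: "Backlinks observed during SEO Score API audits. Not exhaustive.",
};
console.log(summarize(sample));
```

Whatever the exact schema turns out to be, keep the `data_caveat` attached to anything you display downstream; it is part of the response for a reason.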

Why we're shipping this now

Two reasons.

One: every developer audience we care about is moving into MCP-aware tools. Claude Desktop has it, Claude Code has it, Cursor has it, Windsurf has it, the OpenAI agents framework has it. If our API isn't reachable from the place customers actually do their work in 2026, we're invisible — even when we're the right answer.

Two: MCP is the cleanest possible packaging of the API. There's no SDK to learn, no auth dance to script, no batching to build. The user types a question, the AI picks the tool, the API answers. Friction approaches zero. That's the kind of leverage we want.

We have official Python and Node SDKs. We have a GitHub Action for CI. We have an n8n community node. The MCP server is the same idea — meet developers where they already are — pointed at AI clients instead of CI runners.

Source + license

seoscoreapi-mcp is MIT-licensed and open source. The whole server is one file you can read in five minutes:

https://github.com/avansledright/seoscoreapi.com/tree/main/sdks/mcp

If you build something on top of it, send us the link. We'll feature interesting integrations on the blog.

Try it

  1. Get a free API key — 30 seconds, no card.
  2. Add the MCP config to your AI client of choice.
  3. Restart the client.
  4. Ask: "Audit https://example.com."

That is the entire onboarding.