Comparing Stripe, Square, and Adyen with one API call
Every agency conversation eventually arrives at the same question: "how does my site compare to the competition?" The pre-API workflow was painful. Open Lighthouse three times. Copy three score numbers into a spreadsheet. Eyeball categories. Forget which one was the control. Generate a slide. Repeat next quarter.
We just shipped a /compare endpoint that returns all of that in one POST. Here's what it looks like with three real, recognizable brands: Stripe, Square, and Adyen — three payments companies that every fintech-adjacent founder has bookmarked.
The call
curl -X POST https://seoscoreapi.com/compare \
  -H "X-API-Key: ssa_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"urls": ["https://stripe.com", "https://square.com", "https://adyen.com"]}'
About three seconds later you get a structured response back. The response has two parts — the per-URL results (each with score, grade, category breakdown, AI readability, top priorities) and a diff object that does the agency-deck math for you.
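If you'd rather make the call from code than from a terminal, here's a minimal Python sketch using the requests library. It assumes only what the curl example above shows: the /compare endpoint, the X-API-Key header, and a JSON body with a urls list; the key value is a placeholder.

import requests

API_KEY = "ssa_your_key_here"  # placeholder key from the example above

# Same request as the curl call: one POST, up to five URLs per call.
resp = requests.post(
    "https://seoscoreapi.com/compare",
    headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
    json={"urls": ["https://stripe.com", "https://square.com", "https://adyen.com"]},
    timeout=30,
)
resp.raise_for_status()
comparison = resp.json()  # the structure shown (abridged) in the next section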
The response, abridged
{
  "count": 3,
  "compared": 3,
  "results": [
    { "url": "https://stripe.com", "score": 78.2, "grade": "B", "ai_readability": 64,
      "categories": { "meta": 76.5, "technical": 91.0, "social": 100,
                      "performance": 45.0, "accessibility": 73.7 } },
    { "url": "https://square.com", "score": 81.4, "grade": "A-", "ai_readability": 71,
      "categories": { ... } },
    { "url": "https://adyen.com", "score": 73.1, "grade": "B-", "ai_readability": 58,
      "categories": { ... } }
  ],
  "diff": {
    "overall_leader": { "url": "https://square.com", "score": 81.4 },
    "ai_readability_leader": { "url": "https://square.com", "score": 71 },
    "category_leaders": {
      "accessibility": { "leader": "https://stripe.com", "score": 73.7 },
      "performance": { "leader": "https://adyen.com", "score": 88.0 },
      "social": { "leader": "https://stripe.com", "score": 100 },
      "technical": { "leader": "https://stripe.com", "score": 91.0 },
      "meta": { "leader": "https://square.com", "score": 84.0 }
    },
    "pairs": [
      { "a": "https://stripe.com", "b": "https://square.com",
        "overall_gap": -3.2,
        "category_diffs": [
          { "category": "performance", "winner": "https://square.com", "gap": 17.0 },
          { "category": "accessibility", "winner": "https://stripe.com", "gap": 12.4 }
        ]
      },
      { "a": "https://stripe.com", "b": "https://adyen.com",
        "overall_gap": 5.1,
        "category_diffs": [ ... ]
      },
      { "a": "https://square.com", "b": "https://adyen.com",
        "overall_gap": 8.3,
        "category_diffs": [ ... ]
      }
    ]
  }
}
(Numbers above are illustrative. Your call gets the exact live values — these change as the sites update.)
How to read the diff
Three things matter for the agency conversation:
1. The overall leader. That's the one slide. "Square leads the category at 81.4." You don't need to explain the score to anyone — bigger is better, A is great, and the brand recognition does the rest.
2. The category leaders. This is where the conversation gets interesting. In our example: Stripe owns accessibility, social, and technical. Square owns meta and overall AI readability. Adyen quietly owns performance — which is genuinely surprising and the kind of insight that sells the audit. "Did you know Adyen's site loads 40% faster than Stripe's?" That's a sales hook.
3. The pairs. For two-way comparisons (you vs. competitor, old build vs. new build, prod vs. staging), diff.pairs[0].category_diffs is the punch list. Each entry says exactly which category one URL wins on and by how many points. Sort by gap, take the top three, and you have a presentation (see the sketch below).
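To make that concrete, here's a short Python sketch that pulls those three things straight out of the diff object. It works against the abridged response shape shown earlier (overall_leader, category_leaders, pairs[*].category_diffs); comparison is the parsed JSON from the earlier sketch.

diff = comparison["diff"]  # comparison is the parsed /compare response

# 1. The overall leader: the one-slide number.
leader = diff["overall_leader"]
print(f"Overall leader: {leader['url']} at {leader['score']}")

# 2. The category leaders: one line per category.
for category, info in diff["category_leaders"].items():
    print(f"{category}: {info['leader']} ({info['score']})")

# 3. The pairs: sort the first pair's category diffs by gap, keep the top three.
punch_list = sorted(
    diff["pairs"][0]["category_diffs"],
    key=lambda d: d["gap"],
    reverse=True,
)[:3]
for item in punch_list:
    print(f"{item['category']}: {item['winner']} wins by {item['gap']} points")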
What this replaces
Before the /compare endpoint shipped, we actually watched agencies do this:
- Open three browser tabs.
- Run Lighthouse three times. Copy the four numbers from each into a Google Sheet.
- Wait for someone to ask "what's the AI readability score?" and realize Lighthouse doesn't have one.
- Open three more tabs to compute meta-tag length manually.
- Format the spreadsheet for the deck.
- Ship the slide. Find a typo three minutes after sending.
The single curl above replaces all of that. With the Python or Node SDK it's a one-liner. With our MCP server installed in Claude or Cursor, you literally just ask: "compare stripe.com, square.com, and adyen.com on SEO and tell me who wins each category."
A handful of real use cases
We've seen /compare used for:
- Agency new-business pitches. Pull a prospect's site against two of their direct competitors. Lead with whichever category they're behind on. Close with a price.
- Pre-deploy regression checks. Run compare(prod_url, staging_url) in CI and block the merge if the overall_gap shows staging trailing prod (a sketch follows this list).
- Quarterly competitive reports. Run the same set of competitors monthly, log to a spreadsheet, surface the trend.
- Investor diligence. "Our site outscores all three of our publicly-traded competitors on AI readability" is the kind of one-liner that lands in a deck.
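For the pre-deploy check in the list above, the whole gate fits in a short script. This is a sketch, not SDK code: the compare() helper just wraps the same /compare POST, the URLs and key are placeholders, and it assumes the overall_gap convention from the abridged response (first URL's score minus the second's), so with prod listed first a positive gap means staging has regressed.

import sys
import requests

def compare(url_a: str, url_b: str, api_key: str) -> dict:
    # Hypothetical helper wrapping the /compare POST shown earlier.
    resp = requests.post(
        "https://seoscoreapi.com/compare",
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        json={"urls": [url_a, url_b]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

result = compare("https://example.com", "https://staging.example.com", "ssa_your_key_here")

# overall_gap = score(a) - score(b); with prod first, a positive gap means
# the staging build scores lower than prod, so fail the CI step.
gap = result["diff"]["pairs"][0]["overall_gap"]
if gap > 0:
    print(f"SEO regression: staging trails prod by {gap} points")
    sys.exit(1)
print(f"No regression: overall_gap is {gap}")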
Pricing
/compare is on the Basic plan ($15/mo). One call audits up to 5 URLs. The free tier doesn't include it because the per-call cost is meaningful — five full audits per request — but $15/mo gets you 1,000 of these per month, more than enough for any agency or competitive-intelligence flow.
Try it
Get a free API key → upgrade to Basic when you want /compare → start pasting comparisons into your decks.
Or, the lazy path: install the SEO Score API MCP server and ask your AI assistant in plain English. It picks the compare tool itself.
You: "Compare stripe.com, square.com, and adyen.com on SEO. Tell me
who wins each category and where each one is weakest."
Claude: (calls the compare tool)
"Square leads overall at 81.4. Adyen's surprise win is performance.
Stripe's weakest spot is the accessibility audit — alt-text coverage..."
That conversation took eight seconds. The deck slide is a copy/paste away.