If you run an SEO retainer, the worst thing a client can email you is "hey, we noticed our rankings dropped — what happened?" Especially when the answer turns out to be a deploy from three weeks ago that quietly stripped the canonical tag from a template.

This guide walks through how to set up a monitoring system that catches those changes the day they happen, alerts your team in Slack, and produces a per-client SLA report you can attach to the next monthly invoice.

We'll use a real-world example: a fictional 8-person agency, Northbeam Digital, with 14 retainer clients on the Pro plan. The whole setup takes about 45 minutes the first time and ~5 minutes per new client after that.

The Setup

The core building blocks:

  1. One monitor per critical client URL — homepage, top 3 landing pages, and the highest-converting product/service page.
  2. A daily cron job that reads each client's /history/domains and posts a digest to a #seo-alerts Slack channel.
  3. A monthly script that pulls each client's /history and renders a one-page SLA report.

You need a Pro plan or higher (covers up to 25 monitors) and one API key. Most agencies use a single shared key — if you want isolation per client, see "One key per client" at the end.

Step 1: Pick the URLs that actually matter

Don't monitor the entire site. Monitor the URLs whose score drops you'd actually act on.

For a typical retainer client, that's:

  • Homepage — anchors brand search, usually the highest-traffic URL.
  • Top 3 landing pages — pull from Google Search Console, top by impressions over the last 90 days.
  • Highest-revenue page — for ecommerce, the top product. For lead-gen, the main service page.

Five URLs per client. With 14 clients, that's 70 monitors, which means splitting across keys: three Pro keys at 25 monitors each, or two Ultra keys at 50 each.
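
The Search Console pull is scriptable too. Here's a sketch using google-api-python-client with a service-account credential — the property URL and credential filename are placeholders, and since the Search Analytics API orders rows by clicks by default, we re-sort by impressions:

from datetime import date, timedelta

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "gsc-service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

# Top pages over the last 90 days for one client property.
resp = gsc.searchanalytics().query(
    siteUrl="https://acmeplumbing.com/",
    body={
        "startDate": str(date.today() - timedelta(days=90)),
        "endDate": str(date.today()),
        "dimensions": ["page"],
        "rowLimit": 100,
    },
).execute()

# The API sorts by clicks; re-rank by impressions and keep the top 3.
rows = sorted(resp.get("rows", []), key=lambda r: r["impressions"], reverse=True)
top_landing_pages = [r["keys"][0] for r in rows[:3]]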

Step 2: Add monitors via the API

Once you've got your URL list, monitors take one POST per URL.

import os
from seoscoreapi import add_monitor

API_KEY = os.environ["SEOSCORE_API_KEY"]

CLIENTS = {
    "Acme Plumbing": [
        "https://acmeplumbing.com",
        "https://acmeplumbing.com/services/water-heater-repair",
        "https://acmeplumbing.com/services/drain-cleaning",
        "https://acmeplumbing.com/services/emergency-plumber",
        "https://acmeplumbing.com/locations/portland",
    ],
    "Riverside Dental": [
        "https://riversidedental.com",
        "https://riversidedental.com/services/invisalign",
        # ...
    ],
}

for client, urls in CLIENTS.items():
    for url in urls:
        result = add_monitor(url, api_key=API_KEY, frequency="daily")
        print(f"[{client}] {url} → {result}")

You'll see {"status": "ok", "url": "...", "frequency": "daily"} for each new monitor, or {"error": "Already monitoring this URL"} if you re-run the script (which is harmless — it's idempotent on URL).

The default frequency is daily: each URL is re-audited once every 24 hours, and the account owner gets an email whenever the score moves by 5+ points in either direction.

Step 3: Wire up Slack alerts

The built-in email alerts are fine for solo consultants, but most agencies want them in Slack so the whole account team sees them. Easiest path: a small daily cron that diffs scores via the new /history/domains endpoint and posts changes.

import os, requests
from seoscoreapi import history_domains

API_KEY = os.environ["SEOSCORE_API_KEY"]
SLACK_WEBHOOK = os.environ["SLACK_WEBHOOK_URL"]
ALERT_THRESHOLD = 5  # points

domains = history_domains(API_KEY)

alerts = []
for d in domains:
    if d["trend_30d"] is None:
        continue  # no 30-day trend yet (e.g. a freshly added monitor)
    if abs(d["trend_30d"]) >= ALERT_THRESHOLD:
        emoji = "📈" if d["trend_30d"] > 0 else "📉"
        alerts.append(
            f"{emoji} *{d['domain']}* {d['trend_30d']:+.1f} pts over 30d — "
            f"now {d['latest_score']} ({d['latest_grade']})"
        )

if alerts:
    requests.post(SLACK_WEBHOOK, json={
        "text": "*SEO Score Movement (30d)*\n" + "\n".join(alerts)
    })

Drop this in a daily 8am cron and it posts a digest whenever something has moved. If the channel gets noisy, scale it back to 8am Mondays for a single weekly digest.
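
For reference, the crontab entry (the script path is a placeholder; cron lets you set the environment variables inline at the top of the file):

SEOSCORE_API_KEY=your-api-key
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/...
0 8 * * * /usr/bin/python3 /opt/seo/slack_digest.py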

For per-URL granularity (instead of per-domain), call history(url, api_key) for each monitored URL and compare summary.first_score to summary.latest_score. The audit response now returns the same data automatically as its history block, so you can also catch deltas at audit time without a separate cron.
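
Here's that per-URL version as a sketch, reusing the summary fields just mentioned (the helper name and message format are ours):

import os
from seoscoreapi import history

API_KEY = os.environ["SEOSCORE_API_KEY"]
ALERT_THRESHOLD = 5  # points

def url_alerts(urls):
    alerts = []
    for url in urls:
        h = history(url, api_key=API_KEY)
        s = h["summary"]
        delta = s["latest_score"] - s["first_score"]
        if abs(delta) >= ALERT_THRESHOLD:
            emoji = "📈" if delta > 0 else "📉"
            alerts.append(f"{emoji} {url} {delta:+.1f} pts (now {s['latest_score']})")
    return alerts

Feed the result into the same Slack webhook POST as the domain digest.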

Step 4: Generate the monthly SLA report

The retainer differentiator: a one-page PDF attached to each month's invoice showing the client exactly what their SEO health looked like over the period.

The /history?url=&since= endpoint gives you everything you need:

from datetime import datetime, timedelta, timezone
from seoscoreapi import history

# 30 days ago as a Unix timestamp (timezone-aware; datetime.utcnow() is deprecated)
since = (datetime.now(timezone.utc) - timedelta(days=30)).timestamp()

client_urls = CLIENTS["Acme Plumbing"]  # the per-client URL lists from Step 2

for url in client_urls:
    h = history(url, api_key=API_KEY, since=since)
    print(f"{url}")
    print(f"  Audits: {h['count']}")
    print(f"  First → Latest: {h['summary']['first_score']} → {h['summary']['latest_score']}")
    print(f"  Min/Max: {h['summary']['min_score']} / {h['summary']['max_score']}")
    print(f"  Avg: {h['summary']['avg_score']}")
    print(f"  Net change: {h['summary']['total_delta']:+.1f} pts")

Pipe that into a Jinja template + WeasyPrint and you've got a branded PDF in 50 lines. The category breakdown in h["history"][i]["categories"] lets you call out wins like "performance up 12 points after the hosting migration on the 14th."
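
A minimal sketch of that pipeline, assuming a report.html Jinja template next to the script (the template name and context fields are illustrative):

from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML
from seoscoreapi import history

# One row per monitored URL, using the same summary fields as the console version above.
rows = []
for url in client_urls:
    h = history(url, api_key=API_KEY, since=since)
    rows.append({"url": url, **h["summary"]})

env = Environment(loader=FileSystemLoader("."))
html = env.get_template("report.html").render(client="Acme Plumbing", rows=rows)
HTML(string=html).write_pdf("acme-plumbing-sla.pdf")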

A real example: catching a deploy regression

Northbeam onboarded a new client, Coastline Wealth Management, in March. The dev team at Coastline shipped a new homepage template on March 18th. Northbeam's monitor flagged a 9-point drop on the next daily audit:

📉 coastlinewm.com/  87 → 78 (-9)
   Grade: A → B
   Categories changed: meta -12, social -7

The diff was easy to spot in the audit response — the new template was missing both the <meta name="description"> and the Open Graph tags. The Northbeam team filed a ticket the same morning and shipped a fix in two days. By March 21st the score was back to 88.

Without monitoring, the client would have noticed when their click-through rate started slipping in Search Console — which usually takes 2–3 weeks to show up. The 9-point drop would have cost them roughly 8–12% of organic clicks for that period, which for a wealth manager translates to real money in lost lead value.

This is the entire pitch for SEO monitoring on retainer accounts: it's the difference between "we caught a regression two days after deploy" and "we noticed the client's traffic slipping a month later."

Operational tips

A few things you'll figure out the hard way:

Don't monitor URLs you don't own. Sounds obvious, but a junior account manager will inevitably try to add competitor URLs "to track them." That works for one-off audits but not for monitoring — competitor sites change in ways you can't act on, so the alerts are noise.

Use weekly frequency for stable sites. If a client hasn't deployed in months, daily checks are wasteful. Switch them to frequency: "weekly" and free up daily slots for active sites.

Pause monitors during planned migrations. If a client tells you they're moving CMS next Tuesday, delete the monitors before the migration and re-add them once the new site is live. Otherwise you'll get a wall of false-positive drops while the new site gets indexed.
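
A sketch of that flow, which also covers the weekly-frequency switch from the previous tip. Note that remove_monitor is an assumption on our part (check the package docs for the actual deletion helper); add_monitor and its frequency parameter are the ones from Step 2:

import os
from seoscoreapi import add_monitor
from seoscoreapi import remove_monitor  # hypothetical: verify the package's real deletion helper

API_KEY = os.environ["SEOSCORE_API_KEY"]

MIGRATING = [
    "https://acmeplumbing.com",
    "https://acmeplumbing.com/locations/portland",
]

# Before the cutover: drop the monitors so the migration doesn't spray false-positive alerts.
for url in MIGRATING:
    remove_monitor(url, api_key=API_KEY)

# Once the new site is live and stable: re-add, optionally at a calmer weekly cadence.
for url in MIGRATING:
    add_monitor(url, api_key=API_KEY, frequency="weekly")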

Watch grade transitions, not just score deltas. A drop from 89 → 84 (B+ to B) often matters more to clients than a drop from 75 → 70 (both C). The grade_change field in the audit response makes this trivial to surface.
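
To surface those transitions, check grade_change on each audit. A sketch, assuming the package exposes an audit() helper that returns the audit response (the field's exact format is whatever the API documents):

import os
from seoscoreapi import audit  # assumed name for the one-off audit helper

API_KEY = os.environ["SEOSCORE_API_KEY"]

result = audit("https://acmeplumbing.com", api_key=API_KEY)

# grade_change is set when the letter grade moved, even if the point delta is small.
if result.get("grade_change"):
    print(f"Grade transition on acmeplumbing.com: {result['grade_change']}")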

One key per client

If you bill clients separately for SEO infrastructure or want clean isolation, sign up for a separate API key per client (free tier is fine for monitoring — the rate limit applies per-key). The benefit is each client's history_domains() call returns only their URLs, which makes the SLA report scripts simpler. The cost is one more thing to manage in your password manager.
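
With per-client keys, the report loop needs no filtering, since each key sees only its own client's monitors. A sketch (the env var names are ours):

import os
from seoscoreapi import history_domains

# One key per client, stored as separate env vars (names are illustrative).
CLIENT_KEYS = {
    "Acme Plumbing": os.environ["SEOSCORE_KEY_ACME"],
    "Riverside Dental": os.environ["SEOSCORE_KEY_RIVERSIDE"],
}

for client, key in CLIENT_KEYS.items():
    for d in history_domains(key):
        print(f"[{client}] {d['domain']}: {d['latest_score']} ({d['latest_grade']})")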

For a 14-client agency, a few shared Pro keys with per-client folders in your repo work well enough. Start there, and split to per-client keys later if you outgrow it.

Wrapping up

The whole stack is:

  • Pro plan keys ($39/mo each, 25 monitors per key; three keys cover all 70 monitors)
  • One Python script (the snippets above, ~150 lines combined)
  • One cron job (daily Slack digest)
  • One monthly script (SLA PDF generation)

Total time-to-build is under an hour for the first client and under 10 minutes per client after that. Compared to the cost of a single missed regression on one retainer account — which usually shows up as a churned client a few months later — it's the highest-leverage piece of agency infrastructure you can ship this quarter.

Get an API key at seoscoreapi.com. The seoscoreapi package is on PyPI (pip install seoscoreapi) and npm (npm install seoscoreapi).