Free Tool

How does AI see your site?

Free 60-second scan against the open Agent-Adoption Spec. 25 signals, score, level, and the fixes worth shipping. No signup.


Key Takeaways

  • Free, no signup. Get a clear score (0-100) and a level (L1-L3) showing how ready your site is for AI agents.
  • Paste-ready fix prompts for every failed scored check — copy into Claude Code, Cursor, or Windsurf and ship the fix.
  • Based on the open Agent-Adoption Specification — methodology you can read, not a black box.
  • Useful for site owners, marketers, and devs who want a concrete "what's broken, here's the fix" view.

What this tool does

The CompetLab Agent-Adoption Check is a free, no-signup tool that scans your domain for 25 specific signals AI agents look for when finding, accessing, and reading your site. You get a score, a level, and, for every failed scored check, a paste-ready prompt you can drop into Claude Code, Cursor, or Windsurf to apply the fix. Here's what the scan covers:

  • Scans your robots.txt for AI bot rules and access policies
  • Checks if you have an llms.txt — the AI-era sitemap
  • Tests whether your pages return markdown when an AI agent asks for it
  • Confirms your pages render without JavaScript (most agents don't run JS)
  • Looks for an MCP server card — does your site expose itself as a tool to Claude and other agents?
  • Checks for OAuth discovery endpoints — can agents authenticate with your service?
  • Probes for an API catalog (OpenAPI / GraphQL schema) — can agents call your APIs directly?
  • Verifies HTTP status codes are honest (404 means 404, not 200-with-error-page)
  • Examines cache headers, redirect behavior, and page sizes for agent-friendliness
  • Generates a paste-ready fix prompt for each failed scored check

Who this tool is best for

B2B SaaS marketers

You publish docs, blog posts, and landing pages. AI search increasingly drives discovery, and you want to know if your content is actually reachable. Run the check, get fixes you can hand to your dev team, and ship them.

Indie devs and small teams

You don't have a dedicated SEO or AI infrastructure team, but you're shipping a product. The check tells you what AI agents will struggle with on your site, in plain English. Paste-ready fixes mean you don't need to learn the spec — just apply and move on.

Agencies and consultants

You audit client sites for SEO, performance, and now agent-readiness. Use this as a quick first pass to spot common gaps across multiple client domains.

Why this matters

AI agents — Claude, GPT, Perplexity, and a long tail of smaller models — increasingly read and act on the web on behalf of users. Whether you want them to find your product, summarize your docs, or call your API, they have to be able to access and parse your site. This tool implements the open Agent-Adoption Specification — a public methodology that defines 25 specific signals affecting how well agents interact with a site. Some are basic (does your robots.txt exist?). Some are advanced (do you expose an MCP server card?).

Implementing them won't guarantee your site shows up in AI search results — that depends on a lot of factors outside any tool's scope, like brand recognition, training data, and content quality. We measured this. In a 908-domain correlation study, only 2 of 50 tested agent-readiness signals reached statistical significance, and none showed a "large" effect. Agent-readiness is real but small. Fix the signals that move the needle, skip the ones that don't. The dishonest claim would be "fix this and AI traffic will explode" — we don't make that one. Read the full study interpretation →

How detection works / where it fails

The scan fetches your domain's homepage, follows redirects to your final URL, then probes 25 specific signals — most of which are HTTP-level checks (does this header exist? does this URL return markdown?). For checks that need to see how a page actually renders, we use a real browser engine to fetch the page the way an agent would. The whole thing typically takes 20-60 seconds, sometimes longer for complex sites.
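To make "HTTP-level check" concrete, here is a minimal sketch of two of those probes in TypeScript. It is illustrative only (invented probe path, example.com as the domain), not the tool's actual code:

    // Illustrative sketch, not CompetLab's implementation.
    // Step 1: fetch the homepage and let redirects resolve to the final URL.
    const home = await fetch("https://example.com", { redirect: "follow" });
    const finalUrl = home.url;

    // Step 2: the honest-status-code check. A path that cannot exist should
    // return 404; a 200 here means the site serves soft-404 error pages.
    const probe = await fetch(new URL("/no-such-page-a1b2c3", finalUrl));
    const honest404 = probe.status === 404;

    console.log({ finalUrl, honest404 });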

Where it fails:

  • Bot-protected sites (aggressive challenge pages, strict WAF rules) often block the scan, and we can't see past that
  • Sites behind authentication walls give us only the public surface
  • Single-page apps that render only via JavaScript may expose fewer agent-readable signals than they show to users in a real browser — that's a real gap, but it's also what AI agents see, so the score honestly reflects reality

What we don't measure: brand recognition, content quality, AI search ranking, conversion rates, traffic. The tool tells you about the plumbing, not the outcomes. Use it to fix problems you can fix.

Categories explained

Can AI find your site?

Three checks under this category — robots.txt, sitemap, and Link headers. Together they tell an AI agent "here's what's on this site, and here's what you can fetch." If you don't have a robots.txt, agents have to guess at your rules. If you don't have a sitemap, they crawl your whole site to find pages. If you skip Link headers, they can't quickly discover machine-readable formats of your pages. None of these are revolutionary — they're the foundational signals AI agents (and traditional crawlers) have always looked for.
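For reference, here is what passing versions of these signals can look like. The values are placeholders; the spec defines exactly what each check accepts:

    # robots.txt: explicit rules plus a pointer to the sitemap
    User-agent: *
    Allow: /
    Sitemap: https://example.com/sitemap.xml

    # HTTP Link header advertising a machine-readable alternate of a page
    Link: <https://example.com/docs/page.md>; rel="alternate"; type="text/markdown"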

Are AI agents allowed in?

Four checks here — explicit AI bot rules, content-signals declarations, web-bot-auth support, and a sanity check that your robots.txt isn't blocking everything. The point is to make your stance explicit. Generic robots.txt rules don't tell GPTBot or ClaudeBot whether you're OK with them. Declaring rules for specific AI crawlers — even just "Allow" or "Disallow" — shows you've thought about it. Content-signals goes further, letting you grant or deny specific uses (training, search, inference) per crawler.
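A robots.txt that would pass these checks might look like the sketch below. The Content-Signal line follows the content-signals proposal as we understand it; verify the exact directive syntax against the current spec before shipping:

    # Explicit stance for named AI crawlers
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    # Content-signals declaration (assumed syntax; check the proposal)
    Content-Signal: search=yes, ai-input=yes, ai-train=no
    User-agent: *
    Allow: /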

Can AI read your content?

This is the biggest category — twelve checks covering how well your pages render for non-browser agents. Most AI agents don't run JavaScript, so single-page apps look empty to them. Many agents work better with markdown than HTML — if you serve markdown via content negotiation or .md URLs, you make their job easier. We also check llms.txt (a curated index for AI agents), AGENTS.md (guidance for AI coding tools), HTTP status codes, redirect behavior, page sizes, and other plumbing details. In our research, the single best-validated signal here is markdown content negotiation.
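For example, a minimal llms.txt following the llmstxt.org convention (placeholder URLs) is just a short, curated markdown index:

    # Acme Widgets

    > Acme is a widget API. The pages below are the ones worth an agent's time.

    ## Docs
    - [Quickstart](https://acme.example/docs/quickstart.md): setup in five minutes
    - [API reference](https://acme.example/docs/api.md): every endpoint, with examples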

Built-in agent integrations

Six checks for the newer end of the agent-readiness spectrum — OpenAPI catalogs, OAuth discovery, MCP server cards, agent-to-agent (A2A) endpoints, and declared agent skills. These signals say "this site doesn't just publish content; it offers programmatic ways for agents to interact with it." Adoption is early — most sites don't have these yet. But if you're a SaaS product, even one of these (especially an MCP server card) can make your service discoverable as a tool to Claude and a growing list of other agents.
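Discovery for these mostly means well-known URLs. A hypothetical probe might look like this; of the paths shown, only the OAuth one is a published standard (RFC 8414), so treat the MCP and A2A paths as assumptions to verify against their specs:

    // Hypothetical discovery probe, not the tool's actual code.
    const discovery = [
      "/.well-known/oauth-authorization-server", // OAuth server metadata (RFC 8414)
      "/openapi.json",                           // common but unstandardized OpenAPI location
      "/.well-known/mcp.json",                   // assumed MCP server-card path
      "/.well-known/agent.json",                 // assumed A2A agent-card path
    ];
    for (const path of discovery) {
      const res = await fetch(new URL(path, "https://example.com"));
      console.log(path, res.status); // 200 with valid JSON counts as present
    }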

How to use in 5 minutes

  1. Enter your domain. Paste a full URL or just the bare name — we normalize it automatically.
  2. Wait 20-60 seconds while we probe 25 signals. The progress messages show what we're checking in real time.
  3. Read the score and level. The score (0-100) reflects how many checks passed, weighted by importance (a rough sketch of that computation follows these steps). The level (L1/L2/L3) tells you which gate checks you've cleared.
  4. Copy a fix-prompt. Expand any failed check and click Copy on the prompt that matters most to you.
  5. Paste into Claude Code, Cursor, or Windsurf. Let the AI apply the fix, review the diff, commit. Re-scan to confirm.
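Step 3's score is simpler than it sounds. The exact weights aren't published on this page, so take this as an assumed shape rather than the real formula:

    // Assumed shape: each scored check carries a weight;
    // score = weight of passed checks / total weight, scaled to 0-100.
    type Check = { id: string; weight: number; passed: boolean };

    function score(checks: Check[]): number {
      const total = checks.reduce((sum, c) => sum + c.weight, 0);
      const earned = checks.reduce((sum, c) => sum + (c.passed ? c.weight : 0), 0);
      return total === 0 ? 0 : Math.round((earned / total) * 100);
    }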

Frequently asked questions

How accurate is this scan?

The 25 checks are HTTP-level probes against the open Agent-Adoption Specification — robots.txt, sitemap, llms.txt, markdown content negotiation, OAuth discovery, MCP server cards, and so on. Each returns pass / fail / not-applicable based on what your server actually serves. False positives (checks reported as failed because the scan itself was blocked, not because your site is broken) happen on bot-protected sites with aggressive WAF rules and behind authentication walls. The signals are real, but their effect on AI visibility is small — agent-readiness is one factor among many.

Is this tool really free?

Yes. No signup, no email, no upsell to see your result. The only limit is 3 scans per minute per IP — same domain twice in a row gets cached so you don't waste quota. The paid CompetLab product covers AI visibility across LLMs, competitors, and traffic — the one-off scan here is free forever.

Will my domain or scan results be stored?

No. Each scan runs on demand, returns a result, and is gone. The only thing kept in memory is a 1-minute cache of the last few scans (so re-submitting the same domain doesn't re-run the whole probe) — it's per-process, not on disk, and clears within the minute. We don't log domains to a database, don't persist results, and don't share with third parties.
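A cache like that is about as small as caching gets. Roughly, in TypeScript (illustrative only, not the actual implementation):

    // Minimal per-process TTL cache of the kind described above.
    const TTL_MS = 60_000; // entries expire after one minute
    const cache = new Map<string, { result: unknown; expires: number }>();

    function getCached(domain: string): unknown | undefined {
      const hit = cache.get(domain);
      if (!hit || hit.expires < Date.now()) {
        cache.delete(domain); // expired or missing: nothing survives past the TTL
        return undefined;
      }
      return hit.result;
    }

    function setCached(domain: string, result: unknown): void {
      cache.set(domain, { result, expires: Date.now() + TTL_MS });
    }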

What does "Level 1 / Level 2 / Level 3" mean?

Levels are gate-driven, not score-driven. A site can score 99 and still be Level 1 if it fails the gate check for Level 2. Levels and their gates:

  • L1 Basic Web Presence — the default: your site has a working homepage that returns content.
  • L2 AI-Aware — gate: content-signals declared (you've published explicit declarations of how AI may use your content).
  • L3 Agent-Optimized — gate: markdown content negotiation (your server returns markdown when agents ask for it via Accept: text/markdown).

If you want to level up, the "Next-Level Gates" panel in your result tells you exactly which check to fix.
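The L3 gate is easy to test by hand: send a request with a markdown Accept header and see what comes back. A passing exchange looks roughly like this (illustrative URL and body):

    GET /docs/quickstart HTTP/1.1
    Host: example.com
    Accept: text/markdown

    HTTP/1.1 200 OK
    Content-Type: text/markdown; charset=utf-8

    # Quickstart
    ...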

Does a high score mean my AI traffic will increase?

No. In our 908-domain correlation study, only 2 of 50 tested agent-readiness signals reached significance — and none had a "large" effect. Brand recognition, training data, and content quality matter more in practice. Use the score to spot fixable problems and ship them — don't use it as a ranking lever or a guarantee of AI traffic.

Why does the scan take 30 seconds to a few minutes?

We probe 25 different signals — robots.txt, sitemaps, llms.txt, markdown content negotiation, OAuth endpoints, MCP server cards, and more. Some checks (like rendering strategy) require fetching the page in a real browser engine to see how it actually renders. We trade speed for coverage. Most scans complete in under a minute; complex sites with lots of redirects can take longer. We give the scan up to 5 minutes before timing out.
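The browser step is the expensive part. Conceptually it compares what a plain HTTP fetch returns with what a real engine renders; a hypothetical version using Playwright (not the tool's actual code) looks like:

    // Hypothetical rendering check using Playwright.
    import { chromium } from "playwright";

    const url = "https://example.com";
    const raw = await (await fetch(url)).text(); // what a non-JS agent sees

    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: "networkidle" });
    const rendered = await page.content(); // what a full browser sees
    await browser.close();

    // A large gap suggests the page depends on JavaScript most agents won't run.
    console.log({ rawChars: raw.length, renderedChars: rendered.length });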

Do I need to fix every failed check?

No. Focus on the scored checks — those are what move your score. Informational checks are useful context (they tell you about emerging signals like AGENTS.md or A2A cards) but don't affect your score. If you want to level up, look at the gate checks for the next level — those are the highest-leverage fixes.

Why don't some failed checks have a fix-prompt?

If a check is informational — like AGENTS.md or A2A agent cards — it doesn't move your score, so we don't generate a paste-ready fix-prompt for it. These are emerging signals: early-adoption flags, not blockers. Implement them when you're ready to lead on that signal; otherwise, skip them and focus on the scored checks where the fix-prompt appears.

What do I actually do with a fix-prompt?

Copy it, paste it into Claude Code, Cursor, or Windsurf. The AI tool reads your codebase and applies the fix. You review the diff, commit. Pure "see → copy → ship" loop — no manual config wrangling, no spec-reading, no deciding what the right header value is. The prompt has all that context built in.

Why do you check for things like MCP server cards or AGENTS.md? Most sites don't have those.

True — most sites don't, today. We track them because adoption is early and the trajectory matters. If you're building a B2B SaaS, having an MCP server card makes your service discoverable as a tool to Claude (and a growing list of other agents). Adding AGENTS.md to a public repo tells AI coding assistants how to work with your codebase. Shipping these early signals that you're agent-native — and as adoption grows, having them in place pays off.

Can I run this on competitors?

Yes. The tool works for any public domain. Run three competitors back-to-back to see how their agent-readiness compares to yours. Try the paid CompetLab product to see your full AI visibility picture across competitors and LLMs.

How often should I re-run the scan?

Right after you fix something, to confirm the fix landed. Otherwise, monthly is plenty unless you're shipping infrastructure changes weekly. Don't run it daily to obsess over the score — most of the value is in the failed-check fixes, not the score number itself.

When to use this tool

  • Before launching a content site or product page
  • Before a major redesign or framework migration
  • After migrating to a new hosting provider or CDN
  • When you're planning AI-ready integrations (MCP, OAuth, content-signals)
  • To audit competitors and see where the bar is in your category
  • After every infrastructure change you suspect could affect agent-readiness