We measure ChatGPT, Google AI Overviews and Perplexity. Not five engines, not six. The reasoning is in the data, and it surprises people.
Three or four times a week, a prospect, partner, or sceptical agency lead asks why our standard AVS measures three engines instead of five. Sometimes it lands as “what about Claude?”, sometimes as “what happens when ChatGPT 5.2 ships?”, sometimes as a friendly audit. It always comes down to the same five questions, so we wrote them down.
The short version: more engines is not the same as more signal. ChatGPT, Google AI Overviews and Perplexity cover roughly 95% of where buyers actually run AI search. The marginal engine ends up measuring the same brands. We can add Claude, Gemini, DeepSeek, or pinned API versions for clients who need them. They just sit outside the standard read, because the standard read is the buyer read.
Here are the five questions in full, with the answers we give in scoping calls.
1. Which engines does the standard AVS measure, and at what tier?
The standard AVS measures three engines: ChatGPT, Google AIO, and Perplexity. We measure them at the consumer tier, scraping the version a regular buyer sees when they open the app or run a Google search. We do not measure paid Pro tiers or pinned API model versions by default.
The scope is deliberate: three platforms, the consumer experience, the same read your prospects actually get. Clients who want additional engines can scope them in. The standard read is the buyer read.
2. How much of buyer-side AI search do three engines cover?
Roughly 95% today.
ChatGPT carries about 78% of AI chatbot referrals to websites and serves 800 million weekly users (Similarweb, 2026). Google AI Overviews appears on around half of all Google searches and is projected to hit 75% by 2028 (McKinsey, 2026). Perplexity is the AI-first research tool, the platform a buyer opens specifically to research a category.
Together these three are where the actual buyer is. Adding more engines lifts coverage by single digits.
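To see why the marginal engine adds so little, here is a minimal sketch of the cumulative-coverage arithmetic. The individual shares are placeholders we invented for illustration; only the roughly-95% combined total for the first three engines comes from the figures above, and the sketch assumes each buyer runs their search on one engine, so shares sum straight to coverage.

```python
# Illustrative only: hypothetical usage shares for buyer-side AI search.
# The ~95% total for the first three engines matches the article; the
# individual splits and the long-tail figures are invented placeholders.
shares = [
    ("ChatGPT", 0.64),
    ("Google AI Overviews", 0.23),
    ("Perplexity", 0.08),
    ("Claude", 0.02),
    ("Gemini", 0.02),
    ("DeepSeek", 0.01),
]

covered = 0.0
for engine, share in shares:  # ordered by assumed share, largest first
    covered += share
    print(f"{engine:<22} +{share:4.0%}  cumulative {covered:4.0%}")
```

Run it and the first three engines land at 95% while each engine after that moves the cumulative line by a point or two, which is the single-digit lift the paragraph above describes.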
3. Why not measure five or six engines?
Six major platforms, multiple versions each, weekly drift. Measuring every permutation costs more credits per cycle, lengthens the run, and adds noise without proportional signal. The marginal engine ends up measuring the same brands.
BrightEdge’s 2026 cross-engine study found pairwise brand-recommendation overlap of 36 to 55%, against pairwise source overlap of 16 to 59%. Engines mostly agree on brands. They disagree on sources.
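To make “pairwise overlap” concrete, here is a hedged sketch of the comparison. We assume Jaccard similarity (intersection over union) as the overlap metric, since BrightEdge’s exact formula is not published here, and the brand shortlists are hypothetical.

```python
# Minimal sketch: pairwise overlap of brand shortlists across engines.
# Jaccard similarity is our assumed metric; the brand sets are invented.
from itertools import combinations

shortlists = {
    "ChatGPT":    {"Acme", "Beacon", "Corecraft", "Drift", "Everline"},
    "Google AIO": {"Acme", "Beacon", "Corecraft", "Foxway"},
    "Perplexity": {"Acme", "Beacon", "Drift", "Gale"},
}

def jaccard(a: set, b: set) -> float:
    """Intersection over union of two brand sets."""
    return len(a & b) / len(a | b)

for (name_a, a), (name_b, b) in combinations(shortlists.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(a, b):.0%} brand overlap")
```

Swap the brand sets for citation-source sets and the same loop gives the source-overlap numbers; the point of the study is that the brand version of this table sits consistently higher.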
Three engines give the cleanest read on what a buyer actually sees. We can extend the set when a client needs more.
4. What happens when a new model version ships?
Less variance than people fear. The same BrightEdge data shows engines disagree about which articles to cite far more than they disagree about which brands to recommend. Versions of the same engine sit even closer together.
Brands win in AI search by being the ones an engine holds high confidence in, and high confidence stays stable across versions. The consensus shortlist your buyer sees does not flip when ChatGPT 5.2 ships.
AVS reads the consensus, and that is the signal that holds.
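Here is a hedged sketch of what “reading the consensus” can look like in practice: keep the brands that a majority of the three engines recommend. The majority threshold and the brand lists are our illustrative assumptions, not the AVS implementation.

```python
# Hedged sketch of a "consensus read": keep brands recommended by at
# least 2 of the 3 standard engines. Engine outputs are hypothetical,
# and the >=2 threshold is an assumed choice, not the AVS formula.
from collections import Counter

engine_brands = {
    "ChatGPT":    ["Acme", "Beacon", "Corecraft"],
    "Google AIO": ["Acme", "Beacon", "Everline"],
    "Perplexity": ["Acme", "Corecraft", "Beacon"],
}

counts = Counter(b for brands in engine_brands.values() for b in brands)
consensus = [brand for brand, n in counts.most_common() if n >= 2]
print(consensus)  # ['Acme', 'Beacon', 'Corecraft']
```

Reorder or perturb any single engine’s list, as a version bump might, and the majority set stays the same; that stability is what the consensus read relies on.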
5. Can you add engines beyond the standard three?
Yes. Claude, Gemini, DeepSeek, and version-pinned API engines are available as part of the Enhanced tier or as a one-off Discovery engagement. If a specific platform or model version matters to your category, tell us at scoping and we will add it to your prompt set.
The standard three are the consumer baseline. Anything else is a deliberate addition for a specific commercial reason. Talk to us about scope before you sign.
The full AVS methodology page sets out the five-step framework end to end: Focus, Measure, Plan, Deliver, Repeat. The FAQ block above lives there too, anchored at methodology#how-we-measure.
Book the AVS Exec Brief. Three engines, your prompts, the consumer read. Free, with a 30-minute scoping call up front.