Research Hub & Resources

GEO Research

Last updated 1 May 2026

A curated library of research and resources about how AI citations work. We gather the latest GEO research from international sources, analyse what it means for your business, and share what we learn. This page is a living resource — updated regularly with new findings from the GEO landscape and observations from our audits. Bookmark it.

GEO (Generative Engine Optimisation) and AI Search refer to the same thing — how businesses appear in AI-powered search tools like ChatGPT, Perplexity, and Google's AI Overviews. We use both terms interchangeably.

Why we're curating GEO research

Generative Engine Optimisation is moving rapidly. There's a lot of noise and not enough data shared openly. We think businesses deserve research they can actually rely on.

Known & Cited doesn't run its own research programme yet. Instead, we curate high-quality GEO research from international sources — academic studies, industry analysis, and original observations from our audits — and share what it means for your business. This page brings together what the community is learning and what we've observed in our measurement work.

We combine published research with findings from our AVS audits. Our goal is to help you understand the GEO landscape — what's working, what's uncertain, and where the field is still figuring things out. We're honest about confidence levels. Where research is solid, we'll say so. Where it's emerging or contested, we'll flag that too.

We want to start our own data-led international research programme — measuring how AI citations work across countries and languages — but that's future work. For now, this is our curation hub.

What we know so far

Drawing on our own audit data, published academic research, and analysis from across the GEO landscape, here are the patterns we're seeing. We've tried to be honest about confidence levels — some of this is well-established, some is emerging, and some is informed speculation.

Each LLM behaves differently — and the gap is widening

ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot do not cite the same businesses for the same queries. Our audits consistently show that a business scoring well on one platform can be invisible on another. Perplexity tends to cite more sources explicitly. ChatGPT is more likely to synthesise without attribution. Google's AI Overviews favour content already ranking in traditional search. This means a single-platform GEO strategy is inherently fragile.

Confidence: High — observed consistently across K&C audits. Also noted by Muck Rack, Authoritas, and academic GEO research (Georgia Tech, 2024).

Structured, authoritative content correlates with citation

Businesses that produce clear, well-structured content that directly answers common questions tend to be cited more often. This includes FAQ pages, "how it works" explainers, and content that uses schema markup. The Georgia Tech GEO research found that adding citations, quotations from authoritative sources, and statistics to content improved LLM citation rates by up to 40%. Content that reads like it was written for AI extraction — clear, factual, attributable — performs better than content written purely for human engagement.

Confidence: Medium-High — supported by academic research and our audit observations. The exact mechanisms remain unclear.
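For readers unfamiliar with schema markup, here's a minimal sketch of the FAQPage structured data format (schema.org) that the finding above refers to. The question and answer text are placeholders invented for illustration, not taken from any audit:

```python
import json

# Hypothetical sketch: an FAQPage structured-data object (schema.org),
# the kind of markup the research above associates with higher citation
# rates. The question and answer below are placeholder content.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is Generative Engine Optimisation?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of improving how a business "
                        "appears in AI-powered search tools.",
            },
        },
    ],
}

# The JSON output would typically be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

The point of the format is exactly what the research suggests matters: each question is paired with a clear, factual, self-contained answer that a machine can extract without interpreting surrounding prose.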

Third-party presence matters more than your own website

AI models don't just read your website. They've been trained on the entire web. Businesses that appear on trusted third-party domains — industry publications, review sites, Wikipedia, professional directories — tend to be cited more consistently. In our audits, businesses with strong earned media presence almost always outperform businesses that only invest in their own domain, even when the owned content is excellent. PR, in the traditional sense, may be one of the strongest GEO signals.

Confidence: Medium-High — consistent across audits. Also emphasised by Hotwire, C8 Consulting, and the broader PR industry's GEO research.

AI answers fluctuate — and that's normal

Run the same query on ChatGPT today and tomorrow, and you may get different brands cited. LLMs are probabilistic, not deterministic. Citation scores fluctuate weekly. This creates a measurement challenge: a single snapshot can be misleading. Meaningful trends only emerge over quarterly periods. Anyone claiming to offer real-time GEO tracking is measuring noise, not signal. This is why our methodology uses structured query frameworks across multiple time windows.

Confidence: High — fundamental to how LLMs work. Confirmed by every audit we've run.
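To make the snapshot-versus-window point concrete, here's a hypothetical sketch using invented daily citation outcomes. A single day's result reads as all-or-nothing, while the aggregate over a window gives a stable citation rate — the data, window length, and query setup are all illustrative assumptions, not our actual methodology:

```python
from statistics import mean

# Invented example data: 1 = brand cited, 0 = not cited, one sample per
# day for two weeks. A real audit would run structured query sets across
# several platforms and time windows.
daily_samples = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0]

# A single snapshot is whatever happened that day: it can only ever
# report 0% or 100%, which is noise, not signal.
snapshot = daily_samples[0]

# Aggregating over the window turns the probabilistic outcomes into a
# citation *rate*, which is what actually trends over time.
citation_rate = mean(daily_samples)

print(f"Day-1 snapshot: {snapshot * 100:.0f}% cited")
print(f"Two-week citation rate: {citation_rate * 100:.0f}% cited")
```

With this made-up series, the first day reports 100% while the two-week rate is roughly 64% — the same business, the same queries, very different conclusions depending on how you measure.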

GEO varies significantly by country — and almost nobody is measuring it

Our multi-country audits show that the same business can be recommended in the UK but invisible in Germany, or cited in the US but not in France. Language matters. Local sources matter. Regional search behaviour patterns influence AI training data. Most GEO services operate in English only, in a single market. This leaves a massive blind spot for any business operating internationally. We believe international GEO research is one of the most underexplored areas in the field.

Confidence: Medium — based on our multi-country audit data. Limited external validation because few others are doing this research.

Business narrative consistency amplifies citation

When a business tells the same story across multiple sources — website, press, industry directories, LinkedIn, review sites — AI models appear to develop a stronger "understanding" of what that business does and who it's for. Businesses with fragmented or contradictory positioning across different channels tend to receive weaker, less specific citations. The implication: GEO is partly a business consistency exercise.

Confidence: Medium — pattern observed in audits, particularly in competitor benchmarking. Causation not established.

The landscape is moving fast

The GEO space is changing rapidly. Two years ago, none of these services existed. Now there's an emerging ecosystem of tools, agencies, and methodologies — all trying to solve the same problem from different angles.

Dashboard tools like Otterly.ai, Sight, and Peec.ai offer self-serve LLM monitoring. PR agencies like Hotwire, Brands2Life, and Ambitious PR are adding GEO to retainer services. Analytics platforms like Authoritas provide the underlying data infrastructure. Agency services like Muck Rack's Generative Pulse, Impression Digital, and C8 Consulting are developing proprietary GEO frameworks.

What's notably absent from most of these approaches is regular, data-led international research. Most operate in a single market. Most focus on English-language queries. Most publish case studies rather than ongoing research. The field needs more data, shared openly, with honest confidence levels.

That's the gap we're aiming to fill.

We're developing ongoing international GEO research tracking how AI answers vary across countries, languages, and platforms. More to come.

What we're tracking

Our research curation is organised around the questions that matter most to businesses. These are the themes we're actively investigating through our audit data and dedicated research queries.

Cross-LLM citation patterns

How do ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot differ in what they cite and how? Which platforms are most volatile? Which are most consistent? When one platform starts citing a business, do others follow?

International citation variation

How do AI recommendations differ between the UK, US, Germany, France, and other markets? Do localised queries produce fundamentally different business recommendations? How much does language affect citation?

Content signals that drive citation

What types of content correlate most strongly with AI citation? How important are schema markup, FAQ pages, structured data, and authoritative third-party mentions? Can we isolate individual signals?

Citation velocity and decay

How quickly does new content get picked up by AI platforms? How long does a citation last? Is there a "half-life" for AI visibility? What causes a business to drop out of AI recommendations?

Sector-specific GEO dynamics

Does GEO work differently in professional services vs. consumer businesses? How do B2B and B2C citation patterns differ? Are some sectors more "GEO-ready" than others?

The PR-GEO relationship

How directly does traditional PR activity translate to AI citations? Is earned media the strongest GEO signal? How long after publication does a press mention start appearing in AI answers?

Our approach to curation

We're not an academic institution. We're a commercial business that runs AI visibility audits. We curate GEO research from a specific perspective: we want to understand this landscape well enough to give our clients genuinely useful advice. That means sourcing rigorous work and being honest about what's proven versus what's still emerging.

How we source research

We monitor GEO research from academic institutions, industry platforms, and fellow practitioners. We analyse what's relevant to our clients — how AI citations work, what signals matter, how measurement should work. We combine published findings with observations from our own audit data. We prioritise international research and cross-platform analysis.

How we report findings

Every finding includes a confidence level. "High" means it's been observed consistently and backed by multiple sources. "Medium" means we've seen it in our audit data or industry analysis, but it's not yet settled. "Low" means it's emerging or contested. We're honest about what we actually know.

What we're tracking, not claiming

We don't claim to lead GEO research — that's community work. What we do is analyse high-quality research, measure how it plays out across platforms and countries, and share observations that help you navigate the GEO landscape. This curation is part of our service to clients.

Understanding the GEO landscape is part of what makes Known & Cited different. We curate and analyse international research so you don't have to. We measure how it works in your market. We share what we learn so you can make better decisions about your AI visibility. Find out why that matters →

Want to know what AI says about your business?

Book an AVS Exec Brief — a quick snapshot of your AI visibility across five platforms.

Get in touch

See how this research applies to you

Explore our methodology and case studies to understand how we measure AI visibility and what it means for your market.

Explore our methodology →