What we're learning about how AI citations work. We gather the latest GEO research from the community, add our own findings when we can, and share what we discover. This page is a living resource — updated regularly with observations from our audits and analysis of how the GEO landscape is evolving. Bookmark it.
GEO (Generative Engine Optimisation) and AI Search refer to the same thing — how businesses appear in AI-powered search tools like ChatGPT, Perplexity, and Google's AI Overviews. We use both terms interchangeably.
Generative Engine Optimisation is evolving rapidly. There's a lot of noise. Not enough data shared openly. We think businesses deserve research they can actually rely on.
Known & Cited is starting its own dedicated GEO research programme. Every Citation Authority audit generates data about how AI models cite and recommend businesses — we'll share key findings from this data as it comes in. We also gather observations from the community and run targeted research sprints when there's a gap we need to fill.
This page is where we share what we're learning. Not opinions dressed up as facts. Observations from audits and the field, with honest confidence levels about what we actually know versus what we're still figuring out. Where we're confident, we'll say so. Where we're not, we'll say that too.
GEO is unsolved. But we can get smarter about it by tracking patterns across real platforms, in real markets, and sharing what we find.
Drawing on our own audit data, published academic research, and analysis from across the GEO landscape, here are the patterns we're seeing. We've tried to be honest about confidence levels — some of this is well-established, some is emerging, and some is informed speculation.
ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot do not cite the same businesses for the same queries. Our audits consistently show that a business scoring well on one platform can be invisible on another. Perplexity tends to cite more sources explicitly. ChatGPT is more likely to synthesise without attribution. Google's AI Overviews favour content already ranking in traditional search. This means a single-platform GEO strategy is inherently fragile.
Businesses that produce clear, well-structured content that directly answers common questions tend to be cited more often. This includes FAQ pages, "how it works" explainers, and content that uses schema markup. The Georgia Tech GEO research found that adding citations, quotations from authoritative sources, and statistics to content improved LLM citation rates by up to 40%. Content that reads like it was written for AI extraction — clear, factual, attributable — performs better than content written purely for human engagement.
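One concrete way to make FAQ content machine-readable is schema.org's FAQPage markup, embedded in a page as a JSON-LD block. A minimal sketch of generating that block — the questions, answers, and figures below are invented placeholders, not real audit content:

```python
import json

# Hypothetical FAQPage structured data; every question and answer
# here is an illustrative placeholder.
faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What does the service cost?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Pricing starts at £99 per month for the basic plan.",
            },
        },
        {
            "@type": "Question",
            "name": "Which regions do you serve?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "We operate across the UK, US, Germany, and France.",
            },
        },
    ],
}

# Embedded in the page as:
# <script type="application/ld+json"> ...this JSON... </script>
print(json.dumps(faq_schema, indent=2))
```

The point of the markup is exactly the "written for AI extraction" quality described above: each question maps to one clear, attributable answer that a crawler or model can lift without parsing surrounding page layout.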
AI models don't just read your website. They've been trained on the entire web. Businesses that appear on trusted third-party domains — industry publications, review sites, Wikipedia, professional directories — tend to be cited more consistently. In our audits, businesses with strong earned media presence almost always outperform businesses that only invest in their own domain, even when the owned content is excellent. PR, in the traditional sense, may be one of the strongest GEO signals.
Run the same query on ChatGPT today and tomorrow, and you may get different brands cited. LLMs are probabilistic, not deterministic. Citation scores fluctuate weekly. This creates a measurement challenge: a single snapshot can be misleading. Meaningful trends only emerge over quarterly periods. Anyone claiming to offer real-time GEO tracking is measuring noise, not signal. This is why our methodology uses structured query frameworks across multiple time windows.
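The measurement idea here — treat each AI answer as one sample, not the truth — can be sketched in a few lines. This is a simplified illustration, not our audit methodology: `run_query` is an assumed stand-in for whatever returns the brands cited in a single AI answer, and the stubbed answers are invented.

```python
import itertools
from collections import Counter

def citation_frequency(run_query, query, runs=20):
    """Estimate the fraction of runs in which each brand is cited.

    LLM answers are probabilistic, so one answer is one sample;
    repeating the query and counting gives a frequency estimate
    instead of a misleading single snapshot.
    """
    counts = Counter()
    for _ in range(runs):
        # set() so a brand mentioned twice in one answer counts once
        counts.update(set(run_query(query)))
    return {brand: n / runs for brand, n in counts.items()}

# Deterministic stub standing in for a real LLM call (an assumption
# for illustration only):
_answers = itertools.cycle([["A", "B"], ["A"], ["A", "C"], ["A", "B"]])
freq = citation_frequency(lambda q: next(_answers),
                          "best bakery in Bristol", runs=4)
print(freq)  # brand "A" appears in every run, "B" in half, "C" in one
```

In practice the same aggregation would be repeated across separate time windows, so that a week-to-week fluctuation in any one window doesn't get read as a trend.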
Our multi-country audits show that the same business can be recommended in the UK but invisible in Germany, or cited in the US but not in France. Language matters. Local sources matter. Regional search behaviour patterns influence AI training data. Most GEO services operate in English only, in a single market. This leaves a massive blind spot for any business operating internationally. We believe international GEO research is one of the most underexplored areas in the field.
When a business tells the same story across multiple sources — website, press, industry directories, LinkedIn, review sites — AI models appear to develop a stronger "understanding" of what that business does and who it's for. Businesses with fragmented or contradictory positioning across different channels tend to receive weaker, less specific citations. The implication: GEO is partly a business consistency exercise.
The GEO space is evolving rapidly. Two years ago, none of these services existed. Now there's an emerging ecosystem of tools, agencies, and methodologies — all trying to solve the same problem from different angles.
Dashboard tools like Otterly.ai, Sight, and Peec.ai offer self-serve LLM monitoring. PR agencies like Hotwire, Brands2Life, and Ambitious PR are adding GEO to retainer services. Analytics platforms like Authoritas provide the underlying data infrastructure. Agency services like Muck Rack's Generative Pulse, Impression Digital, and C8 Consulting are developing proprietary GEO frameworks.
What's notably absent from most of these approaches is regular, data-led international research. Most operate in a single market. Most focus on English-language queries. Most publish case studies rather than ongoing research. The field needs more data, shared openly, with honest confidence levels.
That's the gap we're aiming to fill.
We're developing ongoing international GEO research tracking how AI answers vary across countries, languages, and platforms. More to come.
Our ongoing GEO research programme is designed around the questions that matter most to businesses. These are the themes we're actively investigating through our audit data and dedicated research queries.
How do ChatGPT, Perplexity, Gemini, Claude, and Bing Copilot differ in what they cite and how? Which platforms are most volatile? Which are most consistent? When one platform starts citing a business, do others follow?
How do AI recommendations differ between the UK, US, Germany, France, and other markets? Do localised queries produce fundamentally different business recommendations? How much does language affect citation?
What types of content correlate most strongly with AI citation? How important are schema markup, FAQ pages, structured data, and authoritative third-party mentions? Can we isolate individual signals?
How quickly does new content get picked up by AI platforms? How long does a citation last? Is there a "half-life" for AI visibility? What causes a business to drop out of AI recommendations?
Does GEO work differently in professional services vs. consumer businesses? How do B2B and B2C citation patterns differ? Are some sectors more "GEO-ready" than others?
How directly does traditional PR activity translate to AI citations? Is earned media the strongest GEO signal? How long after publication does a press mention start appearing in AI answers?
New entries are added as we publish findings. This section will grow as our audit dataset expands and we run dedicated research sprints.
When we ran our first Citation Authority audits — on our founder's family members' businesses — the results were stark. A beauty salon in Wiltshire scored 3/100. A bakery in Bristol scored 6/100. An interactive display company scored 28/100. A NASDAQ-listed aviation company scored 62/100. The pattern was clear: most businesses have no idea what AI is saying about them, and most AI platforms are saying very little.
What surprised us wasn't the low scores — it was the variation between platforms. The same business could be recommended by Perplexity but invisible on ChatGPT. Cited correctly by Gemini but described inaccurately by Bing Copilot. This cross-platform inconsistency isn't a bug. It's the fundamental challenge of GEO. And it's what makes single-platform measurement dangerous.
Our first dedicated research piece will analyse citation consistency across five major AI platforms for a set of B2B businesses in the UK. How often do platforms agree? When they disagree, which platform is the outlier? What does this mean for measurement methodology?
Using our multi-country audit framework, we'll compare how the same businesses are cited in UK English vs. US English queries. Initial observations suggest significant variation — this study will quantify it.
We're not an academic institution. We're a commercial business that runs AI visibility audits. Our research comes from a specific perspective: we want to understand GEO well enough to give our clients genuinely useful advice. That means being rigorous about what we claim and honest about what we don't know.
Every Citation Authority audit we run generates data about AI citations across the platforms and query types in our methodology. We also track patterns across sectors, platforms, and geographies. When we run dedicated research to test specific ideas, we include it here with clear confidence levels.
Every finding includes a confidence level. "High" means we've seen it consistently and it's supported by external research. "Medium" means we've observed the pattern but can't be certain. "Low" means it's emerging or based on limited data. We're honest about the limits of what we know.
We track emerging GEO research and contribute when we can. We don't claim to be leading researchers — GEO is community knowledge. What we do is measure how it works across platforms and share observations that help businesses understand their AI visibility.
This research is part of what makes Known & Cited different. We don't just run audits — we invest in understanding how AI citations work across platforms, countries, and languages, and we share what we learn. Find out why that matters →
Our Citation Authority audits measure your AI visibility across five platforms, in any market, in any language. Start with a free AVS Flash or talk to us about a full engagement.