Only 11% of Domains Get Cited by Both ChatGPT and Perplexity. Generic AEO Is Over.
TL;DR: Leapd's analysis of 680 million AI citations across ChatGPT, Google AI Overviews, and Perplexity shows only 11% of domains are cited by both ChatGPT and Perplexity. The "optimize once, win everywhere" pitch most AEO agencies sell does not match how the engines actually behave. Each platform has its own retrieval pipeline, its own freshness window, and its own preferred sources. In 2026, the small businesses that win are the ones running platform-specific GEO instead of one-size-fits-all AEO.
If you've ever stared at your AEO scorecard and wondered why Perplexity cites you weekly while ChatGPT acts like you don't exist, you're not imagining it. Five engines run AI search. They behave like five different products. Same category, different citation set.
That has a real consequence for small businesses. One AEO checklist, run uniformly across a site, leaves most of the citation opportunity on the table.
The 11% number, and why it matters
Leapd’s 2026 analysis covered 680 million AI citations across ChatGPT, Google AI Overviews, and Perplexity. The headline finding: only 11% of domains receive citations from both ChatGPT and Perplexity. The other 89% are platform-specific. A separate study of 34,234 AI responses found a 46-fold variation in citation frequency between platforms, with ChatGPT citing brands in 0.59% of responses versus Perplexity at 13.05%.
These are not noise gaps. The engines are built on different retrieval pipelines and they reward different signals.
The implication is blunt. A business that optimizes a single AEO scorecard and assumes all five engines will respond the same way is optimizing for the smallest possible intersection. The platform the customer happens to open decides whether you show up at all.
Why the engines behave differently
Each engine has its own retrieval pipeline. The differences are not cosmetic. They change the tactics.
| Engine | Retrieval model | Freshness window | Preferred sources |
|---|---|---|---|
| ChatGPT | Mixed: training corpus plus browsing | 6 to 18 months for trained answers, hours for browsing | Established domains with citation breadth, FAQ-formatted pages, comparison tables |
| Perplexity | Real-time retrieval | Days to weeks | Recent content, structured passages, freshness markers |
| Google AI Overviews | Real-time retrieval grounded in Google search index | 4 to 8 weeks | Top-ranking SEO results plus pages with strong schema and FAQ markup |
| Gemini | Mixed: training plus Google search grounding | Mirrors AI Overviews on grounded queries | Same as Google AI Overviews, plus entity in Knowledge Graph |
| Claude | Training corpus plus tool-use retrieval | 6 to 18 months for training, real-time when tools fire | Authoritative public mentions, schema, content depth |
Two engines on the same row of that table are still not interchangeable. ChatGPT and Claude both run on training-corpus latency, but ChatGPT pulls from web crawls weighted toward citation breadth, while Claude weights authoritative reputation higher. Perplexity and Google AI Overviews both run on real-time retrieval, but Perplexity rewards recent independent publishing, while AI Overviews still leans on Google’s SEO authority graph.
The tactical implication: optimizing for one engine pulls levers a different engine ignores.
What still works on all five
Before going platform by platform, get the baseline right. The eleven signals on the EVOIX research methodology page are the umbrella: entity clarity, structured data, content depth, heading structure, FAQ format, information gain, authority outbound links, citation breadth, EEAT and author signals, freshness, and technical hygiene.
Skip the baseline and no platform-specific tuning will save you. Schema, FAQ format, and entity work are table stakes. The differentiation lives one layer above that.
If you haven't read the umbrella primer yet, start with AEO vs SEO: Why Your Small Business Needs Both. This post assumes you have.
Platform-specific tactics for ChatGPT
ChatGPT cites only 15% of the pages it retrieves, and according to Authoritas data, pages with FAQ schema and inline citations are weighted approximately 40% higher in source selection than pages without these elements. ChatGPT’s December 2025 algorithm change pushed citations per response from 5.7 to 10.4, a jump of 81%, with the algorithm tuned to favor recentness and momentum.
Tactical implications for ChatGPT:
- FAQ format with explicit Q+A pairs at the page level. Not buried in a sidebar, but structured into the body content.
- Citation velocity: a steady cadence of new mentions, brand pickups in third-party content, and content updates over the trailing 30 to 60 days. ChatGPT now favors momentum over static authority.
- Front-loaded answers. Research shows the first 30% of a page accounts for 44.2% of all LLM citations. TL;DR sections, summary callouts, and direct answers in the first major section are doing disproportionate work.
- Authority breadth across third-party sources rather than one strong page. ChatGPT’s training corpus rewards a brand mentioned in many independent contexts.
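The FAQ tactic above pairs naturally with FAQPage structured data. A minimal sketch in Python that emits schema.org JSON-LD; the questions, answers, and wording here are placeholders, not a prescription:

```python
import json

# Hypothetical Q&A pairs. In practice these should mirror the visible
# question-format headings and answers in the page body.
faqs = [
    ("How long does a typical project take?",
     "Most projects run four to six weeks from kickoff to launch."),
    ("Do you offer ongoing support?",
     "Yes, monthly maintenance plans are available after launch."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Embed the output in a <script type="application/ld+json"> tag.
print(json.dumps(faq_schema, indent=2))
```

The key design point: the structured data should describe Q+A pairs that actually appear on the page, not a separate hidden set.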
Platform-specific tactics for Perplexity
Perplexity is the easiest engine to win on for a small business with a clean site, because it runs on real-time retrieval and rewards freshness over corpus weight. A new page can pick up Perplexity citations within days, not months.
Tactical implications for Perplexity:
- Visible last-updated dates and recent dateModified in schema. Perplexity reads freshness signals literally.
- Structured passages of 134 to 167 words with clear question-format H2s. The format Perplexity extracts cleanly.
- Original data and named frameworks. Perplexity preferentially cites primary sources because it can ground a single citation in a single source.
- Speed of publishing. A small business that publishes new content weekly sees Perplexity citations build faster than one that publishes monthly.
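The freshness signal above maps directly onto Article schema. A sketch with placeholder values, showing `datePublished` alongside a `dateModified` that moves forward each time the page is genuinely updated:

```python
import json
from datetime import date, timedelta

# Placeholder dates. The point is that dateModified stays recent and
# matches the visible last-updated date on the page.
modified = date.today()
published = modified - timedelta(days=90)

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example service guide",  # placeholder title
    "datePublished": published.isoformat(),
    "dateModified": modified.isoformat(),
}

print(json.dumps(article_schema, indent=2))
```

One caution: bumping `dateModified` without changing the content is a signal engines can discount, so tie the date to real edits.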
Platform-specific tactics for Google AI Overviews
Google AI Overviews appear in 21 to 48% of searches depending on industry, and 83% of AI Overview citations come from pages outside the organic top 10. SEO ranking still matters, but it is no longer the dominant factor.
Tactical implications for AI Overviews:
- Schema markup completeness, especially Organization, LocalBusiness, Service, and FAQPage, since AI Overviews lean heavily on structured data.
- Long-form service pages, 1,500-plus words minimum. AI Overviews do not cite short posts.
- Google Business Profile activity. Frequent photo uploads and updates are now a top-tier ranking signal for local AIO.
- Question-format headings that match conversational search queries. AI Overviews are triggered by natural-language questions more than keyword searches.
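The schema-completeness lever above can be illustrated with a minimal LocalBusiness sketch. Every name, address, and service here is a placeholder:

```python
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Plumbing Co",   # placeholder business name
    "url": "https://example.com",    # placeholder URL
    "telephone": "+1-555-0100",      # placeholder phone
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
        "addressRegion": "IL",
        "postalCode": "62701",
    },
    # Offers should correspond to the long-form service pages.
    "makesOffer": [
        {
            "@type": "Offer",
            "itemOffered": {"@type": "Service", "name": "Drain cleaning"},
        }
    ],
}

print(json.dumps(local_business, indent=2))
```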
Platform-specific tactics for Gemini
Gemini behaves like a hybrid. On grounded queries, it mirrors AI Overviews. On training-only queries, it leans on its corpus. The practical implication is that optimizing for Google AI Overviews captures most of the Gemini opportunity, with one extra lever: explicit entity presence in the Knowledge Graph.
Tactical implications for Gemini:
- Everything in the Google AI Overviews list above.
- Entity sameAs links in Person and Organization schema pointing to verifiable public profiles (LinkedIn, Crunchbase, Google Business Profile).
- Wikidata and alternative wiki entries are increasingly difficult to land for small businesses, but a Google Knowledge Panel triggered by a verified GBP plus a clean Person schema is the realistic path in 2026.
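The sameAs lever above is just a short list of verifiable profile URLs inside Organization schema. A sketch with placeholder URLs; each entry should point at a real profile for the same entity:

```python
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",          # placeholder
    "url": "https://example.com",  # placeholder
    # Each sameAs URL is a placeholder; in practice they must resolve
    # to live, verifiable profiles for this exact organization.
    "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://www.crunchbase.com/organization/example-co",
    ],
}

print(json.dumps(organization, indent=2))
```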
Platform-specific tactics for Claude
Claude is the hardest engine to optimize directly because most public Claude usage runs through training-corpus answers without real-time retrieval. The lever that moves Claude is reputation in the slice of the public web that ends up in training data.
Tactical implications for Claude:
- Authoritative public mentions in independent sources. Niche industry blogs, podcasts, and journalist quotes weigh more than directory listings.
- Person and Organization schema with credentials, alumniOf, and verifiable hasCredential properties.
- Long-form content with strong outbound citations to .gov, .edu, and recognized institutions. The eleven-signal framework on the GEO pillar guide covers the structural pattern in detail.
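The credential properties named above sit on Person schema. A sketch with placeholder names and credentials:

```python
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",      # placeholder
    "jobTitle": "Founder",   # placeholder
    "worksFor": {"@type": "Organization", "name": "Example Co"},
    "alumniOf": {
        "@type": "CollegeOrUniversity",
        "name": "Example State University",  # placeholder
    },
    "hasCredential": [
        {
            "@type": "EducationalOccupationalCredential",
            "credentialCategory": "certification",
            "name": "Licensed Master Plumber",  # placeholder
        }
    ],
}

print(json.dumps(person, indent=2))
```

The verifiable part matters more than the markup: a credential that cannot be checked against a public source carries little weight.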
How to actually run platform-specific GEO without going broke
You cannot run five optimization programs at once. Pick two. Build the baseline strong enough that the other three benefit indirectly.
For most small businesses in 2026, the right two are Google AI Overviews and ChatGPT. AI Overviews because it sits inside Google search, which still owns the largest slice of small-business discovery. ChatGPT because its usage volume is the highest of any pure AI search engine, and because customers use it most for vendor research.
A practical starting cadence:
- Get the baseline right first. Schema, FAQ format, entity clarity, content depth on primary service pages. The eleven signals.
- Add the AI Overviews layer. Long-form 1,500-plus word service pages with FAQPage schema, GBP weekly photo uploads, schema completeness on every page.
- Add the ChatGPT layer. Citation velocity through a steady publishing cadence, third-party mentions, comparison and listicle pages with clean FAQ extracts.
- Measure per platform. Run the free AI Readiness Audit to see your AI visibility score broken out by ChatGPT, Gemini, and Claude. Run it monthly to track which lever is moving which engine.
The 11% number is the punch line. The lever is platform-specific GEO. The implementation is one cadence, layered correctly.
What to read next
- The EVOIX AEO Score Methodology: the eleven signals scored against a normalized rubric.
- Generative Engine Optimization: The Complete Guide for 2026: the long-form pillar with deeper coverage of each lever.
- How to Get Your Business Cited in ChatGPT and AI Search Results: the foundational tactic post.
- The T in ChatGPT: What AI Search Optimization Actually Requires in 2026: a deeper read on the transformer-layer side of GEO.
Run the free AI Readiness Audit and you will get your platform-by-platform breakdown in about 30 seconds. That is the starting point for any platform-specific GEO program.