SEO
May 3, 2026
Getting Cited by ChatGPT: 90 Days of Citation Tracking Data

What 90 days of citation tracking actually showed
Getting cited by ChatGPT means your page is retrieved through Bing, ranked on domain authority, content quality, and platform trust, then extracted as a citable passage. You can influence each signal. Over 90 days, we tracked citation behavior across Gravidy and client domains to find which changes actually moved the needle on ChatGPT citations, and which did nothing.
The setup was simple. We monitored a fixed set of target queries across Gravidy's own domain and a handful of client B2B SaaS sites, logging which URLs ChatGPT cited in browsing mode. We checked the same query set weekly. Every citation gained, every citation lost, and every structural change made to the underlying pages in between went into the log. More on the tracking methodology itself is in a separate post.
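The weekly logging loop can be sketched in a few lines. This is a minimal illustration, not our actual tooling: the query, URLs, and dates below are invented, and the real methodology post covers how the citations themselves get collected.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CitationCheck:
    """One weekly observation for one tracked query."""
    query: str
    checked_on: date
    cited_urls: list[str] = field(default_factory=list)

def diff_citations(previous: CitationCheck, current: CitationCheck) -> dict:
    """Report which cited URLs were gained or lost between two checks."""
    prev, curr = set(previous.cited_urls), set(current.cited_urls)
    return {"gained": sorted(curr - prev), "lost": sorted(prev - curr)}

# Hypothetical observations for one query, one week apart.
week1 = CitationCheck("best b2b saas seo tools", date(2026, 1, 5),
                      ["https://example.com/tools-guide"])
week2 = CitationCheck("best b2b saas seo tools", date(2026, 1, 12),
                      ["https://example.com/tools-guide",
                       "https://example.com/pricing-benchmarks"])

print(diff_citations(week1, week2))
# {'gained': ['https://example.com/pricing-benchmarks'], 'lost': []}
```

Pairing each gained or lost URL with the structural changes shipped that week is what makes the later tactic-by-tactic comparison possible.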
Three patterns produced almost every citation gain in the dataset.
Front-loaded direct answers under 60 words. Structured facts paired with a named inline source. FAQ sections placed before the conclusion. Pages that did none of the three did not earn a single new citation across the 90 days, regardless of their domain rating.
Pages that earned their first citation within seven days of a rewrite all shared the same shape: a one-paragraph direct answer at the top, four to six declarative facts with attribution in the middle, and a question-and-answer block near the bottom. Pages that only added schema, with no rewrite of the prose itself, did not move.
Citation frequency across the broader market doubled between late 2025 and early 2026, according to Profound data reported by Erlin. The window for getting picked up is open right now in a way it was not twelve months ago. If you have been waiting for the dust to settle, the dust is settling on your competitors instead.
For readers new to this framing, our AEO and GEO primer covers the underlying vocabulary.
How ChatGPT selects sources when it browses
ChatGPT does not use Google. In browsing mode, it retrieves pages through Bing, then scores them on three weighted signals: domain authority at roughly 40%, content quality at roughly 35%, and platform trust at roughly 25%, according to ZipTie's analysis. Each response returns three to six clickable citations.
How ChatGPT weights sources in browsing mode
- Domain authority (40%)
- Content quality (35%)
- Platform trust (25%)
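Treated as a simple linear model, ZipTie's reported weighting reduces to a weighted sum. The sketch below is illustrative only: the weights are ZipTie's estimates, not an official formula, and the 0 to 100 signal values are invented.

```python
# ZipTie's estimated weights for ChatGPT's browsing-mode source scoring.
WEIGHTS = {"domain_authority": 0.40, "content_quality": 0.35, "platform_trust": 0.25}

def retrieval_score(signals: dict[str, float]) -> float:
    """Weighted sum of the three signals, each scored 0 to 100."""
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

# Invented example: a low-DA page can outscore a high-DA page
# by winning on the other two signals.
low_da_page = {"domain_authority": 25, "content_quality": 90, "platform_trust": 85}
high_da_page = {"domain_authority": 80, "content_quality": 40, "platform_trust": 50}

print(retrieval_score(low_da_page), retrieval_score(high_da_page))
```

Here the low-DA page scores roughly 62.8 against the high-DA page's 58.5, which is the arithmetic behind the later point that authority sets a floor rather than a gate.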
Google rank still matters indirectly. Authority signals tend to travel across both indexes, which is why high-ranking Google pages also tend to surface in ChatGPT. But the direct retrieval mechanism is Bing. If your site has thin Bing indexing, no amount of Google rank fixes that.
The other distinction worth internalizing: ChatGPT mentions brands roughly three times more often than it actually cites them with a clickable URL, again per ZipTie. A mention without a link gives you zero referral traffic and is nearly impossible to track without specialist tooling. Optimizing for citation, not just mention, means making sure a specific URL surfaces, which means making sure a specific passage on that URL is extractable on its own.
Training data also shapes what ChatGPT "knows" about your brand when it is not browsing. Roughly 60% of its training corpus came from Common Crawl, as ZipTie notes. That affects baseline familiarity. It does not affect live citations.
The content signals that predict whether your page gets extracted
Erlin reports that pages with nine or more structured, attributable facts achieve 78% average AI coverage. Position within the page also matters: 44% of all ChatGPT citations come from the first 30% of a page, based on Kevin Indig's 2026 study of 1.2 million citations. Front-loading your core claim is no longer a stylistic preference. It is a retrieval requirement.
Four signals showed up in nearly every cited passage we tracked.
Declarative sentences over hedged language. "Brands updating content monthly see 23% higher AI coverage" gets extracted. "Brands that update content tend to perform better" does not, as Erlin demonstrates. The first sentence is a fact. The second is mush.
Q&A formatting. ChatGPT scans for question-answer pairs because they map directly onto the user's prompt. A question phrased the way someone would ask it, followed by a short answer, behaves like a pre-built citation candidate.
Inline attribution to named sources. Citing a third party inside your prose, for example "according to Statista, 27% of internet users used voice search on mobile in 2023", signals that your content meets the same evidentiary standard ChatGPT itself applies when picking citations.
Self-contained passages. A sentence that requires the reader to scroll up two paragraphs for context cannot be lifted. A sentence that names the subject, the metric, and the source in one breath can.
The 44% positional bias is not laziness on ChatGPT's side. It is an efficiency heuristic that mirrors how a human skims a page when looking for one specific fact. Top of the page, find it, leave. The same principles apply to Google AI Overviews with minor differences in how trust signals get weighted.
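As a rough self-audit, the two positional signals above can be scripted against a page's plain text. The thresholds (core claim in the first 30%, opening answer under 60 words) are this article's numbers applied naively; the function name and sample text are our own illustration.

```python
def front_load_check(page_text: str, core_claim: str) -> dict:
    """Naive extractability audit on plain text: is the core claim inside
    the first 30% of the page, and is the opening paragraph a direct
    answer under 60 words?"""
    words = page_text.split()
    first_30pct = " ".join(words[: max(1, len(words) * 30 // 100)])
    first_para = page_text.strip().split("\n\n", 1)[0]
    return {
        "claim_in_first_30pct": core_claim in first_30pct,
        "direct_answer_under_60_words": len(first_para.split()) < 60,
    }

sample = ("Front-loading works: put the answer first.\n\n"
          "A much longer discussion of caveats and methodology follows, "
          "padded out across several additional paragraphs.")
print(front_load_check(sample, "put the answer first"))
# {'claim_in_first_30pct': True, 'direct_answer_under_60_words': True}
```

A page failing either check is the "buried answer" pattern the audit section below describes.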
Why low domain authority does not disqualify you
Sites with more than 32,000 referring domains are 3.5x more likely to be cited than lower-authority sites, per SE Ranking data cited by Yotpo. The correlation is real. It is also a correlation, not a hard gate.
In our 90-day sample, several pages with domain ratings under 30 earned ChatGPT citations once their content structure met every extractability signal. Authority sets a floor. Content structure lifts you above it.
Platform trust, the third weighted signal at roughly 25%, is partially controllable in ways DA is not. HTTPS. Page speed. Schema. Clean technical hygiene. None of these require a six-month link campaign.
Page speed in particular punches above its weight. SE Ranking found that pages with First Contentful Paint under 0.4 seconds average 6.7 citations, while pages over 1.13 seconds drop to 2.1. Three times the citation rate, purely from rendering faster.
For a B2B SaaS site with a modest link profile, the takeaway is direct: structural and technical changes are the highest-ROI lever available before a long-term link campaign yields DA gains. This is the same shape of SEO engagement we run for clients with strong product-market fit and weak link profiles. You can fix front-loading this week. You cannot fix DA this week.
Citation tactics ranked by observed impact
The table below summarizes how each tactic performed in our 90-day tracking set. Impact is measured as the share of pages that gained at least one new citation after the change was applied. The sample is small enough that directionality matters more than precise hit rates.
| Tactic | Observed citation gain | Median days to first citation | Implementation effort |
|---|---|---|---|
| Front-loaded direct answer (under 60 words) | High | 4 to 9 days | Low |
| FAQ section addressing PAA questions | High | 5 to 12 days | Low |
| Structured facts with named inline source | High | 7 to 14 days | Low to medium |
| Monthly content refresh with new data | Medium | 14 to 30 days | Medium |
| FAQ + Article schema markup | Medium | 7 to 21 days | Medium |
| Referring domain growth | Medium-high, compounding | 30 to 90+ days | High |
Two notes from the data. Schema markup alone, without rewriting the prose underneath, did not produce citation gains in any tracked page. The combination of front-loaded answer plus FAQ block was the single highest hit-rate pairing. If you only have time to ship two changes this quarter, ship those two.
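Since schema only paid off when the prose underneath matched it, the markup itself should stay trivial to generate from the page's actual Q&A text. A minimal sketch of FAQPage JSON-LD generation follows; the helper function is ours, but the `@context`/`@type`/`mainEntity` structure is standard schema.org FAQPage vocabulary.

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Build FAQPage JSON-LD from (question, answer) pairs.

    The pairs should mirror Q&A prose that already exists on the page;
    in our tracking set, schema without matching prose did not move citations.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

print(faq_jsonld([
    ("How does ChatGPT choose its sources?",
     "It retrieves pages through Bing, then scores them on domain "
     "authority, content quality, and platform trust."),
]))
```

Generating the markup from the same strings that appear in the visible FAQ block keeps the two from drifting apart, which is the failure mode behind "schema alone did nothing."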
The same tactic stack applies to Perplexity citations, with small differences in how platform trust weights resolve.
Frequently asked questions
How does ChatGPT choose its sources?
ChatGPT uses Bing to retrieve candidate pages, then scores them on domain authority (roughly 40%), content quality (roughly 35%), and platform trust (roughly 25%), per ZipTie's breakdown. It prioritizes pages with structured, extractable facts positioned early in the document. The browsing model returns three to six clickable citations per response.
How do I get my website cited by ChatGPT?
Structure your page so the core answer appears in the first 60 words. Add a FAQ section that mirrors the exact phrasing of common search queries. Use declarative sentences with named sources rather than hedged claims. These three changes drove citation gains across pages with varying domain authority levels in our 90-day tracking set.
Can ChatGPT cite a low-authority website?
Yes. Domain authority raises your probability of citation but does not set a hard minimum. Pages with domain ratings under 30 earned citations in our sample once their content structure met every extractability signal. The structural signals are the lever you can move fastest.
What the 90 days do not yet answer
Three findings hold up across the dataset. Content structure predicted citation gains more reliably than DA on the time horizons we tracked. The combination of front-loaded answers, FAQ blocks, and inline-attributed facts produced the highest yield. Schema alone did nothing without prose rewrites underneath.
What the data does not yet answer: how citation behavior differs by query category (commercial vs informational), how it varies by sector, and how stable any individual citation is across model updates. We are still tracking. The honest answer to "is this a permanent ranking factor map" is no, not yet.
Most B2B sites we audit have three to five extractability fixes sitting in plain sight: a buried answer, a missing FAQ block, hedged language where a number should be. If you want to know which ones are costing you ChatGPT citations right now, book a free SEO audit call. Thirty minutes, specific findings, no slide decks.


