TL;DR: LLMs.txt is a proposed file intended to control how AI models use website content.
Despite widespread claims, OpenAI, Google, and Anthropic have all stated that they do not currently use LLMs.txt for search, answers, or training.
This article explains what LLMs.txt is, why it has no practical effect today, and what UK site owners should focus on instead.
What Is LLMs.txt?
LLMs.txt is a proposed text file placed at the root of a website, intended to communicate how large language models may use site content.
It is often described as “robots.txt for AI”, but that comparison is misleading. Robots.txt is a long-established standard (formalised as RFC 9309) that search engines voluntarily respect. LLMs.txt is neither standardised nor widely supported.
There is no governing body, no enforced syntax, and no shared interpretation of what its directives mean.
Do OpenAI, Google, or Anthropic Use LLMs.txt?
The short answer is no.
OpenAI, Google, and Anthropic have all stated publicly that LLMs.txt is not used to control training, search results, or generated answers.
In June 2025, Google’s John Mueller stated unequivocally:
“No AI system currently uses llms.txt.”
He emphasised that server logs clearly show AI bots aren’t even checking for these files, making them essentially useless for website owners.
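You can check this claim against your own server logs. The sketch below counts requests for /llms.txt versus /robots.txt in access-log lines; the sample lines and the crawler names in them are invented for illustration, so point the same pattern at your real log file to see whether any bot is actually requesting the file.

```python
# Sketch: count which well-known files crawlers actually request.
# The log lines below are invented sample data for illustration only.
import re

sample_log = """\
203.0.113.5 - - [12/Jun/2025:10:01:22 +0100] "GET /robots.txt HTTP/1.1" 200 512 "-" "ExampleBot/1.0"
203.0.113.5 - - [12/Jun/2025:10:01:23 +0100] "GET /pricing HTTP/1.1" 200 9182 "-" "ExampleBot/1.0"
198.51.100.7 - - [12/Jun/2025:11:45:09 +0100] "GET /robots.txt HTTP/1.1" 200 512 "-" "OtherBot/1.0"
"""

# Pull the requested path out of each "GET <path> HTTP" request line.
path_pattern = re.compile(r'"GET (\S+) HTTP')
paths = path_pattern.findall(sample_log)

print("robots.txt requests:", paths.count("/robots.txt"))  # 2 in this sample
print("llms.txt requests:", paths.count("/llms.txt"))      # 0 in this sample
```

On a real site, a zero count for /llms.txt over weeks of logs tells you everything you need to know about whether crawlers are looking for it.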
OpenAI relies on a combination of robots.txt, publisher agreements, and internal dataset curation. Google has explicitly said that it does not use LLMs.txt for Search, AI Overviews, or Gemini outputs. Anthropic publishes crawler guidance but does not treat LLMs.txt as authoritative.
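Because robots.txt is the mechanism these vendors say they honour, it is worth knowing how to test your own rules. The sketch below uses Python's standard-library parser; the crawler tokens shown (GPTBot for OpenAI, Google-Extended for Google's AI training opt-out) match the vendors' published documentation at the time of writing, but verify current names before relying on them.

```python
# Sketch: test robots.txt rules against named AI crawler tokens.
# GPTBot and Google-Extended are the vendors' documented tokens at the
# time of writing -- check each vendor's crawler docs before deploying.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example.co.uk/pricing"))        # False
print(parser.can_fetch("Google-Extended", "https://example.co.uk/pricing"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.co.uk/pricing"))  # True
```

Note this only blocks crawlers that choose to honour robots.txt; it is an opt-out request, not an enforcement mechanism, which is exactly why vendor compliance statements matter more than any file format.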
This is not speculation. It is a stated position from each vendor.
Why There Is No Agreed Standard
LLMs.txt fails where robots.txt succeeded because the underlying problem is harder.
There is no consensus on what “training” means in operational terms. There is no clear separation between inference, summarisation, and model improvement. There is also no agreement on how partial permissions should be interpreted.
From a UK legal perspective, relying on an informal text file is weaker than relying on copyright law, database rights, or contractual licensing.
Until a recognised standards body defines behaviour and obligations, AI vendors have little incentive to comply.
Does LLMs.txt Do Anything Today?
In practical testing, no measurable effect has been observed.
LLMs.txt does not increase AI citations, prevent summarisation, or control how answers are generated. It does not override robots.txt, nor does it carry legal force.
At best, it functions as a policy signal. It expresses intent, not control.
Treating it as an AI SEO tactic is a category error.
A Real Example From QED
QED does have an LLMs.txt file in place.
However, QED content has been cited by large language models independently of that file. No change in AI visibility was observed after its introduction.
What did matter was structure, clarity, and UK-specific framing. Pages explaining legislation, compliance, and technical concepts were cited because they were easy to interpret accurately.
That aligns with vendor statements and with observed behaviour. For more, see our post “How to Get Cited by Google, ChatGPT, and Other AI Tools”.
What Actually Drives AI Citation
AI systems favour content that is explicit, structured, and narrow in scope.
Clear definitions outperform opinion pieces. UK-specific context outperforms generic advice. Internal linking that reinforces topical depth improves comprehension.
This mirrors what we already see in sustainable SEO practice. If a page is unclear to a human reader, it will be unclear to a language model.
Common Misconceptions About LLMs.txt
LLMs.txt does not block AI access.
It does not guarantee attribution.
It does not replace copyright notices or licensing.
Most importantly, it does not compensate for weak content.
Treating it as a defensive mechanism distracts from the fundamentals that actually influence visibility.
When LLMs.txt Might Matter in Future
There is a plausible future where an AI usage control file becomes meaningful.
That would require a ratified specification, vendor adoption, and alignment with copyright and licensing law. None of those conditions exists today.
Until they do, LLMs.txt remains aspirational rather than operational.
What UK Site Owners Should Do Instead
First, avoid anyone who talks about GEO, AIO, or AI search as if it were a separate discipline from SEO. See our post: SEO is Dead.
Ignore anyone who suggests LLMs.txt makes a difference to whether ChatGPT or Claude cites your content. See our post: How to get cited by AI tools.
In short, if you want visibility in AI answers, focus on what works now.
- Write definitions, not waffle.
- Answer questions directly.
- Add UK-specific legal, pricing, or regulatory context.
- Build intentional internal linking between related topics.
If your content cannot be summarised accurately, no text file will fix that.
If you want help auditing where clarity breaks down, contact QED, and we will show you exactly where interpretation fails.


