How Claude Scores Your llms.txt (It's Not Pass/Fail)
Claude doesn't just check whether llms.txt exists - it grades it on a 4-level rubric. Tidio's 251-line llms.txt helped earn a +14 Claude bonus. HelpSquad's missing llms.txt? A -5 Claude penalty.
Questions this article answers
- How does Claude use llms.txt to rank and cite websites?
- What should I include in my llms.txt file to improve Claude visibility?
- Does llms.txt quality affect Claude scores differently than ChatGPT?
Quick Answer
Claude grades llms.txt on four levels: Level 1 (exists with basic description), Level 2 (structured URLs for key pages), Level 3 (detailed descriptions per section), Level 4 (links to llms-full.txt with full content inventory). Tidio's 251-line llms.txt - packed with product descriptions and integration docs - landed at Level 3 and helped drive a +14 Claude bonus. HelpSquad had no llms.txt at all: Claude 42, ChatGPT 47. That -5 penalty came straight from governance neglect.
Before & After
Before - Generic llms.txt
```
# Acme Corp

Acme Corp is an industry-leading solution for customer
engagement and support. Learn more at https://acme.com
```
After - Level 3 llms.txt with citable facts
```
# Acme Corp

> Live chat platform serving 12,000 businesses
> since 2018. Average response time: 28 seconds.

## Products
- /product/live-chat: Real-time chat widget
- /product/chatbot: AI chatbot builder

## Documentation
- /docs/api: REST API reference (v3.2)
- /docs/integrations: 85+ integrations
```
Put on Claude's Glasses
Here's what Claude actually sees when it reads your llms.txt - and it's reading harder than you think.
Level 1: the file exists, there's a paragraph about your business. That's a handshake. Claude registers it, gives you a small trust bump, and moves on. No navigational value. No content map.
Level 2: structured URLs for key pages - products, docs, FAQ, about. Now Claude's building an internal map of your site. It can cite specific pages instead of waving vaguely at your domain.

Level 3 adds descriptions per section, giving Claude semantic context it doesn't have to infer from URL slugs.

Level 4 - the top shelf - links to llms-full.txt with a full content inventory: page-level descriptions, content types, update dates, topic taxonomies. Claude treats a well-maintained llms-full.txt as a governance statement: this site cares about machine-readability at a granular level.
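To make Level 4 concrete: the llms.txt header simply points at the full inventory. A minimal sketch - the acme.com domain and the exact wording are placeholders, not a fixed syntax:

```
# Acme Corp

> Live chat platform serving 12,000 businesses since 2018.
> Full content inventory: https://acme.com/llms-full.txt
```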
But here's the twist - volume doesn't win. Claude checks consistency between your llms.txt and your actual site. A 300-line file pointing to 404s scores lower than a tight 50-line file where every URL resolves. Accuracy beats length every time.
Claude also reads the prose. "Industry-leading solution for customer engagement" gives it nothing. "Live chat platform serving 300,000 websites with average response time under 30 seconds" - that's a fact block Claude can cite. It's hunting for citable data, not superlatives.
Why This Is a Claude-Only Lever
ChatGPT and Perplexity treat llms.txt as binary - it exists or it doesn't. File quality? Barely registers. Google AI Overviews doesn't parse llms.txt at all.
Claude's different. The gap between a Level 1 and Level 4 llms.txt shifts Claude's citation confidence by a measurable margin. Here's why: Claude's architecture weights explicit machine-readable signals over inferred ones. ChatGPT will reconstruct your site structure from nav menus and sitemaps. Claude preferentially uses the structured summary you hand it in llms.txt.
This creates an asymmetric opportunity. Investing in a comprehensive llms-full.txt produces disproportionate gains on Claude versus the same investment's impact on ChatGPT. Claude rewards sites that make its job easier through explicit documentation rather than forcing it to piece context together from raw HTML.
The bottom line: if you only care about ChatGPT, a minimal llms.txt (or none) won't hurt you much. If you're targeting Claude citations, you need Level 3 or Level 4 to unlock the compound trust benefit Claude provides to well-documented sites.
The Scoreboard (Real Audit Data)
Tidio.com ran away with this one. Their 251-line llms.txt included product descriptions, integration docs, pricing references, and developer API links - a solid Level 3. Tidio's Claude bonus: +14. That means Claude scored them 14 points higher than technical metrics alone would predict. The llms.txt quality was a major driver.
LiveChat.com kept it tighter - a well-structured Level 2 with accurate URLs and factual descriptions. Key move: LiveChat used specific metrics ("33 million chats monthly") instead of generic marketing. Combined with other governance signals, LiveChat earned a Claude bonus of +12.
Now the other side. HelpSquad.com had no llms.txt. None. Claude had to reconstruct their entire site purpose from raw HTML - a process that introduces uncertainty and kills citation confidence. The result: HelpSquad scored 42 on Claude vs. 47 on ChatGPT. That -5 penalty hit hardest on Claude because ChatGPT doesn't depend on this signal.
The most revealing case: Crisp.chat (overall score: 34) still got a +17 Claude bonus partly from a basic llms.txt. For a low-scoring site, llms.txt quality has outsized impact on Claude - it compensates for weaknesses elsewhere.
LiveHelpNow.net (ChatGPT: 52) sits in the middle - their llms.txt exists at Level 1 with a basic description but no structured URLs. That's a gap waiting to be exploited.
Start Here: Optimization Checklist
Build a Level 2 first, then iterate up. Create llms.txt at your domain root with a one-paragraph business description in factual, third-person prose. No superlatives. Use specific numbers, service descriptions, and geographic scope. Include your business name, founding year, and primary service category.
Add structured URL sections for every major content area. Each URL gets a one-line description. Group by category (Products, Documentation, Blog, Support, About) with clear section headers. Verify every URL returns 200 - broken links in llms.txt actively erode Claude's trust score.
For Level 4, create llms-full.txt. Include page-level descriptions, content type indicators (article, product, FAQ, docs), last-updated dates, topic tags. Reference it from llms.txt with a clear link in the header. This file can run 500 to 2,000 lines for a comprehensive site.
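llms-full.txt has no rigid schema, so treat the entry below as one workable pattern rather than a spec - the field names and the /blog path are illustrative:

```
## /blog/reduce-response-time
Type: article
Updated: 2025-01-15
Tags: response-time, live-chat, benchmarks
How five support teams cut average first-response
time from 4 minutes to under 30 seconds.
```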
Update both files when you publish new content, remove pages, or restructure your architecture. Claude penalizes staleness - a file referencing deprecated URLs signals neglect. Set a quarterly review cadence minimum, monthly for active sites. Automate URL verification with a script that checks for 200 responses.
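A minimal verification sketch using Python's standard library, assuming your llms.txt lists URLs in `- /path: description` bullet lines (adjust the regex to your file's actual layout; example.com is a placeholder):

```python
import re
import sys
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import Request, urlopen

BASE = "https://example.com"  # placeholder: your domain root

def check_llms_txt(base: str) -> int:
    """Fetch llms.txt and verify every listed URL returns HTTP 200."""
    with urlopen(urljoin(base, "/llms.txt")) as resp:
        text = resp.read().decode("utf-8")

    # Match bullet lines like "- /docs/api: REST API reference"
    paths = re.findall(r"^- (\S+):", text, flags=re.MULTILINE)
    failures = 0
    for path in paths:
        url = urljoin(base, path)
        try:
            # HEAD keeps the check cheap; swap to GET if your server rejects it
            status = urlopen(Request(url, method="HEAD")).status
        except HTTPError as e:
            status = e.code
        except URLError as e:
            print(f"ERROR {url}: {e.reason}")
            failures += 1
            continue
        if status != 200:
            print(f"{status} {url}")
            failures += 1
    print(f"{len(paths)} URLs checked, {failures} failures")
    return failures

if __name__ == "__main__":
    sys.exit(1 if check_llms_txt(BASE) else 0)
```

Some servers answer HEAD with 405 even when GET works, so if you see spurious failures, switch the request method before trusting the report.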
Test it: ask Claude directly "What does [your domain] do?" Compare its response to your llms.txt. If Claude echoes your description, the file's being parsed. If Claude gives a vague or wrong summary, your llms.txt has a clarity problem - or isn't accessible at the expected path.
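To script that spot-check, here's a sketch using the official anthropic Python SDK - the model name and domain are placeholders, and note it tests what Claude already knows about the domain rather than performing a live fetch:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: use a current model name
    max_tokens=300,
    messages=[{"role": "user", "content": "What does example.com do?"}],
)

# Compare this answer against your llms.txt description.
print(message.content[0].text)
```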
Key Takeaways
- Claude grades llms.txt on a 4-level rubric - not just whether it exists.
- Level 3+ (detailed descriptions per section) is the minimum for meaningful Claude trust gains.
- Use specific, citable facts in your llms.txt prose - not marketing superlatives.
- Every URL in llms.txt must resolve to a 200 status - broken links erode Claude trust.
- Link to a comprehensive llms-full.txt for Level 4 scoring and compound governance benefits.
How does your site score on this criterion?
Get a free AEO audit and see where you stand across all 10 criteria.