AEO Site Rank: How We Calculate Your 0-100 Rating
Your AEO Site Rank is not a guess. It is a deterministic score built from 48 criteria grouped into five fixed-weight pillars. Each criterion is scored 0-10, converted into an effective weight after confidence and overlap controls, and rolled into a base site score. AEORank then applies the topic-coherence gate and blends that result with a page-fleet score so thin templates cannot hide behind strong sitewide infrastructure.
Part of the AEO scoring framework - the current 48 criteria that measure how ready a website is for AI-driven search across ChatGPT, Claude, Perplexity, and Google AIO.
Quick Answer
Your AEO Site Rank is a deterministic 0-100 score built from 48 criteria across five pillars: Answer Readiness (largest share), Content Structure, Trust & Authority, Technical Foundation, and AI Discovery. AEORank scores each criterion 0-10, adjusts weights for confidence and overlap, calculates a base site score, applies a coherence gate for topic focus, and blends that with page-level quality scores. Audits also expose foundation, content-fleet, and confidence outputs so you can see whether the problem is infrastructure, page quality, or both.
48 criteria grouped into 5 pillars that determine your 0-100 score
Before & After
Before - No AEO optimization
# example.com - Score: 34/100
llms.txt:          0/10  (missing)
Schema.org:        1/10  (no JSON-LD)
Q&A Content:       2/10  (no question-answer format)
Clean HTML:        3/10  (no HTTPS)
Entity Authority:  4/10  (inconsistent business info)
robots.txt:        5/10  (blocks AI crawlers)
FAQ:               0/10  (no FAQ section)
Original Data:     3/10  (no proprietary stats)
Internal Linking:  6/10  (basic navigation)
Semantic HTML:     4/10  (divs everywhere)
After - Targeted AEO optimization
# example.com - Score: 82/100
llms.txt:          9/10  (comprehensive, 150+ lines)
Schema.org:        8/10  (Organization + FAQPage + Article)
Q&A Content:       8/10  (every page has Q&A sections)
Clean HTML:        9/10  (HTTPS + minimal JS bloat)
Entity Authority:  8/10  (consistent NAP + author schema)
robots.txt:        9/10  (allows all AI crawlers)
FAQ:               9/10  (dedicated FAQ with 50+ items)
Original Data:     7/10  (case studies with numbers)
Internal Linking:  8/10  (topical clusters + breadcrumbs)
Semantic HTML:     8/10  (proper headings + landmarks)
How Is the AEO Site Rank Calculated?
Two sites. One scores 34. The other scores 88. The gap is not taste, not opinion, not some AI in a different mood. It is a stack of measurable points distributed across 48 checks a script can run in seconds.
The scoring works in three stages:
Stage 1: Criterion scoring. Each of the 48 criteria is scored 0-10 by deterministic checks - things a script verifies without asking an LLM for its opinion. Each criterion carries a weight that reflects how much AI engines care about that signal when deciding whether to cite your content. Weights are adjusted internally for confidence and overlap so related criteria do not double-count.
Stage 2: Pillar aggregation. The 48 criteria roll up into five pillars - Answer Readiness, Content Structure, Trust & Authority, Technical Foundation, and AI Discovery. Answer Readiness carries the largest share because AI engines need content worth citing before anything else matters. A topic coherence gate prevents sites with scattered, unfocused content from achieving high scores regardless of technical implementation.
Stage 3: Page-fleet blending. AEORank samples your actual pages - not just the homepage and FAQ. If a site has polished infrastructure but hundreds of weak product or category pages, the page-level quality pulls the score down. A template-heavy site with strong sitewide schema but thin content on individual pages scores honestly, not inflated.
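The three stages above can be sketched in a few lines of Python. The pillar shares, coherence threshold, and fleet weight below are illustrative assumptions for demonstration, not AEORank's published constants:

```python
# Illustrative sketch of the three-stage pipeline. Shares, threshold,
# and cap values are assumptions, not AEORank's actual constants.

PILLAR_SHARES = {
    "answer_readiness": 0.40,
    "content_structure": 0.25,
    "trust_authority": 0.15,
    "technical_foundation": 0.10,
    "ai_discovery": 0.10,
}

def base_site_score(pillar_scores: dict) -> float:
    """Stage 2: weighted roll-up of pillar scores (each 0-100)."""
    return sum(pillar_scores[p] * w for p, w in PILLAR_SHARES.items())

def apply_coherence_gate(base: float, coherence: int,
                         cap: float = 50.0, threshold: int = 5) -> float:
    """Cap the base score when topic coherence (0-10) is too low."""
    return min(base, cap) if coherence < threshold else base

def blend_with_fleet(gated: float, fleet: float, fleet_weight: float) -> float:
    """Stage 3: blend the gated site score with the sampled page-fleet score."""
    return gated * (1 - fleet_weight) + fleet * fleet_weight
```

The shape is the point: a polished base score still drops when the fleet score trails it, and a low coherence score caps everything before the blend even runs.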
The gap between 34 and 88 is not mysterious. Every point traces back to a specific implementation gap. Specific means fixable.
How the 48 Criteria Break Down
Put on ChatGPT's glasses for a second. Someone asks about your industry. You need to decide which sites to cite. What do you check?
That is exactly what these 48 criteria measure. They are organized into 5 pillars:
Answer Readiness is the heaviest pillar. It asks whether the content is actually worth citing. This is where topic coherence, original data, content depth, fact density, citation-ready writing, answer-first structure, helpful purpose alignment, first-hand experience, and duplicate-content controls live.
Content Structure asks whether machines can parse the answer cleanly. We look for direct answer density, Q&A format, query-answer alignment, FAQ coverage, extractable tables and lists, definition patterns, and entity disambiguation.
Trust & Authority asks whether an AI engine can verify who is behind the content and why it should trust it. That includes entity and brand authority, internal linking, freshness, visible dates, author schema, schema markup, creator transparency, and methodology transparency.
Technical Foundation checks whether the content is machine-readable at the page level. Semantic HTML, clean crawlable HTML, extraction friction, image context, schema coverage, and speakable support sit here.
AI Discovery covers whether engines can find and revisit the site efficiently. That includes llms.txt, robots.txt, sitemap completeness, RSS, canonical URLs, publishing cadence, licensing signals, and cannibalization risk across overlapping pages.
The important point is that the model is not trying to "detect AI writing." It is scoring whether a site is focused, original, trustworthy, machine-readable, and discoverable enough to be cited. That is why the biggest gains usually come from tightening topic focus, adding original evidence, improving answer formatting, and making the entity behind the content easier to verify.
How Do the Weights Add Up in a Real Audit?
Numbers on a page mean nothing until you see them in action. Here is how the current model works in practice.
A site with strong infrastructure and solid pages might look like this:
Pillar / Output          Score   Share   Contribution
-----------------------------------------------------
Answer Readiness           78     40%         31.2
Content Structure          72     25%         18.0
Trust & Authority          69     15%         10.4
Technical Foundation       84     10%          8.4
AI Discovery               91     10%          9.1
-----------------------------------------------------
Base site score                               77.1
Content fleet score        63
Page-fleet weight        0.22
-----------------------------------------------------
Final overall score        74
That site is in good shape, but the page fleet is weaker than the infrastructure. The homepage, schema, crawlability, and discovery layer are doing fine. The page templates need stronger answer blocks, evidence, and page-level depth. The split headline scores make that obvious: foundation is strong, content fleet trails.
Now compare it with a site that has scattered content and weak templates:
Base site score before gate      61
Topic coherence                3/10
Coherence cap                    50
Base score after gate            50
Content fleet score              41
Page-fleet weight              0.18
-----------------------------------
Final overall score              48
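The arithmetic in both worked examples can be checked by hand. The blend formula below is inferred from the published numbers, not taken from AEORank's source:

```python
# Checking the two worked examples above. The blend formula
# (final = gated_base * (1 - w) + fleet * w) is inferred from
# the numbers shown, not copied from AEORank's implementation.

# Example 1: strong infrastructure, weaker page fleet.
final_1 = 77.1 * (1 - 0.22) + 63 * 0.22    # 73.998, reported as 74

# Example 2: the coherence gate fires first (3/10), capping 61 at 50.
gated = min(61, 50)
final_2 = gated * (1 - 0.18) + 41 * 0.18   # 48.38, reported as 48

print(round(final_1), round(final_2))
```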
This is why the coherence gate matters so much. Even if a technically polished site earns decent points elsewhere, scattered content still caps the score. The page-fleet blend then keeps weak product or category templates dragging the overall result down until the real content quality improves.
What Do the Score Ranges Actually Mean in Practice?
We do not use fuzzy labels. Here is what each range translates to in practice - with receipts.
86-100: Excellent - AI-first content architecture. Your site is built for AI citation. Multiple schema types, comprehensive llms.txt, Q&A-structured content throughout, strong entity signals. Sites in this range show up consistently in AI-generated answers across multiple engines.
76-85: Good - Strong AI visibility. You have done meaningful AEO work. Most criteria are well-covered. AI engines can find, understand, and cite your content reliably. The remaining gaps are usually in one or two specific criteria. Fix those, and you cross into the top tier.
61-75: Moderate visibility. Some AEO foundations are in place, but significant gaps remain. AI engines can partially understand your content but miss key context. You are getting cited inconsistently - sometimes yes, often no. The fix usually involves improving topic coherence and adding original data - the two heaviest criteria accounting for 24% of your score.
46-60: Below-average visibility. AI engines see your site but struggle to extract structured information. Here is the thing about this range: it often represents sites where the content is actually solid. The writing is good. The product is real. But the technical AEO layer is completely absent. Adding that layer can produce dramatic 20+ point jumps.
0-45: Critical - minimal to no visibility. Your site is largely invisible to AI answer engines. They might know you exist from external references, but they cannot confidently cite your content. Multiple foundational criteria are missing. But the silver lining? There is nowhere to go but up. The first round of fixes - llms.txt + FAQ + basic schema - typically produces a 20-30 point jump.
How Does HTTPS Affect Your AEO Site Rank?
One technical detail catches more sites than you would expect: HTTPS availability directly gates your Clean HTML score.
No HTTPS? Criterion #4 is capped at 3/10. No exceptions. No matter how clean your actual HTML is. The direct score impact is meaningful but not dominant - and HTTPS is also a trust signal that affects how AI engines treat your entire site. A missing SSL certificate signals to crawlers that the site may not be well-maintained.
This is not us being picky. It is a security and trust signal that every major AI engine factors into crawling and citation decisions. Google AI Overviews explicitly deprioritize non-HTTPS content. ChatGPT and Claude crawlers prefer HTTPS sources. If you cannot even encrypt the connection, what else might be unreliable?
We test HTTPS by attempting a connection to your domain over port 443. If it fails or redirects to HTTP, the cap applies. This catches sites with misconfigured SSL certificates, hosting that does not support HTTPS, and CDNs that strip encryption.
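A minimal sketch of that check, assuming a straightforward TLS probe on port 443 plus the 3/10 cap described above (the helper names and timeout are our own, not AEORank's code):

```python
# Sketch of an HTTPS availability check and the Clean HTML cap.
# Function names and timeout are illustrative assumptions.
import socket
import ssl

def https_available(domain: str, timeout: float = 5.0) -> bool:
    """Return True if the domain completes a TLS handshake on port 443."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((domain, 443), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain):
                return True
    except (OSError, ssl.SSLError):
        return False

def capped_clean_html(raw_score: int, https_ok: bool) -> int:
    """Apply the cap: without working HTTPS, Clean HTML cannot exceed 3/10."""
    return raw_score if https_ok else min(raw_score, 3)
```

A site with otherwise perfect markup still lands at `capped_clean_html(9, False) == 3` until the certificate is fixed.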
The fix takes about as long as making coffee. Most hosting providers hand you free SSL through Let's Encrypt. If your shared hosting does not support it, that alone might be reason to migrate.
What Are the Fastest Ways to Improve Your AEO Site Rank?
Not all criteria are equal. If you want the biggest jump for the least effort, here is the priority order. We have watched hundreds of sites improve using exactly this playbook.
The biggest lever: Topic Coherence. This is the heaviest single criterion, and it has a gating effect. If your blog covers random, unrelated topics, your coherence score stays low - and the coherence gate caps your maximum overall score. Sites with very low coherence cannot break out of the 40-60 range no matter how strong their technical implementation is. Only when coherence reaches a sufficient level does the gate lift completely. The fix: focus your blog on 2-3 core topics that relate to your business, and stop publishing one-off posts about tangentially related subjects. A focused content strategy can add several points on its own and remove the cap entirely.
The content substance play (original data + depth):
- Publish case studies with real numbers. "Customer X reduced costs by 34%" is original data AI cannot find elsewhere. Moving original-data pages from thin marketing copy to documented evidence produces meaningful score gains.
- Ensure your key pages have substantive content - 1500+ words with proper heading structure. Thin landing pages with a headline and three bullet points score poorly on depth; detailed, well-organized pages score much better.
The quick technical wins (lower weight, still valuable):
- Add a /llms.txt file to your domain root. Twenty minutes of writing. It will not rescue weak content, but it is still one of the fastest missing-foundation fixes.
- Create a /faq page with 15-20 real questions and answers, and add FAQPage schema. FAQ is 3% weight - a quick afternoon project worth 1-2 points.
- Allow AI crawlers in robots.txt. If you are currently blocking GPTBot or ClaudeBot, removing those blocks is a one-line edit.
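For the robots.txt item, an explicit allow-list along these lines is a common pattern. GPTBot, ClaudeBot, and PerplexityBot are the vendors' published user-agent tokens; the sitemap URL is a placeholder:

```txt
# Allow AI answer-engine crawlers (tokens are each vendor's published name)
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Everyone else follows your normal rules
User-agent: *
Allow: /
Sitemap: https://example.com/sitemap.xml
```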
The sustained effort (10-15 point potential):
- Build topical content clusters with internal cross-linking: hub pages linking to related content. This improves both Topic Coherence and Internal Linking simultaneously.
- Add question-answer formatting to your top pages. Direct answers, Q&A format, and alignment all matter, but the model also now checks how consistently those patterns show up across eligible pages.
- Implement Author schema with real bios and credentials: Person schema with jobTitle, knowsAbout, and sameAs links.
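For the Author schema item, a minimal JSON-LD Person block using the properties mentioned above might look like this (all names and URLs are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Head of Technical SEO",
  "knowsAbout": ["Answer Engine Optimization", "Structured Data"],
  "sameAs": [
    "https://www.linkedin.com/in/janedoe",
    "https://x.com/janedoe"
  ],
  "url": "https://example.com/about/jane-doe"
}
```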
Start here: Run your free audit at aeocontent.ai. Look at your Topic Coherence, Original Data, and Content Fleet results first. If any of those are below 5/10, that is your highest-ROI starting point.
How Does the Fix Plan Projection Model Work?
When we generate a fix plan, we project what your score could become after implementing the recommended changes. The projection model is deliberately conservative - we would rather under-promise than have you invest effort and feel misled.
Diminishing returns. Each successive fix contributes less than the previous one. The first high-impact fix gets full credit. By the tenth fix, the marginal gain is a fraction of its theoretical value. This models reality: the first few improvements are transformative, later ones produce smaller marginal gains.
Confidence weighting. Not all fixes are equally certain to succeed. Adding a robots.txt file is near-certain - you will almost certainly do it and it will work. Improving Topic Coherence across 200 blog posts requires months of content strategy. The projection weights each fix by how likely it is to be fully executed, producing a conservative "expected" score alongside an optimistic "best case."
Gap cap. The model caps the total projected improvement to prevent unrealistic projections. A site scoring 40 will not project to 100 even if every criterion has a fix available. The cap ensures projections feel achievable.
The result: two numbers on your fix plan. The "expected" score is what you will likely achieve if you execute the plan competently. The "best case" is what happens if every fix lands perfectly. The gap between them tells you how much uncertainty exists in the plan.
Every fix also carries a reach multiplier based on how many pages it affects. A fix touching 100 pages contributes more than one touching 3 pages. Fixes that require the same underlying work (like multiple schema improvements) are grouped so they do not double-count.
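Those four mechanics can be sketched together in a small function. The decay rate, cap ratio, and fix tuples below are illustrative assumptions, not AEORank's actual constants:

```python
# Sketch of a conservative projection model: diminishing returns,
# per-fix confidence, reach multipliers, and a gap cap. All constants
# here are assumptions for demonstration.

def project_score(current: float, fixes: list, decay: float = 0.85,
                  gap_cap_ratio: float = 0.6) -> tuple:
    """Return (expected, best_case) projected scores.

    Each fix is (raw_gain, confidence 0-1, reach multiplier 0-1).
    Fixes are applied best-first; each successive fix is discounted
    by `decay`, and total improvement is capped at a fraction of the
    remaining gap to 100.
    """
    fixes = sorted(fixes, key=lambda f: f[0] * f[1] * f[2], reverse=True)
    expected_gain = best_gain = 0.0
    discount = 1.0
    for raw, confidence, reach in fixes:
        expected_gain += raw * confidence * reach * discount
        best_gain += raw * reach * discount
        discount *= decay                        # diminishing returns
    max_gain = (100 - current) * gap_cap_ratio   # gap cap
    return (current + min(expected_gain, max_gain),
            current + min(best_gain, max_gain))
```

Note how the "expected" figure always trails "best case" because confidence discounts every fix, and how a long list of fixes stops mattering once the gap cap binds.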
Why Does Deterministic Scoring Matter?
We could have used AI to evaluate sites. Ask Claude "How well is this site optimized for AI visibility?" and get a score. Easier to build. Probably reasonable-sounding results.
We deliberately chose not to do that. Here is why.
Deterministic scoring means every audit is reproducible. Run the same audit twice on the same site. Same score. Change one thing and re-audit - the score change traces directly to what you changed. No prompt sensitivity. No model temperature variance. No "the AI was in a different mood today."
This matters because AEO is an optimization process. You need a reliable feedback loop: make a change, measure the impact, decide what to do next. If the measurement tool itself introduces noise, the whole loop breaks. You cannot optimize against a moving target.
It also means you can trust the benchmarks. When we say the average SaaS company scores 58 and the average healthcare provider scores 41, those numbers come from the same deterministic algorithm applied to every site. They are directly comparable. An apples-to-apples benchmark across 11,000+ domains and 15 sectors.
The tradeoff? Deterministic scoring cannot capture everything. A script cannot tell if your FAQ is genuinely helpful or keyword-stuffed. It cannot verify if your Organization schema is accurate or fabricated. We accept that tradeoff because the alternative - scoring that changes unpredictably - is worse for everyone trying to improve.
Your score is a contract. It tells you exactly what you earned and exactly what you need to do to earn more. No black boxes. No surprises.
Key Takeaways
- Your AEO Site Rank is a deterministic score built from 48 criteria, each scored 0-10 by fixed checks rather than AI opinions.
- Answer Readiness carries the largest share of the model. Content substance matters more than technical implementation.
- A coherence gate prevents sites with scattered, unfocused content from achieving high scores regardless of technical perfection.
- AEORank blends the base site score with page-level quality so template-heavy sites with weak content cannot hide behind strong infrastructure.
- Fix plan projections use diminishing returns, per-fix confidence, and a gap cap to produce conservative, realistic estimates.
- Scores 76+ indicate good AI visibility. Scores below 46 mean AI engines are essentially guessing about your business.
How does your site score on this criterion?
Get a free AEO audit and see where you stand across all 48 criteria.