Scoring System
48 weighted criteria across 5 pillars determine your AEO Site Rank. Learn how effective weights, confidence, and page-fleet scoring combine into the final score.
Deterministic scoring engine
AEORank powers every AEO Content AI audit. Review the engine overview and methodology in the Docs.
Overview
Your AEO Site Rank (0 - 100) measures how well AI engines can discover, parse, and cite your website. The score is deterministic: every point traces to a specific check, and the same crawl state produces the same result.
The current model is no longer a single raw weighted average. AEORank calculates a base criterion score, applies a topic-coherence gate when needed, then blends that base score with a page-fleet score derived from the types and quality of the sampled pages.
Scoring Flow
Each of the 48 criteria is scored individually on a 0 - 10 scale, but the final score is assembled in stages. Raw criterion weights are adjusted for heuristic confidence and for overlap between closely related criteria, normalized inside each pillar, and then aggregated into a base score.
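As a rough illustration of that aggregation step, the sketch below shows how per-criterion 0 - 10 scores and effective weights could combine into a 0 - 100 base score. The interface and function names are assumptions for illustration, not the engine's actual API.

```ts
// Illustrative sketch only: the real AEORank engine is not public, so the
// shapes and names here are assumptions, not its actual API.
interface CriterionResult {
  id: string;
  score: number;           // 0-10 result for this criterion, as described above
  effectiveWeight: number; // share of the model after confidence, overlap, and normalization
}

// Weighted aggregation of per-criterion scores into a 0-100 base score.
function baseScore(results: CriterionResult[]): number {
  const totalWeight = results.reduce((sum, r) => sum + r.effectiveWeight, 0);
  const weighted = results.reduce(
    (sum, r) => sum + (r.score / 10) * r.effectiveWeight,
    0,
  );
  return totalWeight > 0 ? (weighted / totalWeight) * 100 : 0;
}

// Example: one strong and one weak criterion.
console.log(
  baseScore([
    { id: "qa-format", score: 8, effectiveWeight: 0.05 },
    { id: "clean-html", score: 3, effectiveWeight: 0.02 },
  ]),
); // ≈ 65.7
```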
Normalization and overlap control
Closely related criteria do not all receive their full raw weight at once. AEORank dampens overlap in three clusters: the question-answer system, freshness signals, and provenance/trust signals. The pillar targets remain fixed at 40/25/15/10/10.
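The sketch below illustrates one way such damping could work: criteria that share a cluster get a reduced raw weight before pillar normalization. The cluster names come from the list above; the damping factor and the exact math are assumptions.

```ts
// Hypothetical sketch of overlap damping; the real engine's factor and
// formula are not published.
const OVERLAP_DAMPING = 0.7; // assumed damping factor

interface RawCriterion {
  id: string;
  rawWeight: number;
  cluster?: "qa" | "freshness" | "provenance"; // overlapping clusters named above
}

// Dampen raw weights for criteria that share a cluster, so closely related
// checks do not all contribute their full raw weight at once.
function dampedWeights(criteria: RawCriterion[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const c of criteria) {
    if (c.cluster) counts.set(c.cluster, (counts.get(c.cluster) ?? 0) + 1);
  }
  const out = new Map<string, number>();
  for (const c of criteria) {
    const overlapping = c.cluster !== undefined && (counts.get(c.cluster) ?? 0) > 1;
    out.set(c.id, overlapping ? c.rawWeight * OVERLAP_DAMPING : c.rawWeight);
  }
  return out;
}
```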
Site Aggregation
After the base score is calculated, AEORank scores the sampled page fleet. Sampled pages are classified by page type (homepage, editorial, product, category, catalog, support, reference, or landing), and the model then blends the base score with the weighted page-fleet score.
This is why audits now return split headline scores. Foundation emphasizes technical foundation, AI discovery, and trust infrastructure. Content Fleet reflects the sampled page mix and content quality. A site can have strong foundation but weak page templates, and the audit will show that gap explicitly.
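The following sketch shows one plausible shape for this blend: a page-fleet score computed from per-page-type weights, then mixed with the base score. The page-type weights and the blend ratio are illustrative assumptions; only the page types themselves come from the documentation.

```ts
// Sketch of the blend step. Per-type weights and the blend ratio are assumed.
type PageType =
  | "homepage" | "editorial" | "product" | "category"
  | "catalog" | "support" | "reference" | "landing";

interface PageReview {
  type: PageType;
  score: number; // 0-100 content-quality score for the sampled page
}

// Hypothetical importance weights per page type.
const PAGE_TYPE_WEIGHT: Record<PageType, number> = {
  homepage: 1.5, editorial: 1.2, product: 1.2, category: 1.0,
  catalog: 0.8, support: 1.0, reference: 1.0, landing: 0.8,
};

// Weighted average of the sampled pages.
function pageFleetScore(pages: PageReview[]): number {
  const totalWeight = pages.reduce((s, p) => s + PAGE_TYPE_WEIGHT[p.type], 0);
  const weighted = pages.reduce((s, p) => s + p.score * PAGE_TYPE_WEIGHT[p.type], 0);
  return totalWeight > 0 ? weighted / totalWeight : 0;
}

// Blend the base (foundation) score with the page-fleet score.
// The 60/40 split is an assumed ratio, not the engine's published value.
function siteScore(base: number, pages: PageReview[], fleetShare = 0.4): number {
  return base * (1 - fleetShare) + pageFleetScore(pages) * fleetShare;
}
```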
Prevalence beats isolated examples
Several content criteria are now scored by coverage across eligible pages rather than by presence anywhere on the site. This includes Q&A format, query-answer alignment, citation-ready writing, and evidence packaging.
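A minimal sketch of coverage-based scoring, assuming a linear mapping from coverage to the 0 - 10 criterion scale (the real mapping is not published):

```ts
// The criterion earns points for coverage across eligible pages,
// not for a single good example. Linear mapping is an assumption.
function coverageScore(pagesPassing: number, pagesEligible: number): number {
  if (pagesEligible === 0) return 0;
  const coverage = pagesPassing / pagesEligible; // share of eligible pages that pass
  return Math.round(coverage * 10);              // mapped onto the 0-10 criterion scale
}

// One excellent FAQ page in a 40-page sample no longer carries the criterion:
console.log(coverageScore(1, 40));  // 0
console.log(coverageScore(28, 40)); // 7
```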
Five Pillars
The 48 criteria fall into five pillars with fixed target weights. Answer Readiness is normalized to 40% of the final model, Content Structure to 25%, Trust & Authority to 15%, Technical Foundation to 10%, and AI Discovery to 10%.
Answer Readiness: “Is your content worth citing?”
Normalized to 40% of the model. Determines whether AI engines have substantive, original, citation-ready material worth referencing, including first-hand evidence and duplicate-content resistance.
Content Structure: “Can machines extract and cite your content?”
Normalized to 25% of the model. Covers answer density, Q&A patterns, FAQ, tables, definitions, and entity disambiguation.
Trust & Authority: “Do AI engines trust your content?”
Normalized to 15% of the model. Covers entity authority, internal linking, freshness, visible dates, schema, authorship, and methodology trust signals.
Technical Foundation: “Is the markup AI-friendly?”
Normalized to 10% of the model. Covers semantic HTML, clean crawlable markup, extraction friction, image context, and schema depth.
AI Discovery: “Can AI crawlers find you?”
Normalized to 10% of the model. Covers cannibalization avoidance, llms.txt, robots.txt, publishing velocity, licensing, sitemaps, canonicals, and RSS feeds.
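To make the normalization step concrete, here is a sketch of how criterion weights inside a pillar could be rescaled so each pillar sums to its fixed target. The pillar targets come from the list above; the function shape is illustrative.

```ts
// Fixed pillar targets, as documented above.
const PILLAR_TARGETS: Record<string, number> = {
  "Answer Readiness": 0.40,
  "Content Structure": 0.25,
  "Trust & Authority": 0.15,
  "Technical Foundation": 0.10,
  "AI Discovery": 0.10,
};

// Rescale damped criterion weights within a pillar so they sum to the
// pillar's target share of the model. Shape of the inputs is an assumption.
function normalizePillar(
  pillar: string,
  weights: Map<string, number>, // criterion id -> damped weight within this pillar
): Map<string, number> {
  const target = PILLAR_TARGETS[pillar] ?? 0;
  const sum = Array.from(weights.values()).reduce((a, b) => a + b, 0);
  const normalized = new Map<string, number>();
  for (const [id, w] of weights) {
    normalized.set(id, sum > 0 ? (w / sum) * target : 0);
  }
  return normalized;
}
```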
All 48 criteria
Every criterion has a fixed effective weight that determines how much it contributes to the base score. The table below shows the current effective weights after confidence weighting, overlap damping, and pillar normalization. Values are rounded for readability.
Effective weights are rounded
The engine normalizes pillar weights exactly, but the public table rounds criterion weights to whole percentages. That means the displayed total may land slightly above or below 100%.
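A tiny example of why the displayed total can drift from 100% when exact weights are rounded to whole percentages:

```ts
// Exact weights sum to 100%, but the whole-percent display may not.
const exact = [0.333, 0.333, 0.334];                  // fractions of the model
const shown = exact.map((w) => Math.round(w * 100));  // [33, 33, 33]
console.log(shown.reduce((a, b) => a + b, 0));        // 99, not 100
```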
Score Ranges
Your overall score maps to one of six tiers. These labels appear on audit reports and in the API response.
HTTPS Factor
Criterion #4 (Clean, Crawlable HTML) includes HTTPS availability. Sites without HTTPS are capped at 3/10 on this criterion, resulting in an approximate 3 - 4 point overall penalty. The audit checks HTTPS first and falls back to HTTP for all subsequent checks.
No HTTPS = guaranteed penalty
Even if your HTML is perfectly clean, lacking HTTPS caps criterion #4 at 3/10. This is one of the easiest wins available: install an SSL certificate and recover those 3 - 4 points immediately.
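A sketch of how the cap could apply, assuming a simple clamp (the 3/10 cap is documented above; the function shape is an assumption):

```ts
// Cap criterion #4 (Clean, Crawlable HTML) when the site lacks HTTPS.
function cleanHtmlScore(markupScore: number, hasHttps: boolean): number {
  // markupScore: the 0-10 result of the clean/crawlable HTML checks
  return hasHttps ? markupScore : Math.min(markupScore, 3);
}

console.log(cleanHtmlScore(10, false)); // 3: perfect markup still capped without HTTPS
console.log(cleanHtmlScore(10, true));  // 10
```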
Benchmark Comparison
Your score is compared against peers in your sector and category. The API returns sector averages, and the web dashboard shows “Above Average”, “Average”, or “Below Average” badges based on a +/- 5 point threshold from the sector mean.
For example, if the average score in “Developer Tools > Cloud Infrastructure” is 62, a score of 68 or higher earns “Above Average”, 57 or lower shows “Below Average”, and anything in between is “Average”.
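The badge logic implied by that example can be sketched as follows; the boundary handling (strict on the high side, inclusive on the low side) mirrors the documented numbers and is otherwise an assumption.

```ts
// Map a score and its sector mean onto the dashboard badge.
type Badge = "Above Average" | "Average" | "Below Average";

function benchmarkBadge(score: number, sectorMean: number): Badge {
  if (score - sectorMean > 5) return "Above Average";
  if (sectorMean - score >= 5) return "Below Average";
  return "Average";
}

console.log(benchmarkBadge(68, 62)); // "Above Average"
console.log(benchmarkBadge(60, 62)); // "Average"
console.log(benchmarkBadge(57, 62)); // "Below Average"
```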
View the full benchmark data across all sectors and categories at /benchmarks.