Response Efficiency: Why Bloated Pages Make AI Skip You
When your HTML response exceeds 250KB, AI crawlers spend more time downloading than reading. Response Efficiency measures the payload size of your pages - the raw bytes AI has to fetch before it can even begin parsing your content. Lighter pages get crawled more often and processed faster.
Part of the AEO scoring framework - the current 48 criteria that measure how ready a website is for AI-driven search across ChatGPT, Claude, Perplexity, and Google AIO.
Quick Answer
Keep your HTML payload under 100KB for a perfect score. Pages under 250KB score well. Pages over 500KB score poorly. The scorer evaluates the raw HTML response size in kilobytes. This criterion (1% weight, Technical Foundation pillar) is low-weight but compounds with Critical Path Efficiency and Document Weight - all three measure how easily AI crawlers can access your content.
Audit Note
In our audits, we've measured Response Efficiency on live sites and compared implementations across frameworks and CMS platforms.
Before & After
Before - 450KB HTML with inline everything
<html>
<head>
<style>/* 120KB of inline Tailwind */</style>
<script>/* 80KB of inline app state */</script>
</head>
<body>
<script>window.__NEXT_DATA__={/* 100KB */}</script>
<!-- Actual content: 20KB -->
</body>
</html>

After - 85KB HTML with external resources
<html>
<head>
<link rel="stylesheet" href="/styles.css">
<script src="/app.js" defer></script>
</head>
<body>
<!-- Actual content: 20KB -->
<!-- Total HTML: ~85KB -->
</body>
</html>

What Is Response Efficiency?
Response Efficiency measures the raw size of your HTML response in kilobytes. When an AI crawler requests your page, the first thing it downloads is the HTML document. Everything else - images, stylesheets, scripts - comes later (or not at all for many AI crawlers). The HTML payload is the gatekeeper: if it is too large, the crawler either times out, truncates the response, or deprioritizes your page for future crawls.
The scorer checks the byte size of the HTML document and maps it to a 0-10 scale:
- 100KB or less: 10/10
- 101-250KB: 7/10
- 251-500KB: 4/10
- 501-800KB: 2/10
- Over 800KB: 0/10
These thresholds reflect how AI crawlers actually behave. Most AI crawlers (GPTBot, ClaudeBot, PerplexityBot) have timeout limits and response size caps. A 100KB HTML page downloads in milliseconds and gets fully parsed. An 800KB page may hit timeout limits or get truncated, meaning the crawler never sees your content below the fold.
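The threshold bands can be restated as a small shell function. This is an illustrative sketch of the published mapping, not the scorer's actual implementation:

```shell
# Map an HTML payload size (in KB) to the 0-10 scale described above.
# Illustrative only - mirrors the published thresholds, not the real scorer.
score_for_kb() {
  kb=$1
  if [ "$kb" -le 100 ]; then echo 10
  elif [ "$kb" -le 250 ]; then echo 7
  elif [ "$kb" -le 500 ]; then echo 4
  elif [ "$kb" -le 800 ]; then echo 2
  else echo 0
  fi
}

score_for_kb 85    # well under 100KB
score_for_kb 450   # in the 251-500KB band
```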
What Causes Bloated HTML Responses?
The biggest offender is inline CSS. CSS-in-JS frameworks and utility-first approaches like Tailwind can inject 100KB+ of CSS directly into the HTML document instead of serving it as an external stylesheet. This CSS is invisible to users but counts against your AI crawl budget.
The second offender is server-rendered state. Next.js applications using getServerSideProps embed a __NEXT_DATA__ JSON blob that can reach 50-200KB on data-heavy pages. Similar patterns exist in Nuxt (__NUXT__), Gatsby, and other SSR frameworks.
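You can get a rough byte count for the state blob with sed. This sketch assumes the default Next.js markup, where the JSON sits in a single-line `<script id="__NEXT_DATA__">` element; `sample.html` is a stand-in for your own `curl -s URL > sample.html` dump:

```shell
# Create a miniature stand-in page (replace with your own saved HTML).
cat > sample.html <<'EOF'
<html><body><script id="__NEXT_DATA__" type="application/json">{"props":{"x":1}}</script></body></html>
EOF

# Bytes occupied by the inline __NEXT_DATA__ JSON.
# Assumes the blob is on one line, as Next.js emits by default.
sed -n 's/.*<script id="__NEXT_DATA__"[^>]*>\(.*\)<\/script>.*/\1/p' sample.html | wc -c
```

If that number is a large fraction of your total payload, trimming server-rendered state is the highest-leverage fix.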
Inline JavaScript is the third source. Analytics scripts, A/B testing frameworks, and chat widgets that embed their code directly in the HTML instead of loading from external files add up quickly.
Excessive JSON-LD can also contribute. While structured data is essential for AI visibility, a page with Organization + WebSite + Article + FAQ + BreadcrumbList + HowTo schema can add 10-30KB of JSON-LD. This is usually fine, but combined with the other sources it pushes pages over the threshold.
The key insight is that AI crawlers do not execute JavaScript. They see only the HTML response. So your "light" single-page app that loads content via API calls after render looks like an empty shell to AI crawlers - and the HTML it does see is often bloated with framework bootstrap code.
How Do You Reduce HTML Payload?
Step 1: Measure your actual HTML size
Use curl to check the raw response size for your key pages:
curl -s -o /dev/null -w "%{size_download}" https://yoursite.com/
Compare this number against the thresholds above.
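The check scales to several pages with a small helper. The threshold logic mirrors the scoring bands; the paths passed in below are placeholders, and in real use you would feed the helper the byte count from the curl command above:

```shell
# Flag pages whose raw HTML payload exceeds the 250KB band.
# Real usage (placeholder URL):
#   bytes=$(curl -s -o /dev/null -w "%{size_download}" https://yoursite.com/)
#   flag_payload "$bytes" https://yoursite.com/
flag_payload() {
  kb=$(( $1 / 1024 ))
  if [ "$kb" -gt 250 ]; then
    echo "${kb}KB  $2  <- over 250KB"
  else
    echo "${kb}KB  $2  ok"
  fi
}

flag_payload 87040 /pricing      # 85KB -> ok
flag_payload 460800 /docs/api    # 450KB -> flagged
```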
Step 2: Move inline CSS to external stylesheets
If your framework injects CSS into the HTML, configure it to extract CSS into a separate file. For Next.js, ensure optimizeCss is enabled. For Tailwind, purge unused utilities and serve via external stylesheet.
Step 3: Externalize JavaScript
Move inline scripts to external files with defer or async attributes. This reduces the HTML payload and improves Critical Path Efficiency simultaneously.
Step 4: Minimize server-rendered state
For Next.js: avoid passing large datasets through getServerSideProps - fetch only what the page needs to render above the fold. Consider pagination for data-heavy pages.
Step 5: Audit JSON-LD size
Keep structured data lean. If your JSON-LD exceeds 10KB, check for unnecessary nesting, redundant properties, or schemas that duplicate information already present in the HTML.
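A quick way to see how much JSON-LD you are shipping is to extract the script bodies and count bytes. This sketch assumes each block's JSON is emitted on a single line (common for minified output); multi-line blocks need a real HTML parser. `sample.html` stands in for your own saved page:

```shell
# Miniature stand-in page (replace with your own saved HTML).
cat > sample.html <<'EOF'
<script type="application/ld+json">{"@type":"Article","headline":"Hi"}</script>
EOF

# Bytes inside the application/ld+json block.
# Assumes single-line, minified JSON-LD output.
sed -n 's/.*<script type="application\/ld+json">\(.*\)<\/script>.*/\1/p' sample.html | wc -c
```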
Score Impact in Practice
Response Efficiency carries 1% weight in the Technical Foundation pillar. On its own, the direct scoring impact is small. But it is part of the Page Speed trio (Response Efficiency + Critical Path Efficiency + Document Weight), and together these three criteria carry 3% of the total weight and share an overlap group - meaning improvements compound across all three.
More importantly, response efficiency affects crawl behavior beyond the scoring formula. AI crawlers have limited budgets. A site with 50 pages at 100KB each requires 5MB of crawl budget. The same site with 500KB pages requires 25MB. When crawl budget is limited (and it always is), the lighter site gets crawled more thoroughly and more frequently.
In practice, most sites score 7-10/10 on this criterion. The failures we see are concentrated in:
- SPA frameworks with heavy inline state (Next.js, Nuxt)
- Enterprise CMS platforms that inject inline styles
- Marketing pages with multiple embedded chat/analytics/A/B scripts
- Pages with very large FAQ sections rendered entirely server-side
How AI Engines Evaluate This
GPTBot (ChatGPT) has a documented preference for faster, lighter pages. OpenAI's crawling documentation recommends keeping pages accessible and lightweight. Pages that timeout or return oversized responses get crawled less frequently.
ClaudeBot processes the full HTML response but applies quality scoring that considers content-to-markup ratio. A page where 80% of the bytes are CSS/JS and 20% are content gets a lower content density score than a page where the ratio is inverted.
PerplexityBot operates in real-time and has tight timeout constraints. When assembling an answer, Perplexity needs to fetch and process multiple sources quickly. Pages that respond slowly or with large payloads are more likely to be skipped in favor of faster sources.
Googlebot has well-documented crawl budget management. While this criterion primarily targets AI crawlers, the same payload efficiency principles apply to Google's crawling of your site for AI Overview source material.
Key Takeaways
- Keep HTML responses under 100KB for a perfect score. Under 250KB is acceptable.
- The scorer measures raw HTML payload size, not rendered page weight. Images loaded via <img> tags do not count.
- Common bloat sources: inline CSS frameworks (Tailwind utilities), inline JavaScript bundles, server-rendered state blobs, and excessive JSON-LD.
- Move CSS to external stylesheets and JavaScript to external scripts with defer/async to reduce HTML payload.
- This criterion is part of the Page Speed trio (Response Efficiency + Critical Path Efficiency + Document Weight) - all three contribute 1% each to Technical Foundation.
How does your site score on this criterion?
Get a free AEO audit and see where you stand across all 48 criteria.