Weak AI visibility with 6 of 10 criteria passing. Biggest gap: robots.txt rules for AI crawlers.
Verdict
nearuandme.com has a solid technical baseline for AEO with HTTPS, indexable XML sitemaps, basic JSON-LD, and a published llms.txt file. The biggest gaps are answer-oriented content and AI-crawler policy controls: both `/faq` and `/frequently-asked-questions` return HTTP/2 404, and `robots.txt` has no GPTBot/ClaudeBot-specific directives. The site currently exposes mostly foundational schema (`Organization`, `WebSite`, `WebPage`, `BreadcrumbList`) without FAQ/HowTo/Service-level structured data. This creates a moderate AEO readiness profile with clear quick wins that could lift discoverability in AI answer engines.
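As one illustration of the missing schema layer, a Service block could sit alongside the existing Organization markup. This is a minimal sketch only; the name, serviceType, areaServed, and description values below are placeholders invented for illustration, not taken from the site:

```json
{
  "@context": "https://schema.org",
  "@type": "Service",
  "name": "Example service name",
  "serviceType": "Example service category",
  "areaServed": "US",
  "description": "One-sentence plain-language summary of what the service does.",
  "provider": {
    "@type": "Organization",
    "name": "Example provider",
    "url": "https://nearuandme.com"
  }
}
```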
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Most sites run the default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
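A minimal sketch of what those directives could look like in robots.txt, using the crawlers' published user-agent tokens (the sitemap path is a placeholder, not the site's actual sitemap URL):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://nearuandme.com/sitemap.xml
```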
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
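For reference, FAQPage markup is just nested Question/Answer entities in JSON-LD. A minimal sketch with placeholder questions and answers (not content from nearuandme.com):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What does the service cost?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "A direct one- or two-sentence answer an AI engine can quote verbatim."
      }
    },
    {
      "@type": "Question",
      "name": "How do I get started?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Another short, self-contained answer."
      }
    }
  ]
}
```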
AI assistants are question-answering machines. When your content is already shaped as questions and answers, you're handing AI a pre-formatted citation. Sites that do this right get extracted; sites that don't get skipped.
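On the page itself, that shape usually means the question is the heading and the first sentence underneath answers it outright. A hedged sketch of the pattern (headings and copy are placeholders):

```html
<h2>How long does setup take?</h2>
<p>Setup takes about 15 minutes. The opening sentence answers the question directly;
   supporting detail, caveats, and links follow in later paragraphs.</p>
```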
AI has a trust hierarchy for sources. At the top: proprietary data and first-hand expert analysis. At the bottom: rewritten Wikipedia articles. We've watched AI preferentially cite sites with original benchmarks, even over bigger competitors.
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.