Moderate AI visibility, with 7 of 10 criteria passing. Biggest gap: no llms.txt file.
Verdict
nyic.org is crawlable over HTTPS and has solid baseline technical SEO signals (Yoast JSON-LD, robots.txt, and XML sitemap infrastructure), but it is not yet optimized for AI retrieval workflows. The site does not publish an llms.txt file, and robots.txt has no AI crawler directives for GPTBot/ClaudeBot. FAQ content exists at multiple URLs, but implementation is fragmented and lacks FAQPage/QAPage schema. Improving machine-readable AI guidance, modernizing FAQ architecture, and adding organization-level entity schema would materially improve AEO readiness.
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Tidio publishes a 251-line llms.txt. Crisp has none. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
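A minimal sketch of what such a file could look like for nyic.org, served as plain Markdown from the site root (e.g. https://nyic.org/llms.txt). The page paths and descriptions below are placeholder assumptions, not pages confirmed to exist on the live site; the real file should point at the site's actual key pages.

    # nyic.org
    > One-sentence summary of who the organization is and what the site covers.

    ## Key pages
    - [About](https://nyic.org/about): who we are and what we do (placeholder path)
    - [FAQ](https://nyic.org/faq): answers to the questions we hear most often (placeholder path)
    - [Contact](https://nyic.org/contact): how to reach us (placeholder path)

    ## Optional
    - [Press](https://nyic.org/press): media coverage and announcements (placeholder path)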
Most sites run default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
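A sketch of the robots.txt additions this refers to, assuming the existing default rules are kept above this block. The user-agent tokens are the ones these crawlers announce; the site-wide Allow scope and the sitemap path are placeholder choices to be adjusted to the site's actual setup.

    # AI crawler directives (appended below existing rules)
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /

    # Yoast's default sitemap location, if unchanged
    Sitemap: https://nyic.org/sitemap_index.xml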
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
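For reference, a minimal FAQPage JSON-LD block for a single Q&A pair looks like the sketch below; the question and answer text are placeholders. Yoast's FAQ block can typically emit this markup automatically, or it can be added by hand in a script tag on the FAQ page.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Placeholder question: what services does the organization provide?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Placeholder answer. Keep it concise and self-contained so an AI assistant can quote it directly."
          }
        }
      ]
    }
    </script>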
AI systems don't cite websites; they cite entities: a verifiable business with an address, named authors, and social proof. Our self-audit (88/100) still loses points here because we lack a physical address. That's how strict this criterion is.
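A sketch of the site-wide Organization schema that covers the entity signals named above. Every value shown is a placeholder assumption to be replaced with the organization's verified name, address, logo, and official profiles.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Organization",
      "name": "Example Organization Name",
      "url": "https://nyic.org/",
      "logo": "https://nyic.org/path-to-logo.png",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "New York",
        "addressRegion": "NY",
        "postalCode": "10001",
        "addressCountry": "US"
      },
      "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://twitter.com/example"
      ]
    }
    </script>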
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.