Weak AI visibility with 5 of 10 criteria passing. Biggest gap: a missing llms.txt file.
Verdict
nyconnects.ny.gov has solid technical crawl foundations (HTTPS, robots.txt, and sitemap.xml) and clear government contact identity, but it lacks AI-facing directives and structured schema markup. Core FAQ content appears client-rendered in JavaScript rather than present in server HTML, reducing extractability for some AI crawlers. The site is navigable and semantically decent, but major AEO gains depend on adding llms.txt, JSON-LD, and server-visible Q&A content.
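As a hedged sketch of that last fix, this is what server-visible Q&A markup could look like; the questions, answer copy, and heading structure below are placeholders, not content taken from the live site:

    <!-- Rendered in the initial server HTML so crawlers that skip JavaScript still see it -->
    <section id="faq">
      <h2>Frequently asked questions</h2>
      <h3>What is NY Connects?</h3>
      <p>Placeholder answer describing the program in one or two sentences.</p>
      <h3>Who can use NY Connects?</h3>
      <p>Placeholder answer covering eligibility.</p>
    </section>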
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Tidio has a 251-line llms.txt. Crisp has zero. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
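A minimal llms.txt sketch for a site like nyconnects.ny.gov, following the common H1 / blockquote / linked-sections convention; the section names and URLs are illustrative, not the site's real paths:

    # NY Connects
    > Placeholder one-line summary of what NY Connects offers and who it serves.

    ## Key pages
    - [Find services](https://nyconnects.ny.gov/find-services): illustrative URL; point this at the real services directory
    - [Contact](https://nyconnects.ny.gov/contact): illustrative URL; point this at the real contact page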
Tidio runs 4 JSON-LD schema types. Crisp runs zero. That's not a coincidence; it's the difference between a 63 and a 34. Structured data is the machine-readable layer AI trusts most.
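As one hedged example, a GovernmentOrganization block in the page head is a natural starting point for a site like this (FAQPage is another candidate once the Q&A is server-rendered); the name, URL, and description values below are placeholders to verify before publishing:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "GovernmentOrganization",
      "name": "NY Connects",
      "url": "https://nyconnects.ny.gov/",
      "description": "Placeholder description of the program and who it serves.",
      "areaServed": "New York State"
    }
    </script>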
Most sites run their platform's default robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
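A minimal sketch of what those rules look like, assuming you want these crawlers to have full access (tighten the Allow and Disallow paths to match your own policy):

    # Explicitly allow the major AI crawlers
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /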
AI has a trust hierarchy for sources. At the top: proprietary data and first-hand expert analysis. At the bottom: rewritten Wikipedia articles. We've watched AI preferentially cite sites with original benchmarks, even over bigger competitors.
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.