Moderate AI visibility with 7 of 10 criteria passing. Biggest gap: a missing llms.txt file.
Verdict
sbhny.org is technically crawlable and uses HTTPS, server-rendered HTML, and Yoast-powered sitemaps, which gives it a solid indexing foundation. It also includes JSON-LD, but the schema is limited to WebPage/BreadcrumbList/WebSite and does not expose entity-level medical organization data or FAQ schema. The site has FAQ content and strong internal navigation, yet AI-crawler-specific guidance is thin because llms.txt is missing and robots.txt does not include GPTBot/ClaudeBot directives.
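For context, FAQ markup of the kind the verdict calls out is usually a small JSON-LD block added alongside the existing WebPage schema. The sketch below uses placeholder question and answer text, not content from sbhny.org; a real implementation would mirror the FAQ copy already visible on the page, and if the site uses the WordPress block editor, Yoast's FAQ block can generate equivalent markup.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Placeholder question taken from the page's existing FAQ",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Answer text copied verbatim from the visible FAQ entry."
          }
        }
      ]
    }
    </script>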
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Tidio has a 251-line llms.txt. Crisp has zero. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
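A minimal llms.txt is a plain-Markdown file served from the site root (https://sbhny.org/llms.txt). Following the llmstxt.org convention, it opens with an H1 naming the site, a one-line blockquote summary, then sections of links with short descriptions. Everything below is a hypothetical outline with placeholder names and URLs, not actual sbhny.org content.

    # Organization Name
    > One-line summary of who the organization serves and what it offers.

    ## Services
    - [Example service](https://sbhny.org/example-service/): one-line description of the page

    ## Patients & Visitors
    - [FAQ](https://sbhny.org/example-faq/): answers to common patient questions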
Most sites run a default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
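As a sketch, explicit allowances can be appended to the existing robots.txt without touching the default platform rules. GPTBot, ClaudeBot, and PerplexityBot are the published user-agent tokens for OpenAI's, Anthropic's, and Perplexity's crawlers; the sitemap path shown is Yoast's default and should be confirmed against the live file.

    # Allow the major AI crawlers to fetch and cite site content
    User-agent: GPTBot
    Allow: /

    User-agent: ClaudeBot
    Allow: /

    User-agent: PerplexityBot
    Allow: /

    # Yoast's default sitemap index (verify the actual path)
    Sitemap: https://sbhny.org/sitemap_index.xml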
AI has a trust hierarchy for sources. At the top: proprietary data and first-hand expert analysis. At the bottom: rewritten Wikipedia articles. We've watched AI preferentially cite sites with original benchmarks, even over bigger competitors.
AI systems don't cite websites; they cite entities: a verifiable business with an address, named authors, and social proof. Our self-audit (88/100) still loses points here because we lack a physical address. That's how strict this criterion is.
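In markup terms, that entity signal usually comes from an organization-level JSON-LD block carrying a postal address and sameAs links to official profiles; something the verdict notes sbhny.org does not yet expose. Every value below is a placeholder showing the shape of the data, not the organization's actual details.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "MedicalOrganization",
      "name": "Organization Name",
      "url": "https://sbhny.org/",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Street",
        "addressLocality": "City",
        "addressRegion": "NY",
        "postalCode": "00000",
        "addressCountry": "US"
      },
      "sameAs": [
        "https://www.facebook.com/placeholder",
        "https://www.linkedin.com/company/placeholder"
      ]
    }
    </script>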
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.