Strong AI visibility with 9 of 10 criteria passing. Biggest gap: robots.txt for AI crawlers.
Verdict
smartsheet.com has a strong AEO foundation with a live llms.txt, accessible sitemap index, and robust internal content architecture. Structured data is present but uneven: homepage source shows WebPage JSON-LD, while FAQPage schema appears on Help Center content rather than a dedicated /faq hub. The largest gap is AI crawler governance in robots.txt, which currently lacks explicit directives for GPTBot, ClaudeBot, and similar agents. Closing that policy gap and expanding schema coverage would materially improve answer-engine reliability.
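As a sketch of what the live llms.txt likely resembles, the llms.txt proposal specifies a markdown file with an H1 title, a blockquote summary, and sections of annotated links. All names and URLs below are placeholders, not smartsheet.com's actual content:

```md
# Example Co

> One-sentence summary of what the company does and who it serves.

## Docs

- [Product overview](https://www.example.com/product): What the product is and its core features
- [Pricing](https://www.example.com/pricing): Plans, tiers, and billing details

## Resources

- [Help Center](https://www.example.com/help): Support articles and FAQs
```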
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Most sites run default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
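A minimal sketch of such an allow-list follows. The user-agent tokens (GPTBot, ClaudeBot, PerplexityBot) are the ones these vendors publish; the Disallow path and Sitemap URL are placeholders to adapt to your site:

```txt
# Explicitly welcome AI crawlers so content is eligible for citation
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rules for all other crawlers (paths are illustrative)
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```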
Tidio runs 4 JSON-LD schema types. Crisp runs zero. That's not a coincidence; it's the difference between a 63 and a 34. Structured data is the machine-readable layer AI trusts most.
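A hedged example of one such schema type, embedded in a page via a `<script type="application/ld+json">` tag. The product name and rating values here are illustrative placeholders, not data from any audited site:

```json
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Example App",
  "applicationCategory": "BusinessApplication",
  "operatingSystem": "Web",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "ratingCount": "1200"
  }
}
```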
AI systems don't cite websites; they cite entities. A verifiable business with an address, named authors, and social proof. Our self-audit (88/100) still loses points here because we lack a physical address. That's how strict this criterion is.
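One common way to assert entity details is an Organization JSON-LD block with a postal address and `sameAs` links to official profiles. Every value below is a placeholder for illustration:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St",
    "addressLocality": "Example City",
    "addressCountry": "US"
  },
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://x.com/example"
  ]
}
```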
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
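For reference, a minimal FAQPage block with a single Q&A pair looks like this (question and answer text are illustrative; a real page would list one `Question` entry per visible FAQ item):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is AEO?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer Engine Optimization: structuring content so AI systems can extract and cite it."
      }
    }
  ]
}
```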
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.