Weak AI visibility, with 5 of 22 criteria passing. Biggest gap: the missing llms.txt file.
Verdict
keephq.dev has a strong crawl foundation (HTTPS enabled, robots.txt and sitemap returning 200, and 44 internal homepage links), but it is not yet AEO-ready at a structural level. Core machine-readable signals are missing: no llms.txt (404), no JSON-LD schema blocks, no canonical tag, no FAQ endpoint (/faq returns 404), and no RSS/Atom feed. Content extractability is also weak: zero question headings, zero direct answer patterns, no lists or tables, and no H1 despite substantial indexable text. In short, discovery works, but interpretation and answer extraction for AI engines are underpowered.
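For reference, two of the head-level gaps above, the canonical tag and feed autodiscovery, are one-line additions. A minimal sketch, assuming the homepage is its own canonical URL and using a placeholder feed path and title:

```typescript
// Sketch of the <head> signals the audit flags as missing.
// The canonical URL is the real homepage; the feed path and title are placeholders.
const headTags = `
  <link rel="canonical" href="https://keephq.dev/" />
  <link rel="alternate" type="application/rss+xml"
        title="Blog feed" href="https://keephq.dev/blog/rss.xml" />
`;

console.log(headTags.trim());
```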
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Tidio has a 251-line llms.txt. Crisp has zero. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
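To make that concrete, here is a minimal sketch of a starter llms.txt, following the common convention of an H1, a one-line summary, and link sections. Every title, path, and description below is a placeholder, not content pulled from keephq.dev:

```typescript
// Minimal llms.txt sketch. All section names, paths, and descriptions
// are illustrative placeholders to show the shape of the file.
const llmsTxt = `# Keep

> Open-source alert management and AIOps platform.

## Docs

- [Quickstart](https://keephq.dev/docs/quickstart): Install and send a first alert
- [Providers](https://keephq.dev/docs/providers): Supported integrations

## Product

- [Pricing](https://keephq.dev/pricing): Plans and limits
`;

console.log(llmsTxt);
```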
Tidio runs 4 JSON-LD schema types. Crisp runs zero. That's not a coincidence; it's the difference between a 63 and a 34. Structured data is the machine-readable layer AI trusts most.
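As an illustration, a single Organization block is enough to get started. The @type choice, logo path, and profile link below are assumptions for the sketch, not verified site details:

```typescript
// Sketch: build an Organization JSON-LD object and emit the script tag.
// The name, logo path, and sameAs link are placeholders.
const organizationLd = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "Keep",
  url: "https://keephq.dev/",
  logo: "https://keephq.dev/logo.png",        // placeholder path
  sameAs: ["https://github.com/keephq/keep"], // example profile
};

const jsonLdScript =
  `<script type="application/ld+json">${JSON.stringify(organizationLd)}</script>`;

console.log(jsonLdScript);
```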
AI assistants are question-answering machines. When your content is already shaped as questions and answers, you're handing AI a pre-formatted citation. Sites that do this right get extracted; sites that don't get skipped.
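In markup, the pattern is a question phrased as a heading followed immediately by a one-sentence answer. A sketch with invented copy:

```typescript
// Sketch of an extractable Q&A block: a question as the heading,
// then a direct one-sentence answer before any elaboration.
// The question and answer text are invented examples.
const qaBlock = `
  <h2>What is Keep?</h2>
  <p>Keep is an open-source platform for managing and automating alerts.</p>
  <p>It connects to your existing monitoring tools and routes noise into workflows.</p>
`;

console.log(qaBlock.trim());
```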
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
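Each visible Q&A pair maps to one entry in FAQPage markup along these lines; the question and answer text in this sketch are placeholders:

```typescript
// Sketch: FAQPage JSON-LD for a single Q&A pair. A real page would
// list an entry for every question shown on the page.
const faqPageLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Does Keep provide an API?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes. Keep exposes an API for managing alerts and workflows.",
      },
    },
  ],
};

console.log(`<script type="application/ld+json">${JSON.stringify(faqPageLd)}</script>`);
```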
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.