Weak AI visibility: 1 of 10 criteria passing. Biggest gap: robots.txt rules for AI crawlers.
Verdict
The domain supports HTTPS and publishes a large XML sitemap index, but Radware bot protection/CAPTCHA blocks most key pages from automated crawlers. In this crawl, the `llms.txt`, homepage, and FAQ URLs returned challenge pages instead of machine-readable or user-facing content, and `robots.txt` returned a 404 page body with no crawler directives. AEO signals that depend on readable on-page content (schema, Q&A formatting, semantic structure, NAP consistency) are therefore largely missing or unverifiable in bot-accessible responses.
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Most sites run a default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
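As a sketch, explicit Allow rules for the major AI crawlers could look like the following. The user-agent tokens are the ones these vendors document for their crawlers; the Sitemap URL is a placeholder you would replace with your own.

```
# Explicitly welcome AI assistant crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Default rule for all other crawlers
User-agent: *
Allow: /

# Placeholder: point at your real sitemap index
Sitemap: https://example.com/sitemap.xml
```

Serve this at `/robots.txt` with a 200 status and `text/plain` content type; a 404 or a challenge page means crawlers see no directives at all.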
Tidio has a 251-line llms.txt. Crisp has zero. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
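An llms.txt is plain Markdown served at the site root. A minimal sketch, using a hypothetical company name and placeholder URLs:

```markdown
# Example Company

> One-sentence description of what the product does and who it is for.

## Key pages

- [Pricing](https://example.com/pricing): plans and feature tiers
- [Docs](https://example.com/docs): product documentation and setup guides
- [FAQ](https://example.com/faq): common questions and answers
```

The file must be reachable by bots at `/llms.txt` without a CAPTCHA challenge, or it does nothing.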
Tidio runs 4 JSON-LD schema types. Crisp runs zero. That's not a coincidence; it's the difference between a 63 and a 34. Structured data is the machine-readable layer AI trusts most.
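One of the most common JSON-LD types for a company homepage is `Organization`. A sketch with placeholder values, embedded the standard way in the page head:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example",
    "https://x.com/example"
  ]
}
</script>
```

Other types worth layering on for a SaaS site include `WebSite`, `SoftwareApplication`, and `FAQPage`, each as its own JSON-LD block.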
AI assistants are question-answering machines. When your content is already shaped as questions and answers, you're handing AI a pre-formatted citation. Sites that do this right get extracted; sites that don't get skipped.
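A sketch of what question-shaped content looks like in practice: a visible heading-plus-answer pair, with matching `FAQPage` markup so the same Q&A is machine-readable. The question and answer text here are placeholders.

```html
<h2>Frequently asked questions</h2>

<h3>How do I install the chat widget?</h3>
<p>Paste the provided script tag just before the closing body tag.
The widget loads asynchronously and works on all plans.</p>

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How do I install the chat widget?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Paste the provided script tag just before the closing body tag. The widget loads asynchronously and works on all plans."
    }
  }]
}
</script>
```

Keeping the visible text and the structured-data text identical is the safe pattern; mismatches between the two can get the markup ignored.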
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.