Weak AI visibility, with 5 of 22 criteria passing. Biggest gap: a missing llms.txt file.
Verdict
nunu.ai currently has low AEO readiness, with an overall score of 21/100, driven by 12 criteria scoring 0 and several missing foundational discovery signals. Core machine-consumable assets are absent, including llms.txt (404), sitemap.xml (404), canonical tags, and any JSON-LD schema, which prevents AI systems from interpreting the site reliably. The site does show useful underlying content depth (158 quantitative data points; a Fact/Data Density score of 7) and moderate technical base signals (Clean HTML and Semantic/Accessibility both scored 6), but these strengths are not packaged in extractable answer formats. The fastest path to improvement: add crawl guidance and structured data first, then reformat content into Q&A, FAQ, and definition/list patterns.
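For orientation, two of the missing discovery signals are one-line fixes. A minimal sketch of a canonical tag and a sitemap reference, assuming hypothetical URLs (the real paths would come from the site's own structure):

```html
<!-- In the <head> of each page: declare the canonical URL (placeholder shown) -->
<link rel="canonical" href="https://nunu.ai/" />
```

```text
# In robots.txt at the site root: point crawlers at the sitemap (placeholder path)
Sitemap: https://nunu.ai/sitemap.xml
```

The sitemap itself would live at that URL as a standard sitemaps.org XML file listing each indexable page.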
Scoreboard
Fix It With AI
Copy-paste these prompts into Claude Code or Cursor to fix each criterion.
These prompts are designed for projects where you have direct access to the codebase (Next.js, React, static HTML, WordPress, etc.). If your site runs on a hosted platform like Webflow, switch to the "Webflow" tab for platform-specific instructions. Using a different hosted platform? Contact us for custom guidance.
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Tidio has a 251-line llms.txt. Crisp has zero. The score gap: +29 points. This single file tells AI assistants exactly what your site does; without it, they're guessing.
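An llms.txt file (per the llmstxt.org proposal) is a plain-markdown file served at the site root. A skeletal sketch, with the description and link paths as placeholders rather than real nunu.ai content:

```text
# nunu.ai
> One-sentence summary of what the product does and who it is for.

## Docs
- [Getting started](https://nunu.ai/docs/getting-started): setup walkthrough (hypothetical path)
- [API reference](https://nunu.ai/docs/api): endpoints and auth (hypothetical path)

## Company
- [About](https://nunu.ai/about): team and mission (hypothetical path)
```

The H1 title, blockquote summary, and H2-sectioned link lists are the conventions AI crawlers expect; each linked page gets a one-line annotation so an assistant can decide what to fetch.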
Tidio runs 4 JSON-LD schema types. Crisp runs zero. That's not a coincidence; it's the difference between a 63 and a 34. Structured data is the machine-readable layer AI trusts most.
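JSON-LD lives in a script tag in the page head. A minimal Organization sketch using schema.org vocabulary; the field values here are placeholders, not audited facts about the company:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "nunu.ai",
  "url": "https://nunu.ai",
  "logo": "https://nunu.ai/logo.png",
  "description": "One-sentence description of the company (placeholder)."
}
</script>
```

Additional types (WebSite, Product, Article, FAQPage) follow the same pattern, each in its own script block on the pages where they apply.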
AI assistants are question-answering machines. When your content is already shaped as questions and answers, you're handing AI a pre-formatted citation. Sites that do this right get extracted; sites that don't get skipped.
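The on-page pattern is simple: the question as a heading, the answer as the paragraph immediately below it. A sketch with placeholder copy:

```html
<h2>What is AEO?</h2>
<p>Answer Engine Optimization is the practice of structuring content so that
AI assistants can extract and cite it directly. (Placeholder answer; a real
one should open with a one-sentence direct answer, then supporting detail.)</p>
```

The heading-then-answer structure lets an extractor lift the pair without parsing the rest of the page.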
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
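FAQPage schema wraps those Q&A pairs in machine-readable form. A minimal sketch with one placeholder entry; a real page would list each visible FAQ item in the `mainEntity` array:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Example question? (placeholder)",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Example answer matching the visible on-page text. (placeholder)"
      }
    }
  ]
}
</script>
```

The schema text should mirror the visible FAQ content exactly; structured data that diverges from what users see can be ignored or penalized.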
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.