Moderate AI visibility: 7 of 10 criteria passing. Biggest gap: a comprehensive FAQ section.
Verdict
goddard.org has strong technical crawlability fundamentals, including HTTPS, a valid robots.txt, a live XML sitemap index, and meaningful JSON-LD on core pages. It also publishes a large llms.txt file and exposes clear organization identity signals such as phone, address, and nonprofit trust badges. The biggest AEO gaps are question-answer readiness: both /faq and /frequently-asked-questions return 404, and no FAQPage/HowTo schema was detected on the homepage crawl. Prioritizing FAQ architecture plus AI-crawler-specific robots directives would materially improve discoverability for answer engines.
Scoreboard
Top Opportunities
Improve Your Score
Guides for the criteria with the most room for improvement
Our site runs 87 FAQ items across 9 categories with FAQPage schema on every one. That's not excessive; it's how we hit 88/100. Each Q&A pair is a citation opportunity AI can extract in seconds.
AI assistants are question-answering machines. When your content is already shaped as questions and answers, you're handing AI a pre-formatted citation. Sites that do this right get extracted; sites that don't get skipped.
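What "pre-formatted citation" looks like in practice is FAQPage structured data embedded as JSON-LD. A minimal sketch follows; the question and answer text here is purely illustrative, not taken from goddard.org.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What age groups do you serve?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Example answer text. Each Question/Answer pair is one extractable unit for an answer engine."
      }
    }
  ]
}
</script>
```

One block like this on a /faq page, validated against schema.org's FAQPage type, gives crawlers an unambiguous question-and-answer structure to lift from.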
Most sites run a default platform robots.txt with zero AI-specific rules. That's not a strategy; it's an accident. Explicit Allow rules for GPTBot, ClaudeBot, and PerplexityBot signal that your content is open for citation.
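A sketch of what those explicit rules could look like in robots.txt, assuming the site wants all three crawlers to have full access (the Sitemap URL is a placeholder for the site's actual sitemap index):

```
# Explicitly welcome the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Existing default rules for everyone else stay below
User-agent: *
Allow: /

Sitemap: https://example.org/sitemap.xml
```

Per RFC 9309, the most specific matching User-agent group wins, so these blocks override the wildcard group for the named bots without touching rules for other crawlers.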
AI has a trust hierarchy for sources. At the top: proprietary data and first-hand expert analysis. At the bottom: rewritten Wikipedia articles. We've watched AI preferentially cite sites with original benchmarks, even over bigger competitors.
Want us to improve your score?
We build citation-ready content that AI engines choose as the answer.