Alex S.
Founder, Opal · opalspend.com
A detailed breakdown of how Windgrove took Opal from invisible in AI search to being cited and recommended across ChatGPT, Perplexity, and Google AI Overviews.
Key metrics tracked: Avg LLM Position · AI Mentions · AI Visibility Score · Days
When we spoke with Opal in late March of 2026, the product was strong but AI search visibility was non-existent. The site had only 4 indexed pages, no blog, weak metadata, and no sitemap in Google Search Console. Anyone searching for Opal's category in ChatGPT, Perplexity, or Google AI Overviews was finding only competitors. Opal wasn't in the conversation.
The core issue: Opal had an extremely compelling product but zero AI visibility. The company was losing out on organic pipeline and category authority while competitors captured the conversation across AI search engines.
Most sites make the same mistake: they publish content before fixing what's broken underneath. Content on a weak technical foundation doesn't compound; it just sits there. So before we wrote a single word or built a single listing, we cleared every blocker between Opal's pages and AI crawlers.
We started by auditing the entire site — mapping every indexing gap, identifying what AI engines could and couldn't read, and diagnosing exactly why Opal wasn't surfacing in LLM responses. Only once we had a complete picture did we start executing.
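One audit step can be illustrated concretely: checking whether robots.txt blocks the crawlers that AI engines actually use. The crawler names below are real user agents, but the robots.txt contents and paths are invented for illustration, not Opal's actual configuration. Python's standard-library robotparser makes this a few lines:

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt (invented). A common misconfiguration is blocking
# AI crawlers site-wide while intending to block only private paths.
robots_txt = """\
User-agent: GPTBot
Disallow: /private/

User-agent: *
Disallow:
"""

# A representative subset of AI search crawler user agents to audit.
AI_CRAWLERS = ["GPTBot", "PerplexityBot", "Google-Extended"]

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

for agent in AI_CRAWLERS:
    for path in ["/", "/blog/some-post", "/private/draft"]:
        allowed = parser.can_fetch(agent, path)
        print(f"{agent:16} {path:20} {'allowed' if allowed else 'BLOCKED'}")
```

Here only GPTBot is kept out of /private/, and every crawler can read the public pages. Running the same check against a live site's robots.txt (via `parser.set_url(...)` and `parser.read()`) surfaces accidental site-wide blocks before any content work begins.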
With the foundation solid, we launched the first layer of AEO-targeted content. Every piece went through the same three filters before we touched it: search volume, buyer intent, and competitive opportunity. Not what seemed interesting about Opal. What a specific buyer was already searching for, and where Opal had a realistic shot to rank and get cited right now.
To get that right, we pulled from multiple sources. Google Search Console data showed us what queries were already generating impressions without clicks. Prompt tracking and LLM topic cluster analysis showed us how AI engines were framing Opal's category and which questions were going unanswered. And direct analysis of what Opal's prospects were already searching gave us the bottom-funnel angles that convert, not just traffic that looks good in a dashboard.
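The "impressions without clicks" filter can be reproduced from a standard Search Console performance export. The column names below match the shape of a GSC Queries export, but every row is invented sample data for illustration:

```python
import csv
import io

# Invented sample rows in the shape of a GSC "Queries" performance
# export (query, clicks, impressions, ctr, position).
gsc_export = io.StringIO("""\
query,clicks,impressions,ctr,position
ad spend cards,0,480,0%,14.2
ad pay,0,350,0%,18.7
opal spend login,22,60,36.67%,1.1
virtual cards for agencies,0,120,0%,22.4
""")

rows = list(csv.DictReader(gsc_export))

# Queries already earning impressions but no clicks: demand exists,
# and a dedicated page has a realistic shot at capturing it.
gaps = [r for r in rows if int(r["clicks"]) == 0 and int(r["impressions"]) > 0]
gaps.sort(key=lambda r: int(r["impressions"]), reverse=True)

for r in gaps:
    print(f'{r["query"]:30} {r["impressions"]:>6} impressions, avg pos {r["position"]}')
```

The output is the shortlist that content planning starts from: queries Google already associates with the site, ranked by unmet demand, with branded navigational queries filtered out.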
Deliverables: AEO articles published · Core pages rewritten · BOFU pages in production
Results: AI Visibility Score · AI Brand Mentions · Site Health Score · AEO Articles Live
AI Visibility: 0 → 15.9% — From undetectable in AI search to being cited and recommended. Opal is now in the conversation where the buyers are.
Brand mentions: 0 → 1,766 — Opal's product is now being surfaced inside LLMs in response to queries their buyers are typing, with zero ad spend, and the count is still compounding.
Site health: 66.2 → 80.7 — That puts Opal in the top 10% of sites benchmarked for technical health. Every article published from here compounds on a stronger foundation.
Articles live: 0 → 8 — Each one optimized for how buyers search, how LLMs answer, and where Opal has the highest chance of getting cited.
Within one week of launching the Ad Pay page, Opal ranked #2 for “ad pay” and #2 for “ad spend cards.” These are not vanity terms. Businesses searching “ad spend cards” or “ad pay” already know what they want. They are in the market, evaluating options, and ready to move.
Ranking #2 for both inside the first week means Opal is in that conversation before a competitor gets a chance to close it.
For context, most sites take three to six months to see meaningful movement on bottom-funnel terms like these. Opal was there in seven days. That is what happens when the technical foundation is clean, the content is structured around buyer intent, and the page is built for how AI engines and search algorithms actually evaluate relevance.

Keyword rankings within 7 days of content launch.
Opal is now appearing as a named recommendation inside Perplexity AI responses. When a buyer searches something like “what cards allow you to pay ad invoices with a credit card,” Opal Ad Pay surfaces in the answer. Not as an ad. Not as a sponsored result. As the answer, alongside a description of the product, supported platforms, and a direct link to opalspend.com.
This is what structuring content for LLMs actually produces. Most sites are invisible to AI engines because they were never built for them. Opal is not one of those sites anymore. And this is month one. As content compounds and citation frequency increases, that AI visibility score climbs. The 15.9% we hit in 31 days is a starting point, not a ceiling.
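One concrete piece of "structured for LLMs" is machine-readable markup such as schema.org JSON-LD, which hands crawlers an unambiguous question-and-answer structure to quote from. The specific tactics used for Opal aren't detailed here, so this is a generic sketch with invented FAQ content:

```python
import json

# Hypothetical FAQ content for an "Ad Pay" style page; the question
# and answer text are invented for illustration.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Can I pay ad invoices with a credit card?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. Ad Pay lets businesses settle ad invoices "
                        "with a card across supported platforms.",
            },
        }
    ],
}

# Serialized JSON-LD, ready to embed in a
# <script type="application/ld+json"> tag in the page head.
print(json.dumps(faq, indent=2))
```

The payoff is that an AI engine parsing the page doesn't have to infer what question a paragraph answers; the question, the answer, and the entity relationships are declared explicitly.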

Perplexity AI surfacing Opal as a recommended product with a direct link.

Opal cited as the best-fit recommendation in an AI-generated response.

Opal listed as the top recommendation for agency ad spend cards in Perplexity AI.
A site health score of 80.7 puts opalspend.com in the top 10% of sites benchmarked for technical health. That is not a vanity metric. It means AI crawlers and search engines can actually read, index, and cite Opal's pages without hitting dead ends.
Every piece of content published from this point forward lands on a foundation that amplifies it rather than suppresses it.

Site health trend over 30 days — 80.7 overall, top 10% of benchmarked sites.
Every piece of content we published created a new surface for AI engines, third-party sites, and industry tools to reference and cite Opal. That is how brand mentions compound. You do not chase them. You build the content infrastructure that earns them, and they follow.
In 31 days, Opal accumulated 1,766 brand mentions across LLMs and the web. That is Opal's product showing up in answers, comparisons, and recommendations that their team did not write, did not pay for, and did not have to distribute. Pure earned visibility, at scale, in the first month.

1,766 brand mentions across LLMs and the web in 31 days.
Before Windgrove, Opal's AI visibility existed entirely in the middle of the funnel. The company occasionally appeared in comparison-style prompts, reaching a 26.0% mention rate at the consideration stage. But outside of that narrow window, Opal was effectively invisible across AI search.
At the top of the funnel, where buyers first discover categories and vendors, Opal had a 0.0% mention rate. At the bottom of the funnel, where users ask AI systems which platform to choose or purchase, visibility was also 0.0%. That meant competitors were controlling both discovery and buying-intent conversations across ChatGPT, Perplexity, and other LLMs, while Opal was only appearing sporadically during vendor comparison.
31 days later, Opal expanded visibility across the full buyer journey.

Windgrove expanded Opal's AI visibility across all funnel stages.
If your product is strong but buyers aren't finding you in ChatGPT, Perplexity, or Google AI Overviews, the gap may be smaller than it looks. We can audit your current AI visibility, identify the highest-leverage opportunities, and show you what a 30/60/90 day AEO plan looks like for your situation.
No obligation, no generic deck.