Author: Jasveer Matharu
Last updated: 05/12/2025
Every day, LinkedIn is full of posts about optimising for AI surfaces like Google’s AI Overviews, and large language models (LLMs) like ChatGPT. This practice is often referred to as either Generative Engine Optimisation (GEO) or AI Optimisation (AIO).
The reactions to these posts are often split. On one hand, there’s plenty of scepticism, with people pointing out that bold claims are being made without much evidence, and that a lot of what’s out there is still untested theory. Fair enough. But on the other hand, I personally find it useful when people share their assumptions and experiments. Even if some of it turns out to be wrong, it sparks fresh ideas and gets me thinking about things I might not have considered otherwise. That’s where the real value is.
With this article, my aim is to help make sense of what we know for sure, versus what’s still speculation. I’ve focused on four tactics that, to my mind, are well-supported and worth implementing, and two that don’t have strong evidence yet, but seem like easy tests to run. I’ve also included a priority matrix of other tactics I’ve collected.
Side note - I recognise that we have something of a naming convention problem right now: some people use the term GEO, some people use the term AIO, and others prefer simply to continue to use the term SEO. AIO is the term I use with my clients, and so I have chosen to use that acronym moving forward.
The short answer: because user behaviour is shifting, and the data proves it.
Research by SparkToro and Datos (2024) showed that more than half of Google searches now end without a click, and AI has only accelerated that trend. ChatGPT, Claude, Perplexity, Bing Copilot, and Google’s AI Mode all serve answers directly in their interfaces, meaning users often never reach websites at all.
More recent findings back this up. Pew Research (2025) reported that when an AI summary appeared, users clicked on organic links in only 8% of searches, compared to 15% when no summary appeared. And according to Semrush, by March 2025 AI Overviews were triggered for over 13% of queries, a 72% month-on-month increase.
This shift has created what many SEOs are calling “the Great Decoupling”: content can achieve massive exposure inside AI summaries, but generate little or no traffic. In other words, your content becomes the raw material for AI answers, but your site might never receive a visit. This means traditional SEO metrics, such as clicks, impressions, and rankings, now tell only half the story. Your content might still be seen, quoted, or cited without a user ever landing on your page.
This might be a slightly risky confession from an SEO, but as a user, I much prefer an AI Overview that just gives me the recipe straight, instead of having to scroll through a 10,000-word blog post trying to track down the actual ingredients for that “high-protein three-ingredient banana bread” I saw on Instagram five days ago.
That said, traffic that does come through is often more valuable. Seer Interactive (2025) reported that one client saw ChatGPT referrals convert at nearly 16%, compared to just 1.8% for Google Organic over seven months. In other words, AI is narrowing the funnel but sharpening it.
The apparent contradiction of fewer clicks but higher value comes down to how people are using LLMs. A RESONEO study of 87,725 ChatGPT conversations showed that most queries are about facts and opinions rather than products.
Alt text: RESONEO Research - Graph showing how people interact with ChatGPT
Semrush data reinforces this: users often start with ChatGPT for exploration and context, but still rely on Google for verification, comparison, and action-oriented searches. Put simply, LLMs are capturing the front end of the journey, while Google remains dominant at the decision and conversion stages as depicted in the image below:
Alt text: Semrush research (2025) - Google usage after ChatGPT adoption
Search Engine Optimisation (SEO) has always focused on improving visibility in search engine results pages (SERPs), with the aim of improving rankings and increasing clicks and impressions.
AI Optimisation (AIO), on the other hand, is about making your content understandable, reusable, and quotable by AI systems like ChatGPT, Google’s Gemini, and Perplexity. Instead of chasing rankings, the goal is to become a reliable source that these AI systems can confidently draw from when generating answers.
Here’s the good news: most of what we already do for SEO still applies. The fundamentals (creating content people actually want, structuring it clearly, and sending clear authority signals) remain just as important. What changes is the context.
So, think of AIO as SEO with an adjusted lens. The tactics often look the same, but what drives results is subtly different.
Here is a side-by-side comparison:
Alt text: Table showing the difference between traditional SEO and AI Optimisation
There’s a lot of noise in the AIO/GEO space, but a few tactics are backed by enough evidence that we can confidently say they’re worth paying attention to, and investing resources in. They’re not theories; they’re patterns we’ve seen in platform documentation, research, and real-world use.
Here’s where the evidence is strongest, and where you should focus your efforts first:
One of the clearest signals we have is bot accessibility.
You don’t have to guess whether AI bots are visiting your site; you can check your server logs. And while you can’t control how AI uses your content once it’s retrieved, you can control whether (and where) these bots are allowed access through your robots.txt file. That’s a lever worth paying attention to.
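As a rough sketch, a robots.txt with explicit rules for AI crawlers might look like the below. The user-agent tokens shown (GPTBot, ClaudeBot, Google-Extended) are the publicly documented ones at the time of writing, but they do change, so verify them against each vendor’s documentation before relying on this; the paths are placeholders.

```txt
# Allow OpenAI's crawler site-wide
User-agent: GPTBot
Allow: /

# Allow Anthropic's crawler site-wide
User-agent: ClaudeBot
Allow: /

# Keep Google's AI-training crawler out of a gated section (illustrative path)
User-agent: Google-Extended
Disallow: /members/

# Default rules for everything else
User-agent: *
Allow: /
```

Note that allowing a bot here only grants access; it doesn’t guarantee your content will be cited.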
Alt text: Table showing the different AI crawlers and their purpose
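To make the log-checking step concrete, here’s a minimal sketch of counting AI-crawler hits in an access log by matching documented user-agent substrings. The bot names and sample log lines are illustrative; real logs and current user-agent strings will vary, so treat this as a starting point rather than a complete audit.

```python
from collections import Counter

# User-agent substrings for AI crawlers (verify current tokens
# against each vendor's documentation; this list will age).
AI_BOTS = ["GPTBot", "ChatGPT-User", "ClaudeBot",
           "PerplexityBot", "Google-Extended", "OAI-SearchBot"]

def count_ai_bot_hits(log_lines):
    """Count hits per AI bot across access-log lines by
    substring-matching the user-agent field."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:
                hits[bot] += 1
    return hits

# Illustrative combined-format log lines (fabricated for the example)
sample = [
    '1.2.3.4 - - [01/10/2025] "GET /guide HTTP/1.1" 200 "-" "Mozilla/5.0 GPTBot/1.0"',
    '5.6.7.8 - - [01/10/2025] "GET /blog HTTP/1.1" 200 "-" "Mozilla/5.0 (regular browser)"',
    '9.9.9.9 - - [01/10/2025] "GET /faq HTTP/1.1" 200 "-" "PerplexityBot/1.0"',
]
print(count_ai_bot_hits(sample))  # Counter({'GPTBot': 1, 'PerplexityBot': 1})
```

In practice you’d read lines from your real access log (and ideally verify bot IPs, since user agents can be spoofed), but even this crude count tells you whether the crawlers are showing up at all.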
Both Google and Microsoft have confirmed that structured data (e.g., schema markup) plays a crucial role beyond traditional SEO. It’s foundational for how LLMs understand and reuse content as they lean on knowledge graphs and semantic cues to make sense of information.
The difference is measurable. A review by Data.World found that LLM accuracy on enterprise Q&A tasks jumped from 16% to 54% when queries ran over knowledge graphs instead of raw SQL tables.
So which schemas matter most?
That said, schema shouldn’t be seen as just a checklist of properties to tick off. It’s better understood as a maturity journey, with increasing levels of impact for both search and AI.
Martha van Berkel, founder of Schema App, frames it brilliantly:
I’d also recommend checking out Martha’s articles on Search Engine Journal; they genuinely made me view schema in a fresh way (and yes, even got me excited about implementing it… which is saying something).
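For readers who want a concrete starting point, here’s what a minimal Article JSON-LD block looks like. All names, URLs, and dates below are placeholders, and the properties shown are a small subset; schema.org documents many more.

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What is AI Optimisation (AIO)?",
  "author": {
    "@type": "Person",
    "name": "Jane Example",
    "url": "https://example.com/about/jane"
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Agency"
  },
  "datePublished": "2025-12-05",
  "about": { "@type": "Thing", "name": "AI Optimisation" }
}
</script>
```

Linking entities (author, publisher, topic) explicitly like this is what moves schema from a checklist item towards the knowledge-graph territory Martha describes.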
Another tactic that’s moved out of the “hypothesis” bucket and into “validated” is content architecture.
Here’s what the research is saying:
In other words, the clearer and more structured your content is, the easier it is for AI systems to pull the right pieces out and reuse them.
So what does this look like in practice?
It’s important to note that this doesn’t change what you write about (we still need to create content people are searching for), but it does change how you present it. You’re not only writing for readers, you’re also laying breadcrumbs for machines. And the easier you make it for them to follow the trail, the more likely they are to pick up and reuse your content.
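One way to sanity-check your own content architecture is to see what a naive chunker would pull out of it. The sketch below (my own illustration, not any vendor’s actual pipeline) pairs each question-style `<h2>` with the answer-first paragraph that follows it; if your pages produce clean, self-contained pairs here, they’re likely easier for retrieval systems to quote.

```python
from html.parser import HTMLParser

class ChunkExtractor(HTMLParser):
    """Pull (heading, first paragraph) pairs from simple HTML.

    A rough stand-in for how a retrieval pipeline might chunk a page:
    each question-style <h2> plus its answer-first paragraph becomes
    one self-contained, quotable unit.
    """
    def __init__(self):
        super().__init__()
        self.chunks = []       # list of (heading, answer) tuples
        self._mode = None      # "h2", "p", or None
        self._heading = None
        self._answer_done = True

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._mode = "h2"
        elif tag == "p" and not self._answer_done:
            self._mode = "p"   # only capture the first <p> after an <h2>

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self._mode == "h2":
            self._heading = text
            self._answer_done = False
        elif self._mode == "p":
            self.chunks.append((self._heading, text))
            self._answer_done = True

    def handle_endtag(self, tag):
        self._mode = None

# Illustrative page fragment
page = """
<h2>What is AI Optimisation?</h2>
<p>AI Optimisation makes content easy for AI systems to understand and reuse.</p>
<p>Further detail that a chunker might skip.</p>
<h2>Why does it matter?</h2>
<p>Because AI answers increasingly mediate whether users ever see your site.</p>
"""

parser = ChunkExtractor()
parser.feed(page)
for heading, answer in parser.chunks:
    print(f"{heading} -> {answer}")
```

If the extracted pairs read as complete answers on their own, a machine has a much easier trail of breadcrumbs to follow.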
Just like traditional SEO, where Google rewards trusted domains, generative engines lean heavily on brands and sources they already “trust”.
Here’s what the evidence tells us:
However, AI doesn’t treat all brands equally. Studies show LLMs lean heavily toward sources that are already widely linked and cited, which makes it harder for smaller or newer voices to break through (Lichtenberg et al., 2024). Experiments even found GPT-4 and Llama-3 consistently associating positive traits with global brands like Nike and Apple, while undervaluing local competitors.
If this trend continues, AI search won’t “democratise” visibility; it will amplify the advantage of already-established players. Authority signals, consistent mentions, and deliberate brand-building are no longer nice-to-haves. They’re essential if you want to show up in AI answers.
Not everything in AIO is proven; some (ok, most!) tactics are still in the “maybe useful, maybe hype” category. These are worth keeping an eye on, and in some cases testing, but they’re not guaranteed wins (yet). I picked two that I have seen spoken about the most, but they are by no means the only ones, or the most important.
Think of this as robots.txt for LLMs. Instead of telling search crawlers what they can and can’t do, llms.txt provides machine-readable guidance for AI models. That said, llms.txt is still very experimental. As of mid-2025, leading AI crawlers (OpenAI’s GPTBot and ChatGPT-User, Anthropic’s ClaudeBot, PerplexityBot, and Google-Extended) continue to rely mostly on robots.txt, structured data, and schema.
Could llms.txt pay off in the future? Possibly. But for now, I’d treat it as an exploratory tactic rather than a guaranteed visibility booster.
Personally, if it’s low effort to implement (say, via a plugin), I don’t see the harm. It gives you a chance to test, and even if it doesn’t move the needle, it won’t hurt you.
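For reference, the llms.txt proposal suggests a Markdown file at your site root: an H1 title, a blockquote summary, then sections of annotated links. The sketch below follows that shape; every name and URL is a placeholder.

```markdown
# Example Agency

> A digital marketing agency publishing guides on SEO and AI Optimisation.

## Guides

- [What is AIO?](https://example.com/guides/what-is-aio.md): plain-language introduction
- [Schema markup basics](https://example.com/guides/schema.md): structured data walkthrough

## Optional

- [About us](https://example.com/about.md): company background
```

Because it’s just a static file, publishing one costs almost nothing, which is exactly why it makes a cheap experiment.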
This one’s interesting. No LLM provider has confirmed that Markdown is used as a signal, but some in the community are testing whether Markdown syntax (## headings, bold text, code blocks) gives content an edge.
Here’s why the idea has traction: a lot of developer and technical content (which is often written in Markdown) shows up disproportionately in AI answers. That might be because when content is scraped, converted, or stored in training pipelines (like Common Crawl), it often gets flattened into something that looks a lot like Markdown.
So the assumption is that if your content already has Markdown-style headings, lists, and tables, it may survive that flattening process more cleanly, making it easier for LLMs to extract and reuse.
That said, there’s no hard evidence that Markdown outperforms clean, semantic HTML. The safest move is still to make sure your HTML is tidy and structured so it survives the same process just as well.
That being said… since I’ve been building a few apps and sites with AI recently, I’ve seen firsthand how useful Markdown files are for retaining context. Which is why I’m genuinely curious to test this one as an AIO tactic.
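To see why the “flattening” argument is plausible, here’s a deliberately crude sketch of what an HTML-to-text step might do. Real extraction pipelines (such as those feeding Common Crawl derivatives) are far more sophisticated; the point is only that clean heading and list markup maps naturally onto Markdown, while messy markup degrades.

```python
import re

def flatten_to_markdown(html: str) -> str:
    """Crude illustration of 'flattening': map a few common HTML
    tags to Markdown equivalents, then strip whatever tags remain."""
    text = html
    text = re.sub(r"<h2>(.*?)</h2>", r"## \1", text)
    text = re.sub(r"<h3>(.*?)</h3>", r"### \1", text)
    text = re.sub(r"<(?:b|strong)>(.*?)</(?:b|strong)>", r"**\1**", text)
    text = re.sub(r"<li>(.*?)</li>", r"- \1", text)
    text = re.sub(r"<[^>]+>", "", text)  # drop any other tags
    return "\n".join(line.strip() for line in text.splitlines() if line.strip())

# Illustrative fragment, e.g. the recipe example from earlier
html = """
<h2>Ingredients</h2>
<ul>
  <li><strong>3</strong> ripe bananas</li>
  <li>2 eggs</li>
</ul>
"""
print(flatten_to_markdown(html))
# ## Ingredients
# - **3** ripe bananas
# - 2 eggs
```

Content with semantic structure survives this pass intact, which supports the article’s safer conclusion: tidy, semantic HTML gets you the same benefit without betting on Markdown specifically.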
How should you prioritise your AIO work? The AIO Tactics Priority Matrix is what we’re using internally at Oxford Comma right now. It serves as directional guidance, not a definitive recipe, and will evolve as the field matures.
Alt text: Oxford Comma Digital, AIO Tactics Priority Matrix
If you’ve spent any time with AI platforms, you’ve probably seen hallucinations in action: false information delivered with total confidence, followed by a sheepish “Good catch” when you point out the mistake.
Hallucinations can misattribute content, repeat competitor bias, or even invent features entirely.
This is why AIO isn’t just about visibility. It’s about ensuring LLMs can access and understand your content so that when you are cited, it’s accurate, transparent, and trustworthy. In a world where AI systems increasingly “decide” what’s true, brands have a vested interest in providing clarity.
But here is the exciting part: just as SEO once started with hunches before becoming a data-rich discipline, AI search optimisation is still in its assumption phase. Which means: you get to help shape it.
Yes, focus on the tactics with strong evidence, but don’t shy away from testing promising hypotheses. Something that didn’t work for someone else might work for you because your context (or even way of executing) is different.
The best part? Because we’re all experimenting, the stakes are lower right now. It’s okay if something doesn’t pan out (as long as it doesn’t negatively impact your results), you’ll still come away with valuable insights that will make you a better SEO and AIO professional.