Author: Hannah Smith
Last updated: 10/03/2026
We recently had the pleasure of hosting Tory Gray and Patrick Hathaway for an Ask Me Anything (AMA) in our Slack #seo-technical channel.
Tory is Founder and CEO at Gray Dot Co, a search marketing agency. Her work sits at the intersection of technical SEO, data, and strategy, with a focus on reducing risk, clarifying priorities, and translating complex signals into insights leaders can trust. Gray Dot Co supports clients in a range of ways including: technical SEO for complex enterprise websites; site migrations and platform changes; JavaScript & Rendering SEO; and data-informed content and brand strategy.
Tory is particularly interested in how SEO intersects with product, engineering teams, analytics, and brand — and how organizations can adapt as search continues to fragment across platforms and LLM-driven experiences. She loves to share her expertise with the wider marketing community: she speaks and writes regularly about technical SEO, data, and the future of search; and is one of the most active members of the Women in Tech SEO Slack community.
Patrick is Co-founder and CEO at Sitebulb. Sitebulb offers robust desktop and cloud software that makes website audits easier, more efficient, and more accurate. He is passionate about providing website data that is prioritized, easy to understand, easy to analyze, affordable, and accessible, whatever your budget.
Patrick would rather things be “done right” than simply “done”. His in-built perfectionism is both a boon for QA and a massive annoyance to his fellow Co-founder and CTO, Gareth Brown. He spends his time looking after Sitebulb Cloud customers, and crafting the funniest, sweariest release notes in SEO.
Tory and Patrick answered questions from our community on a range of topics, including AI search, prioritisation, gaining buy-in, and more! You can read the session highlights below; you might also like to check out Sitebulb’s Technical SEO in 2026 AMA - Key Learnings post.
Our live AMA sessions take place on the WTS Slack Workspace, a safe, private space for community members to ask questions and share their knowledge. Out of respect for our members and their privacy, rather than publishing full transcripts of these sessions, we curate edited recaps which capture a selection of the questions and answers from each session.
Want to take part in our next Slack AMA? Join the WTS community!
Tory: I see “GEO” as “SEO+”. By this I mean SEO plus multichannel marketing, with some additional tech complexity. For “GEO”, I caution clients about budgets, timelines, and goals. If they think SEO takes a long time to yield results for their business, AI is a whole other ballgame in terms of growing a funnel of enough visibility/traffic to be meaningful to their bottom line, especially if they are just getting started. Even for enterprise, AI traffic is TINY relative to search.
Patrick: Since we don't offer agency services here at Sitebulb, feel free to ignore my highly opinionated opinion. I feel like all these new acronyms aren't a great place to start (I remember circa 2015 when lots of folks sprang up with new job titles like Inbound Marketer that fell by the wayside). The neatest naming I have seen that feels pretty future-proof is simply SEO & AI Search.
Tory: My take is that – wherever it makes sense – you should be experimenting with structured data. This does NOT mean “go all in and fight for resources to do ALL THE THINGS, IN ALL THE PLACES, AT ALL COSTS.”
Why? While there is interesting evidence worth exploring, it’s by no means guaranteed to work, and that’s why you have to balance this type of experimentation with “safer bet” work.
Example: if you work at a company with limited development resources, and a culture that requires best practices, case studies, etc. as “proof” for all site updates and SEO changes, then schema beyond rich markup may not be for you. If you work at a company that’s engineering-first, with a culture of testing and experimentation, and dev resources at the ready, then absolutely yes – dig deep on experimentation.
There is a caveat though: how you implement this matters. Schema is a specific kind of rich markup that can be inline OR “fetched”, i.e. called via JavaScript. Fetched content won’t be visible to AI bots that don’t render, which is most of them, in most circumstances (see Giacomo Zucchini on which LLMs render what today). Therefore, for best results in today’s AI ecosystem, it’s important that whatever you implement is inline (embedded in the response HTML) so that AI bots can access it.
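To make “inline” concrete, here’s a minimal sketch of JSON-LD embedded directly in the page source; the organisation name and URL are placeholders, not anything discussed in the session:

```html
<!-- Inline JSON-LD: it ships in the response HTML, so bots that never
     execute JavaScript can still read it. Injecting this same <script>
     tag via a separate JavaScript call would hide it from non-rendering
     bots. All values below are placeholders for illustration. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Co",
  "url": "https://www.example.com/"
}
</script>
```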
Why do I think it’s worth experimenting with?
So, it’s really a matter of priorities and resources and the culture of the company testing its use. It’s not for everyone, and that’s okay.
In terms of formats / ontologies, I don’t have strong feelings about which kinds of markup should be implemented. We’re in active exploration mode. There’s not inherently a front runner that LLMs are stating they favour – if and when that happens, sure, lean into that format.
Again, this is not a proven tactic, so don’t overextend yourself and get yourself in hot water with your clients, but if and when you have the resources to explore its use, using a test-based methodology – then heck yes, do it. The era of AI is the TIME to experiment!
Patrick: For most sites, I still think rich result eligibility is the primary driver / leverage with clients. When working with websites that have lots of deep / rich data, getting that all marked up and exposed through schema makes a lot of sense in terms of building out your own knowledge graph.
Somewhat related, I thought this was great from SearchPilot recently - The Most Underrated Retail SEO Levers for 2026: What Experts Really Think. The amount of focus put on product data from these experts (including Emina Demiri-Watson) is particularly telling, I think.
Patrick: Internal linking is 100% still important. I'd probably avoid trying to explain how LLMs work to clients, and instead focus on what we know – all the AI platforms are using search to ground their answers. You rank highly in search by doing traditional SEO, of which internal linking is hugely important for URL discovery and link equity distribution.
Tory: Agree with Patrick re: the importance for SEO. Also watch this presentation by Jori Ford – it references a study about fetching data and how “deep” AI bots crawl beyond the prompted page. The answer was ~3 pages (live fetch testing). This means that linking contextually still matters.
Plus, let’s not forget about the actual humans using your site, who need those links to find pertinent information and, you know, give your business money. AI and SEO aren't the be-all and end-all!
Tory: From what I’ve seen, most clients don’t want to block AI crawlers – they want to reduce costs, reduce risk, and maximize return. We need scalpel solutions, not hammer solutions; blocking all the bots feels a lot like a hammer.
For the “turn pages into markdown” feature, I’m experimenting… cautiously? But to speak to some of the pushback on Cloudflare’s new markup transformation feature – I’m not sure the web needs that much potential for cloaking. Also, the environmental benefits are maybe not as strong as they could be. Sure, if everyone moved to this model, the token usage is categorically better. But that doesn’t seem likely, and in the meantime… the result is net more resources for net more crawling (of markdown files AND HTML file variants).
That said, I appreciate the experimentation, and that a big tool is trying to set standards, particularly when tech companies certainly seem willing to break all the rules (legal, moral, ethical) in their efforts to win “the AI wars”. But also, if I’m honest, I’m not sure I trust that Cloudflare’s motivations are “pure” with these features. Maybe I’ve been burned one too many times by big tech companies, but it seems more like an effort to reduce their own costs, get good PR, and grab a bigger market share.
I’m happy that they’re doing it, and I wish more companies would. I’m just not… putting all my eggs in that particular basket, is all.
Patrick: I have no agency angle here, so I'll give the tools angle. We are seeing sites blocking Sitebulb's crawler ALL the time; it's become the first onboarding email with cloud customers now (“add this IP address to your allow-list”). If I were agency-side, I'd definitely include a regular check-up on the bot blocking situation with all clients.
Patrick: I'd focus on things you can fix without dev involvement: title tags, meta descriptions, internal linking, canonical tags, and making sure your crawl data is clean enough to actually prioritise what matters most (Sitebulb can help with all of this!).
Then I'd look for any critical issues that could really hinder you, and then consider fighting for dev resources to fix these.
Tory: Certainly I’d focus on non-dev work, to Patrick’s point. But, I’d also do a complete tech audit, and evaluate each potential item on a variety of scales:
Overall, I’d encourage you to evaluate what could be done, and then work to define what should happen now versus what you can wait for, and prioritise accordingly.
Patrick: Quick answer on PageSpeed Insights score: no. The gains from going from a score of 80 to 90 are marginal in all but the most competitive verticals. If the site is genuinely SLOW, I'd focus on fixing genuine Core Web Vitals failures (especially Largest Contentful Paint (LCP) and Cumulative Layout Shift (CLS), which have the clearest correlation with user experience) and leave it there unless there's a specific reason to go further.
For SMB prioritization in general, I always start with crawlability and indexability (i.e. can Google find and understand the site?), then rendering, on-page fundamentals, and structured data.
I still see lots of folks stressing way too much about things like 301s, 404s, and image alt text, rather than about the core technical problems that actually impact whether your pages show up in search results (a page can't rank if it isn't indexed!).
Tory: I agree with Patrick here. My approach to site speed is usually just “make it good enough” relative to user happiness, and your competition. Enterprise sites may be an exception in some cases, where given the size of the audience, speed can be a bigger lever re: conversion rates (in that faster may mean more $$$).
Generally I prioritise tech SEO for companies of all sizes relative to their goals, budgets, and resources. We use our SEO Roadmap framework to help us do that.
Tory: The short answer is to build a business case. Check out this post on How to Create an SEO Business Case That Gets Traction; you’ll also find templates there to help get you started.
Here’s a slightly longer answer:
The metrics used will vary based on your recommendations. E.g. some sort of rich markup implementation might be measured based on CTR when that markup is present in the SERPs. If it’s a site speed improvement, I’d look to conversions primarily – across channels, and via organic too.
Patrick: Congrats on bringing the gnarliest problem to this AMA! The root issue I guess is that the schema is being generated at a different point in the stack to where the actual price data is being served. I'd definitely approach this as a dev problem, which means as the SEO, your role is to evidence the issue and convince them it is a problem worth fixing.
The first thing I'd do is systematically test what Google actually sees. Use URL Inspection in Google Search Console and cross-reference against a fresh crawl with JavaScript rendering enabled. Sitebulb has an option to 'Save HTML' as you crawl, which could be helpful here. This should tell you which version of the data is making it through to Google.
The dispatcher/CDN layer inconsistency is worth logging over time with Google's Rich Results Test – if Google is regularly seeing mismatches it may suppress the rich results anyway, which is at least a signal something needs fixing (and may help with buy-in).
From there, get the dev and / or infrastructure team involved and present your case. You want to figure out when the schema is generated versus when the cache is warmed. If the price data is inherently dynamic (live Online Travel Agent rates, for example), you may need to accept that schema will lag slightly; in which case being explicit about that in the schema (e.g. using priceValidUntil) is better than serving stale data that Google might eventually flag as a rich result quality issue.
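As an illustration of being explicit about price freshness, here's a hedged sketch of Product/Offer markup; the product, price, currency, and date are all invented placeholders:

```html
<!-- Hypothetical Product/Offer markup. priceValidUntil states how long
     the quoted price holds, which is safer than serving a stale price
     with no expiry. Every value here is a placeholder. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Hotel Room, 1 Night",
  "offers": {
    "@type": "Offer",
    "price": "149.00",
    "priceCurrency": "USD",
    "priceValidUntil": "2026-06-30",
    "availability": "https://schema.org/InStock"
  }
}
</script>
```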
The most important thing is still probably building the business case to fight for it to get prioritised (which Tory can speak about with much more experience and authority than I).
Tory: Did I hear BUSINESS CASE?! You’re speaking my language. Check out this guide: How to create an SEO Business Case.
Tory: AI has shifted internationalisation and the fallout hasn’t been fully realised yet. ChatGPT has a HIGHLY western & English-language bias, plus LLMs (AI bots) don’t use, care about, or respect hreflang. SEO-wise, not much is different - but AI is certainly messing with things.
Patrick: Good point, Tory! On the technical SEO side, hreflang is still the core of it, and that hasn’t really changed much in the last few years. It is still fiddly and often broken!
The other shift I'd call out is JavaScript-heavy internationalisation frameworks. A lot of sites now handle locale-switching client-side, which means Googlebot doesn't always see the hreflang tags at all. This can also be super annoying to audit. Crawling with JavaScript rendering enabled and cross-checking what Google actually sees versus what your CMS thinks it's serving has become a lot more important.
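For instance, server-rendered hreflang annotations look like this in the raw HTML head; if a client-side framework injects them instead, a non-rendering crawl sees none of them (the domain and locales below are placeholders):

```html
<!-- hreflang annotations present in the server-rendered <head>.
     If these only appear after JavaScript runs, an unrendered crawl
     won't see them at all. URLs are placeholders. -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
<link rel="alternate" hreflang="de-de" href="https://www.example.com/de-de/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```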
Patrick: It’s generally considered best practice for paginated pages to have self-referencing canonicals. You can inadvertently introduce other problems with widespread canonicalisation – particularly in relation to crawl budget.
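For clarity, a self-referencing canonical on a paginated URL is simply this (the URL is a placeholder):

```html
<!-- Self-referencing canonical on page 2 of a paginated series.
     Pointing this at page 1 instead would tell Google the deeper
     pages are duplicates of page 1. URL is a placeholder. -->
<link rel="canonical" href="https://www.example.com/widgets/page/2/" />
```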
Before doing anything here, I'd test out what your current situation is. Pick a subfolder and let Sitebulb crawl only the pagination hierarchy (or use it with list mode as the crawl source). Enable URL Inspection via Google Search Console and check which pages are actually indexed right now, plus, also check “days since last crawl” to see which pages Google is returning to regularly. We’ve created URL Inspection Report documentation to help you do that.
Tory: Yes, setting the canonical incorrectly (pointing to page 1) will have an impact on those listed items' ability to rank (specifically, items on page 2+). Period.
So the question becomes: how much do you care about the linked items?
If / when I DON’T care about the value of the paginated items – as in, those pages are not providing business or user value – I probably prefer to block crawling (robots.txt) or to noindex (meta robots) rather than using a canonical. I have… many opinions on pagination. You might like to check out a guide we created: Pagination: SEO Best Practices and Need-to-Know Nuances.
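As a quick sketch of the noindex route (a hypothetical snippet, not from the session):

```html
<!-- Meta robots noindex on paginated pages you don't want indexed;
     the robots.txt route would instead Disallow the pagination path.
     Note: for noindex to be seen at all, the page must stay crawlable,
     so don't combine it with a robots.txt block. -->
<meta name="robots" content="noindex, follow" />
```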
Patrick: If I had to pick one thing it would be JavaScript rendering – understanding how Google actually crawls and renders pages, where it falls down, and how to diagnose issues when it does. It underpins so many other problems (indexing, crawl budget, content visibility) and it's the area where self-taught technical SEOs tend to have the biggest knowledge gaps.
It's also super important for AI search, since LLM crawlers are also unable to render JavaScript (at least for their training data; this feels like an area of rapid change when it comes to RAG retrieval).
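As a toy illustration of why rendering matters, here's a hypothetical snippet where the visible content only exists after JavaScript runs, so any bot that doesn't execute it sees an empty page:

```html
<!-- The paragraph below is created entirely by the script, so a
     crawler that doesn't execute JavaScript sees only an empty <div>.
     Everything here is made up for illustration. -->
<div id="main-content"></div>
<script>
  document.getElementById('main-content').innerHTML =
    '<p>This copy is invisible to any bot that does not render JavaScript.</p>';
</script>
```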
We have a JavaScript SEO training course that we put together with both WTS and Tory, and you can sign up for free.
Tory: My answer is that it’s all contextual. The ability to analyse, understand, and use data remains my number one skill recommendation for SEO (and most digital channels). If you don’t understand the numbers and metrics, and what they mean, how can you possibly effectively determine what work needs doing? What should be emphasised and what should be deprioritised?
Beyond that, I think it depends on the types of clients you’re working with (e.g. small businesses versus startups versus enterprise; ecommerce versus SaaS; etc.) – I’d have a different answer for each vertical. Your clients’ tech stack might also be a consideration: are we talking Jamstack or vanilla WordPress?
Overall, if you FORCE me to pick one (I hate picking one!) I’d pick rendering.
That includes JavaScript SEO; but it’s also about gaining a clear understanding of how rendering works, and what the challenges are, not just for search bots, but also for AI bots, and eventually AI agents.
In short, we’re going to have to figure out how these new bots consume the web, in addition to old bots. There’s a TON to explore and learn, and it should be an interesting adventure as it inevitably evolves!
Patrick: To keep your finger on the pulse without doing tech SEO all day, I'd suggest reading a few newsletters by trusted folk, who will filter out some of the noise but still keep you up to date. I’d recommend:
Tory: At Gray Dot Co we use:
Patrick: Only Sitebulb :)
Hannah Smith - Head of Content, Women in Tech SEO
Hannah is the Head of Content at WTS!
Hannah also offers creative content consultancy, training & support to help develop teams, improve processes and deliver results. Her work for clients has won multiple awards, & she’s spoken at numerous conferences including MozCon, SMX, SearchLove, & BrightonSEO.
Sitebulb has been a huge supporter of the WTS community for years - and their tools reflect the same practical approach to technical SEO that our community values!
Their desktop and cloud-based crawling tool makes technical SEO audits easier, more efficient, and more accurate - helping you fast-track the audit phase and get to actionable insights quicker.