

From Archive to Asset: How to Optimise Existing Videos for Search & AI in 2026

Author: Georgie Kemp

Last updated: 23/02/2026

When I joined VEED nearly two years ago, the team was already sitting on something valuable: a library of videos from a video editing and creation company that had leveraged SEO as a key growth driver. Every video we'd published was created to be discovered, based on extensive Google and YouTube-specific keyword research. What I didn't realise at the time was that the team had accidentally future-proofed themselves for AI search.

Fast forward to today, and approximately 10% of our citations are coming from YouTube as a direct result of that legacy authority. Not from new content frantically created to "feed the AI." From videos published months or even years ago, already optimised for traditional search, which are now being discovered and cited by ChatGPT, Perplexity, Gemini, and Claude.

That's when it clicked for me: the real opportunity isn't in creating more content. It's in systematically optimising what you already have.

Google’s John Mueller has been clear about this in a recent comment: go after your customers wherever they are. Being in the industry, we've seen video content increasingly pulled into SERPs since Universal Search in 2007, and this has accelerated over the past few years with AI Overviews, and now TikTok and Instagram appearing in search results.

Want to know what's currently driving AI visibility? Recent Ahrefs research from the wonderful Louise Lineham analysed ~75,000 brands and found that YouTube mentions outperform every other factor – including your domain rating, backlink profile, and how often your brand gets mentioned on other websites. And this pattern holds true across ChatGPT, Google's AI Mode, and AI Overviews.

[Chart: Ahrefs correlation data showing YouTube mentions at 0.735+ for AI visibility – the highest of all factors tested, including domain rating and backlinks. Source: Ahrefs Top Brand Visibility Factors (2025)]

And digging deeper into why YouTube performs so well, recent analysis from Profound found that Gemini is citing videos with a median of just 4,394 views – half that of ChatGPT's median. I explored this data further in a recent webinar with Profound if you're keen for the full breakdown. This means you don't need viral content to show up in AI citations. You need relevance and structure, which is exactly what your existing video library likely already has.

I'm going to show you the framework I use at VEED for identifying which videos to optimise, how to actually do it, and how to prove it's working to make decisions that drive visibility across both traditional search and AI platforms.

TL;DR: The 3 Principles that Guide this Framework

  1. AI platforms read transcripts, not videos

It's too expensive for LLMs to parse video directly. They're crawling your transcripts and supporting text. Optimise the text layer first.

  2. Optimise existing content before creating new

Getting mentioned in sources AI platforms already trust compounds faster than building authority for new content from scratch.

  3. Think in questions, not keywords

AI search rewards content that clearly answers specific questions. Frame everything around what questions your video solves.

As we all know, the AI search space is evolving rapidly. YouTube citations will continue to grow as videos answer user questions in clear, helpful ways that meet E-E-A-T standards. The brands that demonstrate expertise, provide genuine value, and make content accessible through proper text support will be the ones AI platforms recommend.

Why Video Optimisation Matters

LLMs are playing a "consensus and context" game, as opposed to engaging in exhaustive web discovery. They pull heavily from a cluster of pre-trusted sources: Reddit (especially for ChatGPT and Perplexity), Wikipedia (used as an entity graph), YouTube (for AI Overviews and AI Mode), Medium, LinkedIn, and trusted review sites like G2 and Capterra.

The Ahrefs correlation data I mentioned above – YouTube mentions outperforming domain rating and backlink counts – demonstrates that breaking into AI citations isn't necessarily about creating more content. It's about being present in the sources LLMs already trust. And YouTube's dominance is staggering. According to BrightEdge, up to 29.5% of Google's AI Overviews cite YouTube, making it the top cited domain overall. That's a 200X advantage over Vimeo's 0.1%.

Gemini’s traffic share has jumped from 5.7% to 21.5% in the past 12 months (according to Similarweb), which means that Google’s leverage of YouTube in AI Overviews and AI Mode is compounding. Once a source is trusted for a category, it keeps getting reused. Breaking into that citation set with net-new content is hard. Making optimisations inside sources that are already trusted? That’s a little easier, right?

[Chart: Gemini's AI traffic share growing from 5.7% to 21.5% in 12 months. Source: Similarweb Global AI Tracker (2026)]

This doesn't hold true only for Google platforms: Perplexity and ChatGPT are increasingly citing YouTube too, especially for tutorials, product demos, pricing, comparisons, and reviews.

This is why I'm so focused on existing video optimisation. For mid-to-bottom-funnel searches especially, getting mentioned in sources LLMs already cite is more effective than publishing new content.

What AI Platforms do with your Videos

Before I get into the tactics, let me explain something that changed how I think about video optimisation entirely. Think about all the content that's basically been invisible to search engines until now; PDFs, podcasts, videos, entire films. Eli Schwartz recently shared an insight that LLMs will eventually unlock all of that, but right now? They're reading transcripts, not actually watching videos. And honestly, that's brilliant news for us because it means we know exactly what to fix.

And here's the critical part that shapes my entire strategy: it's far more expensive for an LLM to parse a video or an image than text. For SEOs, that means nailing the text layer: transcripts, descriptions, everything that helps AI understand what's in the video.

This fundamentally changes how I think about video optimisation. I'm not optimising for thumbnails or watch time when it comes to AI search (though of course those still matter for YouTube's algorithm). I'm optimising for the text that makes video content parseable, understandable, and citation-worthy.

Something I learned the hard way: only create video when it's actually the right format. Demos where you need to show something? Tutorials where steps matter? Complex explainers that benefit from visuals? Absolutely. But if you're just making video to tick an AEO box, you're wasting time.

To help with prioritisation, I've become quite picky about which videos deserve the optimisation effort. If a topic works just as well as a blog post, it's not forced into video format. But when video genuinely helps someone understand something better – a software walkthrough, a visual comparison, a step-by-step process – that's when it's worth investing in proper optimisation. And by 'proper,' I mean making sure the text layer is there so both people and AI can actually use it.

How to Prioritise which Videos to Optimise

After auditing our library, I developed a simple system for deciding which videos to tackle first.

Four questions guide every decision:

  1. Is it already performing?

I pull the basics first: how many views it's getting, whether people are actually watching it all the way through, engagement signals, and if it's already showing up in search for anything useful. If traditional algorithms value it, that's a signal AI platforms might too.

  2. Is anyone still searching for this?

I check our keyword tools, but I also literally type the question into ChatGPT and Gemini to see what comes up. I ask myself: what specific question does this answer? Is this important to our ICPs (ideal customer profiles)? Is someone researching a problem they have, or are they ready to buy?

  3. How much work does it need?

A great video with poor metadata is a quick win. A mediocre video with fundamental content issues isn't worth the investment right now.

  4. Does it matter to the business?

Sometimes a video deserves optimisation even if the metrics aren't screaming at you. Maybe it shows off something you're doing differently, or it supports a bigger company goal. These are usually videos where the format genuinely serves the content: demos, tutorials, complex explainers.

I then sort our videos into three buckets. First, quick wins: great videos that just need better metadata and transcripts. Second, videos that need more work but matter strategically to the business. Third, decent videos for niche topics that I'll get to when I have time.
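The four questions and three buckets above can be turned into a small triage sketch. To be clear, the inputs, thresholds, and example videos below are illustrative assumptions of mine, not VEED's actual system:

```python
# A toy triage helper for the four prioritisation questions.
# Inputs and bucket labels are illustrative, not a production system.

def bucket_video(performing: bool, search_demand: bool,
                 low_effort: bool, strategic: bool) -> str:
    """Sort a video into one of the three optimisation buckets."""
    if performing and search_demand and low_effort:
        return "quick win"          # great video, just needs metadata/transcripts
    if strategic:
        return "strategic project"  # more work, but matters to the business
    return "backlog"                # decent niche video, revisit later

# Hypothetical example videos: (performing, search_demand, low_effort, strategic)
videos = {
    "How to remove background noise": (True, True, True, False),
    "Product vision keynote": (False, False, False, True),
    "Niche codec comparison": (True, False, True, False),
}
for title, flags in videos.items():
    print(f"{title}: {bucket_video(*flags)}")
```

The point of writing it down, even this crudely, is consistency: every video in the library gets judged against the same four questions rather than gut feel.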

What makes video content citation-worthy?

Through various tests across ChatGPT, Claude, Perplexity, Gemini, and AI Overviews, I've identified a few patterns.

Transcript quality is non-negotiable

AI platforms are reading, not watching. I've seen videos with half the views of more popular competitors get cited by AI platforms instead. Why? Better transcripts. The AI could actually understand what was being said.

So, I break transcripts into readable paragraphs, add speaker labels where it matters, and fix the punctuation that auto-captions always mess up. I also add timestamp markers at key moments and include context where needed. If a video references "this chart," I add brief clarifying notes in brackets.
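The mechanical part of that cleanup – joining caption fragments into paragraphs and capitalising sentence starts – can be scripted. Here's a rough sketch; the helper name and defaults are my own, and the judgment calls (punctuation restoration, speaker labels, bracketed context) still need a human pass:

```python
import re

def clean_captions(raw_lines, sentences_per_para=3):
    """Join auto-caption fragments into readable, punctuated paragraphs.

    A minimal sketch: assumes punctuation has already been restored,
    since auto-captions need manual fixes no script fully automates.
    """
    # Collapse caption fragments into one normalised string.
    text = " ".join(line.strip() for line in raw_lines if line.strip())
    text = re.sub(r"\s+", " ", text)
    # Split on sentence-ending punctuation and capitalise each sentence.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    sentences = [s[0].upper() + s[1:] if s else s for s in sentences]
    # Group sentences into short, readable paragraphs.
    paragraphs = [
        " ".join(sentences[i:i + sentences_per_para])
        for i in range(0, len(sentences), sentences_per_para)
    ]
    return "\n\n".join(paragraphs)

raw = ["so today we're looking at transcripts.",
       "they matter because AI reads text.",
       "let's fix the punctuation."]
print(clean_captions(raw))
```

Even a script this small turns a wall of auto-caption fragments into something both readers and crawlers can parse.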

Descriptions that answer real questions

Every video we optimise gets supporting text that spells out exactly what problem it solves, who should watch it, and what they'll learn. This isn't just the YouTube description – I'm talking companion blog posts, proper summaries, FAQs. That way there are multiple places where AI can find and understand the context.

Metadata with the question framework in mind

Titles: I balance keyword inclusion with natural language, always with the question framework in mind. "How to optimise video transcripts for better search visibility" works because it clearly signals the question being answered.

Descriptions: First 150 characters serve as a clear summary that could work as an AI snippet answer. Then I explicitly state what questions this video answers, who it's for, key takeaways with timestamps, resources and citations, and related questions.

Tags: Mix of question-based tags, topic tags, and branded terms. Think about how people actually ask questions.

Schema markup

I implement VideoObject schema markup on pages that embed or host our videos, such as landing pages or blog posts. The schema reinforces what questions the video answers, who created it and their credentials, how it fits into a broader content cluster, and interaction signals that validate quality.

For tutorials, I add HowTo schema that breaks down the question-answer structure clearly.
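For illustration, a minimal VideoObject snippet along these lines might look like the following. Every URL, name, and value here is a placeholder, not VEED's actual markup:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "How to optimise video transcripts for better search visibility",
  "description": "Answers: how do I clean up auto-captions and structure a transcript so search engines and AI platforms can parse it?",
  "thumbnailUrl": "https://example.com/thumb.jpg",
  "uploadDate": "2026-01-15",
  "contentUrl": "https://example.com/video.mp4",
  "transcript": "Full cleaned transcript text goes here...",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "SEO Lead"
  },
  "interactionStatistic": {
    "@type": "InteractionCounter",
    "interactionType": {"@type": "WatchAction"},
    "userInteractionCount": 12000
  }
}
</script>
```

Note that `transcript` is a native schema.org property on VideoObject, which is exactly why the text layer and the markup reinforce each other.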

Building the ecosystem

AI platforms cite videos way more often when they're not just sitting there on their own. So for videos that matter, I build out companion blog posts that tackle the same questions from different angles, add FAQs that expand on the topic, and link to related resources to create multiple entry points for discovery.

This means a single video becomes multiple citation touchpoints. And critically, it provides the text layer that makes video content accessible to AI systems.

How to Optimise Videos for AI search

Once I've identified priority videos, here's what I actually do:

  1. Audit and prioritise

Pull analytics, check search visibility, manually test videos across AI platforms. The question I always ask: does this actually need to be a video? Does it help someone understand better than a blog post would? If yes, I move to the next step.

  2. Fix the transcripts:

Sometimes that means paying for professional transcription. Other times it means going through the auto-generated version line by line and fixing any mistakes. I make sure to use logical paragraph breaks, speaker labels, accurate punctuation, timestamp markers, and contextual notes in brackets. I also provide transcripts in multiple formats: YouTube's native format, downloadable text files, and embedded in companion posts.

  3. Build the question framework:

Document the primary question it answers, 3-5 secondary questions covered, who this is for, where in the funnel it belongs, and key takeaways that could serve as snippet answers. This shapes everything else.

  4. Rewrite metadata:

I start with the question the video answers right at the top. Then I add the key takeaways someone would actually care about, timestamps that describe what's happening (not just "00:45 - Introduction"), related questions people might ask next, and any resources or citations.

  5. Create supporting content:

I create blog posts that embed the video but aren't just transcript dumps. The transcript is there, formatted so it's actually readable, but I also add proper heading structure based on the questions being answered, FAQs that go deeper, summaries for people who want the quick version, and links to related content that builds out the topic.

  6. Implement schema:

I use VideoObject schema on pages that host or embed videos, including name (question-focused), description (including questions answered), transcript text or URL, creator credentials, and interaction statistics.

How to Measure Success

I split my tracking between traditional and AI channels because they tell different stories.

Traditional SEO: Here, I'm reviewing organic traffic to the video pages, noting where the content is ranking for both our brand terms and non-branded keywords, whether YouTube is suggesting our videos, and the engagement signals – watch time, completion rate, that sort of thing.

AI search visibility: I regularly query AI platforms with relevant questions and track whether our videos are being cited. I document which platforms cite them, in what contexts, and how prominently. Our ~10% YouTube citation rate comes from exactly this kind of systematic tracking of where our citations originate.
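As a sketch of what that systematic tracking can look like in practice, here's a minimal example that computes a citation rate from manually logged spot checks. The CSV columns, queries, and numbers are illustrative assumptions, not our actual data:

```python
# Compute an AI citation rate from a manually logged spot-check file.
# Columns, platform names, and queries below are illustrative only.
import csv
import io
from collections import Counter

# In practice this would be a real CSV exported from your tracking sheet.
log = io.StringIO("""platform,query,our_video_cited
ChatGPT,how to trim a video,yes
Gemini,how to trim a video,yes
Perplexity,remove background noise,no
Gemini,add subtitles automatically,yes
""")

rows = list(csv.DictReader(log))
cited = Counter(r["platform"] for r in rows if r["our_video_cited"] == "yes")
rate = sum(cited.values()) / len(rows)

print(f"Overall citation rate: {rate:.0%}")  # 3 of 4 checks -> 75%
for platform, count in cited.items():
    print(f"{platform}: {count} citation(s)")
```

Re-running the same query set on a schedule, and logging the date each video was optimised, is what lets you attribute movement to specific changes rather than platform noise.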

I also track what percentage of our video library has professional-quality transcripts, related blog posts with full text support, proper schema markup, and question-focused metadata. These become leading indicators.

Business impact: I track leads that come from video content, how well video viewers convert compared to other sources, and how long it takes someone who's watched our videos to actually convert. I've built dashboards that show all this over time, and I mark exactly when each video is optimised so I can see what's actually moving the needle.

Mistakes I've Made, and What I’ve Learned

  • Over-optimising for keywords: Early on, I focused too much on keyword density. AI models reward clarity and usefulness, not keyword stuffing. When I shifted to thinking in questions and prioritised readability, I saw better results.
  • Neglecting the text layer: In my early attempts, I focused too much on video-specific elements like thumbnails and watch time optimisation. Once I understood that AI platforms are crawling transcripts, not watching videos, my entire approach shifted. The text layer is everything.
  • Optimising in isolation: Video content performs best as part of a broader content strategy. I learned to connect optimised videos to blog content, social posts, email campaigns, and other assets. The ecosystem approach creates the consensus signal that AI platforms look for.

What I'm Focused on for 2026

Remember those three principles I outlined at the start of this article? They're guiding everything I'm doing in 2026:

  1. The text layer is still everything

Even as AI capabilities evolve, transcripts and supporting text remain the foundation. I'm doubling down on transcript quality and comprehensive companion content.

  2. Existing content optimisation is compounding

Videos optimised months ago are seeing renewed citations as AI search platforms grow their market share. With Google's ecosystem advantage, that >10% citation rate from YouTube acts as a baseline that improves as optimisations are refined.

  3. Question-focused content wins across platforms

Whether it's ChatGPT, Gemini, or Perplexity, the videos that clearly answer specific questions get cited.

And that advice about going after your customers wherever they are? It matters more than ever. I’m constantly watching video integration across platforms, from YouTube to TikTok to Instagram in search results, and adapting our strategy accordingly.

The platforms I’m prioritising:

  • Gemini and AI Overviews (as Gemini grows market share, properly optimised YouTube content will compound)
  • ChatGPT (still the largest player)
  • Perplexity (growing steadily and particularly good at citing sources transparently)

I'm also watching closely as AI platforms develop better capabilities to parse video and audio directly. When that happens, I believe that videos with strong text support will still have an advantage, because they'll be accessible in multiple ways.

Where to Start with Video Optimisation

The boundaries between traditional SEO, AI search, and social discovery are basically disappearing. The people who win this year won't be the ones pumping out endless content. They'll be the ones who make what they've already got impossible to ignore, no matter where people are searching.

So, if you’re keen to jump into video, here's where I'd start:

  1. Audit your top 10 videos

Be honest about what's actually there. Do they have quality transcripts? Supporting text? Question-focused metadata?

  2. Choose three quick wins

Select three videos that need minimal work and where the format genuinely serves the content. Focus on transcript quality and supporting text first.

  3. Test AI citation

Query AI platforms with questions your videos should answer. Are you being cited? If not, start by adding comprehensive transcripts and supporting text.

  4. Think in questions

Reframe your video strategy around the questions your content answers. This shift will guide everything from titles to schema markup.

  5. Build the text layer

For your priority videos, ensure they have quality transcripts, summaries, FAQs, and companion posts.

Your existing video library is already super valuable. It's time to make it discoverable by giving AI platforms the text layer they need to understand, parse, and cite your content.

Georgie Kemp - SEO Lead, VEED

Leading 25+ specialists at the intersection of AI, SEO and video, Georgie champions Search Everywhere Optimisation™, blending traditional SEO with test-and-learn AI tactics to drive measurable ROI across today's evolving search landscape.
