Author: Clarissa Fonseca Chen
Last updated: 04/06/2025
When I first started experimenting with AI for content operations, my default approach was prompting: ask a question, refine the response, rinse, repeat. It worked well enough for quick drafts or brainstorming sessions, but it didn’t take long to notice the limitations: outputs were inconsistent, sometimes outdated, and rarely reusable.
So I moved on to creating custom GPTs. While these were reusable, I was still spending a significant amount of time piecing together outputs and verifying their accuracy.
When Eoin and Steven from the AirOps team reached out about their cohort program, I knew this was my chance.
I joined because I didn’t just want better prompts; I wanted a better system.
Over the two-week program, I realized the real unlock wasn’t just learning to “prompt smarter.” It was learning to build workflows that made research more relevant, more repeatable, and more connected to how I actually work.
Instead of stitching tools together manually, I was building an AI-powered pipeline that could support my day-to-day process. All this without starting from zero every time.
I want to share the nuggets of knowledge I gained and how they shaped my new approach to leveraging AI.
At its core, prompting is reactive. You’re asking one question at a time, and the model gives you an answer, usually pulling from a snapshot of data that might be weeks or even months old.
This is fine for brainstorming or quick drafts, but it breaks down fast when you’re trying to:

- Scale research across dozens of topics
- Keep outputs grounded in what’s happening right now
- Produce consistent, reusable outputs that other people and tools can rely on
Workflows, on the other hand, are intentional, modular, and built for scale.
Using the AirOps platform, I built a workflow that automated everything from topical relevance and SERP scraping to localization-ready briefs and link suggestions.
All of this without me having to rewrite prompts every time.
For example, this is the content creation grid I created on AirOps:
It hosts five different workflows covering topic research, content brief creation, article drafting, content optimization, and localization (AU and UK only at this point).
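To make the workflow idea concrete, here’s a minimal sketch of how a grid like this can be expressed as composable steps. The five stage names mirror my workflows; everything else, including the run_llm helper and the prompt wording, is a hypothetical stand-in for whatever your platform or SDK actually provides.

```python
# A minimal sketch of a modular content pipeline: each stage is a small,
# reusable function with a defined input and output, so nothing gets
# re-prompted from scratch. `run_llm` is a hypothetical placeholder for
# a real model call.

def run_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's SDK."""
    return f"[model output for: {prompt[:60]}...]"

def topic_research(topic: str, industry: str) -> str:
    return run_llm(f"Research '{topic}' for the {industry} industry.")

def content_brief(research: str) -> str:
    return run_llm(f"Turn this research into a structured brief:\n{research}")

def article_draft(brief: str, brand_kit: str) -> str:
    return run_llm(f"Write a draft in the '{brand_kit}' brand voice from:\n{brief}")

def optimize(draft: str) -> str:
    return run_llm(f"Optimize this draft for search and readability:\n{draft}")

def localize(draft: str, locale: str) -> str:
    return run_llm(f"Localize this draft for {locale}:\n{draft}")

def run_pipeline(topic: str, industry: str, brand_kit: str, locale: str) -> str:
    # The same five stages run in the same order for every topic;
    # only the inputs change.
    research = topic_research(topic, industry)
    brief = content_brief(research)
    draft = article_draft(brief, brand_kit)
    return localize(optimize(draft), locale)

print(run_pipeline("AI content workflows", "SaaS", "acme", "en-AU"))
```

Because industry, brand kit, and locale are parameters rather than text hardcoded into prompts, nothing has to be rebuilt from zero for a new topic.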
A process that would normally take 2-4 weeks can now be completed in half that time, even accounting for human checks on tone and accuracy and for process bottlenecks when teams are short on bandwidth.
Additionally, this grid is not limited to one industry. I set it up so that I can indicate my industry of focus and easily add other brand kits to adjust the focus of the outputs.
I expect to speed this up even more by taking advantage of the available CMS integrations; then I will be unstoppable!
One of the biggest wins from the workflow approach was finally getting research outputs that reflect what’s happening right now.
Instead of relying solely on static model knowledge, I pulled in real-time search snippets, People Also Ask questions, Reddit threads, publisher headlines, and more. This allowed me to see and evaluate how a topic was evolving, not just how it was defined six months or more ago.
This was especially useful for:

- Spotting emerging subtopics and angles before they settle into training data
- Checking whether a SERP had shifted since a piece was first planned
- Grounding briefs in the questions audiences are actually asking right now
And because these inputs were baked into the workflow, I didn’t have to manually refresh them or wonder if I was missing something.
Every output was grounded in live context.
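As a rough illustration, here’s what a live-context research step can look like. Both fetch helpers are hypothetical stand-ins for a real SERP or data API; they return canned examples so the sketch runs as-is.

```python
# Sketch of a research step that gathers live context first, then prompts.
# fetch_serp_snippets and fetch_people_also_ask are hypothetical stand-ins
# for a real SERP/data provider; here they return canned examples.

def fetch_serp_snippets(query: str) -> list[str]:
    return [f"Top result snippet about '{query}' (fetched today)"]

def fetch_people_also_ask(query: str) -> list[str]:
    return [f"What is {query}?", f"How has {query} changed this year?"]

def build_research_prompt(topic: str) -> str:
    snippets = fetch_serp_snippets(topic)
    questions = fetch_people_also_ask(topic)
    # Live data is baked into the prompt, so the model reasons over
    # today's SERP instead of its training-time snapshot.
    parts = [
        f"Topic: {topic}",
        "Current SERP snippets:",
        *[f"- {s}" for s in snippets],
        "People Also Ask:",
        *[f"- {q}" for q in questions],
        "Summarize how this topic is evolving and what a new article should cover.",
    ]
    return "\n".join(parts)

print(build_research_prompt("AI content workflows"))
```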
While writing this post, I asked ChatGPT for its knowledge base cutoff date. It replied with the following:

“My knowledge base is current up to June 2024. If you need information after that, like recent news, updates, or product releases, I can look it up for you. Would you like me to check something recent?”
This means an almost one-year gap in knowledge!
However, LLMs can now search for topics online and return all the information you might need.
Should you trust it? Personally, I do not trust the raw output from LLMs, even from the custom GPTs I have tuned to produce exactly the output I am looking for.
This is where a workflow also shines.
You can add a human review step as an extra layer of verification, ensuring your final output is accurate, relevant to your brand, and reflective of your target audience’s interests.
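Here’s a minimal sketch of that idea as a script; the console prompt below is purely illustrative and stands in for whatever review UI your workflow tool provides.

```python
# Sketch of a human-review gate: the pipeline pauses and a person must
# approve the draft before it moves on to publishing. The console prompt
# is an illustrative stand-in for a workflow platform's review step.

def human_review(draft: str) -> str:
    print("---- DRAFT FOR REVIEW ----")
    print(draft)
    verdict = input("Approve this draft? [y/N] ").strip().lower()
    if verdict != "y":
        raise SystemExit("Draft rejected; send back for revision.")
    return draft

def publish(draft: str) -> None:
    print("Published:", draft[:60], "...")

draft = "An AI-generated draft about content workflows..."
publish(human_review(draft))  # publish only runs if a human approves
```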
The other challenge with prompting is consistency. One day your LLM of choice gives you gold; the next, it’s hallucinating statistics or skipping sections. That’s not scalable.
By building a structured workflow, I could define exactly what I wanted at each step:

- Which research inputs each step pulls in
- How the brief is structured and formatted
- Which sections and elements a draft must include
- What optimization and localization checks run before handoff
Each of these was its own component, with defined logic and format.
It didn’t just make the output better; it made it usable by other stakeholders and tools. It could plug into a CMS, brief a writer, or inform a CRO test.
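For instance, here’s a sketch of a content brief defined as a typed component rather than free-form text. The field names are illustrative assumptions, not any platform’s actual schema; the point is that a defined format is what lets output plug into a CMS or brief a writer without manual cleanup.

```python
# Sketch of a content brief as a defined, machine-readable component.
# Field names are illustrative; structured output like this can feed a
# CMS, a writer, or a test tool without manual reformatting.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class ContentBrief:
    topic: str
    target_keyword: str
    audience: str
    outline: list[str] = field(default_factory=list)
    internal_links: list[str] = field(default_factory=list)

brief = ContentBrief(
    topic="AI content workflows",
    target_keyword="ai content workflow",
    audience="in-house SEO teams",
    outline=["Why prompting breaks down", "Building the pipeline"],
    internal_links=["/blog/llm-basics"],
)

# Serialized, the same brief can go to a CMS API, a writer, or a CRO tool.
print(json.dumps(asdict(brief), indent=2))
```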
That’s the difference between a smart idea and a smart system.
A lot of folks treat LLMs as a tool for spinning up content faster. That’s fine, but it’s only part of the picture.
What the AirOps cohort helped me see is that if you want LLMs to work with your team, not just for it, you need to go beyond the template.
Workflows let you turn instincts into infrastructure. They help you scale research without losing nuance. And they give you back time without giving up control.
If you’re still prompting, ask yourself: what would this look like if it were a system?
AirOps helps you build powerful LLM workflows that combine your data with GPT-4, Claude, Gemini, and more to drive real growth in your business.