At Comprend, our Media & MarTech team has been exploring this shift hands-on. Over the past few months, we've gone from experimenting with large language models to building and orchestrating AI agents: scoped, capable, task-specific systems that can execute multi-step workflows with minimal human intervention. It's early, but the results have been striking enough to change how we think about what's possible.
The distinction that matters
When you open ChatGPT or Claude and start a conversation, you're talking to an unscoped model. It's the full breadth of a large language model's knowledge, responding to whatever you ask. That's useful, but it's also general, and it lacks the context, tools, and focus needed to execute complex marketing tasks reliably.
An agent is different. It's a model that has been given a narrow, specific task, clear instructions on how to do it, and a set of capabilities: the ability to browse the web, access data sources, read documents, or connect to platforms. Instead of asking an AI to help you write an SEO brief, you build an agent that is an SEO brief specialist: it knows what data to pull, how to analyse it, and exactly what format to hand over to the next agent in the chain.
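To make the contrast concrete, here is a minimal sketch of what "scoping" a model into an agent can look like in code. The names (`AgentSpec`, `build_prompt`, the tool labels) are illustrative, not any particular platform's API; the point is that the narrowing lives in an explicit task definition, toolset, and output contract rather than in an open-ended chat.

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """A scoped agent: one narrow task, explicit instructions, a fixed toolset."""
    name: str
    instructions: str                            # how to do the one task
    tools: list = field(default_factory=list)    # e.g. web search, data sources
    output_format: str = "markdown"              # what the next agent expects

    def build_prompt(self, task_input: str) -> str:
        # The scoping is encoded in the prompt: role, rules, tools, output contract.
        tool_list = ", ".join(self.tools) or "none"
        return (
            f"You are {self.name}.\n"
            f"Instructions: {self.instructions}\n"
            f"Available tools: {tool_list}\n"
            f"Respond in {self.output_format}.\n\n"
            f"Input: {task_input}"
        )

# A hypothetical SEO brief specialist, scoped to one job in the chain.
brief_agent = AgentSpec(
    name="an SEO brief specialist",
    instructions="Turn a SERP analysis into a structured writing brief.",
    tools=["web_search", "keyword_data"],
)
prompt = brief_agent.build_prompt("SERP analysis for 'employee engagement'")
```

In a real system the prompt would be sent to a model with actual tool access; what matters here is that the same underlying model behaves very differently once its task, tools, and handover format are pinned down.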
This is the key difference. With GenAI, you're the operator. With agents, you're the strategist, and the agents execute.
What we've been building
To test this in practice, we started with a workflow our team knows well: SEO content creation. From keyword research through SERP analysis, copy briefing, content writing, quality assurance, and refinement, it's a process that typically takes days and requires multiple specialists to coordinate.
We built a six-step agentic workflow using Optimizely's Opal platform, where each stage is handled by a dedicated agent. A SERP analyser examines the top competitors for a keyword set and produces a detailed report covering ranking factors, content structures, semantic patterns, and E-E-A-T signals. That report feeds into a copy brief agent, which cross-references the analysis with brand knowledge (tone of voice, audience profiles, positioning) to create a detailed writing brief. A content agent then produces the copy, followed by a quality assurance agent that scores the output against both the original SERP analysis and brand guidelines. A final refinement agent tightens everything based on that feedback.
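The orchestration pattern behind a chain like this is simple: each agent consumes the previous agent's artifact and adds its own. This is a hedged sketch of that hand-off logic only; the stage functions are stand-ins for real model calls, and the names and the stub QA score are invented for illustration.

```python
# Each stage takes the accumulated state and returns it with a new artifact;
# the pipeline just threads state through the stages in order.
def serp_analyser(keyword):
    return {"keyword": keyword, "report": f"SERP analysis for {keyword}"}

def copy_brief(state):
    state["brief"] = f"Brief based on: {state['report']}"
    return state

def content_writer(state):
    state["draft"] = f"Draft written from: {state['brief']}"
    return state

def quality_assurance(state):
    # A real QA agent would score against the SERP report and brand guidelines;
    # this stub just checks the draft traces back to the brief.
    state["qa_score"] = 91 if "Brief" in state["draft"] else 0
    return state

def refine(state):
    state["final"] = state["draft"] + " (refined per QA feedback)"
    return state

def run_pipeline(keyword, stages):
    state = stages[0](keyword)
    for stage in stages[1:]:
        state = stage(state)
    return state

result = run_pipeline(
    "sustainable packaging",
    [serp_analyser, copy_brief, content_writer, quality_assurance, refine],
)
```

The value of the pattern is that every intermediate artifact (report, brief, QA score) is inspectable, which is exactly what makes a multi-agent chain debuggable when one stage underperforms.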
The results from our first pilot were genuinely surprising. A page that started with 350 words and a quality score of 35 came out as a 2,400-word, fully structured landing page scoring 91, complete with FAQ schema, internal linking recommendations, and metadata. The entire workflow took around ten minutes. Perhaps more impressive: the agents generated over 22,000 words of internal analysis, briefs, and instructions for themselves during the process. That's the equivalent of a small book's worth of research and strategic thinking, executed autonomously.
And this was built in roughly a day, using Claude to help write the agent prompts themselves. The barrier to entry is lower than most people expect.
Beyond single workflows: the browser-native layer
Agentic AI isn't limited to structured workflow platforms. We've also been exploring browser-based AI tools, such as Claude's computer use capabilities and Perplexity's Comet browser, that extend agentic thinking into everyday tasks.
For a recent campaign brief, we used Claude to go into our SharePoint environment, find relevant case studies, pull our growth marketing methodology, and generate a tailored pitch presentation, all from a single prompt. It wasn't perfect on formatting, but the strategic content required almost no editing.
With Comet, we've been building LinkedIn audiences by prompting the AI to research a product line, construct targeting criteria, and then go directly into LinkedIn Campaign Manager to build the audience, including identifying irrelevant job titles in the member list and excluding them. For an account-based marketing project, it crawled over a hundred pages of industry association websites to compile a company target list.
These aren't theoretical use cases. They're tools our team is actively using to compress research, audience building, and competitive analysis from hours into minutes.
Starting without failing
Here's the uncomfortable truth about enterprise AI: roughly 95% of initiatives fail. In most cases, it's not the technology; it's ambition outpacing maturity. Teams try to transform everything at once, the first pilot disappoints, and momentum dies.
At Comprend, we use a structured approach called Project Runway to help teams, both internal and client-side, avoid this. It starts with understanding where a team sits on the AI maturity curve. Then, through a focused workshop, we map common tasks, bottlenecks, and time sinks. The most valuable use cases tend to be things that are high impact and easy to implement. Not the flashiest ideas, but the ones that will actually work and build confidence.
One of the most compelling examples we've seen came from an insurance company workshop. The team came in expecting to build sophisticated marketing automation agents. What they actually needed first was a compliance agent, because every piece of content had to go through a single legal officer who wasn't a marketer, and the review cycle was their biggest bottleneck. The agent they built checks content against compliance rules in real time as it's being written, dramatically reducing review spirals before anything reaches the legal team.
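The core of a compliance agent like the one described above can be surprisingly small: a set of rules checked against the draft as it is written, so violations are flagged before anything reaches legal. This is a rule-based sketch under assumed rules; the patterns and rule texts below are hypothetical examples, not the insurer's actual compliance policy, and a production agent would likely combine such checks with model-based review.

```python
import re

# Hypothetical compliance rules: each pattern maps to the rule it violates.
COMPLIANCE_RULES = {
    r"\bguaranteed returns?\b": "No promises of guaranteed financial outcomes",
    r"\brisk[- ]free\b": "Products may not be described as risk-free",
    r"\bbest insurer\b": "Unsubstantiated superlatives are not allowed",
}

def check_compliance(draft: str) -> list:
    """Flag rule violations in a draft before it reaches legal review."""
    issues = []
    for pattern, rule in COMPLIANCE_RULES.items():
        if re.search(pattern, draft, flags=re.IGNORECASE):
            issues.append(rule)
    return issues

issues = check_compliance("Enjoy guaranteed returns with our risk-free plan.")
```

Running this on every save or paragraph keeps the feedback loop inside the writing tool, which is what collapses the review spiral: the writer fixes violations immediately instead of waiting days for legal to bounce the draft back.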
The lesson: let the bottleneck reveal the first agent, not the ambition.
From there, it's about embedding agents where teams already work: inside the CMS, the content marketing platform, the tools people use daily, so that AI becomes part of the workflow rather than a separate task. And critically, it's about building team capability alongside the technology: giving people structured, progressive challenges to build, test, and debug agents so the knowledge doesn't live with one person.
Where we're headed
Our team currently has several pilot projects running across SEO, paid search, audience intelligence, and content workflows. We're exploring how agents can automate trend detection from search data, generate topic clusters rather than single pages, and integrate subject matter expert input into automated content pipelines. We're also looking at how scheduled agents, triggered by time or data signals rather than manual prompts, can shift campaign optimisation from reactive to continuous.
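The "scheduled agents" idea reduces to a simple trigger condition: run on a time signal, or run when a data signal crosses a threshold. This is a minimal sketch of that trigger logic only; the function name, default hour, and threshold are invented for illustration, and a real deployment would hang this off a scheduler or monitoring feed.

```python
def should_run(now_hour: int, metric_value: float,
               threshold: float = 0.2, scheduled_hour: int = 6) -> bool:
    """Fire on a time signal (daily schedule) or a data signal (metric drop)."""
    time_trigger = now_hour == scheduled_hour       # e.g. every day at 06:00
    data_trigger = metric_value < threshold         # e.g. CTR falls below floor
    return time_trigger or data_trigger
```

Either trigger moves optimisation from "someone remembered to check the dashboard" to continuous: the agent wakes itself when the schedule or the data says it should.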
None of this replaces strategic thinking. The human remains the strategist, the quality gate, the one who decides what matters. But the execution layer is changing fast, and the teams that build capability now, even starting small, will have a meaningful advantage as these tools mature.
As Comprend's first global strategic AI partner for Optimizely, and as the recent AI Innovator of the Year, we're in a unique position to develop and scale these approaches for clients at any maturity level. But honestly, the most important thing we've learned so far is simpler than any partnership or award: start with one bottleneck, build one agent, and let the results speak for themselves.
The shift from prompts to pilots is already underway. The question isn't whether agentic AI will reshape media and MarTech; it's whether your team will be building agents or watching others build them.
Do you wish to exchange more thoughts with us on how to thrive and grow from within? Join us at our next Comprend day or get in touch now.