For most of my career, marketing operations was the layer that made the machine run.
If there was a campaign idea, an audience to define, a nurture flow to build, a launch to support, or a reporting issue to sort out, marketing ops was usually where the work became real. We translated strategy into execution. We figured out how to operationalize the plan inside the systems. We managed the data gaps, the process gaps, the handoffs, the approvals, the exceptions, and all the invisible work that sits between “we should do this” and “it’s live.”
That is why I find some of today’s AI conversation a little too narrow.
A lot of it still sounds like a productivity story. Faster content creation. Faster research. Faster campaign setup. Faster reporting. Faster this, faster that. And to be fair, those things matter. Anyone who has spent time inside a real operating environment knows there is plenty of manual work worth reducing.
But I do not think speed is the most important thing changing right now.
What is changing is the operating model.
The old model was built on handoffs
Most enterprise marketing and sales organizations were built around specialization.
One team owns the data. Another owns the systems. Another owns the campaigns. Another owns the brand. Another owns the product story. Another owns the sales process. Another owns customer communications. Another owns reporting. Another owns compliance. None of that happened by accident. These are important disciplines, and in many cases specialization made the work stronger.
But it also created a model where progress depends on coordination.
Ideas move through briefs. Work moves through tickets. Decisions move through meetings. Assets move through review cycles. Data moves through integrations. Leads move through routing rules. Campaigns move through operations. Opportunities move through sales stages. And along the way, every team adds value, but every handoff also creates latency, ambiguity, and room for inconsistency.
That was manageable when the pace of work was slower, the channels were fewer, and the volume of data was lower.
It is much harder to manage when revenue teams are expected to respond to constantly shifting signals, tighter buying windows, changing customer expectations, more products, more channels, more content, more personalization, and more pressure to prove commercial impact.
That is where the current moment gets interesting.
Because AI is starting to do more than help people complete individual tasks.
It is beginning to participate in the work itself.
This is where “agentic” becomes useful language
I realize “agentic” can still sound vague, or worse, like another industry buzzword. But I think it points to something real.
We are moving toward a world where software is not just waiting for instructions at every step. It can increasingly interpret signals, reason over available context, recommend next actions, assemble inputs, trigger downstream steps, monitor outcomes, and adapt based on what happens next.
That is different from traditional automation.
Traditional automation is usually rule-based and narrow. If X happens, do Y. It is useful, but it tends to be rigid. It works best when the path is known in advance.
Agentic systems are more dynamic. They can work across a broader set of signals and choices. They can help determine not just how to execute a process, but what action is most appropriate within a defined set of goals, rules, and constraints.
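The contrast is easier to see in code. Here is a minimal, hypothetical sketch (every name below is invented for illustration, not drawn from any real platform): traditional automation hard-codes the path in advance, while an agentic step scores candidate actions against a goal and then lets constraints filter what is actually allowed.

```python
from dataclasses import dataclass

# Traditional automation: the path is known in advance and never varies.
def rule_based(event: str) -> str:
    if event == "form_submitted":
        return "send_welcome_email"
    return "do_nothing"

# Agentic-style step: evaluate candidate actions against a goal,
# filter by guardrails, then choose the best remaining option.
@dataclass
class Action:
    name: str
    expected_value: float    # estimated contribution to the goal
    requires_consent: bool   # an example constraint

def agentic_step(actions: list[Action], has_consent: bool) -> str:
    allowed = [a for a in actions if has_consent or not a.requires_consent]
    if not allowed:
        return "do_nothing"  # constraints can veto every option
    best = max(allowed, key=lambda a: a.expected_value)
    return best.name

candidates = [
    Action("sales_outreach", expected_value=0.9, requires_consent=True),
    Action("digital_program", expected_value=0.6, requires_consent=False),
]
print(rule_based("form_submitted"))               # send_welcome_email, every time
print(agentic_step(candidates, has_consent=False))  # digital_program
```

The point of the sketch is the shape, not the scoring: the rule-based path always fires the same way, while the agentic step can land on a different action, or no action at all, as the signals and constraints change.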
That matters a lot in marketing and sales, because so much revenue work is not purely linear:
- Which accounts should we prioritize right now?
- What is the strongest cross-sell opportunity in this segment?
- What message is most relevant given what this customer already owns?
- Which motion should happen first: sales outreach, a digital program, a service-led conversation, or no action at all?
- When performance starts to drift, what should change?
- When a new signal appears, who should act on it?
In the old model, humans had to piece those answers together across teams, systems, spreadsheets, dashboards, and experience.
Increasingly, software can help coordinate that work directly.
That does not mean humans disappear.
It does mean the structure of the work starts to change.
The shift is not from human work to no work
This is one place where I think leaders need to be careful.
When people hear phrases like “AI agents” or “autonomous execution,” they often jump to one of two extremes. Either they imagine a future where the software does everything, or they dismiss the whole concept as hype because the real world is too messy for that to work.
I do not think either view is especially useful.
What I see instead is a shift in where human effort sits.
In the traditional model, a huge amount of effort goes into assembling the work. Pulling the audience. Checking the data. Finding the right content. Confirming the offer. Deciding which tactic to use. Building the campaign. Routing the leads. Reviewing the responses. Adjusting the flow. Coordinating across teams.
In the emerging model, more of that work can happen inside systems that are continuously reasoning over enterprise data, approved business rules, and curated knowledge.
That changes the human role.
People spend less time stitching together the basics and more time defining the rules, shaping the strategy, supervising the exceptions, improving the inputs, and deciding where autonomy actually makes sense.
That is not a small change.
It means some teams that have historically been treated as execution layers become much more important as design authorities. It means product truth owners become stewards of a knowledge layer AI depends on. It means operations leaders need to think not just about process flow, but about how best practices, templates, thresholds, and guardrails get encoded into the system. It means revenue leaders need to get clearer about which decisions they want humans to make and which ones they are comfortable delegating.
That is why I keep coming back to the operating model.
Because this is not just a feature story.
It is a design story.
Better AI will not fix a weak operating foundation
This is another point I feel strongly about.
A lot of organizations are excited about the promise of AI in marketing and sales, but they are still standing on top of fragmented data, inconsistent definitions, disconnected systems, and scattered commercial knowledge.
That is not a model problem. That is a foundation problem.
If customer and account data are incomplete, the recommendations will be weak. If product information is stale or inconsistent, the outputs will be unreliable. If workflows are unclear, the system will create friction instead of leverage. If no one has defined the rules for what AI is allowed to do, every decision becomes a debate.
The companies that do this well will not just have good models or polished demos. They will have done the less glamorous work underneath the surface:
- They will have connected the right data.
- They will have clarified ownership.
- They will have curated usable product truth.
- They will have standardized the workflows that matter most.
- They will have defined guardrails, approvals, and measures of success.
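Even the guardrail work can start simply. A hypothetical sketch of what “defining the rules for what AI is allowed to do” might look like once encoded (the action names and modes here are invented for the example): each decision type gets an explicit mode, and anything undefined defaults to not allowed.

```python
# Hypothetical guardrail table: which decisions software may take on its own,
# which require approval, and which stay with humans entirely.
GUARDRAILS = {
    "send_nurture_email":  {"mode": "autonomous"},
    "change_lead_routing": {"mode": "needs_approval"},
    "offer_discount":      {"mode": "human_only"},
}

def is_allowed(action: str, approved: bool = False) -> bool:
    rule = GUARDRAILS.get(action)
    if rule is None:
        return False  # undefined actions default to "not allowed"
    if rule["mode"] == "autonomous":
        return True
    if rule["mode"] == "needs_approval":
        return approved
    return False  # human_only: the system never takes this action itself
```

The encoding itself is trivial; the hard, valuable work is the conversation that fills in the table, which is exactly the kind of decision-rights clarity described above.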
That is where the conversation gets real.
Because once software can participate meaningfully in revenue work, the quality of the result depends heavily on the quality of the system around it.
What leaders should take from this
If I were talking to a room of marketing and revenue leaders, I would not tell them to start by asking how many AI features their stack has.
I would start somewhere more practical:
- How does work actually get done today?
- Where does it slow down?
- Where does it depend on tribal knowledge?
- Where do handoffs create friction?
- Which decisions are repeated often enough that they should be made more systematically?
- What intelligence would the system need in order to make those decisions well?
- And where would you actually be comfortable letting software take the lead?
Those are better questions.
Because the real shift is not from analog work to digital work. We made that shift a long time ago. The shift now is from human-coordinated execution to system-coordinated execution, with humans increasingly setting direction, governing the rules, and stepping in where judgment matters most.
That is a different operating model. And I think it is the change many organizations are still underestimating.
I say that as someone who spent years in the execution layer and now works on strategy for the applications meant to support these teams. From both vantage points, the same conclusion keeps showing up for me: the next chapter is not about making the old machine run a little faster.
It is about redesigning how the machine works.
In the next post, I want to dig into the intelligence layer underneath all of this, because AI is only as useful as the truth and signals it has access to.
