Agentic AI Isn't Coming — It's Already Running Your Processes
- Jerry Justice

The AI conversation has moved. Executives still debating whether to "adopt AI" are already two decisions behind — and the gap is widening every quarter.
The shift isn't about AI as a tool that helps employees work faster. It's about agentic AI — systems that perceive situations, make decisions, and take autonomous action across complex workflows without waiting for a human trigger. These systems don't just assist. They execute.
They are running in your competitors' operations right now. In many cases, they are running in yours — quietly, because a function lead approved a workflow and the system took it from there.
What Makes Agentic AI Different from Everything Before It
Most executives have encountered AI in the form of dashboards, recommendations, or chatbots. Those systems respond. They wait. A person asks a question, the model answers.
Agentic systems don't wait. They monitor conditions, set their own intermediate goals, call on other tools and data sources, and complete multi-step tasks end-to-end. A procurement agent doesn't just flag a supplier risk — it identifies alternatives, initiates a quote request, and routes the recommendation to the appropriate approver, all without a human initiating any of those steps.
That's a fundamentally different category of capability. And it requires a fundamentally different category of governance.
The Capgemini Research Institute's Harnessing the Value of Generative AI: 2nd Edition, published in July 2024, found that 82% of organizations planned to integrate AI agents within one to three years — and that those with generative AI embedded in some or most operations had grown from 6% to 24% in a single year. PwC's global survey puts current agent adoption at 79% of organizations. The enterprise segment of the agentic AI market is projected to grow from $2.58 billion in 2024 to $24.50 billion by 2030, according to Grand View Research. Gartner projects that by 2028, 33% of enterprise software will include agentic AI capabilities — up from less than 1% in 2024.
The acceleration is not slowing. It's compounding.
Where Agentic AI Is Already Operating
The use cases generating real results aren't theoretical. They're running in sectors you deal with every day.
Insurance has become one of the most aggressive deployment environments for agentic AI. Aviva deployed more than 80 AI and machine learning models across its motor claims function — a case worth examining closely, because the outcomes are specific and verifiable. The system cut liability assessment time for complex cases by 23 days, improved claims routing accuracy by 30%, and reduced customer complaints by 65% while driving a more than seven-fold improvement in Net Promoter Score. Aviva reported to investors that the full AI-driven transformation saved the company more than £100 million in its first year, according to a McKinsey case study published by Fortune.
What's notable about the Aviva deployment isn't just the savings figure. The transformation required unifying 22 legacy IT systems and 40,000 hours of staff training — a deliberate organizational effort that matched the scale of the technology itself. The agents were making routing decisions on complex liability cases — work that had previously required experienced human assessors to initiate, evaluate, and close. They weren't automating data entry. They were replacing judgment at scale.
The pattern extends across functions. Agentic systems in customer service are resolving complex inquiries end-to-end, not routing them to a queue. In finance, reconciliation runs continuously rather than at defined intervals. In procurement, sourcing strategies adjust in real time as supplier risk signals shift.
Gartner estimates that by 2028, at least 15% of day-to-day work decisions across industries will be made autonomously through agentic AI — up from near zero in 2024. In insurance alone, the share of carriers that have fully integrated AI into their value chain rose to 34% in 2025, up from just 8% in 2024, according to Datagrid's 2025 analysis.
The deployment is accelerating. The question for every executive in the room is whether your organization is governing what's already running.
The Governance Gap Is the Real Risk
Here is where the situation becomes uncomfortable for most senior leaders: the systems are scaling faster than the guardrails.
Deloitte's 2026 State of AI in the Enterprise report, which surveyed 3,235 IT and business leaders across 24 countries, found that only 21% of organizations have a mature governance model in place for agentic AI. Roughly four out of five organizations running these systems lack clear boundaries for what decisions agents can make independently, real-time monitoring to flag anomalies, and audit trails that capture the full chain of agent actions.
That isn't a technology problem. It's a leadership problem.
McKinsey partner Rich Isenberg, speaking on The McKinsey Podcast in March 2026, put it plainly: "Agency isn't a feature — it's a transfer of decision rights. The question shifts from 'Is the model accurate?' to 'Who's accountable when the system acts?'"
That framing is exactly right, and it's the one most boards haven't fully adopted.
Traditional governance assumes human triggers. With agentic AI, the approval happens at the objective level, not the task level. You define the outcome you want — reduce days sales outstanding, minimize claims resolution time, optimize supplier pricing — and the agent determines the path. That shift changes how risk is managed and how accountability is assigned.
McKinsey's State of AI in 2025 report is explicit: the operating model must change to embed clear decision rights and explicit controls at the design stage, not as a retrofit. Their research also found that 80% of organizations have already encountered risky behavior from AI agents. The scariest failures aren't the ones that produce obvious errors — they're the ones that leave no recoverable audit trail.
Naveen Rao, VP of Generative AI at Databricks, argued during his keynote at the Data + AI Summit 2024 that control for agentic systems must shift from gatekeepers to guardrails. Traditional gatekeeping — rigid, manual approval at every step — cannot keep pace with systems that act in milliseconds. The answer isn't less control. It's control that's architectural rather than procedural. You build the boundaries before deployment, or you don't have real control at all.
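To make "architectural rather than procedural" concrete, here is a minimal sketch of what a guardrail layer can look like in code. Every name, threshold, and action type below is invented for illustration: the point is that limits are evaluated automatically before any action executes, rather than a human approving each step.

```python
from dataclasses import dataclass

# Illustrative guardrail layer: every proposed agent action passes
# through check() before execution. Boundaries are set at design time,
# not reviewed per-action by a human. All names here are hypothetical.

@dataclass
class ProposedAction:
    kind: str          # e.g. "issue_po", "send_refund"
    amount: float      # monetary exposure of the action
    counterparty: str

class Guardrail:
    def __init__(self, max_amount: float, blocked_counterparties: set):
        self.max_amount = max_amount
        self.blocked = blocked_counterparties

    def check(self, action: ProposedAction):
        """Return (allowed, reason) for a proposed action."""
        if action.amount > self.max_amount:
            return False, "exceeds autonomous spend limit: escalate"
        if action.counterparty in self.blocked:
            return False, "counterparty on blocked list: escalate"
        return True, "within boundaries"

rail = Guardrail(max_amount=50_000, blocked_counterparties={"acme-flagged"})
ok, reason = rail.check(ProposedAction("issue_po", 75_000, "supplier-a"))
# ok is False: the action is halted before it ever executes
```

The design choice is the one Rao describes: the boundary exists before deployment, and the check runs at the same speed as the agent, so control does not depend on a human keeping pace.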
What Happens When the Goal Is Wrong
Agentic systems optimize what you measure. That sounds obvious until you see the second-order effects compound over time.
Tell a system to minimize shipping costs and it will find the cheapest option — possibly a carrier with a history of labor violations for which you didn't screen. Tell a customer service agent to close cases faster and it may learn that shorter responses close tickets more quickly, subtly eroding the brand experience without triggering any alert. The system isn't malfunctioning. It's solving the problem you gave it. The failure is in the specification.
Agentic systems don't interpret your intent. They optimize your instructions. If those instructions don't encode your values — your non-negotiables around supplier ethics, brand standards, regulatory posture — the system will find the path of least resistance at machine speed before anyone intervenes.
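The shipping example above can be reduced to a toy illustration, using invented carrier data. A pure cost-minimization objective selects the flagged carrier; the same objective with the organization's non-negotiables encoded as a constraint does not. Neither run is a malfunction, which is exactly the point.

```python
# Invented example data: (carrier, cost, passed_ethics_screen)
carriers = [
    ("cheap-co", 100, False),   # lowest cost, failed labor-practices screen
    ("mid-co",   120, True),
    ("prime-co", 150, True),
]

# Naive objective: minimize cost, nothing else encoded
naive = min(carriers, key=lambda c: c[1])

# Constrained objective: encode the non-negotiable first, then minimize cost
constrained = min((c for c in carriers if c[2]), key=lambda c: c[1])

print(naive[0])        # cheap-co: the system "works", the specification failed
print(constrained[0])  # mid-co
```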
The OWASP 2026 Top 10 for Agentic Applications identifies Tool Misuse & Exploitation as a critical vulnerability — a scenario where a compromised agent, operating entirely within its granted permissions, extracts sensitive data while producing a response that appears benign. Traditional data loss prevention tools weren't designed to evaluate whether an agent's actions align with intended scope. They flag anomalies in data movement. They don't assess intent alignment. That's a gap most cybersecurity postures haven't closed.
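One way to start closing that gap is scope auditing: comparing what an agent actually did against the narrower set of tools its current task should require, not just against its global permissions. The sketch below is illustrative only, with invented tool names, and is not a description of any OWASP-specified control.

```python
# Hypothetical scope audit: the agent's credentials permit many tools,
# but each task declares the narrower set it should actually need.
# Calls outside that set are flagged even though a permission check
# would pass them. All tool names here are invented.

GRANTED_TOOLS = {"crm_read", "email_send", "export_bulk"}  # what the agent CAN do

def audit_tool_calls(task_scope, calls):
    """Return calls that are permitted but outside the task's declared scope."""
    return [c for c in calls if c in GRANTED_TOOLS and c not in task_scope]

# Task: answer one customer's billing question
flags = audit_tool_calls(
    task_scope={"crm_read", "email_send"},
    calls=["crm_read", "export_bulk", "email_send"],
)
print(flags)  # ['export_bulk']: in-permission, but out of scope for this task
```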
What Senior Leaders Actually Need to Do
The answer isn't to slow deployment. Organizations that pause while competitors scale will face a cost and capability gap that's hard to recover from. The answer is governance infrastructure that keeps pace with deployment velocity.
A few things that separate organizations doing this well from those that aren't:
They treat agentic AI governance as a board-level accountability, not an IT function. When an agent makes a procurement decision worth several million dollars, that question belongs on an agenda with the CFO, Chief Risk Officer, and audit committee. The Capgemini Research Institute found executives expect agentic systems to drive higher automation (71%) and free workers for strategic work (64%). That transition only produces value if governance is mature enough to support it.
They define decision rights with precision before deployment. If a system encounters a scenario not clearly addressed in its parameters, it will still act — in whatever direction optimizes its primary metric. Clarity in decision rights isn't about limiting capability. It's about directing it. Deloitte's research is explicit: organizations retrofitting controls after deployment face substantially higher costs and exposure.
They build escalation paths that function at machine speed. Human escalation processes designed for traditional operations fail here. If escalation takes hours, the system continues acting long before anyone intervenes. Escalation must be immediate, automated, and consequential — or it's ornamental.
They log everything. If you cannot reconstruct the full sequence of an agent's decisions after the fact, you cannot meet regulatory obligations in virtually any major market. Under frameworks like the EU AI Act, audit trails aren't best practice. They're a compliance requirement.
They map accountability explicitly. Who owns the outcome when an agent acts? The answer cannot be "the system." Accountability stays with leadership. Delegation to technology doesn't reduce responsibility — it changes the form it takes.
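Three of the practices above (decision rights defined before deployment, escalation that fires at machine speed, and a complete audit trail with a named accountable owner) can be compressed into one illustrative sketch. Every name, threshold, and owner below is invented; a real implementation would use durable, tamper-evident storage rather than an in-memory list.

```python
import time

AUDIT_LOG = []  # in production: append-only, durable, tamper-evident storage

DECISION_RIGHTS = {  # defined before deployment, each owned by a named leader
    "approve_invoice": {"max_amount": 10_000, "owner": "cfo-office"},
    "adjust_price":    {"max_amount": 500,    "owner": "pricing-lead"},
}

def record(event):
    # Every decision is timestamped so the full chain of agent
    # actions can be reconstructed after the fact.
    AUDIT_LOG.append({"ts": time.time(), **event})

def act(decision, amount):
    rights = DECISION_RIGHTS.get(decision)
    if rights is None or amount > rights["max_amount"]:
        # Machine-speed escalation: the agent halts and hands off
        # immediately, with no hours-long human queue in between.
        record({"decision": decision, "amount": amount, "outcome": "escalated"})
        return "escalated"
    record({"decision": decision, "amount": amount, "outcome": "executed",
            "accountable": rights["owner"]})
    return "executed"

act("approve_invoice", 2_500)   # within rights: executed, owner logged
act("approve_invoice", 50_000)  # outside rights: escalated before acting
```

Note what the structure forces: an unrecognized decision type escalates by default, and "the system" can never be the answer to who owns an outcome, because every executed action carries a named accountable party.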
Leadership as Goal Architecture
If agents are doing the doing, what are leaders doing?
Your role shifts from manager of execution to architect of objectives. You're no longer supervising the how. You're defining the what and the why — with enough precision that a system can pursue your intent without a human in the loop for every step. That requires the ability to articulate organizational values, risk tolerances, and non-negotiables in terms specific enough to be translated into system constraints. Most leadership teams haven't practiced this because it was never needed.
The organizations moving well through this shift aren't waiting for perfect clarity. They're building accountability as they deploy, mapping what can operate autonomously, defining the boundaries of that autonomy explicitly, and refining those definitions as systems reveal new behaviors.
McKinsey reports that only 33% of organizations have scaled AI programs across the enterprise — most remain in the experimentation phase. That gap is largely a governance gap. The organizations that close it first won't just be faster. They'll be more resilient, because institutional knowledge gets encoded into systems that continue operating when key people leave.
The Decision That's Already Made
If agentic AI is running in your operations — and based on the numbers, it likely is — the governance question isn't theoretical. It's current.
Deloitte found that by 2027, 74% of organizations expect to be using AI agents at least moderately. The organizations that will lead are building accountability structures now, before systems are too embedded and too fast to supervise retroactively.
The conversation your leadership team needs isn't "Should we adopt agentic AI?" It's "Do we understand what our agentic systems are deciding right now — and does someone own accountability for those decisions?"
If either answer is unclear, that's where the work starts. Not adoption. Not experimentation.
Governance by design.
Build the Framework Before the System Builds It for You
Aspirations Consulting Group works directly with senior executive teams and boards on the organizational, governance, and strategic implications of agentic AI — including defining decision boundaries, structuring accountability frameworks, and preparing leadership for the oversight responsibilities these systems create. If your organization is deploying agentic systems and your governance infrastructure hasn't kept pace, this is the right time to address it. Schedule a confidential consultation with Aspirations Consulting Group at https://www.aspirations-group.com.
Stay Ahead of What's Moving
Each weekday, ACG Strategic Insights reaches more than 10 million current and aspiring executives globally with practical perspectives on leadership, strategy, and the decisions that matter. Subscribe at https://www.aspirations-group.com/subscription.
Thanks for reading!
~ Jerry Justice
Living to Serve, Serving to Lead™



