Every vendor has an agentic AI pitch right now. Most of them are selling the same thing: a future where AI agents resolve complex customer issues end-to-end, without human involvement, across every channel, at a fraction of current cost.
Some of that is coming. But not as fast as the demos suggest, not as cleanly as the business cases assume, and not without a set of operational decisions that most of the vendor content conveniently skips.
I’ve been writing about AI in the contact centre since 2018 — when most of what was marketed as AI was glorified keyword routing dressed up in a press release. I’ve since implemented AI triage and routing redesign at Q4 Inc, watched the LLM wave arrive faster than almost anyone predicted, and spent the last two years separating what’s actually working in support operations from what’s still aspiration dressed as product.
Here’s my honest practitioner’s read on where agentic AI actually stands — and what support leaders should be thinking about before they sign anything.
What “Agentic AI” Actually Means
The term is doing a lot of work right now, so it’s worth being precise.
A traditional chatbot responds within predefined boundaries. It matches input to a decision tree and outputs a scripted response. When the input doesn’t match anything in the tree, it fails — usually by apologising and transferring to a human.
An agentic AI system is fundamentally different in architecture. It can reason about a goal, break it into steps, take actions across multiple systems, evaluate the outcome of each step, and adjust. When a customer says “my order is late and I want a refund or a return,” a standard bot gets confused. An agentic system can authenticate the customer, look up the order, evaluate refund eligibility against policy, initiate the refund, and send confirmation — without human involvement in any step.
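The reason-act-evaluate-adjust loop above can be sketched in miniature. Every system call here is a hypothetical stub, not a real vendor API; the point is the control flow, with escalation to a human as the fallback when any step fails.

```python
# Minimal sketch of an agentic loop for the late-order refund scenario.
# All backend actions are invented stubs standing in for real systems.

from dataclasses import dataclass

@dataclass
class StepResult:
    ok: bool
    detail: str

# Stubbed backend actions the agent can take.
def authenticate(customer_id: str) -> StepResult:
    return StepResult(True, f"customer {customer_id} verified")

def look_up_order(order_id: str) -> StepResult:
    return StepResult(True, "order found, status: delayed")

def check_refund_policy(order_id: str) -> StepResult:
    # Policy evaluation is where real deployments most often need judgment.
    return StepResult(True, "eligible: delay exceeds SLA")

def issue_refund(order_id: str) -> StepResult:
    return StepResult(True, "refund initiated")

def handle_refund_request(customer_id: str, order_id: str) -> str:
    # The "plan": a goal decomposed into ordered steps.
    plan = [
        lambda: authenticate(customer_id),
        lambda: look_up_order(order_id),
        lambda: check_refund_policy(order_id),
        lambda: issue_refund(order_id),
    ]
    for step in plan:
        result = step()
        if not result.ok:
            # Evaluate-and-adjust: here, adjusting means escalating.
            return f"escalated to human: {result.detail}"
    return "resolved: refund initiated, confirmation sent"

print(handle_refund_request("C-1042", "ORD-9931"))
```

A decision-tree bot has no equivalent of the `plan` list or the per-step evaluation; it either matches the input or transfers.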
Gartner estimates that by 2026, 40% of enterprise applications will feature task-specific AI agents, and that conversational AI deployments in contact centres will reduce agent labour costs by $80 billion globally as automation handles a growing share of interactions. Forrester predicts contact centres as we know them will transform within 20 to 28 months.
Those numbers are real. The technology is genuinely advancing. The question isn’t whether agentic AI will change contact centre operations — it will. The question is what the gap is between where the technology is today and where the demos suggest it is.
What’s Actually Working in 2026
In my assessment, based on what I’ve seen in real deployments and the operational conversations I’ve had with peers, agentic AI is genuinely delivering in four areas right now:
Intelligent triage and routing. Not the old keyword-matching routing, but genuine intent detection that routes based on what a customer actually needs rather than what menu option they pressed. At Q4, we redesigned routing as part of a broader operational overhaul — cutting first response time from 17 hours to 2 hours. AI triage was a meaningful part of that. It gets the right work to the right person faster, and it scales in ways that headcount-based routing can’t.
Post-interaction automation. Summarising calls, populating case notes, tagging tickets, updating CRM records. This is where the ROI is cleanest, the risk is lowest, and the agent experience improvement is most immediate. Agents report feeling more supported when they’re not spending 10 minutes after every call doing administrative work that an AI can do in 30 seconds. If you haven’t implemented this yet, it should be your first AI investment — not your third.
Real-time agent assist. Surfacing the relevant knowledge article, suggesting the next best action, flagging when a customer’s sentiment is deteriorating before the agent notices. This doesn’t replace agents — it makes them faster and more consistent, particularly for newer team members who don’t yet have the institutional knowledge to navigate edge cases quickly.
Quality monitoring at scale. Traditional QA can review a fraction of interactions. AI conversation analysis can review all of them. The shift from spot-checking to pattern recognition — identifying systemic issues rather than individual incidents — changes what your quality function can do. It’s the difference between knowing one agent is struggling and knowing that a particular product issue is generating a specific failure pattern across the team.
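The shift from spot-checking to pattern recognition described above can be sketched as a simple aggregation over QA outcomes: a tag failing across several agents points at a product or process issue, not an individual coaching need. Names, tags, and the `min_agents` threshold are invented for illustration.

```python
# Sketch: surface systemic QA failure clusters instead of sampling
# individual interactions. Data is invented for illustration.

reviews = [
    # (agent, issue_tag, passed_qa)
    ("alice", "billing-sync", False),
    ("bob",   "billing-sync", False),
    ("carol", "billing-sync", False),
    ("alice", "password-reset", True),
    ("bob",   "shipping", True),
]

def systemic_failures(reviews, min_agents=2):
    """Tags failing QA across multiple agents are systemic,
    not an individual performance issue."""
    failed_by_tag = {}
    for agent, tag, passed in reviews:
        if not passed:
            failed_by_tag.setdefault(tag, set()).add(agent)
    return {tag: agents for tag, agents in failed_by_tag.items()
            if len(agents) >= min_agents}

print(systemic_failures(reviews))
```

Here `billing-sync` fails across three different agents, which is exactly the pattern a 2% sampling regime would likely miss.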
What Isn’t Working Yet — Or Not as Advertised
End-to-end autonomous resolution for complex queries. The demos show AI handling multi-step, nuanced customer issues without human involvement. In production, the failure modes are more frequent and more consequential than the demos suggest. Complex queries involve edge cases, customer history, policy exceptions, and emotional context that current systems handle inconsistently.
The McKinsey research on agentic AI in customer care captures this well: executives agreed that agentic AI might be the right answer in some cases but not all, and that maintaining a balance between AI and human interaction is essential to ensure high-quality service. “We should not do AI because we can or want to save cost” was a direct quote from one executive — it reflects where the honest thinking is, versus the vendor conversation.
The data prerequisite. Most agentic AI implementations fail not because the AI is bad but because the data it needs to make good decisions doesn’t exist or isn’t accessible. Research from Gladly frames this clearly: for an AI system to act autonomously, it needs the full customer picture — purchase history, previous contacts, how those contacts were handled, what matters to this customer. Most contact centre platforms are built around tickets, not customers. Every interaction opens a new ticket. History is scattered across dozens of closed cases and separate systems. The AI has no memory of the relationship.
Before you implement agentic AI, audit your data architecture. If your AI can’t answer “what have this customer’s last three experiences been like, and what was left unresolved,” it’s going to make bad decisions and you’re going to own the consequences.
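As a rough illustration of that audit question, here is a sketch that tries to answer it from a unified interaction log. The field names (`customer`, `ts`, `resolved`) are hypothetical; on a ticket-centric platform, assembling even this small view typically means joining several disconnected systems.

```python
# Sketch: can we reconstruct a customer's last three interactions and
# whether anything is still open? Schema is illustrative, not from any
# real contact centre platform.

interactions = [
    {"customer": "C-1042", "ts": 3, "channel": "email", "resolved": True},
    {"customer": "C-1042", "ts": 7, "channel": "chat",  "resolved": False},
    {"customer": "C-1042", "ts": 9, "channel": "phone", "resolved": True},
    {"customer": "C-2210", "ts": 5, "channel": "chat",  "resolved": True},
]

def customer_picture(records, customer, n=3):
    # Most recent first.
    history = sorted(
        (r for r in records if r["customer"] == customer),
        key=lambda r: r["ts"], reverse=True,
    )[:n]
    return {
        "last_contacts": [(r["channel"], r["resolved"]) for r in history],
        "unresolved": any(not r["resolved"] for r in history),
    }

print(customer_picture(interactions, "C-1042"))
```

If this query comes back empty or one item deep for most customers, the data isn’t ready for autonomous decisions, whatever the model can do.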
Governance is harder than vendors admit. When an AI agent makes a wrong decision at scale — authorises refunds it shouldn’t, gives incorrect policy information to hundreds of customers, routes a wave of tickets to the wrong team — you need to know about it immediately and you need a recovery mechanism. Most vendor conversations focus on what the AI can do when it works. The operational question is: what happens when it doesn’t, and who owns it?
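One concrete shape the recovery mechanism can take is a circuit breaker on autonomous actions: when the rate of a high-risk action spikes above baseline, pause the AI and force human review. This is a deliberately crude sketch with an invented threshold, not a recommendation of specific values.

```python
# Sketch: a rate guard that pauses autonomous refunds when hourly
# approvals exceed a multiple of the historical baseline. The tolerance
# value is an illustrative assumption.

def should_pause(approvals_last_hour: int,
                 baseline_per_hour: float,
                 tolerance: float = 2.0) -> bool:
    """Crude circuit breaker: trip when approvals run hot vs. baseline."""
    return approvals_last_hour > tolerance * baseline_per_hour
```

The governance question to the vendor is whether anything like this exists at all, who sets the threshold, and who gets paged when it trips.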
The Metric Shift You Need to Make Before You Implement
If you’re still measuring your contact centre primarily by Average Handle Time, you’re optimising for the wrong thing in an AI-augmented environment.
AHT made sense when every interaction was a human-to-human transaction. It incentivised speed, which often came at the cost of resolution quality. In an AI-augmented operation, longer conversations with human agents are frequently the high-value interactions — the ones the AI escalated because they required judgment, empathy, or authority. Penalising your agents for spending time on these is actively counterproductive.
The metrics replacing AHT in leading operations in 2026 are:
Containment Rate — what percentage of inquiries were resolved end-to-end without human handoff? This is the primary measure of your AI’s operational effectiveness. A healthy target for routine inquiry types is 60%+.
Downstream Friction — does an AI-resolved issue stay resolved, or does the customer contact you again within 48 hours? High containment with high downstream friction means your AI is deflecting, not resolving.
Agent Sentiment — are your human agents happier with the work they’re doing? By offloading administrative and repetitive tasks to AI, your team should be spending more time on the interactions that require what they’re actually good at. If agent sentiment isn’t improving alongside AI adoption, something is wrong with your model.
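The two AI-side metrics above can be made precise with a small calculation over an interaction log. The schema, the field names, and the sample data are illustrative assumptions; the 48-hour window follows the definition in the text.

```python
# Sketch: containment rate and downstream friction from an interaction
# log. Record shape: (customer, handled_by, opened_at, escalated).

from datetime import datetime, timedelta

log = [
    ("C-1", "ai", datetime(2026, 1, 5, 9, 0), False),
    ("C-1", "ai", datetime(2026, 1, 6, 10, 0), False),  # repeat within 48h
    ("C-2", "ai", datetime(2026, 1, 5, 9, 0), True),    # handed off
    ("C-3", "ai", datetime(2026, 1, 5, 9, 0), False),
]

def containment_rate(log):
    """Share of AI-handled contacts resolved without human handoff."""
    ai_handled = [r for r in log if r[1] == "ai"]
    contained = [r for r in ai_handled if not r[3]]
    return len(contained) / len(ai_handled)

def downstream_friction(log, window=timedelta(hours=48)):
    """Share of AI-contained contacts followed by a repeat contact
    from the same customer within the window."""
    contained = [r for r in log if r[1] == "ai" and not r[3]]
    friction = 0
    for r in contained:
        repeats = [o for o in log
                   if o[0] == r[0] and r[2] < o[2] <= r[2] + window]
        if repeats:
            friction += 1
    return friction / len(contained)
```

Reading the two together is the point: a 75% containment rate looks healthy until downstream friction shows a third of those "contained" customers coming back within two days.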
These are connected to the broader KPI framework I’ve written about — the principle that metrics need to reflect what you’re actually trying to achieve, not what’s historically been easy to measure.
Three Questions to Ask Before You Sign Anything
1. What does your AI actually know about my customers?
Not at demo time — in production. Does it have access to CRM history, previous case data, customer tier information, and open issues across channels? Or does each conversation start from scratch? The answer determines whether you’re buying automation or intelligence.
2. What’s the failure mode and who owns it?
Ask the vendor to walk you through what happens when the AI makes a wrong decision at scale. What alerting exists? What rollback capability? What’s the human-in-the-loop mechanism for catching errors before they affect thousands of customers? If they don’t have a clean answer to this, the governance model isn’t ready.
3. What does your implementation actually require from my side?
Implementation costs are routinely underestimated. So is the internal resource required — data cleaning, system integration, change management with agents and supervisors, and the ongoing tuning that any AI system requires after go-live. Get the full picture in writing before you commit.
The Honest Summary
Agentic AI is real, it’s advancing faster than most people expected, and it will materially change how support organisations operate over the next three to five years. The leaders who navigate it well won’t be the ones who adopt fastest. They’ll be the ones who ask the right questions before they commit, build the data infrastructure that gives AI something to work with, and design their human workforce around what humans are genuinely better at rather than treating AI as a headcount replacement.
The technology is ready for the right use cases. The harder work — the data architecture, the governance model, the operational redesign — is what most organisations underinvest in. That’s where the gap between the demo and the production outcome comes from.
If you’re evaluating AI investment right now, start with the use cases where the data is clean, the failure modes are low-stakes, and the ROI is measurable. Post-interaction automation and real-time agent assist are both strong starting points. Build the discipline there, then expand.
The vendors will be patient. The technology will improve. Your competitive position doesn’t depend on being first. It depends on being right.
Related reading:
- I Wrote About AI in the Contact Centre in 2018 — Here’s What I Got Right
- How We Cut Response Time from 17 Hours to 2 Hours at Q4
- The CX Leader’s KPI Playbook — Free Download
Hutch Morzaria is a CX and Support Leadership professional with 19 years of experience building and leading support organisations across SaaS, Fintech, and enterprise technology. He has held Director-level roles at Q4 Inc, AudienceView, Johnson Controls, and others, and holds ITIL Expert certification across V3 and V4.