Your Next CX Metric Might Be “AI Work Units,” Not CSAT
Plus: Once agent work becomes countable, accountability shows up fast.

To opt out of receiving DCX AI Today, go here and select Decoding Customer Experience in your subscriptions list.
I’m obsessed with Wispr! Get a Free Month of Wispr PRO.
📅 February 27, 2026 | ⏱️ 5 min read
Good Morning!
Today is all about the difference between “cool demos” and “production reality.” One company is trying to meter agent work. Others are pushing agents into phones, messages, and internal workflows. The pattern is clear: agentic AI is moving from novelty to operations, fast.
The Executive Hook:
Here’s my POV: AI agents should be managed like a workforce, not a feature. If you can’t measure what an agent actually completes, you’ll get surprised by two things: runaway spend and silent CX damage. The orgs that win won’t have the smartest model. They’ll have the cleanest definition of “work done,” plus the discipline to review it weekly.
🧠 THE DEEP DIVE: A Decision Memo For Making AI Agent Work Auditable
The Big Picture: A new metric called Agentic Work Unit (AWU) is Salesforce’s attempt to make AI agent output measurable, comparable, and governable.
What’s happening:
Salesforce is defining an AWU as a discrete task completed by an AI agent, not a vague “AI interaction.”
The point is to shift leadership conversations from consumption metrics (like tokens) to outcome metrics (work finished).
This also tees up better controls: you can track which intents are safe to automate, which ones spike escalations, and which ones quietly create rework.
Why it matters: In CX, the real cost is not the AI bill. It’s the cleanup. If an AI agent “helps” but the customer contacts you again, you just paid twice. AWUs give you a way to manage automation like operations: volume, quality, rework, and risk by intent.
The takeaway: Here’s the quiet shift AWUs signal: we’re moving from “AI as a capability” to “AI as labor.” And labor changes how executives think. When work becomes countable, it becomes tradable, benchmarkable, and disputable. It also becomes political, fast. Teams will argue over what “counts” as work, what “good” looks like, and who owns the failures. That’s not a downside. That’s maturity. The moment your AI has a unit of work, it stops being a project and starts being a line item with accountability.
Source: Constellation Research
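Salesforce hasn't published a formal AWU schema, but the idea of "countable work" is easy to make concrete. Here's a minimal sketch of a weekly rollup by intent, where every field name and record is my own illustration, not a Salesforce spec:

```python
from collections import defaultdict

# Hypothetical AWU records: one discrete task an agent completed (or didn't).
# "recontact_7d" flags the customer coming back within a week, i.e. rework.
awus = [
    {"intent": "reschedule", "completed": True,  "recontact_7d": False},
    {"intent": "reschedule", "completed": True,  "recontact_7d": True},
    {"intent": "billing",    "completed": False, "recontact_7d": True},
    {"intent": "billing",    "completed": True,  "recontact_7d": False},
]

def weekly_rollup(records):
    """Roll up volume, completion rate, and rework rate by intent."""
    by_intent = defaultdict(lambda: {"volume": 0, "done": 0, "rework": 0})
    for r in records:
        row = by_intent[r["intent"]]
        row["volume"] += 1
        row["done"] += r["completed"]
        row["rework"] += r["recontact_7d"]  # customer came back: you paid twice
    return {
        intent: {
            "volume": row["volume"],
            "completion_rate": row["done"] / row["volume"],
            "rework_rate": row["rework"] / row["volume"],
        }
        for intent, row in by_intent.items()
    }

print(weekly_rollup(awus))
```

A table this simple, reviewed weekly, is enough to surface which intents are safe to automate and which ones quietly create cleanup work.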
📊 CX BY THE NUMBERS: Skills Are Expiring Faster Than Your Training Cycles
Data Source: Draup: Fortune 500 Hiring Trends
By 2027, 40% of today’s core tech skills will be partially obsolete — For CX, this means “tool training once a year” is dead. Your teams need monthly refreshers on the AI workflow, not annual slide decks.
The half-life of tech skills is now less than 2 years — If your Quality Assurance (QA) rubric stays fixed while tools change, you will grade the wrong things and miss real failure modes.
Tasks executed primarily by technology are predicted to rise from 22% (2025) to 34% (2030), a roughly 55% increase in five years — The workforce shift is not optional. Your future team is smaller on routine work and larger on exceptions, coaching, and governance.
The Insight: Stop treating “AI adoption” like a project. Treat it like a continuous operating rhythm: skills, QA, and escalation rules evolve every month.
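To make the half-life stat concrete: a sub-two-year half-life means a skill learned today is roughly half as market-relevant two years later. A back-of-envelope decay model (my own simplification, not Draup's methodology) shows why annual training cycles lose:

```python
def skill_relevance(years_elapsed, half_life_years=2.0):
    """Exponential-decay model: relevance halves every half-life period."""
    return 0.5 ** (years_elapsed / half_life_years)

# With a 2-year half-life, an annual training cycle means skills have
# decayed to ~71% relevance by refresh time; a monthly cadence stays ~97%.
print(round(skill_relevance(1.0), 2))     # annual refresh gap
print(round(skill_relevance(1 / 12), 2))  # monthly refresh gap
```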
🧰 THE AI TOOLBOX: Qmatic Aiva
Intent: If you run appointments, queues, or high-volume inbound calls, Qmatic Aiva (AI Voice Agent for Enterprise) is a practical voice-automation move that’s more “ops” than “hype.”
The Tool: A multilingual AI voice agent that answers inbound calls, handles common requests, and connects into appointment and queue workflows.
Problem: Phone volume spikes wreck service levels, and agents get stuck doing repetitive scheduling and status calls.
Solution: Picture a human agent walking in Monday morning and the voicemail mountain is… gone. The voice agent picks up instantly, books or reschedules appointments, checks availability, and sends confirmations. It handles routine FAQs and lets your humans focus on the messy cases where judgment matters.
Benefits:
Time: Fewer calls hit humans, less after-call cleanup.
Quality: More consistent handling of routine flows (when the flow is well-defined).
Experience: Shorter waits for customers who just need to book, move, or confirm.
Where it sits: Front stage.
Best Fit:
Works best when your top call reasons map to clear actions (book, reschedule, confirm, status).
Not a great fit when calls are emotional, ambiguous, or policy-heavy (billing disputes, cancellations, medical nuance).
Key Takeaway: Use it for structured service flows, not for exceptions and empathy moments.
Source: Qmatic
⚡ SPEED ROUND: Quick Hits
Sinch expands its platform with agentic conversations for AI-powered customer engagement — The real story is the “communications layer” problem: agents are about to flood messaging, voice, and email, and enterprises will need orchestration plus compliance-grade rails.
ServiceNow Launches Autonomous Workforce — More agentic automation is heading into internal service, which will raise expectations for similar “self-driving” experiences in customer-facing service.
Google and Samsung just launched the AI features Apple couldn’t with Siri — Agentic task automation is moving onto consumer devices, which means customers will soon expect your support journeys to work cleanly through AI intermediaries.
📡 THE SIGNAL: The Era Of “Work Receipts”
We’re entering the phase where AI stops being a demo and starts being a coworker. That’s exciting, and also dangerous, because coworkers need job descriptions, rules, and performance reviews.
Your leadership move this week is simple: choose receipts over romance. Don’t celebrate “agent launches.” Celebrate “work completed with quality, at a known cost.”
What would change in your org if every AI promise needed a weekly scorecard and a cancel button?
See you Monday,
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.







