AI’s New Promise: “Jobs, Jobs, Jobs”
Plus: the “AI adoption gap” is turning into a CX gap
Start every workday smarter with CX AI news, tools, and data. Become the go-to person on your team for what’s next.
📅 January 26, 2026 | ⏱️ 4-min read
Good Morning!
The Executive Hook:
Davos has a talent for turning messy change into clean storytelling. This week’s story is “AI creates jobs.” And sure — it will. But here’s the part that doesn’t make it onto the stage: the frontline feels the disruption first, long before anyone feels the upside.
Because “jobs” is a macro conversation. Work is personal.
When AI shows up in service, customers don’t experience it as “innovation.” They experience it as: Did you help me, or did you hide behind automation? And employees don’t experience it as “productivity.” They experience it as: Am I still valued here, and what exactly am I responsible for now?
That’s the gap CX leaders have to close — not with a pep talk about AI, but with clarity, accountability, and a workflow that doesn’t leave either the customer or the agent hanging.
🧭 THE DEEP DIVE: Davos Leans into an AI Jobs Narrative
The Big Picture: Business leaders at Davos are publicly reframing AI as a driver of job growth, even as skepticism remains about how companies will actually use it in practice.
What’s happening:
Executives talked up AI-driven job creation across “infrastructure” roles (energy, chips, data centers), not just software jobs.
At the same time, labor leaders warned that “AI” can become a convenient label for layoffs that were already planned.
The optimism is real — but even the optimists are signaling that productivity gains are the immediate focus.
Why it matters:
CX leaders sit right where this becomes personal. When a company says “AI creates jobs,” your service org hears, “My job is changing.” If you don’t get ahead of that emotional reality, you’ll see it in the metrics: lower confidence, higher transfers, more escalations, and a dip in empathy.
The takeaway:
Don’t lead with “AI will help.” Lead with role clarity: what the AI handles, what the human owns, and what “great” looks like in the new workflow — especially in high-stress moments.
Source: Reuters
📊 CX BY THE NUMBERS: Shoppers use AI to browse — but don’t trust it to decide
Data Source: Salsify – “2026 Consumer Research Report Reveals AI Trust Gap, Gen Alpha Trends, and Product Preferences” (published Jan 22, 2026; survey of ~3,000 shoppers across the U.S., U.K., and Canada).
22% already use AI tools like ChatGPT in their buying journeys — AI is officially part of discovery.
Only 14% trust AI shopping recommendations enough to purchase based on them alone — trust is lagging behind usage.
31% say they’re persuaded to buy only when AI includes detailed product descriptions and specs — specifics beat vibes.
The Insight:
The assumption that “AI recs will close the sale” is wrong — for now. AI is a discovery layer, not a trust layer. If you’re a CX or product leader, put your energy into rich, accurate product content and transparent details that AI can surface reliably, because a shiny assistant can’t cover for thin, messy product experiences.
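One way to make that concrete (a hypothetical sketch, not tied to Salsify or any particular product-content platform; the field names and thresholds below are illustrative assumptions): treat product content as structured data and gate what an AI layer is allowed to surface on whether the record actually carries the detailed descriptions and specs shoppers say they need.

```python
# Hypothetical sketch: flag "thin" product records before an AI assistant
# is allowed to surface them. Field names and thresholds are illustrative.

REQUIRED_FIELDS = ["name", "description", "specs", "price", "return_policy"]

def is_ai_ready(product: dict, min_specs: int = 3, min_description_chars: int = 200) -> bool:
    """Return True only if the record has enough substance for an AI layer to cite."""
    if any(not product.get(field) for field in REQUIRED_FIELDS):
        return False                      # missing or empty fields: not ready
    if len(product["description"]) < min_description_chars:
        return False                      # copy too thin to persuade anyone
    return len(product["specs"]) >= min_specs

product = {
    "name": "Trailhead 2 Hiking Boot",
    "description": "Waterproof leather upper, Vibram outsole, ...",  # full copy in practice
    "specs": {"weight_g": 540, "waterproof": True, "warranty_years": 2},
    "price": 149.00,
    "return_policy": "60-day free returns",
}

print(is_ai_ready(product))  # False here: the placeholder description is too short
```

The point is the gate, not the exact thresholds: thin content gets fixed before it reaches an AI recommendation, rather than after a customer acts on it.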
🧰 THE AI TOOLBOX: Trace Turns “AI Agent” Chaos into a Clean, Accountable Workflow
The Tool: Trace is a workflow automation platform that breaks work into steps and routes each step to the right “agent” — human or AI — while keeping you in control.
What it does:
It maps how work actually flows across your apps, then helps you automate repetitive steps using templates, prebuilt AI components, and “bring your own agent” options — with human-in-the-loop review, logs, and permissions baked in.
CX Use Case:
Deflection without the dead-end: Turn common service tasks (status checks, simple policy answers, triage) into multi-step flows that end with a human handoff when confidence is low or the customer’s situation is high-stakes.
Clean handoffs across teams: Route a case step-by-step (Support → Billing → Ops) with notifications and approvals so customers don’t get bounced around while your org “figures it out.”
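Here is a minimal sketch of that pattern in Python: confidence-gated escalation plus an execution log. This is not Trace’s actual API; the triage step, threshold, and field names are illustrative assumptions.

```python
# Generic sketch of the pattern described above (NOT Trace's actual API).
# A step is handled by an AI or human "agent"; low confidence or a
# high-stakes case forces a human handoff, and every step is logged.
from dataclasses import dataclass, field

@dataclass
class Case:
    customer_id: str
    question: str
    high_stakes: bool = False                 # e.g. billing dispute, outage
    log: list = field(default_factory=list)   # execution log: who did what, and why

def ai_triage(case: Case) -> tuple[str, float]:
    """Hypothetical AI step: returns a draft answer and a confidence score."""
    return "Your order shipped yesterday.", 0.62

def handle(case: Case, confidence_floor: float = 0.75) -> str:
    answer, confidence = ai_triage(case)
    case.log.append(("ai_triage", confidence))
    if case.high_stakes or confidence < confidence_floor:
        case.log.append(("handoff", "queued for human agent with full context"))
        return "human_review"                 # never a dead end: a person owns it now
    case.log.append(("ai_resolved", answer))
    return "resolved"

case = Case("cust-831", "Where is my order?")
print(handle(case), case.log)
```

The design choice that matters is the explicit handoff state: the customer either gets an answer or a named human owner, and the log shows who did what, when, and why.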
Trust:
The biggest trust-killer in AI service isn’t the AI being wrong — it’s the customer feeling like nobody’s accountable. Trace’s human-in-the-loop review, permissions, and execution logs help you keep governance and clarity: who did what, when, and why.
Source: Trace
⚡ SPEED ROUND: Quick Hits
🎙️ LiveKit hits a $1B valuation on real-time voice AI infrastructure. CX impact: voice automation is moving from “IVR 2.0” to genuinely conversational experiences — which means your QA standards and escalation design need to mature fast.
🧃 The Verge warns of an AI ad ‘slop’ wave. CX impact: when customers feel flooded by low-effort content, they punish brands with skepticism — and support teams inherit the frustration.
🏛️ Sen. Ed Markey is pressing AI companies on ads inside chatbots. CX translation: the second customers suspect the bot is selling to them instead of helping them, trust drops through the floor.
🛰️ THE SIGNAL: Trust has to be designed
Across today’s stories, the same tension shows up: usage is moving faster than trust. Leaders can frame AI as job creation, but the frontline experiences it as uncertainty unless roles and handoffs are clear. Consumers are willing to use AI to browse, but they hesitate to let it decide without solid product facts behind it. And when chatbots and content start feeling like ads, the customer’s guard goes up instantly.
For CX leaders, the job isn’t to “sell” AI internally or externally. It’s to design an experience where accountability is obvious: what the AI does, what the human owns, how escalation works, and how customers can tell they’re being helped — not managed.
See you tomorrow,
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.