ServiceNow and OpenAI Are Aiming at the Messy Part of CX
PLUS: If your process is sloppy, AI just helps you get the wrong answer faster

📅 January 21, 2026 | ⏱️ 4-min read
Good Morning!
The Executive Hook:
CX leaders keep asking, “What could an AI agent do for our customers?” That’s the wrong question. The real question is, “What would we let it change in our systems, and who owns the result when it does?” Once AI can push approvals, edit records, and trigger workflows, it stops being a smarter chatbot and starts acting like a new operator inside your org chart. At that point, every broken policy, fuzzy exception, and ownership gap stops being an internal headache and starts being the customer experience.
🧠 THE DEEP DIVE: ServiceNow + OpenAI Push AI From “Answers” to “Action”
The Big Picture: ServiceNow and OpenAI announced a multi-year agreement to put OpenAI models inside ServiceNow workflows—so AI can help decide what to do next and take real steps to get it done, including through voice experiences.
What’s happening:
OpenAI models become a preferred intelligence layer inside ServiceNow—where a massive number of enterprise workflows already run.
The focus is “actionable” AI: not just summarizing or drafting, but moving work through real steps (like approvals, updates, and workflow execution).
They also call out speech-to-speech / native voice capabilities—basically, more natural service interactions without everything going through text first.
Why it matters:
If AI can take action inside your systems, the CX bar shifts. Customers stop caring that you explained the policy—they care that you fixed the issue. Which means your internal mess (duplicate fields, unclear ownership, “tribal knowledge,” inconsistent exceptions) becomes customer-facing, fast.
The takeaway:
Pick one workflow where customers currently bounce around (refund exceptions, address changes with verification, appointment reschedules with rules). Map every step. Any path that takes five or more touches is prime territory for AI—but only after you clean up the process. Otherwise you're automating chaos.
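If you want to find those five-plus-touch workflows systematically, here's a minimal sketch in Python. The ticket-log shape is made up for illustration; adapt it to whatever your help desk actually exports.

```python
# Illustrative only: flag workflows where customers average 5+ touches.
# The (ticket_id, workflow, touch) record format is hypothetical.
from collections import defaultdict

ticket_events = [
    ("T1", "refund_exception", "bot"), ("T1", "refund_exception", "tier1"),
    ("T1", "refund_exception", "billing"), ("T1", "refund_exception", "tier1"),
    ("T1", "refund_exception", "supervisor"),
    ("T2", "address_change", "bot"), ("T2", "address_change", "tier1"),
]

# Count touches per ticket, then average per workflow.
touches = defaultdict(int)
for ticket_id, workflow, _touch in ticket_events:
    touches[(ticket_id, workflow)] += 1

per_workflow = defaultdict(list)
for (ticket_id, workflow), n in touches.items():
    per_workflow[workflow].append(n)

for workflow, counts in per_workflow.items():
    avg = sum(counts) / len(counts)
    flag = "  <- AI candidate (clean the process first)" if avg >= 5 else ""
    print(f"{workflow}: avg {avg:.1f} touches{flag}")
```

Even this toy version makes the point: the data tells you where to automate, but it says nothing about whether the process underneath deserves to survive.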
Source: OpenAI
📊 CX BY THE NUMBERS: Your Team Thinks AI Is Coming—Whether You’re Ready or Not
Data Source: Randstad “Workmonitor” 2026
80% of workers expect AI to impact their daily tasks. Translation: your frontline is already bracing.
Demand for “AI agent” skills is up 1,587% across job postings tracked in the report. The market is sprinting.
95% of employers forecast growth, but only 51% of employees agree. That optimism gap? It shows up as change fatigue and quiet resistance.
The Insight:
If your agents believe AI is a cost-cutting weapon, your rollout will get the corporate equivalent of a polite nod. Want adoption? Lead with “here’s what we’re removing from your plate,” not “here’s what the bot can do.”
🧰 THE AI TOOLBOX: Scroll (Because Your AI Needs Receipts)
The Tool: Scroll helps you build AI “experts” grounded in your knowledge base—so the answers come with depth, context, and less improvisation.
What it does (no fluff):
You add sources (docs, help centers, Notion/Confluence, spreadsheets, webinars, etc.). Scroll processes them, builds an “expert world model,” and uses a grounding approach that tries to anchor responses in supporting evidence.
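For intuition, here's a heavily simplified sketch of that grounding pattern in Python. To be clear, this is not Scroll's actual API: the knowledge base, retrieval logic, and function names are all invented for illustration. The behavior is the point: answer only from retrieved evidence, cite it, and refuse when there's none.

```python
# Hypothetical grounding sketch -- NOT Scroll's API.
import string

KNOWLEDGE_BASE = {
    "refund-policy.md": "Refunds are issued within 14 days of purchase.",
    "exceptions.md": "Supervisors may approve refunds up to 30 days for defects.",
}

def retrieve(question: str) -> list[tuple[str, str]]:
    """Naive keyword retrieval; real systems use embeddings and ranking."""
    terms = [w.strip(string.punctuation) for w in question.lower().split()]
    return [(doc, text) for doc, text in KNOWLEDGE_BASE.items()
            if any(t and t in text.lower() for t in terms)]

def grounded_answer(question: str) -> str:
    evidence = retrieve(question)
    if not evidence:
        # The key behavior: refuse rather than improvise.
        return "I don't have a source for that. Escalating to a human."
    lines = [f"- {text} [source: {doc}]" for doc, text in evidence]
    return "Based on the knowledge base:\n" + "\n".join(lines)

print(grounded_answer("What is the refund policy?"))
```

The "no source, no answer" branch is the whole game; it's what separates a grounded expert from a confident improviser.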
CX Use Case (where you’ll actually feel this):
Build a “Refund & Exceptions Expert.” Not refunds in general—refunds plus the annoying edge cases that always escalate. Consistency goes up, escalations go down.
Create specialists by journey stage (onboarding, billing, troubleshooting, returns) instead of one mega-expert that kinda-sorta knows everything. Scroll’s docs basically warn that broad experts end up serving no one well.
Trust:
Scroll’s best-practice guidance is the stuff most teams skip: don’t dump in everything, don’t assume “more sources = better,” and keep experts focused so governance is easier. This is how you avoid the dreaded customer moment: “Your bot just made that up.”
⚡ SPEED ROUND: Quick Hits
🏦 UK lawmakers push regulators to stop “wait-and-see” on AI in financial services—calling for tougher oversight like stress tests as AI spreads across credit, insurance, and advice. (If you serve regulated customers, your CX is now a compliance story too.)
🔐 TechCrunch flags “rogue agents” and shadow AI—investors are piling into security as action-taking AI becomes a bigger risk surface. (If it can act, it can also act out.)
🧾 IBM + e& roll out enterprise “agentic AI” for governance and compliance—positioning action-oriented AI inside controlled, auditable environments. (This is the grown-up version of AI adoption.)
📡 THE SIGNAL: Your New Differentiator Is “Safe Speed”
The next wave of CX advantage is simple to describe and hard to do: move work at AI speed, with outcomes you can stand behind.
As AI steps into real workflows, you’re no longer judged on how quickly you responded. You’re judged on whether the action was correct, consistent, and owned. Can you explain what your AI is allowed to do, where its answers come from, and who is accountable when something breaks? If not, you don’t have an AI strategy—you have a risk strategy waiting to happen.
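In practice, "what is it allowed to do, and who owns it" can start as something embarrassingly simple: an explicit allowlist of actions, each with a named owner, a hard limit, and an audit trail. A minimal sketch in Python follows; every name, limit, and policy here is hypothetical, not a standard.

```python
# Hypothetical action-governance sketch: allowlist + owners + audit log.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ActionPolicy:
    action: str
    owner: str           # who is accountable when this action misfires
    max_amount: float    # hard ceiling before a human must approve
    requires_human: bool

POLICIES = {
    "issue_refund": ActionPolicy("issue_refund", "billing-ops", 100.0, False),
    "edit_customer_record": ActionPolicy("edit_customer_record", "data-gov", 0.0, True),
}

audit_log: list[dict] = []

def attempt_action(action: str, amount: float = 0.0) -> str:
    policy = POLICIES.get(action)
    if policy is None:
        decision = "denied: action not on allowlist"
    elif policy.requires_human or amount > policy.max_amount:
        decision = f"queued for human approval (owner: {policy.owner})"
    else:
        decision = f"executed (owner: {policy.owner})"
    # Every attempt is logged, so "why did your AI do that?" has an answer.
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action, "amount": amount, "decision": decision,
    })
    return decision

print(attempt_action("issue_refund", 45.0))    # executed
print(attempt_action("issue_refund", 450.0))   # queued for approval
print(attempt_action("delete_account"))        # denied: not on allowlist
```

The structure matters more than the code: a denied-by-default posture, a named owner per action, and a log you can hand to an auditor.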
The CX teams that pull ahead will treat AI agents like operators, not toys. They’ll clean up workflows before automating them, ground answers in real knowledge instead of vibes, and build governance that assumes regulators, auditors, and customers will all ask the same question: “Why did your AI do that?” Teams that can answer clearly earn trust and keep speed. Teams that can’t will hit a wall—set by their own customers and their own risk teams.
See you tomorrow,
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.