The Race to the Bottom Has Begun
Plus, the ‘AI Agent’ tool that is finally moving past the chatbot script.

📅 December 3, 2025 | ⏱️ 4-min read
Good morning.
There’s a lot of “innovation” talk in AI right now. Look closely at the biggest players, and you’ll see a race to shape one asset: the customer’s baseline expectations.
When giants deliver seamless, autonomous service for free, the gap between your effort and customer perception widens—and that’s the real operational challenge.
Here’s how that’s playing out in the last 24 hours.
🛍️ Google and Amazon Deploy AI Shopping Agents for the Holidays
Major retailers and tech platforms, including Google, Amazon, and Walmart, have launched or updated their AI shopping assistants for the crucial holiday season. These new tools go beyond basic chatbots, allowing users to make natural language requests, track real-time price drops, and even command an AI to call a local store to confirm product availability.
The psychological truth of this move is brutal: Google and Amazon are conditioning the mass market to expect autonomous, perfect service now. Their goal isn’t just a holiday revenue boost; it’s to embed a new standard of commerce where a customer’s request is instantly fulfilled without human intervention or channel friction.
If you’re a leader operating a siloed service desk or a slow-moving contact center, understand this: your new competitor is the frictionless, invisible efficiency of the agent that just called a store for a customer. The cost of friction has just gone up.
Source: Los Angeles Times
🛑 Character.AI Shifts Strategy, Ends Open-Ended Chat Experience for Minors
Character.AI, one of the leading consumer platforms for engaging with digital personalities, is ending its open-ended chat experience for users under the age of 18. This strategic move, following lawsuits and safety concerns, shifts the focus away from AI companionship and toward creative role-playing and entertainment features.
This is the unavoidable collision point between innovation and responsibility. The strategic failure here is the belief that you can introduce powerful, human-like AI to a consumer audience—especially a vulnerable one—without anticipating and engineering for the deepest human-psychology pitfalls.
For every CX leader looking at “emotional AI” or “personalized companionship,” this is a clear warning: the liability of emotional or psychological harm must be priced into your product from day one. When you build an AI that acts like a friend, you inherit all the ethical responsibilities of a friend.
Source: TechCrunch
💰 Cash App Introduces ‘Moneybot’ to Simplify User Financial Insights
Cash App has rolled out a new AI-powered assistant named “Moneybot” to a subset of its users. The tool is designed to move beyond simple transactions, using conversational AI to answer personalized questions about income, spending habits, and savings behavior to help users manage their money more effectively.
The transition from a passive transaction app to an active financial advisor is the fundamental hypothesis being tested here. This is a game of trust framed by convenience. If you are going to put an AI agent inside a customer’s financial life, its accuracy and integrity must be non-negotiable.
The lesson for all CX leaders is that as you move from informational chatbots to autonomous agents that handle money, logistics, or health, the consequences of hallucination or inaccuracy scale exponentially. The only way to build trust is to accept zero tolerance for error.
Source: Tech Research Online
🗓️ PayNearMe Plans AI-Powered Virtual Agent Launch for 2026
Payment technology firm PayNearMe is developing an advanced AI-powered Intelligent Virtual Agent (IVA) for contact centers, with plans for a full rollout in 2026. The agent is intended to automate complex customer interactions within the payment and collections process.
A 2026 target provides time to integrate securely with sensitive data, define permissions, and design end-to-end procedures. Healthy skepticism is warranted on two fronts: customer expectations for autonomous service are accelerating now, and IVA performance tends to hinge on live iteration more than planning. Teams with later timelines can hedge by moving earlier on prerequisites—limited pilots on high-volume journeys, verified data access, strict auditability, and rollback paths—so the program earns trust before the official launch.
Source: PayNearMe
🛠️ Tool of the Day: Fin AI Agent
Fin is an AI Agent built by Intercom, specifically engineered to handle and resolve complex, multi-step customer service queries across chat, voice, and email. It is trained not just on knowledge bases, but on pre-defined ‘Procedures’ that dictate specific steps for resolution.
Most chatbots fail because they are simple knowledge retrieval systems—they can only quote a policy. Fin’s strategic difference is its reliance on ‘Procedures,’ transforming it from an automated FAQ system into an operational agent. This is the difference between an AI that knows what to do and an AI that is authorized to do it. The problem this solves is the one that frustrates customers most: deflection. The only way forward is to grant your AI the permission to take goal-oriented action.
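To make the ‘Procedures’ idea concrete, here is a minimal, hypothetical sketch. It is not Intercom’s actual API; the `Procedure` and `Step` types and the refund flow are illustrative assumptions. The point it shows: each step is an action the agent is explicitly authorized to execute, not a policy paragraph to recite.

```python
# Hypothetical sketch only: not Intercom's Fin API, just an illustration of
# procedure-driven action versus knowledge retrieval.
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Step:
    description: str
    action: Callable[[dict], dict]  # takes the running context, returns an updated copy
    requires_authorization: bool = False


@dataclass
class Procedure:
    name: str
    steps: list[Step] = field(default_factory=list)

    def run(self, context: dict) -> dict:
        for step in self.steps:
            if step.requires_authorization and not context.get("agent_authorized"):
                # A knowledge-retrieval bot stops here and hands off to a human;
                # an operational agent has been granted permission to proceed.
                raise PermissionError(f"Not authorized to perform: {step.description}")
            context = step.action(context)
        return context


# Hypothetical example: a late-delivery refund the agent can complete end to end.
refund = Procedure(
    name="late_delivery_refund",
    steps=[
        Step("Look up the order", lambda ctx: {**ctx, "order": {"id": ctx["order_id"], "late": True}}),
        Step("Check refund eligibility", lambda ctx: {**ctx, "eligible": ctx["order"]["late"]}),
        Step("Issue the refund", lambda ctx: {**ctx, "refunded": ctx["eligible"]}, requires_authorization=True),
    ],
)

print(refund.run({"order_id": "A-1042", "agent_authorized": True})["refunded"])  # True
```

In this sketch, the authorization flag on the final step is the line between an FAQ bot that deflects and an agent that resolves.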
Source: Fin.ai
📊 DCX AI Data Stat
Gartner predicts that by 2029, agentic AI will autonomously resolve 80% of common customer service issues without human intervention, leading to a 30% reduction in operational costs for those who execute successfully.
This is the new CX baseline and an uncompromising target. The 80% autonomous resolution figure defines operational excellence in the agentic era, and 2029 is the deadline. The hard truth: the 30% cost reduction is not a reward for experimenting with AI; it is what you forfeit by failing to reach that resolution rate. This stat forces every leader to ask: are we building systems that resolve 80% of volume autonomously, or are we just using AI to make every ticket slightly easier for a human?
Source: Gartner Newsroom
Your 1-Minute Action Plan
Ask your Head of Technology: “Show me the three most common multi-step customer journeys that currently require an agent hand-off, and tell me, with a clear timeline, when our AI will have the explicit, authorized ability to complete those actions autonomously, start to finish.”
☕ Reader Poll
Before we hit send on autonomy, let's pressure-test the blockers.
The Signal
The single, uncompromising truth from today’s news is that the new era of autonomous agents is forcing the market to divide into two groups: those who build for action, and those who settle for information.
The rising tide of B2C AI agents from giants like Amazon, the ethical retreat of platforms like Character.AI, and the 80% Gartner prediction mean that the cost of inaction—both operational and ethical—is rising faster than the cost of adoption.
Your business must grant its AI the authority to complete complex, goal-oriented tasks, or you are actively training customers to churn.
That’s the rundown for today.
See you tomorrow!
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.








