When Your Chatbot Sounds Human, The Rules Change
Plus: why 2026 budgets will demand “proof,” not demos

📅 December 24, 2025 | ⏱️ 4-min read
Good morning, and happy Christmas Eve!
The Executive Hook:
We’ve crossed a line that most companies didn’t officially announce: customers now feel like some AI experiences are relationships. That can be convenient… until something goes wrong. And when it does, regulators and executives will ask the same question: “What did you design this to do in the worst moment?” Today’s stories are all pointing to the same shift—AI is becoming a governed customer channel, not a novelty.
🧭 THE DEEP DIVE: New York and California just set the first real guardrails for “AI companions”
The Big Picture: New York and California have moved first with laws regulating “AI companions”—chatbots designed to simulate ongoing emotional or social relationships—pushing transparency and safety requirements into the mainstream.
What’s happening:
New York’s law is already in effect, requiring companions to disclose they’re not human and to respond to signs of suicidal ideation with crisis resources.
California’s SB 243 takes effect Jan. 1, 2026, with broader protections and enforcement options aimed at reducing harmful interactions (especially for youth).
This is unfolding alongside federal uncertainty, with a recent executive order potentially teeing up conflicts over how far states can go.
Why it matters:
Even if you don’t market an “AI companion,” customers often treat your chatbot like a person—especially when it’s always-on and conversational. The direction is clear: if your AI can influence emotions or decisions, you’ll be expected to prove it has guardrails, disclosures, and escalation paths that actually work.
The takeaway:
Do a “bad day audit” of your bot (a rough sketch of what the basics could look like in code follows this list):
Where does it clearly disclose what it is (and isn’t)?
How does it escalate when someone is distressed, angry, or vulnerable?
What gets logged so you can learn—and defend decisions—later?
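Here is a minimal, illustrative sketch of those three checks in a reply pipeline. Everything in it (the handle_message function, the keyword list, the crisis text, the log fields) is a placeholder to show the shape of the logic, not a vendor API or a clinical protocol.

```python
# Illustrative only: a simplified reply pipeline showing disclosure,
# escalation, and logging. Names, keywords, and resources are placeholders.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("bot.audit")

DISCLOSURE = "You're chatting with an automated assistant, not a human."
CRISIS_RESOURCES = "If you're in crisis, you can call or text 988 (US) to reach trained counselors."
DISTRESS_SIGNALS = ("hurt myself", "end it all", "no reason to live")  # placeholder list

def generate_answer(user_text: str) -> str:
    # Stand-in for your real answer engine (LLM, FAQ search, etc.).
    return "Thanks for reaching out. Here's what I can help with..."

def handle_message(session: dict, user_text: str) -> str:
    """Return the bot's reply, applying disclosure, escalation, and logging."""
    reply_parts = []

    # 1. Disclosure: say clearly what the bot is (and isn't) at the start of a session.
    if not session.get("disclosed"):
        reply_parts.append(DISCLOSURE)
        session["disclosed"] = True

    # 2. Escalation: route distressed or vulnerable users to humans and crisis resources.
    lowered = user_text.lower()
    if any(signal in lowered for signal in DISTRESS_SIGNALS):
        session["escalated_to_human"] = True
        reply_parts.append(CRISIS_RESOURCES)
        reply_parts.append("I'm connecting you with a person on our team now.")
    else:
        reply_parts.append(generate_answer(user_text))

    # 3. Logging: keep enough of a record to learn from and defend decisions later.
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "session": session.get("id"),
        "disclosed": session.get("disclosed", False),
        "escalated": session.get("escalated_to_human", False),
    }))
    return "\n".join(reply_parts)

if __name__ == "__main__":
    print(handle_message({"id": "demo-1"}, "My order never arrived and I'm furious."))
```

The point isn't the keyword list; it's that disclosure, escalation, and logging become explicit, testable steps rather than hopes.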
Source: Reuters
📊 CX BY THE NUMBERS: AI savings are becoming a headcount conversation
Data Source: Spencer Stuart survey (reported by The Wall Street Journal), published Dec 23, 2025
36% of CMOs expect layoffs in the next 12–24 months tied to AI-driven efficiency or redundancy removal.
At $20B+ revenue companies, 47% expect headcount reductions; 32% say they already reduced headcount in 2025.
The pressure is coming from the top: CEOs/CFOs want clear cost cuts and measurable returns, not just “we’re experimenting.”
The Insight:
CX leaders should take this personally (in the most professional way possible). Your AI roadmap will get judged the same way: not by features, but by outcomes—customer effort down, resolution up, quality steady, risk controlled. If you can’t show that, budgets get weird fast.
Source: The Wall Street Journal
🧰 THE AI TOOLBOX: Guru
The Tool: Guru is an “AI source of truth” that helps teams get trusted, cited answers from company knowledge—without hunting across docs, wikis, and chat threads.
Guru connects to the tools where knowledge already lives and returns answers that are cited and permission-aware, so people can see where the answer came from and only see what they’re allowed to see.
CX Use Case:
Faster, more consistent agent answers: bring policy + troubleshooting guidance into the flow of work (CRM, help desk, browser), so customers stop getting three different answers from three different humans.
Less “tribal knowledge” risk: keep critical service knowledge current and easy to find—so your best practices don’t walk out the door (or vanish in Slack threads).
Trust:
This is the kind of AI that supports brand credibility because it’s built to show its work: citations, governance, and security posture are the difference between “helpful” and “liable.”
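If you want to picture what “cited and permission-aware” means in practice, here is a generic sketch of the pattern. It is not Guru's actual API; the Card class, roles, URLs, and keyword matching below are invented for illustration.

```python
# Illustrative pattern only: answer from a knowledge store, filter by the
# asker's permissions, and attach citations to every answer.
from dataclasses import dataclass, field

@dataclass
class Card:
    title: str
    body: str
    source_url: str
    allowed_roles: set = field(default_factory=set)

KNOWLEDGE = [
    Card("Refund policy", "Refunds within 30 days with receipt.",
         "https://example.internal/policies/refunds", {"support", "billing"}),
    Card("VIP escalation", "Route VIP complaints to Tier 2 within 1 hour.",
         "https://example.internal/playbooks/vip", {"support"}),
]

def answer(question: str, user_roles: set) -> dict:
    """Build an answer only from cards the user may see, with citations."""
    # Permission-aware: drop anything the asker isn't allowed to view.
    visible = [c for c in KNOWLEDGE if c.allowed_roles & user_roles]
    # Naive keyword match stands in for real retrieval and ranking.
    hits = [c for c in visible if any(w in c.body.lower() or w in c.title.lower()
                                      for w in question.lower().split())]
    if not hits:
        return {"answer": "No verified answer found; escalate to a knowledge owner.",
                "citations": []}
    return {
        "answer": " ".join(c.body for c in hits),
        # Cited: every claim traces back to a source the reader can open.
        "citations": [c.source_url for c in hits],
    }

print(answer("What is our refund policy?", {"billing"}))
```

The design choice that matters: permissions are applied before retrieval and citations travel with the answer, so “show your work” isn't an afterthought.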
More: Guru
⚡ SPEED ROUND: Quick Hits
AI “browser agents” may never be fully safe from prompt injection. If an agent can take actions, assume the web will try to trick it, and design your safeguards the way you would design fraud prevention (a toy sketch of that gating pattern follows this list). Source: TechCrunch
OpenAI says it made far more child-safety reports in early 2025. As AI tools expand (especially multimodal), the operational burden of safety grows—fast. Source: WIRED
The FTC set aside a prior order involving AI-generated review risks. Translation for CX: deception, trust, and “what’s real” online remain a regulatory hot zone. Source: Federal Trade Commission
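On the browser-agent item above, “design safeguards like fraud prevention” could mean a default-deny gate on every action the agent proposes: allowlist the low-risk ones, force human review for anything sensitive. A toy sketch; the action names, domains, and the $50 threshold are made up for illustration.

```python
# Toy illustration of fraud-prevention-style gating for an agent's actions.
# Domains, action names, and thresholds are invented for this example.
ALLOWED_READ_DOMAINS = {"help.example.com", "status.example.com"}
ALWAYS_REVIEW = {"submit_payment", "change_account_email", "delete_data"}

def gate_action(action: str, target_domain: str, amount: float = 0.0) -> str:
    """Decide whether the agent may act, must ask a human, or is blocked."""
    if action in ALWAYS_REVIEW:
        return "needs_human_review"   # sensitive actions never run unattended
    if action == "read_page":
        return "allow" if target_domain in ALLOWED_READ_DOMAINS else "block"
    if action == "purchase" and amount > 50:
        return "needs_human_review"   # step-up check, like a fraud limit
    return "block"                    # default deny anything unrecognized

print(gate_action("read_page", "help.example.com"))       # allow
print(gate_action("submit_payment", "unknown-shop.biz"))   # needs_human_review
print(gate_action("scrape_inbox", "mail.example.com"))     # block
```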
📡 THE SIGNAL: “Helpful” isn’t enough anymore—AI needs to be accountable
The theme today is accountability.
Not in a scary way—just in an adult way. If your AI interacts with customers, it needs the basics every customer channel needs: clear disclosure, safe escalation, consistent answers, and proof you can measure quality. In 2026, the important thing will be whether you can calmly say, “Yes, we use AI—and here’s how we keep it honest, safe, and on-brand.”
I’ll be taking the rest of the week off, so see you on the 29th! And Happy Holidays!
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organisation right now? Reply in one sentence — I’ll pull real-world examples into future issues.