The New CX Middle Layer
Plus: Why human-like bots need clear rules and leadership control

To opt out of receiving DCX AI Today, go here and select Decoding Customer Experience in your subscriptions list.
I’m obsessed with Wispr Flow Pro! Get a Free Month on me.
📅 February 24, 2026 | ⏱️ 5 min read
Good Morning!
AI is getting better at sounding human. The problem is that sounding human is not the same as being safe, right, or responsible.
That’s the thread running through today’s issue. Oregon is basically saying, “If your chatbot can feel like a person, you need to act like a grown-up.” That means clear disclosure, stronger protections for kids, and better responses when conversations turn serious. At the same time, Gartner is waving a flag at leaders: you can’t outsource judgment. If leaders stay hands-off, teams will ship automation that moves fast and breaks trust.
And then there’s the practical side. Tools like Typewise are pushing a new model: not one giant bot, but a supervisor that coordinates multiple AI helpers with better controls and cleaner human handoffs. The message is simple: the next wave of CX is not about smarter words. It’s about safer decisions.
The Executive Hook:
Here’s the tension: customers want speed, but they also want to feel protected. AI can deliver both, but only if leaders get involved and teams build the control layer around the agent. If you don’t design for trust up front, you’ll end up trying to “patch” trust later, and patches don’t hold.
🧠 THE DEEP DIVE: Oregon’s Chatbot Safety Bill Raises The Bar For “Human-Like” AI
The Big Picture: Oregon lawmakers are moving a bill that would require certain AI “companion” chatbots to clearly tell users they are not human, and to add stronger safety steps for minors and self-harm situations.
What’s happening:
The bill targets AI companions designed to feel like a person, especially when a reasonable user might believe they are talking to a human.
It requires clear notices and disclosures so users are not tricked by human-like language or behavior.
It also calls for protocols to detect and respond to suicide or self-harm signals, including safer responses and escalation paths.
Why it matters: CX is not just “help me reset my password” anymore. Brands are using chat interfaces for coaching, support, and ongoing relationships. When a bot feels human, the risk goes up fast. Customers can overshare, rely too much on the bot, or believe it has authority it does not. Clear disclosure is not a legal box. It’s a trust move.
The takeaway: If you run any customer-facing chat that can sound human, act now: make disclosure obvious, train your bot to slow down on sensitive topics, and build a clean path to a human for high-risk moments. The brands that treat this as “just compliance” will feel it later in headlines and churn.
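The disclosure-and-escalation pattern above can be sketched in a few lines. This is a generic illustration, not language from the Oregon bill or any vendor's product: the function names are hypothetical, and the keyword list stands in for a real risk classifier, which any production system would need.

```python
# Minimal sketch: disclose the bot up front, slow down on sensitive
# topics, and route high-risk messages to a human. Keyword matching is
# a placeholder for a proper safety classifier.

SENSITIVE_TERMS = {"hurt myself", "suicide", "self-harm"}

BOT_DISCLOSURE = "You're chatting with an automated assistant, not a person."

def route_message(user_message: str) -> dict:
    """Decide whether the bot answers or hands off to a human."""
    lowered = user_message.lower()
    if any(term in lowered for term in SENSITIVE_TERMS):
        # High-risk topic: stop automating and escalate.
        return {
            "action": "escalate_to_human",
            "reply": "I want to make sure you get real support. "
                     "Connecting you with a person now.",
        }
    # Routine request: answer normally, disclosure first.
    return {"action": "answer", "reply": BOT_DISCLOSURE + " How can I help?"}

print(route_message("I want to hurt myself")["action"])  # escalate_to_human
print(route_message("Reset my password")["action"])      # answer
```

The point is not the keyword list. It's that escalation is a routing decision built into the system, not a tone the bot is asked to adopt.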
Source: Oregon Legislative Information System
📊 CX BY THE NUMBERS: Leaders See AI Coming, But Many Aren’t Training For It
Data Source: Gartner
65% of chief marketing officers (CMOs) said AI will significantly change the CMO job in the next two years. That's a big wave, and it's coming soon.
32% said the CMO skill set needs major changes. That gap is the danger.
Gartner also warned that by 2027, weak AI skills could be one of the top reasons CMOs get replaced in large companies.
The Insight: AI is turning leadership into a “hands on” job again. If a senior leader treats AI like a tool you can delegate, the team will build fast without building smart. That’s how you end up with automation that sounds confident but makes shaky calls, and customers pay the price.
The new gap is not budget. It’s judgment. Leaders need enough AI fluency to ask basic questions: What data is the model using? What can it change? When does a human step in? How do we know it’s right? If leaders cannot answer those, they are not steering. They are hoping.
In the next two years, the leaders who lean in will protect trust and speed. The ones who stay hands-off will get surprised, and not in a good way.
🧰 THE AI TOOLBOX: Typewise AI Supervisor Engine
The Tool: A “supervisor” setup that helps manage multiple AI helpers in customer service, with human handoffs and clear controls.
What it does: Instead of one giant bot trying to do everything, you use a lead agent to guide the work. One agent can detect intent. Another can search knowledge. Another can take actions, like updating an address. The supervisor decides the order, checks rules, and sends it to a human when needed.
CX Use Case:
Fixing messy issues without breaking flow: Think subscription changes, delivery problems, or billing disputes. These are the cases that often need several steps across systems.
Safer automation you can explain: You get clearer steps, test runs, and records of what happened. That helps when a customer says, “Your bot changed my account.”
Trust: More agents can mean more risk. Each agent is another chance to pull the wrong data or take the wrong action. The fix is not “be careful.” The fix is design: tight permissions for actions, clear logs, staged rollout, and a simple rule for humans to stop the process. If you cannot explain an automated action in plain words, do not automate it yet.
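Here is what the supervisor pattern looks like in miniature. To be clear, this is a hand-rolled sketch of the general idea, not Typewise's actual engine or API: the `Supervisor` class, its permission set, and the stub agents are all illustrative names.

```python
# Generic sketch of a supervisor coordinating specialist agents:
# one detects intent, one searches knowledge, one takes actions.
# The supervisor enforces tight permissions, logs every step, and
# hands off to a human when an action falls outside the allow-list.

from dataclasses import dataclass, field

@dataclass
class Supervisor:
    allowed_actions: set          # tight permissions for the action agent
    log: list = field(default_factory=list)  # record of what happened

    def handle(self, message: str) -> str:
        intent = self.detect_intent(message)      # agent 1: intent
        self.log.append(("intent", intent))
        notes = self.search_knowledge(intent)     # agent 2: knowledge
        self.log.append(("knowledge", notes))
        if intent not in self.allowed_actions:
            # Not permitted to automate: stop and hand to a human.
            self.log.append(("handoff", intent))
            return "escalated_to_human"
        self.log.append(("action", intent))       # agent 3: action
        return "automated"

    def detect_intent(self, message: str) -> str:
        # Stub: a real system would use a trained classifier here.
        return "update_address" if "address" in message.lower() else "other"

    def search_knowledge(self, intent: str) -> str:
        return f"policy notes for {intent}"

sup = Supervisor(allowed_actions={"update_address"})
print(sup.handle("Please update my address"))  # automated
print(sup.handle("Dispute this charge"))       # escalated_to_human
```

Notice that the permission check and the log are part of the control flow, not an afterthought. That's what makes the automated action explainable when a customer asks what the bot did.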
Source: Typewise
⚡ SPEED ROUND: Quick Hits
Agentic CX Ambition Is Rising, But Readiness Isn’t — Companies want AI agents to run more of the journey, but many teams still lack the basics to control them.
Proposed State AI Law Update Highlights Chatbot Disclosure Requirements — The legal trend keeps moving toward clearer “this is a bot” notices and stronger safety rules.
iQor Expands AI-Powered Gaming Practice — Gaming support is getting tougher, and more providers are blending AI with human review to manage risk.
📡 THE SIGNAL: Control Beats Clever
For years, we treated trust like a brand value. Something you talked about in a campaign. AI is turning trust into a product feature. It lives inside the chat experience, the handoff, the disclosure, the logs, and the moments where the system decides what to do next.
That’s why Oregon’s bill matters, even if you don’t sell “AI companions.” It’s a preview of where expectations are going: tell people when it’s a bot, slow down when the topic is risky, and make it easy to reach a human. Add Gartner’s warning, and you get the real lesson. This is not a project you can delegate and forget. Leaders need to engage, ask the hard questions, and set the rules.
If your AI can act on a customer’s behalf, your team needs to act on the customer’s behalf too. So here’s the leadership test for this week: where in your customer journey would a “human-sounding” bot create false confidence, and what guardrail will you add before the next release?
See you tomorrow,
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.