AI Is Starting To Decide For Customers
Plus: If your pricing, offers, or refunds aren’t auditable, your “smart” experience becomes a lawsuit

To opt out of receiving DCX AI Today, go here and select Decoding Customer Experience in your subscriptions list.
📅 March 6, 2026 | ⏱️ 4 min
Good Morning!
AI in consumer journeys is shifting from “answer questions” to “set outcomes.” Regulators noticed.
The Executive Hook:
Your biggest CX risk in 2026 is not a hallucinated FAQ. It’s invisible decisioning. Price. Eligibility. “Best option.” Refund amount.
When AI starts making choices, customers stop forgiving “the system.” They blame you.
So the new CX advantage is simple: make AI decisions explainable, reversible, and consistent. If you can’t show the why, you’ll lose the trust fight.
🧠 THE DEEP DIVE: Surveillance Pricing Puts AI Decisioning On Trial
The Big Picture: A U.S. House committee is pressing major travel platforms on whether they use AI-driven “surveillance pricing” to set individualized prices.
What’s happening:
The committee asked CEOs at companies including Uber, Lyft, Expedia, Booking.com, and Instacart to disclose whether they use personal data to tailor prices per person, and to provide documents about revenue-management algorithms and impacts.
The concern is “black box” pricing: consumers don’t know that prices are personalized, or which signals (location, browsing, device data) are driving it.
This sits on top of broader scrutiny, including a California probe into personalized pricing practices, and prior lawmaker pressure on airlines about AI and fares.
Why it matters: Personalization is fine until it feels like punishment. The moment a customer believes “I paid more because you know me,” you’ve created a trust tax that shows up as churn, chargebacks, and social blowups.
For consumer-facing CX, this also changes how you run your AI roadmap: your riskiest models might live in pricing, offers, and refunds, not the contact center.
The takeaway: This week, assign Revenue Ops + CX + Legal to build a Decision Receipt for any AI-influenced price or offer: inputs used, inputs excluded, reason codes, and a customer-friendly explanation, reviewed weekly. Track price complaints, checkout abandonment, and dispute rate as one dashboard.
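The Decision Receipt above can be made concrete as a data structure. This is a minimal sketch, not a standard; every field name here is an assumption about what such a receipt might contain:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

# Hypothetical schema for a "Decision Receipt" attached to any
# AI-influenced price, offer, or refund. Field names are
# illustrative, not an established format.
@dataclass
class DecisionReceipt:
    decision_id: str
    decision_type: str         # e.g. "price", "offer", "refund"
    inputs_used: list          # signals the model actually consumed
    inputs_excluded: list      # signals deliberately withheld
    reason_codes: list         # machine-readable "why" codes
    customer_explanation: str  # plain-language answer to "Why this price?"
    reversible: bool = True    # can an agent undo it in one step?
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

receipt = DecisionReceipt(
    decision_id="rcpt-001",
    decision_type="price",
    inputs_used=["route", "time_of_booking", "inventory_level"],
    inputs_excluded=["browsing_history", "device_type", "zip_code"],
    reason_codes=["PEAK_DEMAND", "LOW_INVENTORY"],
    customer_explanation=(
        "This price reflects current demand and availability, "
        "not your personal data."
    ),
)
print(asdict(receipt)["reason_codes"])
```

The point of `inputs_excluded` is that it answers the regulator's question before it is asked: you can show not just which signals drove the price, but which ones you chose to keep out.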
Source: Reuters
📊 CX BY THE NUMBERS: The CX Perception Gap Gets Loud
Data Source: Medallia “2026 State of Customer Experience Report”
66% of CX practitioners say experiences improved last year, but only 17% of consumers agree.
30–40% of departments take no action after receiving customer insight.
Only 22% of consumers feel “very loyal,” and 40% have switched brands recently.
The Insight:
Consumer-facing AI won’t save you if your org can’t move. The gap isn’t “we need more insights.” It’s “we can’t execute across teams.”
If you want AI to reduce customer effort, pick one journey (returns, delivery, billing, cancellations) and force a closed loop: signal → owner → fix → measure. Otherwise, AI just helps you notice the fire faster.
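That closed loop can even be tracked as a record per signal, so "no action taken" becomes visible on a dashboard. A minimal sketch with hypothetical field names and example values:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical closed-loop record: every customer signal must end
# in a measured fix, or it surfaces as an open item.
@dataclass
class LoopItem:
    signal: str                    # what the customer told you
    owner: Optional[str] = None    # a named person, not a team alias
    fix: Optional[str] = None      # the shipped change
    measure: Optional[str] = None  # metric confirming the fix worked

    def is_closed(self) -> bool:
        # The loop counts as closed only when all four stages are filled.
        return all([self.signal, self.owner, self.fix, self.measure])

item = LoopItem(signal="billing dispute spike after promo")
item.owner = "jane.doe"
item.fix = "clarified promo terms at checkout"
item.measure = "dispute rate back under baseline"
print(item.is_closed())  # → True
```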
🧰 THE AI TOOLBOX: Fraud Shield For Shopping Agents
The Tool: Riskified is a fraud and risk layer designed to protect merchant AI shopping assistants from abuse.
Problem:
As shopping assistants get more autonomous, they create new fraud angles: account takeovers, “agent-led” high-risk orders, promo abuse, and chargeback spikes that look like normal customer behavior.
Solution:
Picture a customer using an in-site AI assistant to build a cart, apply promos, and check out with fewer clicks. That speed is great, until fraudsters use the same flow at scale.
A dedicated risk layer watches for identity and behavior signals that don’t match a legit buyer, and blocks or steps up verification before the order becomes a refund and a bad review.
Benefits:
Time: fewer manual reviews and escalations
Quality: fewer false approvals that become chargebacks
Experience: fewer “sorry, we canceled your order” moments for real customers
Where it sits: Side stage (protects the journey without becoming the journey)
Best Fit:
Works best when: you’re piloting agentic shopping, one-click reorder, or promo-heavy offers
Not a great fit when: you don’t have clear fraud outcomes tracked (chargebacks, refund abuse, ATO)
Key Takeaway:
Use it to protect agent speed. Don’t use it as a substitute for fixing weak account security and messy promo rules.
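The allow / step-up / block pattern such a risk layer implements can be sketched in a few lines. To be clear: Riskified's actual models are proprietary; the signals, weights, and thresholds below are invented purely to illustrate the decision shape:

```python
# Generic sketch of a risk layer's allow / step-up / block decision.
# All signals and thresholds here are made up for illustration; a
# real product uses learned models, not hand-tuned rules like these.

def risk_score(order: dict) -> float:
    score = 0.0
    if order.get("new_device"):
        score += 0.3                  # unfamiliar device fingerprint
    if order.get("promo_count", 0) > 3:
        score += 0.3                  # possible promo abuse
    if order.get("shipping_mismatch"):
        score += 0.4                  # address doesn't match history
    return min(score, 1.0)

def decide(order: dict) -> str:
    score = risk_score(order)
    if score >= 0.7:
        return "block"                # stop it before it becomes a chargeback
    if score >= 0.4:
        return "step_up"              # ask for extra verification
    return "allow"                    # real customers keep the fast flow

print(decide({"new_device": True, "promo_count": 5}))  # → step_up
```

The design choice that matters for CX is the middle tier: step-up verification lets you challenge suspicious agent-led orders without cancelling legitimate ones.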
⚡ SPEED ROUND: Quick Hits
OpenAI reportedly scales back plans for direct checkouts inside ChatGPT — For consumer CX, this signals the near-term battle is “AI drives discovery” while brands still own checkout, policies, and post-purchase recovery.
Retailers want ‘delightfully human’ AI to do your shopping, but will the chatbots go rogue? — Agentic shopping is moving from demo to deployment, and mishaps will punish brands that skip guardrails and human override.
Delta shakes up CX and customer service leadership — A reminder that consumer CX outcomes follow ownership; AI programs stall fast when customer care reporting lines get fuzzy.
📡 THE SIGNAL: Make Every AI Decision Easy To Undo
Consumer-facing AI is speeding up the journey, but it’s also making choices customers can’t see. That’s where trust snaps. The fix is not more “human tone.” It’s visible logic, clear reason codes, and a clean undo button.
Your execution choice: build a “decision receipt” for pricing, promos, refunds, and eligibility, or keep paying the tax in disputes and churn. If a customer asked, “Why did I get this price?” could your team answer in 30 seconds, and reverse it in two clicks?
See you Monday.
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.