AI Shapes the Choice. Trust Decides It.
Plus: Consumers trust reviews more than AI recommendations
Your daily signal on AI and CX — minus the hype.
📌 DCX Stat: Agentic AI is already past the curiosity stage
57% of organizations are either piloting or deploying agentic AI to some degree.
Takeaway: For CX leaders, this shifts the question from “Should we experiment?” to “Where do agents create the clearest customer and operational value first?” Waiting now risks learning too slowly.
In this issue:
→ AI can influence the choice, but not the belief
→ Trust is getting harder to earn in digital journeys
→ Returns are becoming a fraud and fairness fight
→ Loyalty is shifting from points to habit design
→ The next edge may come from better proof, not more output
🔎 Deep dive
The consumer trust gap just got harder to ignore
The sharpest story today is not that people use AI in shopping. It is that they do not fully trust what it tells them.
A new Digital Commerce 360 / Omnisend report found that 84% of Americans trust online product reviews, 33% trust them more than they did two years ago, and 86% still have concerns about AI-generated product recommendations. It also found that 93% double-check AI recommendations before buying.
That matters because a lot of brands are treating AI recommendations as a conversion layer when customers are still treating them as a draft. The recommendation may get attention. The proof still has to come from somewhere else. Reviews, verified purchase signals, and specific feedback are still carrying the final burden of belief.
This shows up first in discovery and evaluation. If customers think the recommendation engine might be biased, paid for, or manipulated, the experience does not feel helpful. It feels rigged.
📬 Copy-Paste Take: Send this to your COO
Consumers are using AI to shop, but they are not outsourcing trust to it. The recommendation gets them moving. The proof still has to come from somewhere else.
OPERATOR PLAYBOOK
Audit every recommendation surface like it has to win a cross-examination
If AI is influencing product discovery, your job is not only to improve relevance. It is to make the logic feel credible.
Audit every recommendation and review surface for four things:
Whether verified purchase signals are obvious enough to matter
Whether mixed feedback is easy to find, not buried under volume
Whether the recommendation is separated clearly from paid placement or promotion
Whether the customer can see enough product detail to verify the suggestion without leaving the page
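If your team wants to make that audit repeatable, the four checks can be expressed as a simple script. This is a minimal sketch: the `page` fields and the five-field detail threshold are hypothetical stand-ins, not a real product-page schema.

```python
# Hypothetical audit of one recommendation surface against the four
# trust checks. Field names and thresholds are illustrative assumptions.

def audit_surface(page):
    """Return the list of trust checks this surface fails."""
    failures = []
    # 1. Verified purchase signals should be obvious enough to matter.
    if not page.get("verified_purchase_badge_visible"):
        failures.append("verified purchase signals not obvious")
    # 2. Mixed feedback should be easy to find, e.g. a critical-review
    #    filter surfaced on the first screen of reviews.
    if not page.get("critical_reviews_filter"):
        failures.append("mixed feedback buried under volume")
    # 3. Paid placement must be clearly separated from the recommendation.
    if page.get("paid_placement") and not page.get("sponsored_label"):
        failures.append("recommendation not separated from paid placement")
    # 4. Enough on-page detail to verify the suggestion without leaving.
    if page.get("spec_fields_shown", 0) < 5:
        failures.append("not enough product detail to verify on-page")
    return failures

example = {
    "verified_purchase_badge_visible": True,
    "critical_reviews_filter": False,
    "paid_placement": True,
    "sponsored_label": True,
    "spec_fields_shown": 8,
}
print(audit_surface(example))  # → ['mixed feedback buried under volume']
```

Even a toy version like this forces the useful conversation: which of the four signals your surfaces can actually attest to today.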
Then test the moment after the AI suggestion. What proof does the customer get that this is the right product, not just the easiest one to surface?
Ask your team: If customers assume your recommendation engine is biased, what in the interface proves otherwise?
Signal: In AI-influenced shopping, trust signals are becoming part of the product, not just support material.
📈 Market Reality Check
Consumers are questioning what is real
Gartner found that 50% of consumers prefer brands that avoid GenAI in consumer-facing content. It also found that 61% frequently question whether the information they use to make everyday decisions is reliable, and 68% frequently wonder whether the content and information they see is real.
That is the bigger commercial problem. Even as AI use rises, comfort with AI-shaped customer experiences does not rise accordingly. Consumers are getting more verification-minded, not less. That means every customer-facing use of AI now carries a trust tax. If the value is not obvious, the skepticism shows up fast.
AI adoption may rise. Automatic trust will not.
🧰 Tool Worth Knowing
Mastercard Return Risk Intelligence
What it does: Mastercard uses AI to analyze return, refund, and dispute patterns, enabling merchants to score return risk in real time.
CX use case: Helps brands spot suspicious returns before they turn into chargebacks, losses, or broad policy crackdowns that punish good customers too.
Worth watching because: Returns are one of the easiest places to wreck trust while trying to reduce fraud. A blunt policy saves money in the short term and burns loyalty in the process.
Bottom line: The value here is not catching bad actors. It is protecting the return experience from becoming hostile for everyone else.
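To make the idea concrete: pattern-based return scoring boils down to weighting a customer's return and dispute history into a single risk number. Mastercard's actual model is proprietary; the features, weights, and scale below are invented purely for illustration.

```python
# Toy return-risk score from return/refund/dispute history.
# NOT Mastercard's model: features, weights, and scaling are
# assumptions made up for explanation only.

def return_risk_score(returns, purchases, disputes):
    """Score from 0 to 1; higher means riskier return behavior."""
    if purchases == 0:
        return 1.0  # no purchase history to vouch for the customer
    return_rate = returns / purchases
    dispute_rate = disputes / purchases
    # Disputes are rarer and costlier than returns, so weight them
    # more aggressively (scaled 5x before capping at 1.0).
    score = 0.6 * min(return_rate, 1.0) + 0.4 * min(dispute_rate * 5, 1.0)
    return round(min(score, 1.0), 2)

# A customer with a modest return rate and no disputes scores low...
print(return_risk_score(returns=2, purchases=20, disputes=0))   # → 0.06
# ...while heavy returns plus chargebacks push the score up.
print(return_risk_score(returns=15, purchases=20, disputes=3))  # → 0.75
```

The CX point survives the simplification: a score lets you reserve friction for the high end of the distribution instead of tightening policy for everyone.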
⚡ 90-Second CX Radar
Points alone are losing their grip on loyalty
The stronger loyalty programs are leaning on AI analytics, targeted messaging, gamification, and tiered rewards. The interesting part is not the tech. It is the shift in logic. Loyalty is becoming less about counting transactions and more about shaping the next one.
Meta wants shopping help inside the apps people already use
Meta says its upgraded assistant can help people discover what to wear, how to style a room, or what to buy, while also comparing scanned products to alternatives. That matters because product discovery is drifting into feeds, chats, and camera-led moments long before the customer reaches a brand site.
🧭 Your Move
Pick one journey and strip the AI story out of it for a minute. Ask a simpler question: what makes this experience believable to the customer? In discovery, that may be review integrity. In returns, it may be fair treatment. In loyalty, it may be whether the program feels personal before it feels promotional.
A lot of teams are still asking where to add more AI. The sharper question is where customers are already skeptical, and what proof they need before they trust the experience.
“The next edge in consumer CX may come from better evidence, not smarter output.”
Until Monday,
👥 Share This Issue
Think of one person who’s wrestling with AI in CX right now and forward this to them.
I’m obsessed with Wispr Flow Pro! Get a Free Month on me.
If someone forwarded this to you, they thought you needed to see it before your next AI planning meeting. Get your own copy.