The psychology mistake that's killing your AI investments
PLUS: Prompts for diagnosing when your AI creates customer frustration instead of satisfaction
Start every workday smarter. Spot AI opportunities faster. Become the go-to person on your team for what’s next.
🗓️ Wednesday, July 23, 2025 ⏱️ Read Time: ~5 minutes
👋 Welcome
I've been thinking about a conversation I had with a CX leader last week. She told me her team spent six months building an AI chatbot that could answer 87% of customer questions correctly. Impressive, right? Except customers hated it. Why? Because nobody taught the AI to say "I don't know" gracefully. The psychology was all wrong.
This perfectly captures what's happening right now. The smartest organizations aren't winning because they have the most sophisticated AI—they're winning because they understand human behavior better than their competitors.
📡 Signal in the Noise
The data is telling a consistent story across research institutions and consulting firms: AI success has almost nothing to do with technical prowess and everything to do with psychological insight. From Harvard's findings on chatbot design to Forrester's workforce predictions, the companies getting AI right are asking fundamentally different questions.
🧠 Executive Lens
Here's what I find fascinating: while everyone's obsessing over AI capabilities, the real competitive advantage is emerging around AI limitations. The organizations thriving aren't those that can make AI do more—they're the ones who understand exactly when AI should do less. They've figured out that the future isn't about replacing human judgment; it's about amplifying it in ways that feel authentically human to customers.
📰 Stories That Matter
🧠 Harvard research reveals chatbot failures are psychology problems, not technology problems
New Harvard Business Review research shows that AI-powered chatbots are leaving customers "underwhelmed" not because of technical limitations, but because companies are engineering better models instead of understanding human psychology. The study demonstrates that customer frustration stems from mismatched expectations and poor interaction design rather than insufficient AI capabilities.
Why This Matters: The research challenges the assumption that better AI automatically equals better customer experience, highlighting the need for behavioral science in AI deployment.
Try This: Map your chatbot interactions against customer emotional states rather than technical metrics—measure frustration, confusion, and satisfaction at each conversation turn.
Source: Harvard Business Review
🚨 MIT research shows AI companies dropped medical disclaimers to increase user trust and adoption
New MIT research reveals that AI companies have quietly abandoned medical disclaimers, with warnings dropping from 26% of health-related responses in 2022 to less than 1% in 2025. The study found that leading AI models now provide medical advice without cautioning users, with some actively asking follow-up questions and attempting diagnoses. Researchers warn this creates dangerous overtrust in AI medical advice, as companies prioritize user adoption over safety guardrails.
Why This Matters: The removal of safety disclaimers represents a fundamental shift in how AI companies balance user experience against responsible deployment, with implications for any customer-facing AI application.
Try This: Audit your AI customer interactions to identify where safety disclaimers or limitation warnings have been removed—ensure you're not sacrificing customer safety for a smoother user experience.
Source: MIT Technology Review
📊 McKinsey introduces "Superagency" framework as AI transforms from productivity tool to transformative partner
McKinsey's new research on AI in the workplace argues that rather than fixating on the projected 92 million job displacements by 2030, leaders should focus on the 170 million new roles expected to emerge and the skills they'll require. The "Superagency" framework positions AI as amplifying human capabilities rather than replacing them, emphasizing adaptability and human-centric development over pure automation.
Why This Matters: The shift from job displacement fears to job creation opportunities requires fundamentally different AI strategies focused on human empowerment rather than efficiency alone.
Try This: Audit your AI initiatives to identify where they increase human decision-making power versus where they simply automate existing processes.
Source: McKinsey
📉 Forrester warns of "massive disruption" as AI agents replace "vast swaths" of customer service workforce
Forrester's latest research, published yesterday, predicts AI agents will fundamentally remake the customer service workforce by handling increasingly complex inquiries and replacing "vast swaths of customer service agents." The report warns that "in the not so distant future, customer service will be led by automation supervisors and specialists" who manage AI based on enterprise goals of cost, revenue, and profitability, while humans transition to roles like "process architects" and "citizen developers."
Why This Matters: The accelerated timeline and dramatic language shift from "100,000 displaced" to "massive disruption" suggest workforce transformation is happening faster and at greater scale than previously predicted.
Try This: Assess your current customer service roles against transferable skills rather than job titles—identify which team members could transition to AI oversight and optimization roles.
Source: Forrester
🚀 Digital transformation expert Brian Solis warns most companies "don't think big enough" with AI
ServiceNow's Head of Global Innovation Brian Solis argues that the biggest danger facing organizations isn't AI failure—it's thinking too small. In his latest analysis, Solis distinguishes between "practical AI" (improving efficiency) and "innovative AI" (creating entirely new business models), warning that companies treating AI as "a faster horse" are choosing "stagnation, accelerated." He emphasizes that AI reveals who organizations truly are, separating those ready for transformation from those stuck optimizing yesterday's work.
Why This Matters: The distinction between practical and innovative AI applications determines whether companies achieve incremental improvements or fundamental competitive advantages in an AI-driven economy.
Try This: Audit your AI initiatives using Solis's framework—categorize each as "practical AI" (efficiency gains) or "innovative AI" (new value creation) and ensure you're balancing both.
Source: Brian Solis
✍️ Prompt of the Day
AI Psychology Assessment
Analyze our customer service AI from a psychological perspective:
For each customer interaction type we handle with AI:
1. What emotional state is the customer likely in when they arrive?
2. What are they trying to accomplish beyond the surface request?
3. How does our AI response affect their emotional journey?
4. Where do we create frustration through mismatched expectations?
5. What human psychological needs are we missing?
Rate each interaction on:
- Emotional Appropriateness (1-5)
- Expectation Alignment (1-5)
- Completion Satisfaction (1-5)
Recommend 3 psychology-based improvements that don't require new technology.
What this uncovers: Hidden psychological friction points that technology upgrades can't solve
How to apply it: Use monthly to optimize AI interactions based on human needs, not technical capabilities
Where to test: Start with your highest-volume, lowest-satisfaction customer touchpoints
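If your team logs the three 1-5 ratings in a spreadsheet, a few lines of Python can rank touchpoints by psychological friction. This is a minimal sketch with made-up interaction names and scores—substitute your own data:

```python
# Hypothetical sketch: aggregate the three 1-5 ratings per interaction type
# and surface the touchpoints with the most psychological friction.
# Interaction names and scores below are illustrative, not real data.

RATINGS = {
    "password reset":  {"emotional": 4, "expectation": 4, "completion": 5},
    "billing dispute": {"emotional": 2, "expectation": 2, "completion": 3},
    "order tracking":  {"emotional": 3, "expectation": 4, "completion": 4},
}

def friction_score(ratings):
    # Lower average rating means higher friction, so invert the 1-5 scale.
    return 5 - sum(ratings.values()) / len(ratings)

# Rank interaction types from most to least friction.
ranked = sorted(RATINGS, key=lambda name: friction_score(RATINGS[name]), reverse=True)
for name in ranked:
    print(f"{name}: friction {friction_score(RATINGS[name]):.2f}")
```

The touchpoints that float to the top are where your psychology-based improvements should start.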
🛠️ Try This Prompt
You're conducting an AI business value audit. For each AI implementation I describe, determine if it's solving a real business problem or if it's an "AI novelty" use case:
AI IMPLEMENTATION: [Describe the AI tool/process]
EVALUATION:
- Core Business Problem: [What specific business issue does this address?]
- Success Metric: [How do you measure if it's working?]
- Human Alternative: [What would humans do without this AI?]
- Complexity Score (1-5): [How complicated is this solution?]
- Value Score (1-5): [How much business value does it create?]
VERDICT:
- Business Solution: Solves clear problem with measurable value
- AI Novelty: Impressive technology without clear business justification
- Rube Goldberg Machine: Overcomplicated solution to simple problem
RECOMMENDATION: [Keep, modify, or eliminate - and why]
Apply this framework to: chatbots, predictive analytics, recommendation engines, automated scheduling, and document processing.
Immediate use case: Separate productive AI investments from expensive technology experiments
Tactical benefit: Focus resources on AI that delivers measurable business outcomes rather than impressive demos
How to incorporate quickly: Use this framework in your next quarterly AI portfolio review
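If you want the verdict step to be consistent across reviewers, you can encode it as a simple rule. This sketch assumes illustrative thresholds of my own—the framework above doesn't prescribe exact cutoffs, so tune them to your portfolio:

```python
# Hypothetical sketch of the verdict step: map the 1-5 Complexity and Value
# scores to one of the three verdicts. The thresholds are assumptions,
# not part of the original framework.

def verdict(complexity: int, value: int) -> str:
    if value >= 4:
        return "Business Solution"      # clear problem, measurable value
    if complexity >= 4 and value <= 2:
        return "Rube Goldberg Machine"  # overcomplicated, little payoff
    return "AI Novelty"                 # impressive tech, weak justification

# Example portfolio with made-up (complexity, value) scores.
implementations = {
    "chatbot":            (2, 5),
    "predictive scoring": (5, 2),
    "auto-scheduling":    (3, 3),
}

for name, (c, v) in implementations.items():
    print(f"{name}: {verdict(c, v)}")
```

Running every implementation through the same rule makes quarterly reviews comparable quarter over quarter.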
📎 CX Note to Self
"The companies winning with AI aren't the ones with the smartest algorithms—they're the ones asking the smartest questions about human behavior."
👋 See You Tomorrow
The research is clear: psychology beats technology every time. Hit reply with your thoughts on where you've seen AI create more complexity than value. 👋
Enjoy this newsletter? Please forward it to a friend.
Have an AI‑mazing day!
—Mark
💡 P.S. Want more prompts? Grab the FREE 32 Power Prompts That Will Change Your CX Strategy – Forever and start transforming your team today.
Catch a sneak peek of my upcoming book
The Psychology of CX 101
I’ve been building something I wish existed when I started in CX. And it’s almost ready to release.
It’s called The Psychology of CX 101: Why Your Customers Act the Way They Do—And What You Can Do About It.
Inside, you’ll find:
101 psychological principles you can apply directly to your customer experience
Real examples from brands you know
The behavioral science behind why customers really act the way they do
Clear, usable tactics to apply to your own journeys
Click below to sign up to be notified when the book is ready and get a peek at the first 10 principles.
Let’s stop guessing. Let’s design for how people actually behave.