Your Customers are Bonding with AI
Plus: the “first hello” is becoming automated, whether we like it or not

📅 December 19, 2025 | ⏱️ 4-min read
Good Morning!
The Executive Hook:
AI used to be something customers tried. Now it’s something many customers lean on.
That matters for CX because people don’t leave those expectations at the door. If they’re getting fast answers, steady tone, and “always there” behavior from AI in their personal lives, they’ll start expecting the same from your brand. Not because they’re unreasonable, but because that’s how habits work.
At the same time, companies are pushing AI to the very front of the journey: the website, the first question, the first decision point. So today’s theme is simple: if AI is going to be your first voice, you need to be intentional about what it says, what it doesn’t say, and how it hands off.
🧠 THE DEEP DIVE: Salesforce is Buying a “Greeter” For Your Website
The Big Picture: Salesforce is acquiring Qualified so more companies can run an always-on conversational assistant on their website — the kind that starts the interaction, figures out what someone needs, and moves them forward.
What’s happening:
Salesforce signed a definitive agreement to acquire Qualified, which focuses on website-based conversations for B2B buying journeys.
The promise is “always-on” engagement: answer questions, qualify interest, and help create pipeline moments like booking meetings.
Salesforce is positioning this as part of its broader push toward agents across marketing and sales workflows.
Why it matters:
When the first interaction is messy, CX teams feel it later. Confusion turns into tickets. Overpromises turn into escalations. A rough first minute becomes a long and expensive cleanup job.
If an AI assistant is going to greet customers at the front door, it needs three things:
plain language (no corporate-speak),
clear limits (“I can help with X, I can’t do Y”),
an easy path to a human when the situation needs judgment, empathy, or exceptions.
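To make those three requirements concrete, here’s a toy sketch of front-door guardrails in Python. Everything in it (the topic list, the trigger words, the function names) is hypothetical, not any vendor’s actual product, but it shows the shape: state your limits in plain language, and make the handoff automatic and unembarrassed.

```python
# Toy "front desk" guardrails for a website assistant.
# All names here (SUPPORTED_TOPICS, handle_message, escalate) are
# illustrative assumptions, not a real vendor API.

SUPPORTED_TOPICS = {"pricing", "demo", "product"}          # clear limits
ESCALATION_TRIGGERS = {"refund", "complaint", "legal", "cancel"}

def handle_message(topic: str, text: str) -> str:
    """Route a visitor message: answer in scope, hand off everything else."""
    lowered = text.lower()
    # Judgment, exceptions, and emotion go to a human, framed as normal.
    if any(word in lowered for word in ESCALATION_TRIGGERS):
        return escalate(text)
    if topic not in SUPPORTED_TOPICS:
        # Plain language about what the assistant can and can't do.
        return ("I can help with pricing, demos, and product questions. "
                "For anything else, I'll connect you with a teammate.")
    return answer(topic, text)

def escalate(text: str) -> str:
    # Pass full context along so the customer never repeats themselves.
    return f"Connecting you with a person now (context shared: {text!r})."

def answer(topic: str, text: str) -> str:
    return f"Happy to help with {topic}!"
```

Notice the escalation message carries the customer’s context forward; the handoff is the feature, not the failure mode.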
The takeaway:
Don’t treat this like “adding chat.” Treat it like staffing your front desk. Train it. Give it boundaries. Review what it’s saying every week. And make escalation feel normal, not like failure.
Source: Salesforce
📊 CX BY THE NUMBERS: People are Using AI for Emotional Support
33% of UK participants said they used AI for companionship, emotional support, or social interaction in the last year (N=2,028).
8% said they do this weekly, and 4% said daily.
AISI also points to signs of emotional dependence showing up during outages in an AI companion community (more distress and support-seeking behavior).
The Insight:
This isn’t about your company building a “companion bot.” It’s about your customers walking in with a new baseline: they’re getting used to AI that feels responsive and steady.
So the CX question becomes: when your AI can’t help, does it fail gracefully? Does it stay calm? Does it clearly say what it is? Does it move people to a human without making them repeat everything?
If you don’t design that part, you’ll still “have AI”… but you’ll also have more frustration.
Source: AI Security Institute (AISI)
🧰 THE AI TOOLBOX: Zendesk + Unleash
The Tool: Zendesk acquired Unleash to make internal employee support faster — think IT, HR, and ops questions that slow teams down.
What it does:
Unleash is enterprise search that pulls answers from across systems and knowledge sources, with permission-based retrieval (so people only see what they’re allowed to see).
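If it helps to picture what permission-based retrieval means in practice, here’s a minimal sketch, assuming a simple per-document access list. This is an illustration of the general pattern, not Unleash’s actual implementation: the permission check happens at retrieval time, so restricted content never even reaches the answer.

```python
# A toy model of permission-based retrieval: filter search hits by
# the user's group memberships before anything is returned.
from dataclasses import dataclass, field

@dataclass
class Doc:
    title: str
    body: str
    allowed_groups: set = field(default_factory=set)

def search(query: str, docs: list, user_groups: set) -> list:
    """Return matching docs, limited to what this user may see."""
    hits = [d for d in docs if query.lower() in d.body.lower()]
    # Access control applied before ranking or answer generation.
    return [d for d in hits if d.allowed_groups & user_groups]
```

Two employees asking the same question get different (correct) answers: the HR doc that mentions VPN stipends stays invisible to everyone outside HR, which is exactly the failure mode this design avoids.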
CX Use Case:
Faster internal support shows up in external CX. If employees get answers quickly, customers feel it in resolution time and confidence.
Better consistency. When the “right answer” is easy to find, customers get fewer mixed messages across channels and teams.
Trust:
Permission-based retrieval matters. It helps avoid two classic CX failures: sharing the wrong info, or sharing the right info with the wrong person.
Source: Zendesk
⚡ SPEED ROUND: Quick Hits
UK banks are piloting agentic AI for money management — and the regulator is watching closely. Customer-facing trials are expected in early 2026, and the big theme is accountability when systems act with more autonomy and speed.
Source: Reuters
OpenAI updated its Model Spec with teen protections (U18 principles). If your brand uses AI in customer-facing journeys, this is the direction things are moving: age-aware behavior and safety design are becoming table stakes.
Source: OpenAI
Anthropic shared new steps to protect user well-being in sensitive conversations. For CX leaders: “sounding supportive” isn’t the same as being responsible, especially in high-risk moments.
Source: Anthropic
📡 THE SIGNAL: “Always-on” only works when escalation is always real
A lot of AI experience design still assumes the customer is calm, patient, and asking tidy questions. Real life isn’t like that.
If customers are building stronger emotional habits with AI outside of work, they’ll bring that expectation into support, billing, claims, and service recovery. That raises the bar on tone, clarity, memory, and honesty.
The win isn’t “more automation.” The win is an AI experience that knows its limits, stays grounded, and hands off cleanly when the moment needs a person.
See you on Monday!
👥 Share This Issue
If this issue sharpened your thinking about AI in CX, share it with a colleague in customer service, digital operations, or transformation. Alignment builds advantage.
📬 Feedback & Ideas
What’s the biggest AI friction point inside your CX organization right now? Reply in one sentence — I’ll pull real-world examples into future issues.