
Moving Beyond Surveys with Predictive CX Analytics

DCX Podcast #8

Welcome to the DCX podcast, where I interview leaders in the customer experience space about how digital is changing the landscape, and how you can leverage these changes for success in your business.

Today I'll be talking with Richard Owen, the founder of OCX Cognition, a predictive CX analytics software company. Richard was part of the original team that developed the NPS methodology with Fred Reichheld and brings this and more than 15 years of CX expertise to today's conversation. He's also the co-author of 'Answering the Ultimate Question,' which has quickly become the how-to guide for NPS practitioners.

Here are 7 key takeaways from the conversation:

  1. NPS was really just a mechanism for putting customers in three buckets. There's a great elegance to this idea of dividing customers into three groups that behave with different economic patterns.

  2. We should separate the idea of NPS as a metric from its association with its collection mechanism. The NPS metric is great. The mechanism is broken.

  3. Whether a customer is a promoter or detractor, by whatever measurement mechanism you want, is probably determined by a relatively small list of things that happen.

  4. Surveys are very poor at generating core data but extremely useful in teaching machines to think like a customer.

  5. Customer relative choice, how customers evaluate decisions, is extremely stable and benefits from large numbers across a large survey base.

  6. Augment human decision-making with data and you'll get better decisions.

  7. The most profound impact of predictive CX in the next five years is going to be on the way in which frontline people do their jobs.


Transcript

Mark Levy

Richard, welcome.

Richard Owen 
Thanks very much. Every time I hear "15 years or more," it makes me feel decidedly old. And being described as the grandfather of NPS, or the Godfather, somehow seems even worse.

Mark Levy 
Okay, I won't go there. But I would like to know, what was it like working with Fred and how did you get engaged in that whole process?


Richard Owen 
So at the time I joined Satmetrix, Fred Reichheld was a board member and investor in the company and had already been involved in a collaborative process with Dr. Laura Brooks, who was head of research at the company. They had put in a couple of years of pretty hard work trying to establish the connections and data patterns that would ultimately lead to NPS. So I was really at the tail end of that process and mostly involved with the commercialization of it.

So we took that methodology and turned it, in addition to the technology we sold around it, into a CX product. We did about 1,000 enterprise implementations, but we also launched a Net Promoter certification, which about 6,000 companies took; probably the most successful training product of its kind in the world. We launched the Net Promoter conference series; we had 14 conferences before the company was sold, with an average of about 500 or 600 people at each. So we were all things NPS for a long time. And, you know, Fred was a very active contributor. You can imagine, especially early on, he helped design a lot of this. I think at the end of the day it was a big payoff for the very long career he'd already had contributing to the space, so I think he finally got the recognition for all his work.
 
Mark Levy 
That's great. I've had personal experience with NPS. I worked at Comcast, and it was a big part of their customer experience turnaround: really taking a focus on NPS from the top all the way to the frontline, getting those surveys out, and really getting a pulse on customers. But I do think things have changed over the years as to how NPS is used, and I imagine you sense the same. I'd like to hear how you think the world has changed, how NPS has changed, and why it maybe needs some updating.


Richard Owen 
Well, I think we should first of all separate the idea of NPS as a metric from its association with its collection mechanism. Originally, the metric came from looking for a correlation between customer loyalty and financial outcomes. And if you think about it, NPS was really just a mechanism for putting customers in three buckets. One bucket of customers seemed to do really, really well economically: they generated most of the profits, they grew, and they were actually the lowest cost to serve; a whole raft of economic benefits from one customer group. There was another group of customers who were an absolute disaster: they often had negative lifetime value, they had higher cost to serve, and they never bought your more margin-rich products. And there were these guys in the middle who had some ambivalence and who could swing either way, depending on whether your competitor had a better offer.

And so we recognized that was the fundamental behavior we wanted to capture, and the state-of-the-art technology circa 2001 was, well, go and ask people using surveys. So there was this desire to map surveys onto it, and a 0-10 scale just happened to be a convenient fit statistically. I still think, fundamentally, there's a great elegance to this idea of dividing customers into three groups that behave with different economic patterns. That's a very simple but powerful concept to understand. It holds up economically: companies with better ratios of promoters still outperform those with worse ratios. And there's really been no better way of thinking about this that doesn't involve a lot of complexity.

I think what has failed us has been the mechanism for generating it. Surveying was a great mechanism circa 2001, but two things have happened since. One is that surveying has gotten harder and harder, partly because surveying costs have collapsed. As costs have collapsed, the volume has increased exponentially. And we all understand the tragedy of the commons: if you overuse something that's free, it gets overconsumed, and what we're overconsuming is our customers' time. Twenty years ago you might rarely have seen a survey; now, as a customer, you might get one every week, because everybody surveys you constantly. So response rates have naturally fallen, and this is an inevitable trend; there's no going back. We are saturating customers with surveys, and response rates will go down. That creates a quantitative problem, where you're not getting many responses, but the bigger problem is that you bias yourself increasingly toward a certain group of customers. So the whole mechanism is kind of collapsing.

At the same time, what's happened in just about every other part of business is completely the opposite. We've seen literally exponential growth in data; the data exhaust from everything else has increased massively. So survey data is going down and getting worse and worse, while every other type of data is going ballistic. The net effect is that if you go into any company today and look at it from a complete data perspective, the CX people coming into the room with their survey data look like they're bringing in the buggy whip from the early part of the last century, right? They're saying, "We managed to get 10% of our customers to fill in a form." And you've got the big data guys coming in and saying, "Well, we've got this data resource that captures everything the customer does, from the instant they're considered a prospect all the way through to the consumption of the products, and our biggest problem is figuring out how we're going to get our heads around this massive amount of data." And the survey guys are saying, "Well, response rates dropped from 7% to 5% this year, and we're getting less and less." One side has a mechanical problem just obtaining the data. So my basic thesis has always been: the metric is great, the mechanism is broken.

Mark Levy 
And the data exhaust is enormous. One of the challenges I see in a lot of companies I've talked to is that data is still very much siloed. And there is a difference between what you're looking for from business intelligence and from experience intelligence, and that's something I don't think many business intelligence or analytics teams fully understand, so there's an education that needs to happen there. The opportunity is to correlate that data together and bring it back to root cause, to be able to figure out what the problem is. To me, that's the Holy Grail. And it looks like you're trying to get to the Holy Grail a bit with OCX, so tell us a little bit about how that came about and what it is.


Richard Owen 
So when we sold Satmetrix in 2017, we sold it to NICE Systems, the contact center company. And we decided that if we were going to spend any more time in the CX space, we were going to focus on a problem we thought was really worth solving. We saw the data problem as the fundamental problem: if you couldn't get good data, it seemed like everything else was a waste of time.

And really, OCX came about from an observation: whether a customer is a promoter or a detractor, by whatever measurement mechanism you want, is probably determined by a relatively small list of things that happen. Let's say, for the sake of argument, 10 things. You may do 1,000 things for your customers, but only 10 of them are truly deterministic of the outcome. And if you could figure out which 10 things they were, and calibrate them correctly, so that you understood how relatively important they were and how they worked in combination to create an outcome, then it seemed entirely predictable whether a customer was a promoter. That basic germ of an idea said: well, is it possible for us to isolate the data patterns that create promoters, and therefore predict with accuracy whether someone is a promoter?

And if you think about it, it's actually quite intuitive. Look, if I'm a manufacturer, and the customer places an order for my product, and they're told it will be delivered in five days and it shows up in two weeks, it doesn't take a genius to say that the probability that that customer has become a detractor has increased. Now the question is how much we can quantify that, and that depends on two things. One is how important shipping is relative to everything else. Maybe the customer didn't care about shipping: the product shows up, it's so fantastic that they don't care, and they've forgiven your sins. Or maybe shipping is everything: it's Amazon, I could have bought this anywhere, and you showed up two weeks late. So it's a relative question about shipping, and about what is good enough. Is two weeks late terrible? Is 24 hours terrible?

So what we do is essentially isolate the variables that describe the experiences that matter to a customer, quantify them, and use that as a basis for prediction. You can get into a lot of interesting questions about machine learning, but basically machine learning is just statistics (don't say that too loudly to the data scientists). At the end of the day, it's applying computer processing to a bunch of data to see how the patterns in it create a prediction.
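To make the idea concrete, here is a minimal sketch in Python of "survey-trained prediction from operational variables." The feature names and the logistic-regression choice are illustrative assumptions, not OCX Cognition's actual pipeline.

```python
# Minimal sketch: learn which operational variables predict promoters.
# Feature names and the model choice are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical "experience" variables tracked for every account.
FEATURES = [
    "shipping_delay_days",          # promised vs. actual delivery
    "first_call_resolution_rate",
    "support_tickets_90d",
    "invoice_errors_90d",
]

def train_promoter_model(accounts: pd.DataFrame):
    """Fit on the minority of accounts that answered a survey.

    `accounts` holds the FEATURES columns plus an `nps_score`
    column (0-10) that is NaN for non-respondents.
    """
    respondents = accounts.dropna(subset=["nps_score"])
    is_promoter = (respondents["nps_score"] >= 9).astype(int)  # NPS: 9-10
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(respondents[FEATURES], is_promoter)
    return model
```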

So what we're essentially doing is saying: if someone doesn't answer a survey, we can still predict, with a high degree of accuracy, whether they're a promoter. The most obvious application for this is in relationship surveys, where the data is quite sparse and quite infrequent, and we can bulk that up. Where you got a 5% response rate, now you get 100% coverage of customers. Where you got an answer from a customer once or twice a year, now you can get it weekly, or daily, or hourly, as long as the underlying data is changing.
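Continuing the same hypothetical sketch (same imports and FEATURES as above), the payoff is that the trained model scores every account on each data refresh, not just the survey respondents:

```python
def score_all_accounts(model, accounts: pd.DataFrame) -> pd.DataFrame:
    """Score 100% of accounts, respondents or not."""
    scored = accounts.copy()
    # Probability that each account would rate as a promoter today.
    scored["p_promoter"] = model.predict_proba(scored[FEATURES])[:, 1]
    return scored

# Re-run as often as the underlying operational data changes, e.g. weekly:
# scored = score_all_accounts(train_promoter_model(accounts), accounts)
```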

So the vision was to solve the data problem, and I think by and large we've done that. There is still a lot to do in terms of the evolution of the algorithms and the data replication, but we've already seen with the early-adopting customers that they're getting a massive data set where they had a poor one, the data set looks very accurate, and they've solved the surveying problem, for all intents and purposes.

Mark Levy 
So if OCX proliferated, could we get away from surveys entirely, or is there still a benefit to both?


Richard Owen 
So, by a quirk of fate, as much as we set off saying maybe we could do without surveys, it turned out we became big fans of surveys, for a slightly different reason. We found that surveys were very poor at generating core data but extremely useful for teaching the machine to think like a customer. Correctly designed, the survey becomes a training mechanism. We're interested in relative impact, so we use surveying to gauge relative impact, to train the machine, and then the machine processes the data. Now, there are different ways to teach a machine to think like a customer, and surveys may not ultimately be the only way, or even the most impressive way. But right now, it's a really useful way to train machines.

At this point I often get asked a question that seems odd, but it makes perfect sense, because to people it looks like an inconsistency. They'll say: hold on, you're saying surveys are inherently inaccurate because of very small data sets, but then you're saying to use those same surveys to train the machine. Aren't you building in that inaccuracy? Surely you can't have it both ways.

Well, we're lucky in that regard, because customer relative choice (how they evaluate, in our example, shipping versus everything else) is extremely stable and benefits from large numbers across a large survey base. So the aggregated survey data set, even over several periods, maybe years, is very good at helping us understand how the customer thinks. It's just very poor at telling us how any individual customer thinks on any individual day. So we're using surveys for what they're really good at, which is researching relative impact, and ignoring them for what they're bad at, which is telling us exactly what any given customer thinks on any given day.
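In the terms of the earlier sketch, the stable thing extracted from pooled survey waves is a set of relative-impact weights; reading them off the standardized model coefficients is one illustrative way to do it, not necessarily the production method:

```python
import numpy as np
import pandas as pd  # same sketch as above; FEATURES defined there

def relative_impact(model) -> pd.Series:
    """Read stable relative-impact weights off the fitted sketch model.

    Coefficients come from standardized inputs, so their magnitudes
    are roughly comparable across features.
    """
    coefs = model.named_steps["logisticregression"].coef_[0]
    weights = pd.Series(np.abs(coefs), index=FEATURES)
    return weights / weights.sum()  # e.g. shipping might carry 40% of the weight
```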

Mark Levy 
So, in a practical implementation of OCX, say supporting a business that does e-commerce, has call centers, and wants to reduce calls: how do I get from this to using data to help CX professionals make decisions, and quantify those decisions, in real time or near real time?


Richard Owen 
The answer is: methodically. Most companies are, quite rightly, very nervous about data engineering problems. If you step through the door and say to a company, "Hand me all your data and I'll solve all your problems," it's like the highway robbers in old England, right? Your data or your life. I think most people feel like they're being held up in a highway robbery: we can't get IT resources, we've got all these datasets, integration would be a nightmare. So to deal with the real-world problem, that data is imperfect and badly organized, we do two fundamental tricks. They are both compromises, but compromises that don't fundamentally lose the momentum of the idea, and they make things practical.

The first compromise is that, within every industry, we force all the data into a single organizational framework. So if you're in the cable or telecom industry, we say: here's a framework for the telecommunications industry. And you, Comcast, may think you're different from everyone else in the telecommunications industry, but we're going to persuade you to adopt this framework for organizing data, for two very good reasons. One, we've already pre-coded it, so we've solved many problems algorithmically that you don't have to solve again. Two, we can build a data corpus across multiple telecommunications companies, and that will improve the overall accuracy.
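As an illustration only (every field name below is invented), such a shared framework amounts to mapping each company's raw exports onto one pre-defined event shape, so the downstream algorithms always see a single schema:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TelecomJourneyEvent:
    """One pre-defined shape for every telecom customer's data (illustrative)."""
    account_id: str
    stage: str         # e.g. "prospect", "onboarding", "usage", "support", "renewal"
    event_type: str    # e.g. "install_appointment", "outage", "bill_dispute"
    occurred_at: datetime
    value: float       # quantified measure, e.g. hours of outage

def normalize(vendor_row: dict) -> TelecomJourneyEvent:
    """Thin per-company adapter onto the shared framework.

    Each adopter writes one of these; the modeling code never changes.
    """
    return TelecomJourneyEvent(
        account_id=str(vendor_row["acct"]),
        stage=vendor_row.get("journey_stage", "usage"),
        event_type=vendor_row["type"],
        occurred_at=datetime.fromisoformat(vendor_row["ts"]),
        value=float(vendor_row.get("metric", 0.0)),
    )
```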

So it's a trade-off of flexibility for speed of implementation, convenience, and ultimately a collective data corpus and benchmarking. That's one trick, or compromise. The second major compromise is that we build these data assets methodically over time. We're not going to start with everything, and we're not going to start with everything being automated. We're going to start with a subset of data, a minimum viable set, and then expand across the rest of the customer journey over time.

Now, there's obviously a price to be paid for that, in two ways. Starting with less data does affect accuracy to some degree, but not as much as you might think; as long as we get the most critical elements, we'll still have decent accuracy. What it tends to affect more is attribution. If I'm not collecting, say, marketing data, then I can't accurately attribute failure or success to marketing. So I narrow my breadth of insights if I don't get all this data in. And what we've found for most customers is that this is just a real-world practical approach: start relatively small, make it work, then develop at scale and add more over time.

And by the way, I think this is the future of all data engineering. Any company today that wants to build data assets has to recognize it's on a multi-year, maybe never-ending, path to build more and more data assets, and the goal is to sweat the assets you have en route to nirvana. Anyone who comes in and says, "You know what? We've solved this. We're going to have this great big Snowflake application in the cloud, everyone's going to go into it, and it's going to be perfectly organized"? Just you wait. That's absolutely never going to happen. You're really in a race to the starting line, and to some extent we've got to take that approach if we're going to be practical, so we compromise and use data engineering technique to navigate it. But that's no reason not to start: if you're waiting for things to look good, you'll never start.

Mark Levy 
Yeah, so who are the users inside an organization?


Richard Owen 
So that's a great question. We really see two predominant audiences, and it aligns to a large degree with what CX has traditionally served. And I have to say, most of our work is business-to-business, so, fair disclosure, we're mostly spending our time in that space.

The two major audiences: first of all, the executive team. The executive team, as it's always been in CX, basically wants two or three questions answered: Are we any good? Where are we? And what can we do to improve? We can answer those questions a lot better, because we can now say: here's your entire customer base, your entire account base, and here's where you're red, here's where you're amber, and here's where you're green. But we can also now attribute back to the operational data elements that drive it. So if you want to run the company to create promoters, you need to ship products in two days, very specifically: three days is too slow, and one day is unnecessary. You need to have 80% first-call resolution.

So we want to design a highly calibrated customer scorecard across multiple functions, with both the correct measures and the right calibration of those measures. That's what executive staff want: a formula, or recipe, that says if we do these 10 things right (and "right" means this), we're going to have great outcomes. Then they can assign that to the organization and hold people accountable. NPS is an outcome of it, but these are the real metrics used to run the company, the operating metrics: how we manufacture, how we service. So that's one audience.
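A toy version of that calibrated scorecard might look like the following; the thresholds echo the examples in the conversation but are purely illustrative:

```python
# Calibrated scorecard sketch: each operating metric gets a target band.
SCORECARD = {
    # metric: (red_beyond, green_target, higher_is_better)
    "ship_days": (3.0, 2.0, False),               # 3 days too slow, 2 is the target
    "first_call_resolution": (0.70, 0.80, True),  # 80% FCR is the target
}

def rag_status(metric: str, value: float) -> str:
    """Map one operating metric to red/amber/green."""
    red, green, higher_is_better = SCORECARD[metric]
    if not higher_is_better:  # flip signs so "bigger is better" everywhere
        value, red, green = -value, -red, -green
    if value >= green:
        return "green"
    return "amber" if value > red else "red"

# rag_status("ship_days", 2.0) -> "green"
# rag_status("ship_days", 3.5) -> "red"
```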

The other audience, not surprisingly, and I think it's always been a tougher nut to crack in CX, is the frontline. I'm particularly interested in sales teams, or whichever teams are accountable for the financials of the customer. In some companies that shows up as customer success, in SaaS; in non-SaaS businesses it's often account management. But there's always somebody on the hook to have that customer renew or upsell, and we want to give them a useful tool. We're not saying, by the way, and I don't think it would be true, that the magic predictive engine in the sky is right and you're wrong; we're not telling you, "Your customer is a detractor. You may think they're perfectly happy, but you're wrong, the machine says they're a detractor." That's not our intent.

What we recognize is that for most companies today, it's pure guesswork and human judgment if you ask a sales team or an account manager whether an account is in good health. And that guesswork is highly biased: it depends on who they speak to, how they interpret it, whether they're told the truth, whether they're talking to the wrong people. What we're trying to do is give them an objective data source, and the data source basically tells them this: if your customer behaves rationally and logically, the way other customers have demonstrably behaved given the experience they've had, they would be a promoter, or they would be a detractor. Now take that piece of information, use it to refine your own judgment, and go off and improve your decision-making. There's a great piece of research done at MIT on how augmenting human decision-making with data results in much better decisions, better than machines on their own or humans on their own. It sounds a bit like a pitch for a science fiction series, but what we're saying is: augment human decision-making with data and you'll get better decisions.

So we wanted to put these customer-facing teams on a similar page in terms of understanding the health of the customer, whether they're in tech support, customer success, sales, or marketing: put them all on a single page. And we want to give them data that enables better decision-making, data for every account, data that's very current. If we can get people that data, and it's accurate, I believe it will change their perspectives and their behavior, they will make better decisions, and better decisions will improve the outcomes companies get. So those are the two big audiences.

Mark Levy 
And how are the selling and implementation processes going at the moment? You say start small; who defines the use cases? How is that coming together with your sales team?


Richard Owen 
Yeah. So like every relatively early-stage company (we've been in business for two years), this is evolving very rapidly. Every time we go through a customer implementation, we're learning something new and revising. I think what we've discovered by now is a very good process that has worked reliably for us in terms of data engineering and the design of use cases. What we're finding is that the customers who are now the most mature, and the platform has been shipping for just over a year, are challenging us with questions we probably didn't anticipate, to be honest, when we started. You know: if you've got 100 times more data, do you use it the same way? Almost everyone in the CX data universe is living in a world of scarcity, right? I have relationship data once or twice a year, on 10% of my accounts. What happens when you have it every week on 100% of your accounts? The type of analysis you want to run, the way you want to use that data, the way that data flows around the company, the way salespeople use it: all of that changes.

Here's a very simple example. Early on, one of the things that worried us a lot was that the data was very volatile. What if the algorithm says that this week your customer changed from a passive to a detractor, and the salespeople run off to do something about it, and then, wait a second, a new piece of data comes in and they're back to a passive?

So you don't want to create a highly volatile system. You've moved off that very, very static data set, and these are problems you never would have faced in the past, because you didn't have the data. That changes your idea of use cases, of how people should process information, of how you want people to respond, and of how you want executive teams to think about CX. You have to think of everything in probabilistic terms, and early on that was hard for people to get their heads around, because they were used to interpreting CX data very literally.
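One way to damp that volatility (a sketch with invented thresholds, not OCX's actual logic) is to smooth the weekly promoter probability and add hysteresis before anyone gets re-labeled:

```python
def smoothed_label(prev_ema: float, new_p: float, prev_label: str,
                   alpha: float = 0.3) -> tuple[float, str]:
    """Smooth the weekly promoter probability, then re-label with hysteresis.

    A single noisy week moves the average a little, not the label; the
    label flips only when the smoothed score clears a wide band. All
    thresholds here are invented for illustration.
    """
    ema = alpha * new_p + (1 - alpha) * prev_ema  # exponential moving average
    if prev_label != "detractor" and ema < 0.25:
        return ema, "detractor"
    if prev_label != "promoter" and ema > 0.75:
        return ema, "promoter"
    if prev_label in ("promoter", "detractor") and 0.35 < ema < 0.65:
        return ema, "passive"
    return ema, prev_label  # otherwise hold the previous call
```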

Mark Levy 
Right. And then they are saying, What do I do about this?


Richard Owen 
Right. "What do I do about this? I used to know someone was a detractor because they filled in a survey saying, 'I'm a detractor.' Now you're telling me they're probabilistically a detractor? What do I do with that?" Actually, the two data points aren't that different; a survey is essentially an estimate anyway. We're still just using probabilities, it's just not what we're used to. So we're educating management teams on how to think about predictive datasets.

Mark Levy 
Can you predict behavior?


Richard Owen 
What we can predict is that somebody is a promoter, and from that we have a very good idea of how they will behave. We can predict churn, for example, which is an interesting problem, because being a promoter doesn't necessarily mean somebody doesn't churn. There are other variables that connect loyalty to churn; the two aren't synonymous. Contractual stickiness, for instance; there's a whole variety of things, so we can make connections like that. But churn prediction is slightly different. I know a lot of people do churn modeling, but the modeling isn't terribly useful operationally. It says things like, "If the customer is in Louisiana, they're more likely to churn." Great, but what I really want to know is why: what is it we're doing that creates that? So I don't particularly love churn modeling, for that reason; I think it gives a false sense of fidelity to the data and a false sense of accuracy. What everyone wants to know is what we should do in our operation to create better outcomes. That's a CX approach, not really a churn modeling approach.
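To illustrate the point that loyalty and churn aren't synonymous (all weights invented, purely a toy), churn risk can be sketched as loyalty pulling one way and something like contractual lock-in pulling the other:

```python
def churn_risk(p_promoter: float, months_left: int, switching_cost: float) -> float:
    """Toy churn risk in [0, 1]: promoter status alone doesn't decide it.

    All weights are invented for illustration. A promoter with no
    contract can still churn; an unhappy but locked-in customer may
    not. `switching_cost` is normalized to [0, 1].
    """
    loyalty_pull = 1.0 - p_promoter                 # unhappier = riskier
    lock_in = 0.5 * min(1.0, months_left / 24) + 0.5 * min(switching_cost, 1.0)
    return loyalty_pull * (1.0 - lock_in)

# churn_risk(0.9, 0, 0.0)  -> 0.1 (happy, but free to leave)
# churn_risk(0.1, 24, 1.0) -> 0.0 (unhappy, yet fully locked in)
```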

Mark Levy 
So, looking ahead: what do the next five years look like for companies that implement predictive CX analytics?


Richard Owen 
Well, first of all, the data is going to get bigger and bigger and richer and richer, and that will drive two things. It will create more and more accuracy in the datasets, and it will create much, much bigger datasets. And that will lead us to whole new sets of analytics no one has thought of yet.

A good analog is the stock market, which has an immense amount of real-time data. For the longest time, people were scratching their heads saying, "How do we figure out correlations in stock performance?" And then: let's create a metric called beta, let's look at the beta of a stock, or its alpha. These are really useful metrics; all of a sudden we have analytics that, to some extent, provide insight into the behavior of a stock, abstracted from the pure random noise that occurs every single day. So I think there are going to be much richer analytics.

The second point is that analytics is going to be integrated across all of these different capabilities within the business. Marketing data, sales data, and support data are going to be joined together with CX data, so companies can answer much more interesting questions. And I think that's going to unlock a lot more creativity for analysts, letting them solve interesting problems that were simply impossible in the past because the data didn't exist.

But I would say the most profound impact is going to be on the way frontline people do their jobs. Data-driven decision-making is going to change the way people work in every profession. CX people were always very data-oriented, so, okay, that's not too scary. But for a lot of salespeople or support people, it's not the universe they've historically lived in; the skills that made them successful weren't necessarily about being very smart with data. We have to find a way to make that data useful for them. We need to build datasets that people like sales reps find incredibly useful and that make them feel more effective. And that's going to be the challenge for us all: making that data high-value to the end users.

Mark Levy 
Yeah, that's a great vision. And I'm excited to see where it all goes. And I want to thank you for sharing your experience and OCX with us and wish you all the best.

Richard Owen

Thanks very much, Mark
