Six ways customer service leaders get NPS wrong
Evaluating and improving customer satisfaction means more than a score
When the Net Promoter Score came into fashion in business, it seemed everyone was eager to talk about it. It also seemed simple: Ask one question to learn what portion of your customers are so satisfied with your business that they would recommend it to a friend or colleague.
Unfortunately, the system first popularized in Fred Reichheld’s book, The Ultimate Question, has been passed from executive to executive so often that both understanding it and administering it correctly have fallen apart. Its value in evaluating customer loyalty has likewise fallen, so that NPS, in many cases, has become lipstick on the pig of a flawed customer service system.
To help you determine whether your customer satisfaction scoring system is spot-on or needs a tune-up, here are the six most common ways NPS is misused, as identified by a Satmetrix-certified NPS Associate.
#1 – Getting the question wrong
NPS is a measurement of customer satisfaction. It asks just one question, “How likely are you to recommend [company/product] to a friend or colleague?” and uses a simple, 0-10 scale, with 10 being most likely.
What makes NPS particularly interesting, compared to other customer sat scores, is the phrasing of the question. By asking whether customers would recommend a company/product to a friend or colleague, it goes beyond mere satisfaction to predict loyalty and advocacy and therefore long-term company growth.
Often, we see companies claiming to measure NPS when in fact the question they ask is, “How satisfied are you?” or they give customers a series of ten, twenty, or more questions about every detail of the experience. Both can skew the results positively: in the first case, people tend to rate their satisfaction more generously than their willingness to recommend a brand. In the second case, neutral and dissatisfied customers are the least willing to tackle a long survey, so their responses are underrepresented.
#2 – Failure to follow up
Although NPS promotes the concept of asking just one question, an open comment box for follow-up is a key factor in understanding the customer’s mindset. What if they rated you a 9 but noted your customer service rep had failed to follow up? What if they rated you a 7 but said your service was “great, just too expensive”? And what if they gave you a 5 and said they don’t refer you because your competitor offers referral incentives but you do not?
The neutral follow-on question, typically phrased “What is the primary reason for your score?”, does several things. First, it identifies what matters most to the customer: the primary reason behind the number. Second, it allows praise or criticism; the customer controls whether to be positive or negative, regardless of their numerical score. Finally, it invites an open-ended, specific comment of any length that business leaders can act on.
You might also want to add follow-up questions about specific elements of the experience. NPS methodology discourages long surveys on the principle that short, simple surveys get to the heart of the most significant issues. If you want to drill into specifics, you could choose 3-5 categories and ask, “Please rate your experience with each of these: communication, technical expertise, product quality,” and so on.
#3 – Calculating the NPS score incorrectly
Once you have answers to your NPS question, how do you calculate responses? Let’s say you have 110 responses with this answer distribution:
- Score 10 = 10 people – these are classified “promoters”
- Score 9 = 10 people – these are classified “promoters”
- Score 8 = 10 people – these are classified “passive”
- Score 7 = 10 people – these are classified “passive”
- Score 6 = 10 people – these are classified “detractors”
- Score 5 = 10 people – these are classified “detractors”
- Score 4 = 10 people – these are classified “detractors”
- Score 3 = 10 people – these are classified “detractors”
- Score 2 = 10 people – these are classified “detractors”
- Score 1 = 10 people – these are classified “detractors”
- Score 0 = 10 people – these are classified “detractors”
Now, take the number of people in each category (promoter, passive, or detractor) and divide by the total to find the percentage in each category.
- Promoters = 20 people = 18.2%
- Passives = 20 people = 18.2%
- Detractors = 70 people = 63.6%
To find the NPS, subtract the percentage of detractors from the percentage of promoters (passives are left out of the calculation):
- 18.2% – 63.6% = -45.4 NPS
NPS is not expressed as a percentage. It can range from -100 (everyone is a detractor) to +100 (everyone is a promoter). So while a score of “30” might look like a failing grade on a test, it’s actually above the middle score, which is 0.
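As a quick sanity check, the calculation above can be sketched in a few lines of Python. The 9-10 promoter, 7-8 passive, and 0-6 detractor bands are the standard NPS cutoffs; the example distribution matches the one in the table.

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 ratings."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9s and 10s
    detractors = sum(1 for s in scores if s <= 6)  # 0 through 6
    # NPS = % promoters minus % detractors, reported as a plain number
    return 100 * (promoters - detractors) / total

# The example distribution above: 10 responses at every score from 0 to 10
responses = [score for score in range(11) for _ in range(10)]
print(round(nps(responses), 1))  # -45.5
```

Note that rounding each percentage before subtracting, as the worked example does, can shift the result by a few tenths of a point (-45.4 versus -45.5 here); either way, the score is conventionally reported as a whole number, -45.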
#4 – Creating survey bias
It is recommended, to avoid bias, that large-scale surveys be deployed through a third party. This can avoid undue influence from internal customer service reps, who might say “We’re striving for all 10s on our survey!” to increase their scores.
You can offer survey-takers various degrees of confidentiality: making the survey entirely blind, which prevents you from following up directly on individual complaints; anonymizing it at a group level (for example, grouping the scores of customers from the same organization); or offering no anonymity at all. If your intention is to find and fix individual problems, we recommend no anonymity. If you intend to gauge success on a macro level with hundreds of survey-takers, an anonymous survey might work best.
Ask customers your NPS question no more often than every six months to avoid survey fatigue; once a year works well as a benchmark.
#5 – Failing to put NPS scores in context
Context is key for NPS. The best way to express it is in relative terms to another score. For example, we might say “Software X has an NPS score of 27, which is 9 points higher than the software industry average of 18, and 2 points higher than Software X’s score last year.”
By understanding the NPS typical of your industry—or better yet, by learning the NPS scores of key competitors, which are often made public by various research firms—you’ll soon learn whether your score is strong or needs improvement.
#6 – Failing to create an action plan
When designing an NPS survey, the key elements are:
- Use the NPS question “On a scale of 0-10, with 10 being highest, how likely are you to recommend X to a friend or colleague?”
- Use the 0-10 scoring framework, with 10 being the highest, for the greatest fidelity to NPS methodology.
- Ask a neutral follow-on question, “What is the primary reason for your score?”
- Develop a business action plan to analyze results, share them with key stakeholders, and follow up with customers.
This last step is where most companies fail to follow through with the NPS methodology. Here are some ways companies can capitalize on NPS as more than a score:
- Segment customers based on their scores into promoter, passive, detractor, and no-score categories, and send them different targeted messages. For example, your promoters might receive a message encouraging referrals; your passives might receive a message about an improved program; and your detractors might receive a special offer to come back again.
- Develop a list of themes you’ve seen most often in comments, and task various business leaders with solving them.
- Share comments in your company blog or newsletter and discuss how you’re addressing them. This can show customers who answered an anonymous survey that you listened, and more importantly, took action.
- Associate key findings with business activities and revenue to put the score—and future scores—in context.
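The segmentation step in the first bullet above can be sketched in a few lines of Python. The score bands are the standard NPS cutoffs; the customer names are illustrative placeholders.

```python
def segment(score):
    """Map a 0-10 rating (or None when no score was given) to an NPS segment."""
    if score is None:
        return "no-score"
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Illustrative survey responses: (customer, score)
responses = [("Ana", 10), ("Ben", 7), ("Carla", 4), ("Dev", None)]

# Group customers so each segment can receive a different targeted message
groups = {}
for customer, score in responses:
    groups.setdefault(segment(score), []).append(customer)
# e.g. promoters get a referral ask; detractors get a win-back offer
```

From here, each list in `groups` becomes the audience for one targeted campaign.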
A strong action plan should include the who, what, and when (timeframe) of direct follow-up. This action plan can be segmented in three parts: changes to how the internal organization will operate (how we act), changes to policies (how we govern), and changes to training (how we evolve).