Your CEO wants a customer satisfaction number for the board deck. Your VP of Product wants to know which features are failing. Your support lead wants to measure ticket resolution quality. CSAT, NPS, and CES each measure something fundamentally different, and choosing the wrong one means blind spots that cost you customers. This guide compares all three with formulas, benchmarks, and a decision framework.
Before diving into each one, here's a side-by-side comparison:
| Dimension | CSAT | NPS | CES |
|---|---|---|---|
| Full Name | Customer Satisfaction Score | Net Promoter Score | Customer Effort Score |
| Measures | Satisfaction with a specific interaction | Loyalty and likelihood to recommend | Ease of completing a task |
| Question | "How satisfied were you with [experience]?" | "How likely are you to recommend us?" | "How easy was it to [complete task]?" |
| Scale | 1 to 5 (or 1 to 7) | 0 to 10 | 1 to 5 (or 1 to 7) |
| Score Range | 0% to 100% | −100 to +100 | 1 to 5 (average) |
| Formula | (Satisfied responses / Total) x 100 | % Promoters − % Detractors | Sum of scores / Total responses |
| Best For | Transactional touchpoints | Overall loyalty tracking | Support and onboarding flows |
| Time Horizon | Immediately after interaction | Quarterly or biannual | Immediately after interaction |
| Actionability | High (tied to specific touchpoint) | Low (general sentiment) | High (tied to specific process) |
| Benchmarkability | Moderate | High | Low |
| Tells you "why" | No (unless follow-up added) | No (unless follow-up added) | No (unless follow-up added) |
Now let's examine each metric in detail.
CSAT measures how satisfied a customer is with a specific interaction, transaction, or experience. It's the most direct satisfaction metric: you ask people if they're happy, and they tell you.
"How satisfied were you with [specific experience]?"
Respondents typically answer on a 1 to 5 scale:
| Rating | Meaning |
|---|---|
| 1 | Very Unsatisfied |
| 2 | Unsatisfied |
| 3 | Neutral |
| 4 | Satisfied |
| 5 | Very Satisfied |
CSAT = (Number of satisfied responses / Total responses) x 100
"Satisfied" typically means respondents who selected 4 or 5. Some companies include 3 (neutral), but standard practice counts only the top two scores.
You survey 150 customers after a support interaction. Say 45 select 5 (Very Satisfied) and 60 select 4 (Satisfied):
CSAT = (45 + 60) / 150 x 100 = 70%
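If you want to compute this programmatically, here's a minimal Python sketch. The rating breakdown below the top-two scores (25 threes, 12 twos, 8 ones) is hypothetical, chosen only to fill out the 150 responses from the example:

```python
def csat(ratings: list[int], threshold: int = 4) -> float:
    """CSAT = satisfied responses (rating >= threshold) / total, as a percentage."""
    satisfied = sum(1 for r in ratings if r >= threshold)
    return satisfied / len(ratings) * 100

# Hypothetical data matching the example: 45 fives, 60 fours,
# and an assumed split of the remaining 45 low ratings.
ratings = [5] * 45 + [4] * 60 + [3] * 25 + [2] * 12 + [1] * 8
print(f"CSAT: {csat(ratings):.0f}%")  # CSAT: 70%
```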
Post-support interactions: "How satisfied were you with the help you received?" This tells you whether your support team meets expectations at the individual ticket level.
After onboarding: "How satisfied are you with the setup process?" A low CSAT here reveals onboarding friction before it shows up in churn numbers weeks later.
Feature-specific feedback: "How satisfied are you with our reporting dashboard?" This lets you measure satisfaction with specific product areas rather than the product overall.
CSAT is inherently short-term. A customer can give your support team a 5/5 today and still churn next month because your product is missing a critical feature. CSAT captures the moment but not the relationship.
It's also susceptible to recency bias. According to Zendesk's 2024 CX Trends Report, 73% of customers say a single bad interaction can override months of positive experiences. CSAT reflects this: the most recent interaction dominates the score.
| Range | Interpretation |
|---|---|
| Below 60% | Below average; significant issues |
| 60% to 70% | Average; room for improvement |
| 70% to 80% | Good; meeting most expectations |
| 80% to 90% | Very good; exceeding expectations |
| Above 90% | Excellent; rare and hard to maintain |
Most SaaS companies aim for 75% to 85% on post-support CSAT. The American Customer Satisfaction Index (ACSI) reports an average of 77% across the software industry.
Key takeaway: CSAT is your best metric for measuring specific touchpoints. If you only measure one thing about your support team, measure CSAT. But never use CSAT alone to judge overall product health.
NPS measures customer loyalty by asking how likely someone is to recommend your product. Unlike CSAT, which measures a moment, NPS attempts to measure the overall relationship.
We cover NPS in depth in our complete NPS guide, but here's the essential summary.
"On a scale of 0 to 10, how likely are you to recommend [product] to a friend or colleague?"
| Group | Score | Description |
|---|---|---|
| Promoters | 9 to 10 | Loyal advocates who will refer others |
| Passives | 7 to 8 | Satisfied but not enthusiastic; vulnerable to competitors |
| Detractors | 0 to 6 | Unhappy, at risk of churning and discouraging others |
NPS = % Promoters − % Detractors
Of 300 survey responses, 150 are Promoters (50%) and 60 are Detractors (20%):
NPS = 50% − 20% = +30
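The same calculation as a Python sketch. The score distribution is hypothetical, constructed to match the example above (150 promoters, 90 passives, 60 detractors):

```python
def nps(scores: list[int]) -> int:
    """NPS = % Promoters (9-10) minus % Detractors (0-6), rounded."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)

# Hypothetical distribution: 150 promoters, 90 passives, 60 detractors.
scores = [10] * 150 + [8] * 90 + [5] * 60
print(nps(scores))  # 30
```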
Board and investor reporting: NPS is universally understood. Bain & Company reports that approximately two thirds of the Fortune 1000 use NPS, making it the most recognized customer loyalty metric in business.
Trend tracking over time: Measuring NPS quarterly reveals whether your product decisions are improving or hurting customer loyalty. A drop from +40 to +28 over two quarters is a clear signal something is wrong.
Competitive benchmarking: Because so many companies use NPS, you can compare against industry averages. Retently's 2024 benchmark data puts the average B2B SaaS NPS at 36.
NPS doesn't tell you what to fix. A score of +25 means you have more promoters than detractors, but it gives your product team nothing to act on. That's why NPS surveys should always include an open-ended follow-up question.
The 0 to 6 detractor range is also notoriously blunt. A score of 6 and a score of 1 carry equal weight in the formula, losing important nuance.
A score between 30 and 40 is considered good for B2B SaaS. Above 50 is excellent. Below 0 means more detractors than promoters and signals an urgent need to act.
Try it yourself: Set up a feedback board alongside your NPS surveys to capture the "why" behind every score. No credit card required.
CES measures how easy it was for a customer to accomplish a specific task. The insight behind CES is counterintuitive. Research from the Corporate Executive Board (now part of Gartner) found that reducing customer effort is a stronger predictor of loyalty than delighting customers.
Their landmark 2010 study, published in Harvard Business Review as "Stop Trying to Delight Your Customers," backs this up. They found that 96% of high-effort interactions led to disloyalty, compared to only 9% of low-effort ones.
"How easy was it to [complete specific task]?"
Respondents typically answer on a 1 to 5 scale:
| Rating | Meaning |
|---|---|
| 1 | Very Difficult |
| 2 | Difficult |
| 3 | Neither Easy nor Difficult |
| 4 | Easy |
| 5 | Very Easy |
CES = Sum of all scores / Total number of responses
CES is reported as an average rather than a percentage.
100 customers rate the ease of resolving a support ticket: 20 give a 5, 35 a 4, 25 a 3, 12 a 2, and 8 a 1:
CES = (20x5 + 35x4 + 25x3 + 12x2 + 8x1) / 100 = 3.47
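And the CES average in the same sketch style, using the exact distribution from the example:

```python
from statistics import mean

def ces(scores: list[int]) -> float:
    """CES is a simple average of 1-5 effort ratings, not a percentage."""
    return mean(scores)

# The distribution from the example: 20 fives, 35 fours, 25 threes,
# 12 twos, 8 ones (100 responses total).
scores = [5] * 20 + [4] * 35 + [3] * 25 + [2] * 12 + [1] * 8
print(f"CES: {ces(scores):.2f}")  # CES: 3.47
```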
Post-support ticket resolution: CES is purpose-built for this. A low CES on support interactions tells you that even when you solve the problem, the process of getting it solved is frustrating.
Onboarding flows: "How easy was it to set up your account?" Low CES during onboarding predicts higher churn within the first 90 days. A Gainsight study found that customers with high-effort onboarding are 62% more likely to churn in the first year.
Self-service interactions: "How easy was it to find the answer in our help center?" This is invaluable for measuring knowledge base effectiveness.
Checkout and upgrade flows: "How easy was it to upgrade your plan?" High-effort upgrade processes directly cost you revenue.
CES is narrow by design. It tells you whether a specific process was easy but says nothing about overall satisfaction or loyalty. A customer can find your support process effortless (CES 4.8) while being deeply unhappy with your product's core functionality.
CES also doesn't capture emotional satisfaction. A frictionless but cold, impersonal support experience can score well on CES but poorly on CSAT.
| Score | Interpretation |
|---|---|
| Below 3.0 | High effort; causing friction and likely churn |
| 3.0 to 3.5 | Moderate effort; room for improvement |
| 3.5 to 4.0 | Low effort; meeting expectations |
| 4.0 to 4.5 | Very low effort; smooth experience |
| Above 4.5 | Effortless; outstanding |
Here's a detailed breakdown across every dimension that matters for choosing the right metric:
| Dimension | CSAT | NPS | CES |
|---|---|---|---|
| What it measures | Satisfaction with specific interaction | Overall loyalty and advocacy | Ease of task completion |
| Survey timing | Immediately after interaction | Quarterly or biannual | Immediately after interaction |
| Actionability | High (tied to specific touchpoint) | Low (general sentiment) | High (tied to specific process) |
| Benchmarkability | Moderate (varies by touchpoint) | High (standardized globally) | Low (no universal benchmark) |
| Predicts churn | Moderately | Moderately | Strongly (for effort-related churn) |
| Predicts growth | Weakly | Strongly (promoters drive referrals) | Weakly |
| Response rates | High (short, contextual) | Moderate (requires email outreach) | High (short, contextual) |
| Implementation effort | Low | Low | Low |
| Best audience | Support, onboarding, specific features | Entire customer base | Support, onboarding, self-service |
| Captures emotion | Yes (satisfaction is emotional) | Partially (loyalty implies emotion) | No (measures process, not feeling) |
| Tells you "why" | No (unless follow-up added) | No (unless follow-up added) | No (unless follow-up added) |
That last row is critical. None of these metrics inherently tell you why customers feel the way they do. They all require follow-up questions for qualitative context, and response rates on those follow-ups are always lower than on the rating itself.
Key takeaway: No single metric covers everything. CSAT measures the moment, NPS measures the relationship, and CES measures the process. The best teams use all three at the right touchpoints.
The answer for most SaaS companies is: more than one, at different touchpoints.
Here's a realistic implementation that balances coverage with survey fatigue:
| Metric | Where | When | Frequency |
|---|---|---|---|
| NPS | Email to full customer base | Rolling quarterly | Each customer surveyed once per quarter |
| CSAT | In-app after support ticket close | Immediately | Every resolved ticket |
| CSAT | In-app after onboarding milestone | After completing setup | Once per user |
| CES | In-app after support ticket close | Immediately | Every resolved ticket (alongside CSAT) |
| CES | In-app after key workflow | After first use of core feature | Once per user |
This gives you loyalty tracking (NPS), touchpoint satisfaction (CSAT), and friction identification (CES) without overwhelming your users. No customer should receive more than one survey per week.
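If you run surveys in-app, the one-survey-per-week rule is easy to enforce with a small gate. This is a sketch under assumptions: it uses an in-memory dict where production code would read a `last_surveyed_at` timestamp from your customer record:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical in-memory store; in production this lives in your database.
last_surveyed_at: dict[str, datetime] = {}

COOLDOWN = timedelta(weeks=1)

def may_survey(customer_id: str) -> bool:
    """True if this customer hasn't received any survey in the past week."""
    last = last_surveyed_at.get(customer_id)
    return last is None or datetime.now(timezone.utc) - last >= COOLDOWN

def record_survey(customer_id: str) -> None:
    """Call whenever any survey (NPS, CSAT, or CES) is shown."""
    last_surveyed_at[customer_id] = datetime.now(timezone.utc)
```

Gating all three survey types through the same cooldown is what keeps the schedule in the table above from stacking up on a single user.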
Here's the part that doesn't get discussed enough. CSAT, NPS, and CES are all point-in-time survey metrics. They capture how a customer feels at the moment of response. They don't capture what customers think between surveys.
This creates three significant blind spots:
Between quarterly NPS surveys, a lot happens. Features ship, competitors launch new products, pricing changes hit, and customer needs evolve. McKinsey's 2023 State of Customer Care report found that customer expectations change 2 to 3x faster than most companies update their measurement approaches. Survey-based metrics miss all of this.
Even with follow-up questions, survey qualitative data is shallow. A customer who writes "needs better reporting" in an NPS follow-up isn't telling you which reports or for which use case. There's no way for them to prioritize that against other improvements. No conversation, no upvoting, no way for other customers to say "yes, me too."
Surveys tell you how many people are unhappy. They don't tell you which improvements would satisfy the most customers. Ten detractors can cite ten different reasons. Which one should you fix first?
Key takeaway: Surveys tell you the score. Feedback boards tell you what to build next. The two are complementary, not competing.
This is where continuous feedback fills the gap. A feedback board with voting gives you always-on signal. Instead of waiting for the next NPS cycle to learn that customers want better integrations, you can see that request accumulate votes in real time.
The combination works like this: surveys quantify the problem on a schedule, while the feedback board captures the specifics, and the votes behind them, continuously. The numbers bear this out:
Across 6,035 product teams, ProductLift has collected over 157,624 feedback items and helped ship 39,406 features based on that continuous signal. That's the kind of volume and specificity no quarterly survey can match.
Try it yourself: Launch a feedback board alongside your survey metrics and see how much richer the signal becomes. No credit card required.
Rather than replacing surveys, continuous feedback makes them more powerful. Here's how they complement each other in practice.
When NPS drops from +38 to +29, the follow-up responses give you fragments: "missing features," "too expensive," "competitors are better." Your feedback board gives you the full picture. The top three feature requests all relate to a workflow you changed last quarter, and they have 200+ combined votes. That's actionable.
Post-support CSAT of 65% tells you customers are unhappy with support interactions. Your feedback board shows that 40% of support tickets stem from a confusing settings page. Fix the settings page and support CSAT improves without changing anything about the support team itself.
An onboarding CES of 2.8 tells you setup is painful. Your feedback board shows that the top-voted request is "let me import data from a CSV." Now you know exactly which friction to remove.
ProductLift's Stripe integration adds a layer none of these metrics can: revenue context. When you see that a feature request has 50 votes, you can also see that those 50 voters represent $45,000 in MRR. Compare that to another request with 80 votes but only $8,000 in MRR. The prioritization decision becomes much clearer.
With user segments, you can filter feedback by MRR range, plan type, and customer status (active, trial, churned). A detractor paying $5,000/month who wants a specific integration is a very different signal than a free trial user who wants the same thing.
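Here's one way that revenue weighting might look in code. The data shapes are assumptions for illustration (a vote list of customer IDs plus an MRR lookup), not ProductLift's actual API:

```python
from dataclasses import dataclass

@dataclass
class FeatureRequest:
    title: str
    voter_ids: list[str]

def prioritize(
    requests: list[FeatureRequest],
    mrr_by_customer: dict[str, float],
) -> list[tuple[str, int, float]]:
    """Rank requests by the total MRR of their voters, not raw vote count."""
    ranked = [
        (
            r.title,
            len(r.voter_ids),
            sum(mrr_by_customer.get(v, 0.0) for v in r.voter_ids),
        )
        for r in requests
    ]
    return sorted(ranked, key=lambda row: row[2], reverse=True)

# With this ranking, 50 votes representing $45,000 in MRR outranks
# 80 votes representing $8,000.
```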
You just shipped a redesigned dashboard. Which metric do you use?
Best choice: CSAT. Survey users who have interacted with the new dashboard: "How satisfied are you with the new dashboard?" This gives a direct measure of whether the redesign landed well.
NPS would fail here because it measures overall loyalty, not reaction to a specific change.
Your churn rate jumped from 3% to 5% over two months.
Best choice: NPS + continuous feedback. NPS shows whether detractors are increasing (confirming the problem). Your feedback board and open-ended responses tell you why. Maybe a competitor launched a feature you lack. Maybe your pricing change frustrated mid-tier customers.
CES would fail here because churn driven by missing features or pricing has nothing to do with effort.
Customers complain that getting help takes too long.
Best choice: CES + CSAT together. CES tells you whether the process is easy. CSAT tells you whether the outcome was satisfying. A customer can find the process easy (high CES) but the answer unhelpful (low CSAT), or vice versa.
You have limited engineering resources and six competing feature requests.
Best choice: None of these metrics. This is where surveys fall short entirely. You need a feature voting board where customers can upvote what matters most. Combine that with revenue data from your Stripe integration to weight votes by customer value. ProductLift's Journey Model connects this signal directly to your roadmap, then communicates what shipped through your changelog.
Collecting data without acting on it is worse than not collecting at all: it wastes customers' time and erodes trust. Whichever metrics you run, close the loop by telling customers what changed because of their input.
Ready to close the gap between scores and action? Start a free trial and see how continuous feedback turns survey insights into shipped features. No credit card required.
Yes, and most mature SaaS companies do. The key is using each metric at the right touchpoint. Use NPS for overall loyalty (quarterly), CSAT for specific interactions (post-support, post-onboarding), and CES for process efficiency (post-support, post-self-service). Be careful about survey fatigue: no customer should receive more than one survey per week.
Gartner's research (originally from the Corporate Executive Board) found that CES is the strongest single predictor of customer loyalty and repeat purchase behavior. In their study, 96% of high-effort interactions led to disloyalty. However, this research focused primarily on service interactions. For SaaS, a combination of declining NPS plus high-effort CES scores is the strongest churn signal. Continuous feedback data adds early warning because customers often voice frustration on feedback boards long before it shows up in any survey metric.
Many SaaS companies create a composite score that weights NPS, CSAT, CES, product usage data, and support ticket frequency. A simple starting approach: 40% NPS + 30% product usage + 20% CES + 10% CSAT. You will need at least 6 months of data to calibrate the weights for your specific context. The exact formula depends on which signals best predict churn and expansion in your business.
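A sketch of that composite, with one caveat the weights gloss over: the four inputs live on different scales, so each needs normalizing to 0-1 before the weights apply. The normalization choices here are assumptions, not a standard:

```python
def health_score(nps: float, usage: float, ces: float, csat: float) -> float:
    """Composite customer health on a 0-100 scale.

    Inputs: nps in [-100, 100], usage as a 0-1 engagement ratio,
    ces in [1, 5], csat in [0, 100]. Weights follow the 40/30/20/10
    starting split suggested above; recalibrate against your own
    churn and expansion data.
    """
    nps_n = (nps + 100) / 200   # -100..100 -> 0..1
    ces_n = (ces - 1) / 4       # 1..5      -> 0..1
    csat_n = csat / 100         # 0..100    -> 0..1
    return 100 * (0.4 * nps_n + 0.3 * usage + 0.2 * ces_n + 0.1 * csat_n)

print(round(health_score(nps=36, usage=0.6, ces=3.8, csat=77), 1))  # 66.9
```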
A 5-point scale is standard and sufficient for most use cases. It's easier for respondents and produces comparable results. A 7-point scale offers slightly more granularity but typically doesn't change your strategic conclusions. The most important thing is consistency: once you pick a scale, don't change it, or you lose the ability to compare results over time.
For in-app surveys (CSAT, CES), aim for 30% to 50%. For email-based surveys (NPS), 20% to 30% is solid. SurveyMonkey's benchmark data suggests that in-app prompts triggered at contextually relevant moments consistently outperform batch emails by 2x to 3x. If your response rates fall below these thresholds, revisit timing and survey length before assuming customers don't care.
Survey metrics (CSAT, NPS, CES) are push-based: you send a survey and ask for a response at a specific moment. Continuous feedback is pull-based: customers share thoughts whenever they want, on their own terms. Surveys give you structured, quantifiable data. Feature voting boards give you unstructured, qualitative depth plus a built-in prioritization mechanism through votes. The two approaches complement each other. Surveys tell you the score; continuous feedback tells you the story behind it and what to build next.