
Retention That Matters: Cohorts and Features

"Our retention is 40%" doesn't tell you who stayed or why. Cohort retention tied to specific features or actions does—and that's what actually informs what you build next.
Product
February 13, 2023

Why aggregate retention is misleading

One number hides who came back and who didn't. "Our retention is 40%." Okay. Forty percent of what? Everyone who ever signed up? People who signed up last week? People who completed onboarding? The number is meaningless without the slice. You need to know when they signed up and what they did. Otherwise you're mixing apples and oranges. The cohort that signed up during the big campaign might retain differently than the cohort that signed up organically. The cohort that hit the core feature might retain differently than the one that didn't. One number flattens all of that. You lose the signal.
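A toy sketch of the blending problem, with made-up numbers: two cohorts with very different retention average out to an aggregate that describes neither.

```python
# Hypothetical numbers: a campaign cohort and an organic cohort, blended
# into one aggregate retention figure.
cohorts = {
    "campaign_signups": {"users": 800, "retained_week4": 160},  # 20%
    "organic_signups":  {"users": 200, "retained_week4": 140},  # 70%
}

for name, c in cohorts.items():
    print(f"{name}: {c['retained_week4'] / c['users']:.0%}")

total_users = sum(c["users"] for c in cohorts.values())
total_retained = sum(c["retained_week4"] for c in cohorts.values())
print(f"aggregate: {total_retained / total_users:.0%}")  # 30%: matches neither cohort
```

The aggregate lands at 30% even though no actual group of users retains at 30%. Any change driven by one cohort is invisible in the blended number.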

Aggregate retention also can't answer product questions. Did the new onboarding improve retention? You don't know unless you compare the cohort that saw it to the cohort that didn't. Did the feature we shipped last month matter? You don't know unless you slice retention by whether they used it. The aggregate might go up or down for a dozen reasons. You need the slice that ties to the thing you can change.

Cohorts and the right question

Did users who hit feature X retain better? Did the cohort that saw the new onboarding behave differently? That's a product question. It's also a question you can answer with cohort retention. Define the cohort. Define the action or the feature. Compare retention. The number tells you if the thing you built actually mattered.
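The define-cohort, define-action, compare-retention loop fits in a few lines. This is a sketch against a toy event log; the event names (`signup`, `used_feature_x`, `active`) and the 28-day window are illustrative, not a prescription.

```python
from datetime import date

# Toy event log: (user_id, event_name, day). In practice this comes from
# your analytics store.
events = [
    (1, "signup", date(2023, 1, 2)), (1, "used_feature_x", date(2023, 1, 3)),
    (1, "active", date(2023, 1, 31)),
    (2, "signup", date(2023, 1, 4)), (2, "active", date(2023, 1, 30)),
    (3, "signup", date(2023, 1, 5)), (3, "used_feature_x", date(2023, 1, 6)),
]

signup = {u: d for u, e, d in events if e == "signup"}

def cohort(predicate):
    """A cohort is just the set of signed-up users matching a predicate."""
    return {u for u in signup if predicate(u)}

def retained(users, day=28):
    """Share of the cohort with an 'active' event at least `day` days after
    their own signup date."""
    def came_back(u):
        return any(e == "active" and (d - signup[u]).days >= day
                   for uu, e, d in events if uu == u)
    return sum(came_back(u) for u in users) / len(users)

jan = cohort(lambda u: signup[u].month == 1)
used_x = cohort(lambda u: any(uu == u and e == "used_feature_x"
                              for uu, e, d in events))
print(f"January cohort retention: {retained(jan):.0%}")
print(f"Used-feature-X retention: {retained(used_x):.0%}")
```

The point isn't the code; it's that "cohort" and "action" are each one predicate, and the comparison is one division. Anything your analytics tool does is a dressed-up version of this.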

Cohorts don't have to be fancy. "Everyone who signed up in January" is a cohort. "Everyone who completed the onboarding flow" is a cohort. "Everyone who used feature X in the first week" is a cohort. The point is to slice so that the slice is tied to a decision. If we're trying to improve onboarding, we care about retention of the cohort that saw the new onboarding vs. the old. If we're trying to prove that feature X matters, we care about retention of users who used it vs. users who didn't. One or two cohort views that the team actually looks at beat a dozen that nobody checks.

Feature-level retention

Retention for "used the core flow" vs. "never touched it" tells you where to invest. If people who complete the core flow retain and people who don't churn, you know the core flow is the lever. Improve that. Get more people to it. If the difference is small, maybe the core flow isn't the differentiator. Maybe it's something else. The comparison is what tells you. Boring to report in an all-hands. Gold for decisions.

You don't need heavy tooling to start. You need a clear definition of "used the core flow" (or whatever matters for your product), a way to segment users by that, and a retention curve for each segment. Most analytics tools can do that. The hard part is choosing the right segments and actually looking at them. One or two that the team agrees on and checks every week is enough. More than that and it becomes noise again.
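What a segmented retention curve looks like in practice, with made-up counts: active users per week since signup for two hypothetical segments, normalized against week zero.

```python
# Hypothetical weekly active counts for two segments, indexed by weeks
# since signup. Segment names and numbers are illustrative.
curves = {
    "completed_core_flow": [100, 72, 61, 55, 52],
    "skipped_core_flow":   [100, 40, 22, 15, 11],
}

for segment, weekly_active in curves.items():
    base = weekly_active[0]
    pct = [f"{n / base:.0%}" for n in weekly_active]
    print(segment, pct)
```

A wide, persistent gap between the curves says the core flow is the lever; a small gap says the differentiator is somewhere else. That's the comparison that tells you where to invest.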

Keeping it simple

You don't need a data science team to start. You need one or two cohort views that the team actually looks at. Same principle as the rest of product analytics: fewer numbers, clearer signal. Pick the cohort question that would change what you build next. Answer it. Put that number in front of the team every week. When it moves, you'll know. When you ship a change, you'll see if the cohort that saw it retained better. That's retention that matters. Not the aggregate. The slice that drives the decision.
