The Cohort Analysis
I was treated to Dave McClure's "Startup Metrics" talk during Seedcamp in London last month. If you have not seen Dave do this talk, do yourself a favor and click on this link and spend a few minutes with the slides. Or even better, go see Dave give it live.
The ideas are simple, but so few actually apply them rigorously. In a nutshell, the methodology is: build, test, measure, iterate, test, measure, iterate, test, measure…
Which leads me to the point of this post. Measurement is not a simple thing. What do you measure and how do you measure it?
One of our firm's favorite measurements is the cohort analysis. From Wikipedia:
A cohort study or panel study is a form of longitudinal study used in medicine, social science and ecology. It is one type of study design and should be compared with a cross-sectional study.
A cohort is a group of people who share a common characteristic or experience within a defined period (e.g., are born, leave school, lose their job, are exposed to a drug or a vaccine, etc.). Thus a group of people who were born on a day or in a particular period, say 1948, form a birth cohort. The comparison group may be the general population from which the cohort is drawn, or it may be another cohort of persons thought to have had little or no exposure to the substance under investigation, but otherwise similar. Alternatively, subgroups within the cohort may be compared with each other.
Like most things, it is easier to show one than explain one. And thanks to Robert J Moore, we have a few really interesting cohort analyses on Twitter to look at. He shared them in a guest post on TechCrunch yesterday.
This chart shows how new Twitter users behave over time.
And this chart shows how Twitter usage grows over time for new Twitter users who stick with the service.
I think both charts are interesting. You would not necessarily notice this behavior by looking at all users together; you need to isolate a certain group and observe it over time to see what is happening.
I'd encourage everyone doing a web startup to adopt the startup metrics methodology and within that methodology, make sure you are looking at cohorts of users, not just all of your users in the aggregate.
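(For the technically inclined: here is a minimal sketch of what a cohort analysis looks like in code. The event log, dates, and field names below are hypothetical stand-ins for whatever your analytics database actually holds.)

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log of (user_id, activity_date); in practice this
# comes from your analytics database.
events = [
    (1, date(2009, 1, 5)), (1, date(2009, 2, 11)), (1, date(2009, 3, 2)),
    (2, date(2009, 1, 20)), (2, date(2009, 2, 14)),
    (3, date(2009, 2, 3)), (3, date(2009, 4, 9)),
]

def month_key(d):
    return (d.year, d.month)

def months_between(start, end):
    return (end[0] - start[0]) * 12 + (end[1] - start[1])

# Assign each user to the cohort of their first active month.
first_seen = {}
for user, day in sorted(events, key=lambda e: e[1]):
    first_seen.setdefault(user, month_key(day))

cohort_sizes = defaultdict(int)
for cohort in first_seen.values():
    cohort_sizes[cohort] += 1

# For each cohort, count distinct users active N months after signup.
active = defaultdict(set)  # (cohort, months_since_signup) -> user ids
for user, day in events:
    offset = months_between(first_seen[user], month_key(day))
    active[(first_seen[user], offset)].add(user)

for (cohort, offset), users in sorted(active.items()):
    pct = len(users) / cohort_sizes[cohort]
    print(f"cohort {cohort}, month +{offset}: {pct:.0%} active")
```

The whole trick is in the grouping: every number is reported relative to each user's own signup month, which is exactly what an aggregate uniques chart cannot show you.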
Comments (Archived):
Great post Fred – thanks for sharing. From what I’ve heard (and we have all seen with the launch of Google Wave) it seems Google do a very good job of measuring cohorts of users. We interviewed the MD of Google Australia and NZ, Karim Temsamani, about a month ago and we asked him a few questions about how they do “metrics.” If you’re interested, you can see the video here (http://www.vimeo.com/6844800). For example, they talk a lot about ‘power users’ who they select first up and let trial their products. Once they have their power users in play, they then watch the data and iterate, measure, iterate etc etc… Surely they use this kind of analysis to parse the mass amounts of data they have into usable, relevant stats. Google is no start-up, I know, but interesting nonetheless. Thanks again for the post. 🙂
yes, google is the king of using data to build better products
Conceivably, you can even normalize such a study by % of the monthly median. So, for example, if *everyone* was tweeting about the superbowl, you’d have a blip in the 2nd month of some people and the 4th month of other accounts… but if you did it as a % of the monthly median, you’d be able to smooth out more noise.
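A minimal sketch of that normalization, reading “monthly median” as the median across all users for each calendar month (the data below is made up):

```python
import statistics

# Hypothetical tweets per user, by calendar month. A global event
# (say, the Super Bowl in Feb) inflates everyone's February counts.
monthly_tweets = {            # user -> {calendar month: tweet count}
    "user_a": {"jan": 40, "feb": 95, "mar": 42},
    "user_b": {"jan": 10, "feb": 24, "mar": 11},
    "user_c": {"jan": 20, "feb": 50, "mar": 21},
}

months = ["jan", "feb", "mar"]

# Median activity across all users for each calendar month.
month_median = {
    m: statistics.median(u[m] for u in monthly_tweets.values())
    for m in months
}

# Each user's count as a % of that month's median: the global spike
# divides out, leaving activity relative to the population.
for user, counts in monthly_tweets.items():
    normalized = {m: round(100 * counts[m] / month_median[m]) for m in months}
    print(user, normalized)
```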
A more in-depth study that I’d want to see is correlations between # of followees/time, interactions with followees, and activity of followees. One of my theories is that Twitter needs to do a much better job showing users who is around them and relevant (now that replies are broken), b/c I rarely discover awesome people on twitter anymore, unless I really go out of my way to. I think people discovery is critical to keeping people engaged.
lists is great. i’ve been using it for a few weeks. when it rolls out, it will be so much better than replies, which was a shitty way to do that
We’ll see… replies were more organic. How about we meet halfway… can everyone have an automatic list generated of the last 10 people they replied to and the 10 people they reply to most? I think I’d shut up about replies if we had that. Otherwise, lists will be about as good as #followfriday.
lists are organic. you can share them, add to them. but your idea of an automatic list is interesting. i’ll make sure the twitter team hears that one
“organic” to me means an outgrowth of something I do naturally. Someone should build a list creator app that allows me to create certain filters… like people I follow that are near me, people I RT the most, people who tag things #nextNY that I follow, etc.
This is exactly what we’re aiming to do with Twillist (http://twillist.com); we’ll be adding a layer of meta-data to lists and automation / suggestion to list creation & discovery. We’re very bullish on the possibilities for discovery that Twitter lists create.

Great post Fred on cohort analysis… it’s something we’ve been working on perfecting @totspot… no easy task to nail this type of analysis, but it leads to a very rich understanding of your users and changes to your app.

Update: btw, here’s the alpha link to Twillist: http://alpha.twillist.com
I can’t wait to see all the list tools that are built on the lists api
But I wonder how many people are going to take the time to make lists? My gut tells me that the feature is a little too heavy and that its adoption/use might be relatively limited (and relatively probably means a couple million lists). Don’t get me wrong, I think the concept behind lists is fine. But how about an additional alternative, a light-weight, hit-and-run feature in the form of “I recommend”? That way I don’t have to think about creating a list right now, what list that person/account should be included in, etc.

This would actually be more organic than the act (read: chore) of creating a list. It’s easier, faster, more intuitive, and would in the end IMO result in the creation of more/better lists. Here’s a quick overview of how it might work: I mark my favorite accounts with the “I recommend” button and they automatically get posted in my “I recommend” folder, which is public. I can mark them either by going through my follow list or as they appear on my timeline. I can do it purposefully or spontaneously as the situation dictates. When it gets to the point where sub-lists, or categories, make sense within my recommended list, I create them at that point.

This lowers the bar in terms of effort needed to engage, will result in a much higher level of participation, and will simultaneously create a siphon effect for the creation of “lists” as Twitter currently envisions them. Because in the end, doesn’t higher participation almost guarantee more and better recommendations in the aggregate?
Lists is an api like most things in twitter. Many people will create tools to build lists in all sorts of ways, including all the third party apps
I get that, I just think you’ll get 10x the participation from users if you don’t require them to create lists in the first place. Just let them start out by being able to mark tweets as “I like” and accounts as “I recommend” and let them group things into lists when it makes sense to them. As an app developer I’m much more interested in getting access via the API to those two silos of information than I am to some much smaller data set of user-defined lists, for a variety of reasons.

Again, I think lists are fine. I just don’t think that by themselves they’re going to deliver on their promise for either end-users or app developers. The bar for initial user engagement is unnecessarily high.
I love the idea of a list generator app. I can picture a tool like Smart Playlists in iTunes that lets me mix and match criteria from my posts, the people I follow, or any searches to create self-updating lists of people or even particular posts. Some examples:

1) Posts from people I follow containing links
2) People retweeted or favorited 3+ times for the keywords ‘new york’, ‘nyc’, ‘venture capital’, or ‘vc’
3) Last 10 people @fredwilson replied to (which is a list that I could create even though I’m not Fred Wilson, based on Fred’s public data)

Could make for a cool third-party app based on some of the new API calls.
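A minimal sketch of such a rule-based list generator; the account records and rule criteria below are hypothetical, and a real tool would populate them from the Twitter API rather than hand-coded data:

```python
# Hypothetical account records; a real tool would fetch these fields
# from the Twitter API.
accounts = [
    {"handle": "alice", "location": "new york", "retweeted_by_me": 5,
     "tags": {"#nextNY"}},
    {"handle": "bob", "location": "london", "retweeted_by_me": 1,
     "tags": set()},
    {"handle": "carol", "location": "nyc", "retweeted_by_me": 3,
     "tags": {"#nextNY"}},
]

# Each "smart list" is a name plus a predicate over an account record.
rules = {
    "nyc-people-i-rt": lambda a: a["location"] in ("new york", "nyc")
                                 and a["retweeted_by_me"] >= 3,
    "nextny": lambda a: "#nextNY" in a["tags"],
}

def build_lists(accounts, rules):
    # Re-running this keeps every list up to date automatically,
    # which is what makes it feel like a Smart Playlist.
    return {name: [a["handle"] for a in accounts if pred(a)]
            for name, pred in rules.items()}

print(build_lists(accounts, rules))
```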
Great thoughts here Joe. Would love to hear any more ideas you’d be looking for; we’d be happy to incorporate into Twillist (http://alpha.twillist.com) …feel free to email [email protected]
That’s exactly what twitter is doing. They create the lists api and the ecosystem develops all sorts of ways to create them
I second this. I’m not the early user. I’m still having trouble figuring out what twitter is doing to fill my needs (oddly, right now it is ridiculous real time search). Yet I know it should be more than that… without good metrics, I am unsure what to mold it to.
A few additional thoughts… The key with statistics – especially as a startup – is to choose both metrics and cohorts that can be leveraged in some actionable way. They need to be part of a closed loop with some specific alignment to revenue, membership, click-throughs, etc. You don’t have cycles to waste on interesting anomalies. It’s also important to remember that statistics on their own can be misleading, and they cannot act as a substitute for strong direct relationships with your clients. Both are important. Finally, you have to be willing to listen to all of these feedback channels dispassionately. It can be tempting to manipulate the composition of cohort groups to find the type of metrics you are looking for, instead of choosing them in an unbiased way to best measure progress.

Thanks for the post!
-john
i like the point that you should mix data analysis with real world feedback. the yin and yang of product management.
It’s extremely difficult to get real world feedback longitudinally from the people who dislike you, but those people are often the most useful. Especially if they are part of the target group you are going for. How do you get their advice? I’m finding interviewing people on the street is not enough…
Social media mining might be helpful
Great points John, especially “real” connections to customers/users and the observer paradox (we often find patterns where there aren’t any).
My first reaction to the post: “why is Fred talking about henchmen in Dungeons and Dragons?” Now, after reading, it looks like a juicy data mining topic, even better!

Cohorts in this definition are simply feature-sharing user subsets. Classification studies in engineering use feature description vectors to characterize objects. Simple Gaussians, multimodal Gaussians, or even novel statistical functions can be used as models for data representation. A friend at work is doing scatter charts which relate to probability of detection (he’s focusing on just normalized filter outputs).

The cool thing about classification is that the features aren’t absolute. So member groups could be found by clustering (the nearest-mean recursive algorithm is straightforward), and there are confidence levels associated with subgroups (i.e. I’m 100% likely to fall into the “geek” category for this comment).
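For illustration, a minimal sketch of that nearest-mean clustering (k-means) on made-up two-dimensional feature vectors, e.g. (tweets/day, replies/day):

```python
import random

# Synthetic feature vectors: two blobs of hypothetical user behavior.
random.seed(42)
points = ([(random.gauss(1, 0.3), random.gauss(1, 0.3)) for _ in range(20)]
          + [(random.gauss(5, 0.5), random.gauss(4, 0.5)) for _ in range(20)])

def dist2(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, iterations=10):
    means = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest mean.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: dist2(p, means[i]))
            clusters[i].append(p)
        # Update step: recompute each mean from its members.
        means = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else means[i]
            for i, c in enumerate(clusters)
        ]
    return means, clusters

means, clusters = kmeans(points, k=2)
print("cluster means:", means)
print("cluster sizes:", [len(c) for c in clusters])
```

The point of clustering here is that the cohorts emerge from the data itself, rather than from a shared signup date.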
That was indeed a phenomenal post over at TechCrunch. I really like the RJMetrics idea and I am pretty sure it’s a no bullshit startup with all the data they can analyze for their own strategy. It’s one of the few times I read something at TC and say ‘I could see myself doing that’.
I’d like to voice further support for this. Doing cohort analysis has produced the single best insights into how people interact with our service. I suggest starting as rudimentary as possible and then expanding from there, if needed. No need to make it complicated.

Also, if you’re local to nyc and are interested in applying a rigorous metric process to your startup you should consider joining the lean startup meetup: http://www.meetup.com/lean-… the events, and discussion list, are all about this.
Two great comments in one, fraser. I think you can easily get too complicated with measurement
I’m not an expert on cohort studies, but I am going to put a suggestion out there that I have developed from the longest-running cohort study (Harvard’s Grant Study, begun in 1938 and directed for decades by George Vaillant, and slowly concluding now): interview and observe your subgroups to see why they have the behaviors they have. You may not want to have a base of just power users. The people who have undesirable behaviors (aka not using your product) probably have a much more telling response than those who are using your product.

“All happy families are alike; but each unhappy family is unhappy in its own peculiar way.”
I second John Mahoney’s call for dispassionate analysis. So hard for startups to look objectively at their assumptions. Although not familiar with the “cohort” terminology, the essence of this discussion reminds me of the host of behavioral matching technology startups that developed on the fringes of the SEM industry. The best of them, like http://www.magnify360, built what could be called cohort behavioral databases on a multitude of attributes and then tied into PPC campaigns, charging strictly by performance upside.
In our product planning docs, we have a section entitled “pirate” where we address Dave’s AARRR startup metrics, which goal we are trying to achieve, and how we are going to measure it. Even if it is common sense, it helps make sure everyone is aware and focused on the primary objective of a given project. These metrics have been part of our culture since before I started (it doesn’t hurt that Dave is on the board 🙂 ) and I think they have really helped steer us straight where we may have otherwise been distracted by shiny objects that had no real value.By the way, we refer to cohorts as tranches, which in many ways they are. They’ve been really valuable for us in working on the Retention metric, for example what lifecycle emails we send, and when, and when we demote inactive users.
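A minimal sketch of that kind of tranche-based retention housekeeping; the user records, idle thresholds, and actions below are all hypothetical:

```python
from datetime import date

# Hypothetical user records; a real system would pull these from a
# database, and the thresholds would come from your retention data.
today = date(2009, 10, 15)
users = [
    {"id": 1, "signed_up": date(2009, 7, 1), "last_active": date(2009, 10, 12)},
    {"id": 2, "signed_up": date(2009, 7, 9), "last_active": date(2009, 8, 1)},
    {"id": 3, "signed_up": date(2009, 9, 20), "last_active": date(2009, 9, 25)},
]

def tranche(user):
    d = user["signed_up"]
    return f"{d.year}-{d.month:02d}"   # group users by signup month

def action(user):
    idle = (today - user["last_active"]).days
    if idle > 60:
        return "demote"                 # hide from active-user features
    if idle > 14:
        return "send_reactivation_email"
    return "none"

for u in users:
    print(f"user {u['id']} (tranche {tranche(u)}): {action(u)}")
```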
Having dave on your board is probably entertaining in addition to being valuable
yes 🙂
Really great post. The only downside is the inescapable feeling that at some point in the not-too-distant future, Google and Twitter are going to know what I’m going to do before I do.
Despite the tongue-in-cheek tone, it’s a great point Andy. Any of the large search engines/social communication platforms have incredible insight into both macro and micro trends. They are certainly predictive, with accuracy improving as the population sample increases. The statistics collected by any of these firms could provide incredible actionable intelligence if applied externally to markets or investments. In some ways the ultimate “insider information” is sitting on the wrong side of the corporate firewall.

Imagine if you could build your own analytics to mine it…
-john
Oooh. The wrong side of the corporate firewall. I’m so going to use that line john
timing of this post is impeccable, thanks for the reminder!
I just read this, and liked it. Then I read this story: http://mashable.com/2009/10… which ranks social sites by how loyal the traffic they send is. This is related to your points about twitter vs. google in referrers, and is relevant here in talking about cohort analysis.

I would imagine the readers for this blog come in bumps when you have a popular post. How many of those readers from twitter vs. google vs. some other service stay to read more, or subscribe to your RSS feed, etc.?

I’ve been designing a blog platform for myself (because I can), and will make it a lot like a webapp – with sign-on, user tracking, custom tools etc. It will be interesting to integrate http://mixpanel.com and some other tools to track how readers on a blog behave.
On this blog twitter is king in loyalty, volume of traffic, time of visit, and pages viewed, but that might have something to do with my relationship with twitter
How do you track loyalty? And how would you track it for those referred from google? Does chartbeat or google analytics do cohort analysis? Have you ever seen auto-cohorts for sites like this that don’t really have a sign in – but just index by the first time the machine has visited the site and their referrer?
I don’t think it’s obvious from your post why this type of analysis is so important.

When a development team makes design choices and then goes back to review traffic data to evaluate the success of their choices, they have to use a cohort analysis for their analysis to be accurate. You can’t just look back over 2 months’ worth of uniques in Google Analytics to determine if a change implemented 4 weeks ago was actually successful. By using a cohort analysis you can isolate the variable of where a user is in their lifecycle of using your service, which allows you to more accurately assess how new features affect users.

Additionally, a cohort analysis is a great way to assess the lifetime value of an acquired user. As you look at older cohorts, you can measure, on average, how long a user will stick with your service and, depending on your business model, how much a user is worth to you. Once you know that number, you’re golden, because you know the allowable amount you can spend on marketing to acquire new users sustainably.

I agree that the cohort analysis is our firm’s favorite measurement, and the reasons above just scratch the surface of the valuable conclusions you can draw from one.
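A minimal sketch of that lifetime-value arithmetic, with made-up retention and revenue numbers:

```python
# Assume older cohorts tell us the fraction of users still active in
# each month after signup, and the average revenue an active user
# generates per month. All numbers here are hypothetical.
monthly_retention = [1.00, 0.55, 0.40, 0.32, 0.28, 0.25]  # months 0..5
revenue_per_active_user_month = 0.80   # dollars

# Expected lifetime revenue per acquired user: sum over months of
# (probability still active) x (revenue per active month).
lifetime_value = sum(r * revenue_per_active_user_month
                     for r in monthly_retention)
print(f"LTV per acquired user: ${lifetime_value:.2f}")

# If you want, say, a 3x return on acquisition spend, the allowable
# cost to acquire a user follows directly.
target_multiple = 3.0
allowable_cac = lifetime_value / target_multiple
print(f"Allowable acquisition cost: ${allowable_cac:.2f}")
```

The key design point is that the retention curve has to come from cohorts old enough to have lived through all the months you are summing over.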
Thanks andrew. I ran out of time on this one. That’s the hazard of posting in the morning when I have to be done at 7am so I can wake the kids, shower, and get to work
Great content in Dave’s presentation, with a lot of learning that large brands can utilize as well (it’s on my to-do list to write a presentation of lessons corporations can learn from start-ups). But man, he needs to get some SERIOUS powerpoint creative help (Dave please – no more colour coding your content! 🙂)
What I don’t see within these graphs (and I don’t know cohort analysis well enough to tell) is how this relates to correlation and/or causation. The fact that these groups joined @ the same time gives them nothing in common, other than that date, to show the who/what/where/when/why of being engaged with any service.

Another caution I would think of re: using cohort analysis when in startup mode is the sample size. What’s the number of people in that Jan 2007 group vs. April 2009? Were there any comments made during the meeting re: applying a weighted average to the analysis? January 2007 has a cohort of 10 early adopters and April has 10,000 “mainstream” users…
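A minimal sketch of the weighting issue raised here, with hypothetical cohort sizes:

```python
# A naive average across cohorts treats a 10-user early-adopter cohort
# the same as a 10,000-user mainstream one. Numbers are made up.
cohorts = [
    {"label": "Jan 2007", "size": 10,    "retention": 0.90},
    {"label": "Apr 2009", "size": 10000, "retention": 0.30},
]

naive = sum(c["retention"] for c in cohorts) / len(cohorts)

weighted = (sum(c["size"] * c["retention"] for c in cohorts)
            / sum(c["size"] for c in cohorts))

print(f"naive average retention:    {naive:.1%}")    # 60.0% -- misleading
print(f"weighted average retention: {weighted:.1%}") # ~30.1% -- reality
```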
having only seen cohort data in more of a research setting with much smaller user groups, i love seeing this for such large volumes of customer usage data. since Twitter is a network, i’d wonder how the cohorts behave in relation to the network size increasing… and also, where are the pre-2007 folks? (aka the ones who were mocked for tweeting about burrito lunches in south park?)
I try to very actively find relevant folks, so it’s pretty balanced.