Measuring Success in Your Lean Startup

"The Lean Startup provides a scientific approach to creating and managing startups and get a desired product to customers' hands faster"
Source: Principles of the Lean Startup
I'm a big fan of the Lean Startup methodology. It proposes a simple Build-Measure-Learn process that allows you to grow your business with maximum acceleration. This process should be executed in cycles, and the main goal of a Lean Startup is to reduce the time it takes to go through a whole cycle.

The Lean Startup focuses on learning how to build a sustainable business. This learning has to be validated scientifically, by running experiments that test each element of the startup's vision. The results of these experiments are used to validate each experiment's hypotheses during the Measure step of the Build-Measure-Learn process.

The Measure step is the key to the Lean Startup process. Without a sound experimental design, the lessons we learn from our measurements could be completely wrong. This observation applies not only to Lean Startups, but to any startup that uses some sort of metrics to measure and track its progress.

And yet most startups I've seen don't take this seriously enough and measure their progress in a pretty naive way. Let's use a toy example to see what I mean by naive, and how Probability Theory suggests we should be doing things.

A Toy Example

Let's consider the following problem:
We want to find out if introducing a new feature in our product or web page will improve a given metric, like the number of clicks on a certain button.
Let N1 and N2 be, respectively, the number of clicks per unit of time before and after deploying the new feature. Solving the toy example requires somehow checking whether N2 is greater than N1.
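
To make this concrete, here is a minimal sketch of how N1 and N2 might be computed from a log of click timestamps. The log, the window boundaries, and the count_clicks helper are all hypothetical; they just stand in for whatever your analytics backend provides:

```python
def count_clicks(click_times, start, T):
    """Number of click timestamps falling in the window [start, start + T)."""
    return sum(start <= t < start + T for t in click_times)

# Made-up click timestamps (in hours); the new feature is deployed at t = 5.0
click_times = [0.4, 1.7, 2.2, 5.1, 5.3, 6.8, 9.9]
T = 5.0

n1 = count_clicks(click_times, start=0.0, T=T) / T  # clicks per hour before
n2 = count_clicks(click_times, start=5.0, T=T) / T  # clicks per hour after
print(n1, n2)  # 0.6 vs 0.8 -- but is that difference meaningful?
```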

The Naive Approach to Measuring Startup Progress

The naive approach would be to measure:
  • m1: number of button clicks during a period of time of length T before the deployment of the new feature
  • m2: number of button clicks during a period of time of length T after the deployment of the new feature
and to compare m2 with m1. If m2 > m1, we are doing well and the new feature rocks, right? Not quite.
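
In code, the naive decision rule boils down to a single comparison. A minimal sketch (the numbers are made up):

```python
def naive_verdict(m1: int, m2: int) -> str:
    """Naive rule: declare success iff the raw click count went up."""
    return "feature rocks" if m2 > m1 else "feature doesn't rock"

# m1 and m2 would come from your analytics for two windows of equal length T
print(naive_verdict(m1=412, m2=437))  # -> "feature rocks" ... or does it?
```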

This approach is wrong because it completely ignores the underlying random nature of the things you want to measure. If you are not convinced that what you want to measure is subject to randomness, try the following thought experiment (or a similar one):

Remove the new feature and measure m1', the number of clicks during time T; then deploy the new feature again and measure m2', the number of clicks during time T. Now think about all the factors that could influence a user clicking the target button: the weather, your competitors' actions, the user's mood... you name it. It should be clear that m1 and m2 will most probably not equal m1' and m2', respectively. Perhaps m1 < m2 (your new feature rocks) and at the same time m1' > m2' (your new feature doesn't rock). What conclusion can we draw from this apparent contradiction? None, because we are not following the right experimental methodology.
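
We can see how easily this happens with a quick simulation. Assume, purely for illustration, that the click count in a window of length T is Poisson-distributed, and that the new feature changes nothing, so both windows share the same true rate. The sketch below repeats the naive experiment many times and counts how often it declares success:

```python
import numpy as np

rng = np.random.default_rng(0)

true_rate = 400           # expected clicks per window; the feature has NO effect
n_experiments = 100_000

m1 = rng.poisson(true_rate, size=n_experiments)  # counts "before deployment"
m2 = rng.poisson(true_rate, size=n_experiments)  # counts "after deployment"

# Fraction of runs in which the naive rule declares the feature a success
false_success = (m2 > m1).mean()
print(f"naive rule says 'the feature rocks' in {false_success:.1%} of runs")
# Comes out close to 50%: with no real effect, the naive rule is a coin flip.
```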
Which experimental methodology should we use, then? Read about the scientific approach to measuring startup progress in our next post.
