Thought Leadership

How to validate your innovation: Mastering experiment design

Satish Madhira

CEO and Co-founder

Wednesday, July 27, 2022


"A long time ago in a galaxy far, far away"

Star Trek was the professed inspiration in 2012 for Google Glass.

Mother of all Evil: The Mom Test

We are all irrational. If you have ever had an idea, it is likely you caught the 'mom disease': your baby never makes a mistake. The 'Mom Test' is the first test you naturally undertake when you get an idea.

What does the 'Mom Test' mean?

  • Pick five friends or colleagues ('friendlies') who 'know' stuff, or worse still, your mom.
  • They say, 'This is awesome. If we build this, it WILL sell like free pizza to famine-starved, hungry billions.'

Sometimes the Mom Test comes in the form of a survey or a focus group study with friendly customers, with questions designed to prove that it will work. It's a familiar feeling since all of us have been there, done that: we have the 'mom disease' and excel in administering 'mom tests'. Now what?

Why do we need to test and who is this for?

Innovation is dirty. Nearly every firm has had its Google Glass moment. Google Glass has been successful only in some niche segments.

Disruptive innovations like Google Glass are hard. They require massive leaps of faith and, therefore, an abundance of optimism. It is this optimism that leads us astray and gets us into gargantuan execution plans without validation.

If you are in an innovation cycle, have uncertainty, face high-impact assumptions, and need to make decisions, this session offers a framework to test your assumptions.

Where do we go from here?

We deal with assumptions the way scientists do. When a scientist announces an invention, they state a hypothesis, show the experiment setup, present the results, and explain what constitutes proof of that hypothesis. We design experiments.

We want to learn double the amount in half the time about the assumptions that could bring down our initiative.

Technique: Crazy 6s

Crazy 6s is like Crazy Eights, except you use six boxes instead of eight. Crazy Eights is a technique used extensively in UX design sprints. I wrote about it in detail earlier here.

Crazy Six technique

Here is what we did in the session:

Step 1: Map the problem you are solving and the value map for the customer:

The questions to draft the value map are simple:

  • Who is the customer?
  • What problem are you solving?
  • How will you acquire and retain customers?
  • How will you capture value for your company?
  • How will you solve it?
  • How could it go wrong? (Find assumptions using Crazy 6s)

We can map our value delivery using frameworks like the Business Model Canvas, the Lean Canvas, or something similar. These are just frameworks at the end of the day.

The customer, the problem, and the channels should be enough in most cases.

In my opinion, using a business plan as a surrogate for the customer will doom you before you start.
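One lightweight way to keep these answers honest is to capture them in a structured record you can revisit. Here is a minimal sketch in Python; the field names are my own shorthand, not a formal Business Model Canvas or Lean Canvas.

    # A lightweight value-map record; field names are illustrative shorthand.
    from dataclasses import dataclass, field

    @dataclass
    class ValueMap:
        customer: str       # Who is the customer?
        problem: str        # What problem are you solving?
        channels: str       # How will you acquire and retain customers?
        value_capture: str  # How will you capture value for your company?
        solution: str       # How will you solve it?
        assumptions: list[str] = field(default_factory=list)  # How could it go wrong?

Filling this out before Step 2 also gives you a single place to park the assumptions you surface with Crazy 6s.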

Step 2: Ideate assumptions:

What assumptions are we making that, if proven wrong, would cause our initiative to fail? Use Crazy 6s.

  • Take an A4 sheet.
  • Fold it into six boxes.
  • Within 60 seconds, jot down one assumption that you think could bring down your initiative.
  • Repeat until there are no more boxes.
  • Now pick one big assumption.

How do you decide which one is the most important assumption?

If you are doing this within your organization (say as a team exercise):

  • Write one assumption per sticky note.
  • Now paste those sticky notes on the 2x2 impact/uncertainty matrix (see the figure below).

Here is the impact/uncertainty matrix:

Impact/Uncertainty matrix

Focus on the assumptions in the north-east quadrant (high impact, high uncertainty). Those are your riskiest assumptions.
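If your team scores each sticky note, the same prioritization takes only a few lines. A minimal sketch, assuming simple 1-to-5 scores for impact and uncertainty (both the scale and the example assumptions are illustrative):

    # Rank assumptions by impact x uncertainty; the highest products are the
    # riskiest assumptions and should be tested first.
    assumptions = [
        ("Customers will pay a monthly subscription",        5, 4),
        ("Channel partners will resell the product",         5, 5),
        ("The team can build the core feature in a quarter", 3, 2),
        ("Users will tolerate manual onboarding",            2, 4),
    ]

    riskiest = sorted(assumptions, key=lambda a: a[1] * a[2], reverse=True)
    for text, impact, uncertainty in riskiest[:2]:
        print(f"Test first: {text} (impact={impact}, uncertainty={uncertainty})")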

I like to think of these assumptions as cards in a stacked house of cards:

Balanced pack of cards

Pull any card from the top of the stack and your stack might hold up. Pull any card from the bottom and your house of cards collapses.

The base is the customer and the channels. More often than not, these are the ones that fail.

The bigger you are, the more optimistic you will be. Your optimism about go-to-market capability and customer loyalty will be through the roof. That is where you might fail, since you will sit in an echo chamber with your customers, channels, and market.

Bottom line: find those assumptions that are going to have a very high impact.

Now write this assumption on the reverse side of your A4 sheet.

Step 3: Ideate tests:

  • We use a library of experiment patterns along the "Truth Curve" (hat tip: the book "Testing with Humans"; link at the end of the article). Believability and effort form the axes of the Truth Curve.
  • You can think of these experiments in terms of fidelity. Usually, the higher the fidelity, the greater the believability.
  • For your reference, I sketched some examples of companies that have used these experiment patterns to test their business models.
Experiment patterns with some examples

Use Crazy 6s to ideate. We don't test everything. This is not some statistical exercise. You won't have time for any nonsense; your innovation initiative will die before you do all that.

So we hunker down to identify the one or two experiments that you can run.

The pattern library is an assistant, a framework to inspire you. Go crazy with experiments when brainstorming, while minimizing effort. Do not get pedantic with the means here.

The big idea is: How can we learn what breaks this business in half the time with half the effort?
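One way to make that trade-off explicit is to score each candidate pattern for believability and effort, then pick the cheapest experiment that still clears your believability bar. The scores below are illustrative guesses, not canonical Truth Curve values:

    # Pick the lowest-effort experiment that is still believable enough.
    patterns = {
        "Landing page":          {"believability": 2, "effort": 1},
        "Explainer video":       {"believability": 2, "effort": 2},
        "Concierge":             {"believability": 4, "effort": 3},
        "Wizard of Oz":          {"believability": 4, "effort": 4},
        "Presell / Kickstarter": {"believability": 5, "effort": 4},
    }

    MIN_BELIEVABILITY = 4  # how much evidence we need before committing

    candidates = [(name, s["effort"]) for name, s in patterns.items()
                  if s["believability"] >= MIN_BELIEVABILITY]
    print(min(candidates, key=lambda c: c[1]))  # cheapest believable experiment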

Interesting examples:

  • Have a payment page where the customer can leave their credit card, and when they click 'buy', do NOT fulfil. You are creating intentional friction to see if there is enough value for the user to take action (see the sketch after this list).
  • Elliot Sussel was at TaxiMagic, which was founded in 2008. For a frame of reference, Uber was founded in 2009. Taxi apps were a dime a dozen at that time. His explanation of what should have been done (and NOT experimented) was interesting.
  • Dropbox tested its hypothesis by creating an explainer video and posting it to Digg (landing page + promotional material). With about 75,000 (?) signups for the beta (up from 5,000), they knew that there was a need.
  • Zappos experimented with the Wizard of Oz pattern in its early days. They would go to local stores, take photos of shoes, and put them online. When customers paid, the same shoes were purchased from local stores and shipped under the Zappos brand. From a customer experience perspective, the customer did not care.
  • Kickstarter campaigns are, in some sense, presell experiment patterns. They are powerful validation because someone is committing money upfront to your idea.
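For the payment-page example above, the mechanics are simple: record the purchase intent, do not charge, and be honest with the user. A minimal sketch using Flask; the endpoint, form field, and log file are hypothetical choices, not from any specific product:

    # Fake-door checkout: measure intent without fulfilling (illustrative only).
    from datetime import datetime, timezone
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/buy", methods=["POST"])
    def buy():
        # Record the purchase intent, but deliberately do NOT charge or fulfil.
        with open("buy_intents.csv", "a") as log:
            log.write(f"{datetime.now(timezone.utc).isoformat()},"
                      f"{request.form.get('email', '')}\n")
        # Tell the user the truth: the product is not available yet.
        return "Thanks! We are not ready to ship yet, but you are first in line.", 200

    if __name__ == "__main__":
        app.run(port=5000)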

One thing to note here: you want to break the model. You are NOT looking to optimize, so sometimes adding friction makes sense. For example, if your landing page is dirty, there is friction in leaving a credit card, and YET people want your stuff, then there is a real problem you are solving.

So your experiment CAN have friction. This is contrarian thinking.

Step 4: Follow the experiment template:

This template has a simple checklist (a minimal code sketch follows the list):

  • What hypothesis do you want to prove or disprove?
  • For each hypothesis, what metric will you use?
  • What is the gating criterion (i.e., greater or less than x is a pass or fail)?
  • Who is the target participant?
  • How will you recruit them, and how many do you need?
  • Which experiment are you running, and for how long?
  • Finally, list any qualitative learnings as well.
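Here is a sketch of that template as a record plus a pass/fail check; the field names and example values are illustrative, not a prescribed format:

    # An experiment record following the checklist above (values are illustrative).
    from dataclasses import dataclass

    @dataclass
    class Experiment:
        hypothesis: str
        metric: str
        gate: float       # pass if the observed metric meets or beats this
        participants: str
        sample_size: int
        pattern: str
        duration_days: int

        def passed(self, observed: float) -> bool:
            return observed >= self.gate

    exp = Experiment(
        hypothesis="Small-business owners will pre-pay for automated bookkeeping",
        metric="visitor-to-preorder conversion",
        gate=0.03,
        participants="Small-business owners recruited via a modest PPC campaign",
        sample_size=300,
        pattern="Presell landing page",
        duration_days=14,
    )
    print("PASS" if exp.passed(observed=0.041) else "FAIL")

Qualitative learnings do not fit neatly into a pass/fail check, so keep them as free-form notes alongside the record.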

Now go ahead and test. In some cases, you have to be careful about the cohorts you are choosing. Keep the rules simple and the experiment simple. The idea is just to prove or disprove with evidence.

Create a roadmap, drive the cultural change

Experimentation is a process. Create a living document of these risks and an execution roadmap for addressing them. Prioritize, start with quicker wins, and move on.

Takeaways

  • Once I was forced to bring out the assumptions with Crazy 6s, it was hard for me to ignore them. Once again, this technique brought out urgency in identifying those risks and assumptions quickly. Time boxing shines a light on the elephant in the room.
  • Customer definition and segment are important; this technique assumes you have done all of that very well. If you are using a Lean Canvas or Business Model Canvas, start from the right (customer segment) and come to the solution last, otherwise you will end up identifying execution assumptions and risks. Regardless of the experiment pattern, remember the Amazon way: work backwards!
  • Techniques like landing pages are fraught with issues. There are too many variables: are you testing the copy? Are you testing the call-to-action button? I will confuse you further: CamelCasing in PPC could make a difference of over 2x (i.e., 200%) or even more; every PPC enthusiast knows that little changes can create differences of an order of magnitude. So much for gating or threshold metrics/values (a rough sketch after this list shows how wide the error bars can be at small sample sizes).
  • I have started reading up more on this topic. In my past startups, I had tried many of these experiments in an ad hoc fashion, almost intuitively. Once some success came, a lot of hubris set in. This session gave me a great framework and was a wake-up call.
  • I must add that patterns like concierge are slightly risky because the customer experience might be better than with automation, so you might see better demand. If I have to choose, I will choose Wizard of Oz over concierge. Watch for the variables. Once again (this is for my big-company innovation brethren), this is not about the process; you will kill yourself if you do not get this.
  • Giff and Elliott complemented each other. Fabulous work. Clap clap clap.
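On the landing-page takeaway above, here is a rough sketch of why single-threshold gating is shaky at small sample sizes, using a normal-approximation 95% interval for a conversion rate (the 5% 'true' rate and the visitor counts are made up for illustration):

    # How wide conversion-rate error bars are at small sample sizes.
    import math

    def conversion_interval(conversions: int, visitors: int, z: float = 1.96):
        """Return an approximate 95% interval for a conversion rate."""
        p = conversions / visitors
        margin = z * math.sqrt(p * (1 - p) / visitors)
        return max(0.0, p - margin), min(1.0, p + margin)

    for visitors in (50, 200, 1000):
        conversions = round(0.05 * visitors)  # assume a "true" 5% conversion rate
        low, high = conversion_interval(conversions, visitors)
        print(f"{visitors} visitors: {low:.1%} to {high:.1%}")

At 50 visitors, the interval spans roughly 0% to 9%, so a 3% gate tells you very little; the confounded variables (copy, call to action, ad casing) only make it worse.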

Resources

Testing with Humans is a great book: an easy read and extremely executable. There is no fluff and no preaching. Giff's other book, Talking to Humans, is about customer discovery.

Connect with Satish on LinkedIn
