Asking Better Questions

In this post, I’d like to share some helpful approaches to eliciting impactful questions so that your team can make better decisions about what to measure.

Perspectives
December 22, 2020
John Cutler
Former Product Evangelist, Amplitude

As we learned earlier in Part 1: Measurement vs. Metrics in Product Analytics Instrumentation, good questions help teams focus their measurement efforts and increase the likelihood that what they measure will enable valuable insights. A couple of well-chosen events and event properties beat out a firehose of data and/or an autotrack solution any day.
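
To make that concrete, here is a minimal sketch of what a "well-chosen event with event properties" might look like, assuming a generic track(eventName, properties) helper rather than any particular SDK; the event and property names are invented for illustration.

```typescript
// Hypothetical helper -- stands in for whatever analytics SDK your
// team actually uses. Names and properties are illustrative only.
type EventProperties = Record<string, string | number | boolean>;

function track(eventName: string, properties: EventProperties): void {
  // In a real app this would forward to your analytics SDK.
  console.log(`[analytics] ${eventName}`, properties);
}

// One deliberately chosen event, enriched with the properties our
// questions actually call for -- instead of auto-tracking every
// click and hoping answers fall out of the firehose later.
track("Project Shared", {
  projectAgeDays: 14,   // how old are projects when people share them?
  collaboratorCount: 3, // are shares happening solo or in teams?
  sharedVia: "link",    // which share mechanism do people prefer?
  isFirstShare: true,   // new sharer or repeat sharer?
});
```

The point is not these specific fields; it is that each property exists because someone asked a question it helps answer.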

In this section, I’d like to dig deeper. I’m going to share some helpful approaches to eliciting impactful questions so that your team can make better decisions about what to measure. This takes practice, but better question asking is within any team’s reach.

Before we jump into the nitty-gritty, I want to share the most important thing I’ve learned:

You need to make it safe to ask “dumb” and less fully-formed questions. And you can’t rush it. When the people brainstorming are worried about looking silly, they’ll shut down. If they feel rushed, they’ll stick with surface-level questions. Great questions spring from less-great questions which spring from “bad” questions. It takes time and multiple cycles of divergence and convergence to come up with the highest impact questions.

So make the time and make it safe.

OK, with that out of the way, let’s get started.

For a long time, I would start question-brainstorming activities with the question “Where do we need to reduce uncertainty?” That was it. This is the purist hat I mentioned in The 4 (5 Actually) Approaches. Some teams loved the ambiguity. They ran with it. But for other teams it was too open-ended. It was intimidating. I’ve since adapted my approach.

When it comes to being “data-informed,” people generally have one of three related needs:

  1. I need to make a decision, and I need data to inform that decision.
  2. I want to reduce uncertainty related to an assumption.
  3. I want to understand performance and impact. I want to know if something is working. I want to prove something is working (or not working) or will work (or will not work).

I say these are related because decisions involve assumptions. Assumptions guide decisions. And we typically want to know if something is working so that we can make a decision of some sort (even if that decision is to do nothing).

But it can help to separate these out when trying to elicit questions. Why? I’m not exactly sure, but it feels like different people gravitate to different perspectives. Using just one approach (e.g., a lean canvas filled with assumptions) seems to limit teams. I also see teams stress out about benchmarks and “standard” metrics with no real sense of what decisions they hope to inform and/or what assumptions they hope to validate. Maybe offering all three provides more flexibility?

My next realization was that the resolution of the question—the level—matters. When brainstorming, refining, and prioritizing questions, it helps to try going up and down a level (or two). An open-ended question can inspire more specific questions, and a specific question can inspire more open-ended questions. Why is this important? You can engage everyone in the activity, regardless of how general or specific their questions tend to be. Also, when you leave only ten minutes to brainstorm questions, teams tend to stick to a single level instead of exploring other options.

To make this point, I share a table that looks like this (this is from the actual board we use in Miro):

[Image: Three Column Questions w/Spectrum]

The table has three columns: one for decisions, one for assumptions, and one for areas of performance and impact. For each column I give sample questions/assumptions along a spectrum of specificity.

For example, our assumption might be foundational to our whole business (“demand will increase over a decade”), or it might be as narrow as button placement (“that type of button always goes on the right”). We might be wondering about the efficacy of our whole strategy, or about the efficacy of a small workflow tweak.

To get warmed up, I ask participants to brainstorm three examples for each column.

  • Example decisions
  • Example assumptions
  • Example is-it-working type questions

But I add a twist. “Make sure to give me one super specific example, one super broad example, and one medium-specificity example.” Hopefully you can see what I’m up to. This is like an active stretching routine before a workout.

Completed for a DIY app pairing builders with kit designers, it looks like this:

[Image: All Questions]

With some examples on the board, we flow immediately to the next step.

“OK. Now pick one decision, one assumption, and one is-it-working question to explore further. Brainstorm three sub questions for each. Where must you reduce uncertainty? What questions would—if answered—help you unlock this puzzle? Or at least increase your confidence?”

I also remind them of their options: why, who, what, when, where, which, how many, how, how long, do, are, will, have, should, and is.

[Image: Sub-Questions]

This two-step process—exploring the categories and levels, and then brainstorming sub-questions—gets people thinking more laterally and makes them more willing to go up and down question levels. It is much better than jumping straight into questions.

If a team is having trouble, or if they just want more practice, I bring out these trusty fill-in-the-blanks.

  • How many users ____ed in the last 30 days?
  • Where are new users dropping off in the ____ funnel?
  • Do ____ and ____ impact long-term retention for ____s?
  • How well do ____s retain compared to ____s?
  • Are users who ____ more likely to go on to ____?
  • What’s the average number of ____s per ____?
  • Where do customers go after ____, and do they end up ____ing?
  • What unique customer behavior predicts ____?
  • When ____ did we adversely ____?
  • Are people actually ____ing, or are they just ____ing?
  • Where/when are customers having trouble when attempting to ____?
  • Are our efforts to ____ resulting in ____?
  • Is what we released causing ____, or is that just ____?
  • Is there any low-hanging work that would let us ____?
  • Are we on track to ____?
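
If it helps to see where a filled-in template eventually leads, here is a hedged sketch (the events, property names, and the tutorial/publish scenario are invented for illustration, not a prescribed schema): take “Are users who ____ more likely to go on to ____?” filled in for the DIY kit app, and notice that answering it only requires two well-chosen events tied to a user identifier.

```typescript
// Hypothetical: "Are users who watch a tutorial more likely to go on
// to publish a kit design?" The question tells us exactly which two
// events (and which identifier) we need to capture.
interface AnalyticsEvent {
  userId: string;
  eventName: "Tutorial Watched" | "Kit Design Published";
  timestamp: number; // Unix ms; a fuller version would also check ordering
}

// Compare publish rates for users who did vs. did not watch a tutorial.
function publishRateByTutorial(
  events: AnalyticsEvent[]
): { watched: number; didNotWatch: number } {
  const allUsers = new Set(events.map((e) => e.userId));
  const watchers = new Set(
    events.filter((e) => e.eventName === "Tutorial Watched").map((e) => e.userId)
  );
  const publishers = new Set(
    events.filter((e) => e.eventName === "Kit Design Published").map((e) => e.userId)
  );

  let watchedAndPublished = 0;
  let notWatchedAndPublished = 0;
  for (const id of allUsers) {
    if (!publishers.has(id)) continue;
    if (watchers.has(id)) {
      watchedAndPublished++;
    } else {
      notWatchedAndPublished++;
    }
  }

  const nonWatcherCount = allUsers.size - watchers.size;
  return {
    watched: watchers.size > 0 ? watchedAndPublished / watchers.size : 0,
    didNotWatch: nonWatcherCount > 0 ? notWatchedAndPublished / nonWatcherCount : 0,
  };
}
```

The code itself does not matter here; what matters is that the question dictates the small set of events worth instrumenting.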

Together, these activities leave people a bit more confident about brainstorming questions.

For the remainder of the workshop we do the following:

  • Brainstorm additional decisions (“should we…”), assumptions, and is-it-working type questions.
  • Share and discuss. Tweak. Repeat.
  • Dot vote or place Monopoly money to “pay” for valuable questions.
  • Brainstorm sub-questions individually. Shoot for high volume.
  • Review sub-questions as a group, and refine in pairs. Prioritize.
  • Rinse and repeat until time runs out.

By the end of the workshop, we typically have lots of questions and sub-questions, but we also have some sense of which questions are valuable. We prioritize where it will be valuable to learn more. Perhaps even more importantly, we learn which “classes” of questions are valuable. By that, I mean we learn the nouns, verbs, workflows, and goals that are most important.

This is one of the secrets to a smart instrumentation approach.
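
As a rough, hypothetical sketch of how that plays out (the nouns, verbs, and event names below are invented, not a recommended taxonomy): once the workshop surfaces the nouns and verbs that matter, they translate almost directly into a short list of candidate events.

```typescript
// Hypothetical: prioritized nouns and verbs from the workshop become
// a short list of candidate events -- not an exhaustive tracking plan.
const importantNouns = ["kit design", "build", "designer profile"] as const;
const importantVerbs = ["publish", "start", "complete", "follow"] as const;

// A handful of noun + verb pairings that our prioritized questions
// actually need; everything else waits until a question demands it.
const candidateEvents: string[] = [
  "Kit Design Published",
  "Build Started",
  "Build Completed",
  "Designer Profile Followed",
];

console.log({ importantNouns, importantVerbs, candidateEvents });
```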

No, you can’t (and shouldn’t) instrument everything. No, you shouldn’t record every click. No, you can’t predict every question in advance. But you can, guided by good questions, instrument the things that unlock the “long tail of insights.”

There is a misconception that the relationship between what is instrumented and the insights you can unlock is linear. Instead, a handful of intelligently instrumented events can enable a wide range of insights and metrics (back to our Fitbit example).
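
To illustrate that non-linear relationship, here is a small hedged sketch (the event shape and metric names are invented for illustration) showing how a single carefully instrumented event can feed several different metrics with no additional tracking.

```typescript
// Hypothetical: one well-instrumented event, several metrics.
interface KitPublishedEvent {
  userId: string;
  kitCategory: string;   // e.g. "furniture", "lighting"
  usedTemplate: boolean; // did the designer start from a template?
  publishedAt: string;   // ISO date, e.g. "2020-12-22"
}

function metricsFromOneEvent(events: KitPublishedEvent[]) {
  // Metric 1: reach -- how many distinct designers are publishing?
  const uniquePublishers = new Set(events.map((e) => e.userId)).size;

  // Metric 2: category mix -- where is the demand?
  const publishesByCategory = new Map<string, number>();
  for (const e of events) {
    publishesByCategory.set(
      e.kitCategory,
      (publishesByCategory.get(e.kitCategory) ?? 0) + 1
    );
  }

  // Metric 3: feature adoption -- is the template feature pulling its weight?
  const templateAdoptionRate =
    events.length === 0
      ? 0
      : events.filter((e) => e.usedTemplate).length / events.length;

  return { uniquePublishers, publishesByCategory, templateAdoptionRate };
}
```

Three different kinds of metrics (reach, mix, and feature adoption) fall out of the same handful of properties, which is the “long tail of insights” in miniature.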

Good questions guide the way! Keep practicing!


The full series:

Part 1: Measurement vs. Metrics

Part 2: Use a Mixed Pattern Approach to Instrumenting your Product

Part 3: Keeping the Customer Domain Front and Center

Part 4: Learning How to “See” Data

Part 5: The Long Tail of Insights & T-Shaped Instrumentation

Part 6: Asking Better Questions

Part 7: Working Small and Working Together

About the Author
John Cutler
Former Product Evangelist, Amplitude
John Cutler is a former product evangelist and coach at Amplitude. Follow him on Twitter: @johncutlefish