
33 / Learn Fast, Learn Well With Experimentation

Hosted by Sean Flaherty & Paul Gebel


Holly Hester-Reilly

H2R Product Science

Holly Hester-Reilly is the Founder and CEO of H2R Product Science, a product management coaching and consulting firm that teaches the science of high-growth product development. Holly is a former Columbia University research scientist and has led over a dozen successful digital product initiatives at startups, high-growth companies, and enterprises like MediaMath, Shutterstock, Lean Startup Co., and WeightWatchers. With those experiences, she has developed the Product Science Method, a framework to discover the strongest product opportunities and lay the foundations for high-growth products, teams, and businesses.

Her team at H2R Product Science partners with startup founders and product leaders to share this framework, helping them to figure out which product growth opportunities they should pursue and build the product management skill to deliver on their goals.

Holly also teaches public and private workshops and has spoken about building high-growth products for events such as Lean Startup Summit Europe, growth equity firm General Atlantic’s CIO summit, top boutique design and development agency Thoughtbot’s employee summit, ProductTankNYC, Parsons School of Design, and the Product School.

Be sure to tune in as Holly hosts The Product Science Podcast.

Experimentation is not about right or wrong. It’s about learning things that you genuinely didn’t know. The secret is to become comfortable with the uncomfortable and to make room for your own sense of vulnerability, says Holly Hester-Reilly. When you’re able to embrace not knowing something, or have experiments come back that disprove your hypotheses, you’re going to discover amazing insights that benefit you, your team, and your organization.

In this episode of the Product Momentum Podcast, Sean and Paul welcome Holly Hester-Reilly, Founder and CEO of H2R Product Science. In this dynamic and fast-paced conversation, Holly discusses her approach to the product science method, one that focuses on using science and empathy to manage risk while building high-growth products and teams.

“Our job as product people is to manage the risk of product failure,” Holly says. “Part of that risk is to avoid looking bad in front of our teams, peers, and managers. We have to shift the mindset and the conversation away from right or wrong so that we can begin to pride ourselves on learning new things.”

Product leaders have an enormous role to play in encouraging experimentation, Holly adds. “The only way for us to make that mindset shift is for us to be the example by calling out when the people around us learn something new and saying, ‘that’s what we want to see more of!’”

Listen in to catch more from Holly Hester-Reilly:

[02:16] The product science method. It’s really about the difference between what people say they will do and what they actually do.

[02:58] Design experiments around past behaviors, not abstracts and hypotheticals.

[04:51] The role of data and metrics. The cool thing about software is we can actually measure how users behave. The right metrics… that’s the best possible predictor of future behavior.

[07:42] Why smart companies with reams of data still make flawed product launches. They’re too comfortable.

[08:25] The Emperor’s New Clothes. Do we have the willingness to be uncomfortable, to be the person who will stand up and say to the boss, “here are the reasons why your pet project is going to fail”?

[10:12] Confirmation bias. Channeling Richard Feynman, “you must not fool yourself, and you are the easiest person to fool.”

[10:45] Rapid research. You have to be super-focused on the most important thing to learn and accept that you might not learn the other things.

[12:38] Exposure therapy. The more times that you’re exposed to something, the more comfortable you become with it.

[15:05] Optimism bias. It gets in the way of making good business decisions so, so much.

[16:14] How long does it take to change somebody’s mind about their pet project?

[17:49] The role of experimentation. It’s not about being right. It’s about learning things we don’t already know.

[21:19] Premortem risk assessment. Put yourself in a place where risk is already assumed to be real.

[22:34] Our job as product people is to manage the risk of product failure.

[24:02] The difference between good and fantastic product research.

[25:46] Take a snapshot. Make sure that your team is situating who your customer is within the strategy of the product.

[27:22] Practicing discovery. As a product leader, you should have a strategy that is a series of product-market fits.

[28:27] Measuring the value of research. Two parts: quantify the value of research and know when you’ve done enough of it.

[32:10] “Faster horses.” At least you know what outcome your users want.

[33:48] Innovation. Innovation drives a significant change. It doesn’t just increase the amount of something you’re selling: the revenue, the number of users. It changes the rate of that.

Sean [00:00:18] Hi, welcome to the Product Momentum Podcast, a podcast about how to use technology to solve challenging technology problems for your organization.

Paul [00:00:28] Well hey, Sean, how are you doing today?

Sean [00:00:30] Doing great, Paul. I’m excited to have Holly on.

Paul [00:00:33] This is a great conversation, I think one of the more scientific conversations that we’ve had to date.

Sean [00:00:38] Yeah. She’s got a great amount of experience in this space. I mean, it’s pretty much all she’s doing now, right, is running this company and talking about, how do we experiment better? How do we learn faster? How do we do the things that product leaders need to do?

Paul [00:00:50] And how do we be authentic enough so that it’s safe to propose an idea that could be wrong? You don’t have to be right all the time; you make it a place where you’re going to learn or you’re going to grow, but you’re going to get better either way.

Sean [00:01:03] I love it. And in this conversation, we get into cognitive biases like the confirmation bias and the optimism bias, which, you know, I suffer from. So this is going to be a fun talk. I think our audience will get a lot out of it, and let’s get after it.

Paul [00:01:14] Let’s get after it.

Paul [00:01:19] Well, hello, product people. We’re excited to welcome Holly Hester-Reilly. She’s the founder and CEO of H2R Product Science, a product management coaching and consulting firm that teaches the science of high-growth product development. Holly is a former Columbia University research scientist and has led over a dozen successful digital product initiatives at startups, high growth companies, and enterprises like MediaMath, Shutterstock, Lean Startup Company, and Weight Watchers. With those experiences, she’s developed the product science method, a framework to discover the strongest product opportunities and lay the foundations for high growth products, teams, and businesses. Welcome, Holly, we’re so happy to have you.

Holly [00:01:53] Thank you. I’m so happy to be here.

Paul [00:01:55] Awesome. Well, let me jump right in and ask, you know, when listening to the scope of your material, your talks, your podcasts, your blog, it seems like you’re striking this magic balance between what people want and what they’re willing to pay for and how you test this in the wild. How do you look at the world and how can we start to learn about the research that you’ve built your practice around?

Holly [00:02:16] Yeah. Thanks, Paul. So I really focus a lot on where people are actually going to take action. In many cases, that action doesn’t involve paying. But at the end of the day, it’s really about the difference between what people say they will do and what people actually will do. And the way that we look at that has a lot to do with behavioral science. So you go and ask a person if they’re going to go to the gym, and they have a gym membership and they bought it for a reason. And so they might say, “yeah, you know what, I’m gonna go to the gym this week.” But if you ask that same person, “did you go to the gym last week?” they’re going to be a lot more honest about whether they did go to the gym. So if you’re talking to a person who’s been struggling, who wishes they would go to the gym but hasn’t, you might get two different answers depending on what you ask them.

Holly [00:02:58] And that really is the detail that we focus a lot on: it is known from scientific research that people are more accurate about what they’ve done in the recent past than they are about what they will do in the future or what they did a long time ago in the past. So we need to design our experiments around what people have done in the recent past. We need to ask them questions about specific stories, things that they can remember, that they’ll have details around, that they can tell us about, instead of abstracts and hypotheticals. And this also includes how they pay and whether they pay. So a lot of times, if we’re working with a client of ours who’s trying to figure out a service they want to offer and charge for, we almost always include parts of our research that are around the last time that a potential customer bought something new, the last time that they spent their own money or, you know, acted with their company’s dollars, and finding out details around how that decision happened and what was involved in that, rather than asking them a hypothetical about whether they would pay for this thing that we’re thinking about.

Paul [00:04:00] So does this apply to B2B products as well as B2C? It sounds like this is immediately applicable to consumers making retail purchases and consumer digital product downloads. Does it have applicability at the enterprise level as well as the consumer level?

Holly [00:04:16] Yes, it does, and in fact, it’s even more helpful at that level. Because if you’re selling into enterprises or you’re selling into businesses, usually the sales process is way more complex. And the best way for you to get a sense of what that is is to find out what it was like the last time this customer bought something. So we will do this with B2B customers as well. And we’ll ask them to walk us through who was involved in the purchasing decision, whether they needed to get the signoff, how long it took to go from, “I want to buy this,” to, “I have been paying for this service,” asking these questions about the last time that they made a change in what they were paying for.

Sean [00:04:51] Reminds me of a quote from Margaret Mead. I use this quote a lot: “what we say, what we say we do, and what we do are entirely different things,” right. You’re talking about the purchase funnel specifically, like a sales funnel, in terms of asking what they’ve recently done, but the cool thing about software is we can actually measure how they behave, how they actually behave. If we have the right metrics around how they actually behave, that’s the best possible predictor we have of what they’ll actually do in the future, just like you said.

Holly [00:05:22] Yeah, it’s awesome how much we can measure. I’ll say I was very excited when I got to work with Weight Watchers because, I said to myself, “this is about as close to actually measuring the desired outcome as you could get.” You know, the idea that you know, most people, if they buy a software subscription, they buy it for the reasons of being entertained or getting something done. But you’re not, as the creator, necessarily able to measure whether they’ve achieved their desired outcome. You usually are measuring proxies for it, like, how many times did they do the thing? But you don’t know whether the thing they were doing, you know, got them the business they wanted, for example. But with weight loss, I was like, “this is amazing because we actually can measure like they’re putting in data about whether they lost weight.” So we know not only whether they’re using our product, but also whether it’s getting them to their goals. And, you know, that’s something that’s just incredible about software that you don’t get in other products.

Paul [00:06:12] Yeah. The corollary to that Mead quote, on the actual scientist side of it, is the Richard Feynman one. To paraphrase, I can’t remember exactly how it goes, but: don’t fool yourself, because you’re the easiest person to fool. I think the trap as a scientist approaching the problem is thinking you already know what the problem is. When you have an objective measure like Weight Watchers, there’s something tangible. But oftentimes the problem isn’t as clear, right. You need to figure out what people say they want to do, but also really get into the driving objective behind it. That’s the hard part.

Holly [00:06:43] Yeah, understanding the real motivation behind it. Nir Eyal describes this as, “at the end of the day, everything is really about fear.” Even when you’re drawn towards something positive, actually, you’re motivated by moving away from something negative as well. So getting down to those human motivators is a huge part of understanding what customers and users will do.

Paul [00:07:02] So I want to jump into some real business-level drivers for how to bake this process in. Most of our listeners are product people: they’re product owners, product managers, product development team members. And you’ve had hands-on experience at firms you’ve been a part of, like Shutterstock, as well as just observing industry moves like Dropbox Carousel and Amazon’s Fire Phone. These are big, smart companies with tons of data that still make entrances into the market that are flawed somehow. So how do you negate these blind spots? How do you start to break out that crystal ball and see what people aren’t seeing?

Holly [00:07:42] Yeah. So first you have to understand why this happens. I think really, before you can negate these blind spots, you have to educate the people involved in launching the new thing at the existing company about why new product launches so often fail. And there’s a lot of reasons, but they fail differently at existing companies than they fail at a startup that’s doing its first one. And one of the big reasons is that the people inside the company have comfort. They are not as uncomfortable as the people in the startup. They are getting paid every, you know, whatever the pay cycle is. You know, they’ve got some semblance of power and control over their day-to-day experience because they have an expectation that their job is going to be similar to what it was last month and the month before that.

Holly [00:08:25] And this does not drive the same behavior as when you’re in a startup where you’re literally saying, “I’m not going to have this job in six months if I don’t get this right, because if no one uses the product, then we’re not going to have funding and I’m not going to have this job.” That makes people much more open to being wrong than they are in that big company where everybody’s comfortable. And I don’t mean to discount that; there are a lot of difficult things that happen in big companies and a lot of ways in which people get uncomfortable. But it takes a bigger amount of willingness to be uncomfortable to be the person in the big company who will stand up to the person who’s got the pet project and say, you know, “here are the reasons why your pet project is missing an understanding of what’s really going to happen, and your pet project is going to fail.”

Holly [00:09:11] And so the first thing is understanding why that’s so hard. And then the next thing is saying, “well, despite the fact that that is so hard, that is still our job.” If we are a product manager, a product developer, or a product designer, we’re supposed to launch new products that succeed, and we’re gonna feel better if we do. So we need to figure out how to do that. At the end of the day, the thing that I always go back to is, you want to get evidence in small, manageable pieces. So you need evidence to drive evidence-based decisions. But you also need that evidence to come out in ways that the people who had backed this idea can swallow and integrate into their sense of self. You want them to feel some ownership over the research where you’re gathering that new evidence, and to be open to the idea that maybe, as a team gathering evidence out in the field, you can actually find something new that’s not exactly what the initial vision was but that is going to be successful, and that that’s what’s most important for the company. But I think the single biggest thing really does go back to that Feynman quote: “you must not fool yourself, and you are the easiest person to fool.” That’s confirmation bias, and at the end of the day, in an existing company, confirmation bias is going to run a lot more rampant because people are just more comfortable. They’re further from the pain of getting it wrong.

Sean [00:10:25] I love the way this thread is going. Your kind of focus in the product world is around experimentation. And you’ve been talking a lot lately about rapid research, especially right now, because we have a lot of fast, rapid decisions to make for all of our products. What are some tips that you have for us around rapid research?

Holly [00:10:45] So you’re right. I’ve been talking about that lately. And the thing that is important about rapid research is you cannot let the perfect be the enemy of the good. And, you know, the bigger and more established the company, the more likely it is that there’s an inertia there that does get in the way. But even in a small company... I mean, I was doing an exercise yesterday with some people. I said, “imagine you have just two weeks; what research would you do to answer these research questions?” And in some cases, with the list of things that they came up with and said they were going to do in two weeks, I was like, “I have worked with teams for, you know, over a dozen years. I’ve rarely seen any team get that much done in two weeks while still doing their day job of, you know, shipping product, coding, writing, designing, et cetera.” And I think, in many ways, that’s the thing. The thing about rapid research is you have to be super focused on what’s the most important thing to learn and accept that, you know, you might not learn the other things. So if there’s a known unknown that you think is the most important thing to learn, then you need to design research that learns just that one thing but learns it really fast, so that you can make the best decision following it, and then you can design research to learn the other things right after that in the next two-week cycle.

Paul [00:11:51] So this experimentation machine in organizations that you referred to, it’s got to come from a place where it’s not artificial. This is not a bolt-on skill, right. This has got to come from a place of authentic desire to learn. And I think this experimentation process, while we’re looking at known unknowns that we’re trying to make clear so that we can make product decisions about adding value to users’ lives, solving problems, addressing pain points, there’s certain DNA components that some companies have and some companies don’t, some product teams have, some product teams don’t. What are the attributes, what are the family traits of the experimenters that you’ve seen, that we can look for? And are they learnable? Can we teach these things at organizations?

Holly [00:12:38] So I want to say that everything is learnable like everything is teachable. But that doesn’t mean that every organization is in a place to be taught, or every person is in a place to be taught. That said, for any organization or person who is wanting to learn, even if certain things aren’t their default comfortable state, I do believe that they can change. They can learn. They can grow. They can get comfortable with it. I guess one of the ways that I think about this is exposure therapy. Exposure therapy means, you know, the more times that you’re exposed to something, the more comfortable you become with it just by virtue of being exposed to it continually.

Holly [00:13:12] So what I recommend for people who aren’t used to running experiments or people who are used to only running experiments that always, quote unquote, succeed, or are always right, is you need exposure therapy to being wrong. Like, you’ve got to put yourself out there into a place where you are learning things that you didn’t know. If you’re not doing that, you’re not getting high value learning out of your experimenting. Then you’re just doing, like, experimentation for show. And the more that you do that, the more comfortable you’re going to get with it and the more you’re gonna be able to communicate with the people around you why that’s okay, why it doesn’t mean that you stink at your job, you know, because you’re gonna get these learnings that you then can use. And I guarantee if you get comfortable with learning things that you genuinely didn’t know and having some of your experiments come back where your hypothesis is disproven, you are going to find insights that will give you higher growth for your business and your team and your company than you would if you’d never done that. And once you get used to doing that, once you do that enough, it becomes something that you can’t imagine being a product person without.

Sean [00:14:12] All right. So I think there’s an interesting confluence of ideas here in this concept of experimentation just for show, right, just to say that you did it because that is something I see being demanded of a lot of product teams to be able to show that they’re learning and that they’re doing some experiments. So you see experiments popping up that are clearly just to prove this thing that we already know. And so it’s like a waste. And you also talked about confirmation bias earlier. And there’s another bias that I think is rampant in the product leadership space, and it’s one that I often have. It’s called optimism bias. And I think it leads us to only look at the things that are going to have a positive result. What do you think about that?

Holly [00:14:48] Yeah.

Sean [00:14:49] I think with most product leaders... there’s somewhat of a bias in companies to want to hire optimistic product leaders, right? So a lot of them, not all of them, obviously, but a lot of them, are going to have these biases. How do you deal with that?

Holly [00:15:05] Yeah. I mean, it’s really interesting and I think you’re right. So first, just to elaborate a little bit on what you said: optimism bias gets in the way of making good business decisions so, so much. I think we can all imagine or remember times where, you know, maybe we made that mistake ourselves, thought something was going to work, and then went down the path of doing it and it totally failed. And we realized that we were not noticing the signs. And a lot of us have experienced our boss or our boss’s boss being that person with the optimism bias. And they just can’t see reality even though we see it so clearly, right. The way that I advocate for getting around that, and I’m actually going to go a little bit geeky here, I mean, not that I haven’t already, but further, is to calibrate your expectations based on your past experiences. So, you know, think about the way that an agile team sets how much they’re going to get done next sprint. A real, true agile team that has a consistent team and a good practice calibrates its estimation based on what it took the last time they had to do something similar, not based on what they think is going to happen next time. And that’s one of the key reasons why we use things like story points instead of days.
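A minimal sketch of that calibration idea, in Python. The sprint history, the use of a median, and the function name are assumptions for illustration, not a prescribed formula:

from statistics import median

# Story points actually completed in the last five sprints (made-up data).
completed_points = [21, 18, 25, 19, 22]

def forecast_capacity(history):
    # Forecast next sprint from past throughput. The median resists one
    # unusually good (or bad) sprint skewing the plan, which is exactly
    # the optimism bias being guarded against.
    return int(median(history))

print(f"Plan for roughly {forecast_capacity(completed_points)} points next sprint")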

Holly [00:16:14] I think about that as well as a product person. Think about what happened the last time that you had to convince a stakeholder that they were wrong. So I convince stakeholders that they’re wrong a lot, and one of the things that I’ve learned over the years of doing this is, I had to stop being optimistic about how long it would take. Like, the first couple times, I kept thinking to myself, “well, you know, that last boss or that last curmudgeon took, you know, six months, but this new project is with these really smart people so surely they’ll learn faster.” But then you do it and then you get there and then you realize, “no, this is what the process looks like.” It takes at least three months to change somebody’s mind about their pet project, usually much more than that. And no matter how smart they are or how scientifically minded they are, they still have optimism bias. They still have confirmation bias. They still have all these other things. And they still have their day jobs that aren’t all about examining whether they’re right or wrong. And it’s going to take you many, many months to change their mind.

Sean [00:17:17] Yeah. Admittedly, I’m one of these people. I call myself out here, I have a clear case of rampant optimism bias. No one on my team has ever accused Paul of that, by the way. He’s the other end of that scale.

Paul [00:17:28] Speaking of Yin and Yang.

Holly [00:17:30] That’s awesome.

Sean [00:17:31] But one quick thing. I also think we all have a tendency to not want to look bad. And this is ingrained in our culture and we have to fix this before we can, I think, powerfully experiment with anything. Because we’re gonna have this tendency to want to set up experiments to look like we were right. We have to fix that at its core and that’s a hard thing for any team to do.

Holly [00:17:49] Yeah, that is a hard thing for any team to do. But I believe the key is that, similar to behavior change, it’s harder if you say you’re just going to stop something and you don’t have a plan for how you’re going to replace what you do when you have the urges that made you do the thing you’re trying to stop. I think it’s the same for experiments and being wrong with experiments. We have to shift the mindset and the conversation so that we’re priding ourselves on learning new things, not on being right. If you go to several learning-sharing sessions and you’re constantly saying, “I was right, I was right, I was right,” but you haven’t brought new insights every time, then that’s actually not something to be proud of. And the only way for us to make that mindset shift is for us as product leaders to be the ones who are showing that example and calling out when the people around us learn something new and saying that that’s what we want to see more of.

Paul [00:18:41] That’s a great point. I think we need to shout that from the rooftops. I want to actually take that specific point and bring it right down to where the rubber meets the road. We’ve been talking about experimenting and learning and sharing ideas throughout the organization, but we can’t test everything, right. And we shouldn’t test everything. The way that you’ve filtered it, the things that should be tested and are worthy of testing are the risky things. And that sort of begs the question, what is risk? How do you put a gauge on this filter for what gets tested?

Holly [00:19:09] Yeah. So I love to talk about risk. I really do. And again, it’s one of these things that makes people uncomfortable, especially in business environments where we’re used to expecting that we show our worth by being right all the time. No one wants to go up to their boss and say, “Hey, I’m excited about this new project; here’s all of the potential upside, but it’s really risky. Can we do it?” Right, like that’s not what most people are saying. But I love to talk about it because I think that’s how we begin the process of taking away that risk: just being honest, being candid that it’s there. The way that I identify which risks are the biggest is actually a practice that I learned from Jeff Patton. I’ve seen it in other places as well, so there are other people who also teach it, but Jeff Patton is the one, you know, in the product context where I first came across this. Which is to say, we’re going to brainstorm a bunch of risks and then we’re gonna put them on a chart. And the chart is going to have one axis that says, how likely is it that this risk is going to come true? And the other axis is going to be, how impactful would it be to the business if it does come true? And so if we brainstorm a bunch of things that we think are big risks for our project or our product or our initiative, and then we identify which ones have that combination of being both likely to happen and impactful if they do happen, that means that those are the really big risks. Those are the things that we need to learn more about.
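A minimal sketch of that likelihood-and-impact exercise, in Python. The example risks and the 1-to-5 scales are assumptions for illustration, not part of Jeff Patton’s materials:

from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: int  # 1 (unlikely) to 5 (almost certain)
    impact: int      # 1 (minor) to 5 (business-threatening)

    @property
    def score(self) -> int:
        # Risks that are both likely and impactful float to the top.
        return self.likelihood * self.impact

risks = [
    Risk("Target segment won't change their current workflow", 4, 5),
    Risk("Enterprise sign-off takes longer than the runway", 3, 4),
    Risk("Feature is technically infeasible at our scale", 2, 5),
    Risk("Pricing model confuses buyers", 2, 2),
]

# The top of this list is what to design experiments around first.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}")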

Holly [00:20:27] Now, there’s another thing I do in this space. In order to encourage people to come up with risks, and to come up with good, honest risks, I make sure that the workshop where we’ll talk about risks involves designers, engineers, product managers, and stakeholders who need to be brought in, who need to be able to sign off, who have the power to stop a project. Anybody who’s gonna be involved in executing on that project: bringing them all together and asking them to think about ways that this would fail, and specifically giving them prompts about different types of failure. And those different types of failure that I talk about actually come from Marty Cagan and his valuable, usable, feasible, and viable framework. And so I say, “let’s talk about failure in those four categories.” And that really pushes people to think more deeply about all the different ways that things could fail.

Holly [00:21:19] And then one last thing that I do around this area of risk is I try to set the scene in the beginning of this conversation so that we try to put ourselves in a place where risk is already assumed to be real. So I say, “let’s say that we’re a year out from when this product has launched and it’s failed. We’re gonna assume it’s failed, so our job is to come up with the reasons and to come up with as many reasons as possible.” And to do this, you know, I call it a premortem risk assessment, but basically you imagine yourself doing a postmortem in the future when the thing has already failed. And so we try and get the teams to be in a place where they’re comfortable saying, “oh, here’s a reason it failed. Here’s a reason it failed. Here’s a reason it failed.” And again, it pushes them into that uncomfortable space of getting close to failure.

Sean [00:22:02] I like that idea of doing a postmortem in the future, like create a future where all these things went wrong. That’s a great idea.

Holly [00:22:09] Thanks.

Sean [00:22:09] I had a conversation with my CTO, just the other day, actually, about when you’re in security, managing that risk, you get fired when you don’t do your job. But when you’re doing your job, nobody celebrates you because they don’t really know. Like when you’ve done a great job of managing this risk, nobody really knows because you’ve done your job. You don’t see it. It doesn’t manifest. It’s like a tricky thing and I think risks are often overlooked because of that.

Holly [00:22:34] Yeah, people don’t want to think that it’s there, or they just want to ignore it, because when it’s managed well, yeah, it’s just taken for granted. I think our job as product people is to manage the risk of product failure. And I think we all know products fail all the time. There are a lot of product people who are hesitant to take on ownership, to say that if it does fail, that is something they should have been on the hook for. But at the same time, I imagine I’m not alone in having worked at companies where, when the product failed, the leader of that product branch lost their job, and so they were on the hook for that.

Sean [00:23:08] Yeah.

Paul [00:23:09] Yeah, and I think we overemphasize the risk of not doing anything, right. Everyone thinks they’re the next Netflix or the next Uber or the next fill-in-the-blank. We often think we’re missing out by not doing something and overemphasize the risk of not doing anything, so we take on inordinate risk and jump ahead without doing our due diligence and taking a more scientific approach. I want to talk about how this wraps up. So we’ve done experiments now. We’ve created a culture where it’s OK to be wrong, and we’ve started to make some learnings and share them out into the organization. Some of the steps that you talk about at the tail end of your process often get left on the table. You know, even the best user interviews that have some of the most clear insights about what people want don’t get shared. They don’t get synthesized and processed and turned into something usable. So how do you take all this knowledge that you’re gaining and apply it? How do you synthesize it and share it?

Holly [00:24:02] Yeah, that is a really great question, and it’s something that I think does make the difference between, sort of, good enough product research and fantastic product research: it’s not doing its job if people aren’t learning from it, and learning from it has to be more than just you as the person doing the research or the person who’s gonna make a decision from it. So I have a couple of key things I like to do. The first one is to do a snapshot after every interview. So when my teams do customer discovery interviews, we set aside time right after the interviews, at least fifteen minutes, where the researcher, the product person, the engineer, anybody who came to that research, debriefs after the interview, with the participant already gone, and asks, “what did you see and hear in this interview?”

Holly [00:24:45] And we use some frameworks for that. One of the frameworks that I sometimes use, I believe I got from the trio of Barry O’Reilly, Jeff Gothelf, and Josh Seiden in their leading business agility work, which was to say to yourself, “what are the facts, what are the feelings that I heard, what are the quotes that were interesting from this research, and what are the hypotheses that I have coming out of this research?” So a lot of times our teams will go through that and we’ll say, “OK, who’s the person we spoke to? What were the emotions that came out that might be motivating their behavior? What are quotes that really tell us what we learned from here?” And then for the hypothesis, what I’m usually having my teams do is hypothesize whether this person would use the product or whether they have this pain or, you know, whatever the question is that we’re trying to answer, hypothesize where they would fall on it. And I think this actually brings us back to the earlier question with that, which is to note that we didn’t ask them where they would fall on it. We don’t usually do that because they’re usually wrong. So instead, we’re talking afterward about what did we see? Did we think this person is in our target market? Do we think their behavior would match?

Holly [00:25:46] So that’s the first one, doing that debrief. And then from that debrief, creating a quick snapshot. A snapshot is like a one-slide image that has a picture of the person who was in the interview and brings out some of these quotes, facts, feelings, and hypotheses. One key thing that I always like to have my teams do in that snapshot is make sure that they’re situating who that customer is in the strategy of the product. So say we’re in a phase of our product roadmap where we’re expanding; with Shutterstock Editor, for example, we were creating an editing tool for nonprofessional designers to create great-looking designs, and our very first market was social media managers, and our next market was digital marketing managers. So we made sure when we were sharing research that we would say, “this is from a person who fits, you know, segment one; this is from a person who’s in segment two.” Just a reminder to the team at large: “Hey, this piece of research is telling us about a person that we are not planning to make super happy for three months. We’re going to make them really happy, you know, three months down the line. Today, we don’t need to meet their needs, but we need to know about them so that we can meet their needs with our next release.”
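A minimal sketch of what one of those snapshots might capture as data, in Python. The field names and example entries are assumptions for illustration, not Holly’s exact template:

from dataclasses import dataclass, field

@dataclass
class Snapshot:
    participant: str
    segment: str  # where this person sits in the product strategy
    facts: list = field(default_factory=list)
    feelings: list = field(default_factory=list)
    quotes: list = field(default_factory=list)
    hypotheses: list = field(default_factory=list)

snap = Snapshot(
    participant="Digital marketing manager, mid-size retailer",
    segment="segment two (targeted in the next release)",
    facts=["Creates ~10 social images a week", "No in-house designer"],
    feelings=["Frustrated by slow agency turnaround"],
    quotes=["When are you going to add text?"],
    hypotheses=["Would adopt once text editing ships"],
)
print(snap.participant, "->", snap.segment)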

Paul [00:26:54] Yeah. I think that’s super important, because the relative pain that people feel doesn’t necessarily always correlate to the value that you can deliver right then. So somebody might be having a bad time with the experience that’s being created. But because of the product-market fit problem that you’re solving, this persona versus that one, you’re solving the most important problem first because of the research and the data behind it. I think that’s a really key point that’s often missed. Great.

Holly [00:27:22] Yeah, people definitely have a tendency to, especially in the earlier days of practicing discovery, just jump on the person that had the loudest feedback, jump on solving the problem that seemed the most painful. But you really need to put it in context with your strategy. As a product leader, you should have a strategy that is a series of product-market fits. “First, we’re gonna hit this segment with this pain and this outcome. Then we’re going to hit this segment with this pain and this outcome, then we’re going to hit this segment with this pain and this outcome.” And you want to situate that research in there so that people know both the big picture and the what’s coming next.

Sean [00:27:55] Cool. How do we know when we’re actually learning? Like, do you have any insights in terms of like, how do you measure the value of the research that you’re doing? This is a hard problem for us. We all know we don’t do enough research. Like, pretty much every product owner or product leader that you speak to, they’ll all tell you, “we don’t do enough research, we don’t talk to customers enough.” And literally, I don’t believe you can do enough research, but you have a budget and you have to work with it. So how would you measure? How do you know you’re doing enough learning? How do you quantify the value of research?

Holly [00:28:27] Yeah. So I would take that as two separate questions, quantifying the value of research and knowing when you’ve done enough of it. They’re related, but a little bit different. I think it’s easier to talk about knowing when you’ve done enough of it, which is, I think that you know that you’re doing the right amount of research when your team is moving fast and making the outcomes move, you know, making the metrics move, and you are not surprised with any regularity by what’s happening with those metrics. That you know, “hey, when we launch this feature, we’re expecting this subgroup of people to act in this way and that’ll probably move this metric in this direction.” And you’re not going to know, “oh, it’s going to move by 5.7 percent.” Like you’re not gonna have that level of it, right. But like you are going to know, for example, when we launched Shutterstock Editor, you know, we launched it to social media managers. We knew that the thing that they were gonna say is we need text editing because our very first launch didn’t have text editing. And that’s what they said. That’s what almost everybody who was using it said. “This is great. We’re very excited. We use it for X, Y and Z things. When are you going to add text?” To me, that’s when you know you’ve done enough because they can use it for what you wanted them to use it for. They’re meeting the outcome that you have said is the outcome for this phase of your roadmap, and they’re asking about the next thing that you’d already put as the next thing on your roadmap.

Holly [00:29:45] With regards to the value, the value of research is really the savings of the wasted engineering effort of doing the wrong thing. It’s hard to put a number on that, though, because if you do good research, then you’re not actually going through the practice of the wasted effort. So that one’s really pretty challenging. And I’m curious if either of you have a way to measure that. I’m all ears because I would love to have that be more intuitive for people that I work with.

Paul [00:30:08] Well, it’s Aristotle’s proving a negative, right. If something didn’t happen, how do you know?

Sean [00:30:12] It’s like proving the value of your CTO or your Chief Security Officer, it’s the same concept, like how do you know? You’re in the space every day, so I thought I’d ask the question.

Holly [00:30:21] I wish there was an easier answer. I mean, I guess, you know, taking my own advice, the closest that I’ve gotten to it would be saying, you know, “how often have you spent time building the wrong thing, and how much did you spend on that?” And I’ve worked with clients where, I mean, tens of millions of dollars have been spent on building the wrong thing. So, you know, if you put it in that context, then you’re like, “so how much will you spend on research? Can you give me a budget for, you know, tens of thousands?” And I’m like, “OK.”

Paul [00:30:48] Certainly puts it in perspective.

Sean [00:30:50] That’d be interesting to look at. Do you know of any studies that correlate level of research to project size or project success? Even if we had some kind of semblance of, if you’re spending this much money on this product, you should at least spend this much on research.

Holly [00:31:03] I wish I did. You know, the truth is that in the high growth, you know, continuous discovery world, everybody’s so busy trying to make money and make their product move, they don’t do studies on it. And then the people that do the studies, they tend to be tapping into a completely different set of workers who are working in different ways. And so I know of none.

Sean [00:31:24] That’s a challenging problem to solve right there.

Holly [00:31:27] Yes, it is.

Paul [00:31:28] So we’ve just got a couple questions left and I wanted to throw a zinger at you to see what your response might be. As someone in the research and experimentation space, I’m sure you’re sick of hearing the apocryphal Henry Ford faster horse quote.

Holly [00:31:43] Uh-huh.

Paul [00:31:43] But what, if any, truth is there in it? And if you’re willing to share, what is the worst misapplication of that quote that you’ve seen? I think it’s one of those silver bullet quotes that people can pull out of their pocket as a gotcha in a conversation. So I obviously have opinions that I’ve already tipped my hand on. But I’m curious, what do you think about that specific, “if I asked people what they wanted, they’d ask for a faster horse” quote in this context.

Holly [00:32:10] Mm-hmm. Yeah. So basically, “if I had asked people what they wanted, they’d ask for a faster horse”: you know, the idea is that they would want to go faster, but they wouldn’t think of a different way to go faster. They would just say, “make the horses faster,” which, of course, is impossible. First of all, yeah, you’re right, I hear that quote and I’m like, “Oh, gosh, not again.” But the truth is, the thing that I hear in that quote is that, well, actually, that’s not a problem, because if a person tells you that they want a faster horse, you know they want to go faster, and now you know what outcome they want. So, you know, talk to them, because you’re never going to know what outcome they want if you don’t talk to them. And, you know, there are implications about what pains they’re solving by wanting a faster horse. They’re obviously solving the loss of time in taking a long time to get somewhere. And then you have the things you actually need to design a better solution than they could have come up with: what pain do you want to solve for them, and what outcome do you want to drive for them? Going back to one of your other questions in there, the worst possible use of it is when somebody says, “therefore, we shouldn’t do any research at all.” That’s the one where I’m just like, “oh, just no.”

Paul [00:33:11] You know, I think that is a really insightful take on that quote. I hadn’t really put that together until you just said it just now. When people say they want a faster horse, they’re telling you the outcome that they’re desiring. It’s one of those quotes that’s become just background noise and you don’t even hear it for what it’s saying anymore, but that’s a really great point. People are telling you what they want. They just don’t know how to get there.

Sean [00:33:32] You know, Henry Ford didn’t actually say that.

Holly [00:33:34] Yeah, I heard that, too, that it wasn’t a Henry Ford quote, but everyone thinks it was.

Sean [00:33:40] So a question for you: how do you define innovation? Because that’s what we’re really after here, right, is the next innovation.

Holly [00:33:48] Yeah. So that’s an excellent question. And I’m going to be upfront and say it’s not something I’ve spent a lot of time thinking about in that context. But what I do think about is what drives high growth. And to me, high growth is, in many ways, another word for innovation. It’s just that I’m trying to drop some of that baggage about, like, “well, innovation is something you can write a press release for.” Like, no, that’s not innovation. Innovation to me is something that drives a significant change. It doesn’t just increase the amount of something you’re selling: the revenue, the number of users, whatever metric you have. It changes the rate of that. You know, it goes from “we’re growing at 10 percent every year” to “we’re growing at 20 percent every year.” The thing that does that, to me, is innovative. The thing that just goes, “OK, we were growing at 10 percent every year and then this year we hit another 10 percent growth,” like, no, that’s just optimization. I don’t know if that makes sense, but that’s the way that I think of it.
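A small worked example of that distinction, in Python. The starting revenue and the growth rates are assumptions for illustration:

def project(revenue, rate, years):
    # Compound revenue at a constant annual growth rate.
    return revenue * (1 + rate) ** years

base = 10_000_000  # $10M revenue today

optimization = project(base, 0.10, 5)  # the rate stays at 10% a year
innovation = project(base, 0.20, 5)    # the rate itself jumps to 20% a year

print(f"Optimization (10%/yr for 5 yrs): ${optimization:,.0f}")
print(f"Innovation   (20%/yr for 5 yrs): ${innovation:,.0f}")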

Sean [00:34:39] No, it’s good.

Paul [00:34:40] Yeah, I like that distinction. I think that’s important to remember. I have one last question for you before we let you go. The learning that we do is founded on reading and talks and TED Talks and Mind the Product conferences, whenever they open back up again. But we’re curious: what’s a book you’ve been reading that would be worth sharing with the product leaders listening to this podcast today?

Holly [00:35:04] So I have to say, I don’t know if this is fair, but the book that I’m actually working on right now is not released yet. So when it comes out, then I will say you should read that one. But the most recent book that I really enjoyed reading would actually have to be Indistractable by Nir Eyal. I’m a big fan of Nir’s because he uses behavioral economics and behavioral science in the world of tech and products. And the thing about Indistractable, to me, is that it’s applicable no matter which of these roles you’re in. So it doesn’t really matter whether you’re high up or low down in the organization. It doesn’t really matter whether you’re a product manager or an engineer. It’s about fundamental life skills of how we work both professionally and in our families and our personal lives. So that would be my recommendation.

Paul [00:35:49] Great recommendation. Totally. Well, Holly, it’s been a pleasure talking to you today. I’m sure there’s some great learning that’s going to go on from those listening who tuned in. I want to thank you again for taking the time to share your insight with us. It’s really been a joy to spend some time digging into what makes your practice tick and the experiences that you’ve had that we can learn from.

Holly [00:36:08] Thanks so much, Paul and Sean. It’s been so much fun. I just love geeking out over these things, so.

Paul [00:36:13] Likewise.

Holly [00:36:13] Thank you for having me.

Paul [00:36:14] All right.

Sean [00:36:15] We are all product geeks; that’s what this is about.

Holly [00:36:18] Yes. I love it.

Paul [00:36:19] Cheers.

Holly [00:36:20] All right. Have a good one.

Paul [00:36:25] Well, that’s it for today. In line with our goals of transparency in listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get. So please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch and inspire the world, just like you’re doing. Thanks, everyone. We’ll see you next episode.
