
139 / User Experience Research: AI’s New Frontier, with John Haggerty & Prerna Singh

Hosted by Paul Gebel

Listen

About


John Haggerty and Prerna Singh

Experienced Product Leaders

John Haggerty, founder and CEO of The PM Insider, is a seasoned product leader who offers tailored insights derived from a proven track record of driving innovation and strategic growth. His expertise lies in integrating business constraints across multiple domains, deeply understanding user motivations, and guiding product strategy with data-driven insights. John is also a course instructor, leading interactive sessions and creating assignments and discussions aimed at helping students apply practical skills in product management.

Prerna Singh is a product professional and product leader with a demonstrated ability to bring teams to their goals. She is experienced in combining strategic insights and human-centered design principles to deliver high client and product value. Prerna has contributed her expertise to product teams at Meetup, Quartz, Mashable, and IBM. She also serves as a course instructor at Gigantic.

Back in episode 132 of Product Momentum, Janna Bastow talked about using AI tools to do much of the “grunt work” product managers and UX researchers do, so that they can spend more time on the higher-value work that’s actually helping to transform product building. In this episode, John Haggerty and Prerna Singh go a bit deeper, explaining how AI can expedite – and simplify – those mundane, repetitive tasks to analyze qualitative data compiled from reams of user experience research.

John and Prerna will conduct a pair of workshops at the ITX Product + Design Conference, in Rochester, NY on June 27-28.

Leveraging AI for Customer Research

John’s workshop will include a comprehensive overview of AI applications in product management, covering key topics like product feedback analysis, churn prediction and retention, risk assessment, competitive intelligence, etc. “AI is really good at doing things like sentiment analysis, topic modeling, named entity recognition,” he says. But it can be a lot to take in. “The best, fastest way to get familiar with AI is to just play with it. Just have fun, go out and use it, whatever it is.”

Embracing the Data We Already Have

Prerna’s workshop in Rochester will help attendees understand the data we already have and how we might leverage it to make better customer decisions. Gathering customer research doesn’t need to be some extensive, intensive effort, she says, “but is really something that we should be doing on a continuous basis to make higher quality decisions.”

Bias, Ethics, and AI

Both John and Prerna stress the importance of understanding AI biases, ethics by design, and ensuring equity in training data. They also highlight the significance of “preserving human elements in user research,” such as non-verbal cues and emotional feedback, to maintain genuine human connections.

Be sure to catch the entire episode to grab a few tips from John Haggerty about AI prompt engineering, and learn why Prerna Singh believes humans are becoming more comfortable responding to an AI researcher than to another person — and the new frontier of opportunity this creates. You can also watch our episode with John Haggerty and Prerna Singh on the Product Momentum YouTube Channel!

Paul Gebel [00:00:19]  Hello and welcome to Product Momentum, where we hope to entertain, educate, and celebrate the amazing product people who are helping to shape our community’s way ahead. My name is Paul Gebel and I’m the Director of Product Innovation at ITX, along with my co-host Sean Flaherty and our amazing production team and occasional guest hosts. Every two weeks, we record and release a conversation with a product thought leader, writer, speaker, or maker who has something to share with the community.

Paul Gebel [00:00:43] Hey everybody! I am really excited to share this conversation I had with two special guests today, John Haggerty and Prerna Singh. They’re both workshop leaders at the upcoming Product + Design conference here in Rochester, New York, June 27th and 28th. Their workshops are taking place on the 27th, but we’re really excited to share some of the ideas that they’re going to be unpacking in each of their respective workshops.

The ideas that we’re talking about today are centered around AI, but some of the edge cases are in the human areas of discovery, empathy, and storytelling: working through how we as product professionals can look at this tool not as something that replaces humans but as something that amplifies their productivity, and really approach it with a playful sense of exploration and experimentation. I really think you’re going to enjoy this conversation today, and if you’re in Rochester, check us out at itx.com/conference2024, where you can learn more about both of them and what else we have planned for the conference later in the summer. Thanks, and we’ll get this started.

Paul Gebel [00:01:47] Well, hey folks, and welcome to the pod. Today I am excited to be joined by two very special guests. First, John Haggerty. A seasoned product leader, John offers tailored insights derived from a proven track record of driving innovation and strategic growth. His expertise lies in integrating business constraints across multiple domains, deeply understanding user motivations, and guiding product strategy with data-driven insights. John’s a course instructor, leading interactive sessions and creating assignments and discussions aimed at helping students apply practical skills in product development. I’m also joined today by Prerna Singh. Prerna is a product professional and product leader with a demonstrated ability to bring teams to their goals. She’s experienced in combining strategic insights and human-centered design principles to deliver high client and product value. Prerna has contributed her expertise to product teams at Meetup, Quartz, Mashable, and IBM, and she also serves as a course instructor at Gigantic. Thanks so much for joining us today. I really appreciate you guys taking the time. Welcome.

Prerna Singh [00:02:46] Thanks so much for having us. Appreciate it.

John Haggerty [00:02:48] Yeah, this is gonna be fun.

Paul Gebel [00:02:50] I’m looking forward to it. So, we chatted very recently prior to this show, just to talk through some of the ideas we’re finding ourselves at the intersections of. And you two bring really unique perspectives, each from different angles, but kind of converging on some core ideas. To kick us off toward where I think we’re going to end up, I’m going to start with you, John, and take us into some of the ideas around AI specifically and how we might leverage AI in enhancing customer research and discovery. Can you give us a peek into your brain about how you’re starting to think about AI, not just as a tool for enhancing productivity, but as something that’s really next level, almost with a psychological bend? That’s where I thought we were starting to go. Maybe unpack that for us to get the conversation rolling.

John Haggerty [00:03:42] What I really like to use it for is getting into those deep understandings of what’s in your qualitative data, which especially comes out of consumer research. And, you know, AI is really good at doing things like sentiment analysis, topic modeling, named entity recognition, like running a Latent Dirichlet Allocation to identify the main topics or themes that were discussed in the feedback you received. That’s so powerful for understanding what’s going on. But even more than that, what I find really awesome is doing a combination of things, where you could do some clustering and segmentation of users. Take the example Rahul Vohra uses for his Product Market Fit engine, where you look at who the people are that absolutely love you, the ones who say they’d be extremely disappointed without you, and look at the things they love. If you can do named entity recognition or aspect-based sentiment analysis on what they’re saying and pull out the positive tone, then flip it: for the people who would only be somewhat disappointed, run the same sentiment analysis, topic modeling, and named entity recognition and pull out the negative aspects. Now you’ve got the things you need to double down on for the people that love you, and the things you need to improve for the people that almost love you.
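The fan-versus-near-fan split John describes can be sketched in a few lines of Python. This is a minimal illustration with invented survey data, not any tool he names: a real pipeline would use topic modeling (such as LDA) or aspect-based sentiment models rather than the crude word counts here, and the `segment_feedback` and `top_terms` helper names are hypothetical.

```python
from collections import Counter

STOPWORDS = {"the", "a", "i", "it", "is", "and", "to", "of", "so", "very", "would", "be"}

def top_terms(responses, n=3):
    """Crude keyword extraction: count non-stopword tokens across responses."""
    counts = Counter(
        word
        for text in responses
        for word in text.lower().split()
        if word not in STOPWORDS
    )
    return [word for word, _ in counts.most_common(n)]

def segment_feedback(survey):
    """Split PMF-survey responses into the two segments John describes:
    what to double down on (fans) and what to fix (near-fans)."""
    fans = [r["comment"] for r in survey if r["answer"] == "very disappointed"]
    near_fans = [r["comment"] for r in survey if r["answer"] == "somewhat disappointed"]
    return {
        "double_down_on": top_terms(fans),
        "improve": top_terms(near_fans),
    }

# Invented example data in the shape of a "how disappointed would you be?" survey.
survey = [
    {"answer": "very disappointed", "comment": "the search speed saves me hours"},
    {"answer": "very disappointed", "comment": "search speed plus keyboard shortcuts"},
    {"answer": "somewhat disappointed", "comment": "mobile app crashes constantly"},
    {"answer": "somewhat disappointed", "comment": "mobile notifications arrive late"},
]

print(segment_feedback(survey))
```

The output pairs each segment with its most frequent terms: what the fans rave about versus what the near-fans complain about.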

Paul Gebel [00:05:08] Love that. And Prerna I know customer research is a topic near and dear. What thoughts do you have about kind of how can we leverage these new ideas and conversations we’re having in ways that help the users ultimately feel more engaged with the products that we’re developing?

Prerna Singh [00:05:21] Yeah, I love this question a lot. John talked a lot about how we could use AI for analysis. What I’m really excited by, and where I believe the new frontier in customer research and discovery actually is, is how we can leverage AI to moderate conversations with participants. There’s this whole field of AI-moderated research, and as a user researcher and someone who really loves understanding how people interact with technology, this is in itself very meta: you have AI actually asking the questions. There are a couple of tools I’ve come across recently that are really doing this and pushing the envelope on what I believe will be the next frontier in how we do user research and discovery, where they’re having the AI ask text-based questions, people respond with voice, and then the AI follows up with more text-based questions. And I think this is super interesting because, from a UX research perspective, in my training as a UX researcher I was trained to figure out how to ask non-leading questions and prevent biases.

And ultimately, we’re human, and I’ve done user research for many, many years now. I still sometimes slip up in a user research study and ask a question where I’m like, oh, dang it, I really shouldn’t have asked it that way. What I find fascinating is that now we can actually train AI models to ask questions a certain way. And I think that will hopefully raise the quality and the bar of the data that we get back, because ultimately that’s the data we’re using to make these really critical product decisions. So, I’m really fascinated by this. The one thing that is very interesting, and that we’ll have to continue to watch, is how people are engaging with the AI. But based off of the research that I’m looking at right now, I think people are actually feeling a little bit more comfortable talking about their thoughts, their feelings, their experiences to an AI rather than having to tell a human about things they may have feelings about. So there’s lots to unpack in this, but I’m really excited about this new frontier we’re approaching in AI user research.

John Haggerty [00:07:44] And I’m super curious whether anyone is going to jump on ChatGPT-4o and do the monitoring of someone’s usage of a product while doing the user interview, leveraging an AI in that aspect, because it can both interpret the visual state of what’s happening and then create the context around it to be able to ask those open-ended questions to get the person talking.

Paul Gebel [00:08:09] Yeah.

Prerna Singh [00:08:10] Yeah. Sorry. Go ahead.

Paul Gebel [00:08:12] Yeah. I was just going to say, since you went there, let’s go there. As of the time of this recording, it’s the middle of May 2024. ChatGPT-4o just kind of dropped out of nowhere. And, you know, the teaser videos and the capabilities do seem absolutely wild, like next level on next level of how we’re interacting with these AI models now. And it’s opening doors we didn’t even know we could think to open before. So John, what did you mean by bringing that up? What could we tap into? What goes through your mind when you see an opportunity like this materialize?

John Haggerty [00:08:46] Well, I’m just thinking about when you set up a really in-depth user analysis, a user testing scenario where they’re playing with either the prototype or the live data or whatever type of environment it is, with eye tracking and every mouse click and all that. All of a sudden you combine that with an AI like ChatGPT-4o that’s able to see what’s happening, and then layer the question-asking and interactivity on top of it, to be able to go deeper and start asking the why questions in an automated, very scalable manner.

Paul Gebel [00:09:24] And that’s a perfect segue to something that we were chatting about before the show as well. At the end of the day, no matter how human or thinking or expressive an AI ends up becoming, it’s still just math. It’s still just a robot. It can mimic empathy. It can mimic storytelling. But at the end of the day, it’s going to be expressing a model that it’s been programmed to do, that it’s been trained to do. Taking a little bit of a Luddite perspective, not to take us back to pre-AI days, but what are those things that you think are important to preserve in the human element of these conversations? I’m excited about the technology as much as any other product professional watching these things develop. What are some things we should be on the lookout to preserve and protect, what’s important in UX research?

Prerna Singh [00:10:23] I love this question too. And I think what’s interesting is that you see more and more of these tools. I always believe that the role of technology is not to replace human connection, but to augment it. And that’s also true in customer research and discovery. Even in some of the research that’s been done, the feedback participants have given about conversing with an AI is like, well, it was good and I felt comfortable, but there was no human feedback, whether explicit or implicit. There’s no one giving you facial expressions or prompting you along. If you don’t have that human connection back and forth, it can feel just like talking to a wall. You have no idea what the response is, and sometimes that back and forth is where the ideas really develop, where you can bounce ideas off of someone. People look for that confirmation. And there’s some really amazing brain research that’s been done on how well humans understand each other, to the point where your brain is processing certain verbal cues I might be using before you even know what my next words are; you have the ability to predict what I’m about to say just based on my verbal cues. So we still have a long way to go to be able to train computer models to do the kind of human computing that we’re doing without even realizing it.

Paul Gebel [00:11:57] Yeah, I’m not going to try to quote the number, but there’s a staggering percentage of information input that humans are able to process in any given conversation. Greater than half of the information we’re processing in a conversation is the nonverbal cues. It’s the body language. It’s the head nodding. It’s the kind of ‘uhs’ and ‘ahs’…

Prerna Singh [00:12:21] Exactly like what you were just doing. I’m like, I know exactly what you’re about to say next, even though you haven’t even said the word.

Paul Gebel [00:12:27] Actually, this kind of dips into the ethics of AI, and I don’t want to steer us too deeply there, because we’ll spend the rest of the time just unpacking how deep that rabbit hole can go. But I do want to touch lightly on it as kind of a first-principles model. I’m curious to hear both of your ideas here: if you look at sort of the Hippocratic Oath and the “do no harm” kinds of mantras or models or banners that we march under, what comes to mind as an ethic or a first principle of how we engage with humans through these new technologies? I’ll leave it open-ended; take it where you will.

John Haggerty [00:13:04] So for me, I would say it’s having the ability to understand and explain the system, to know enough about what’s going on. For example, one of my big passions right now that I’m exploring and playing around with is understanding the bias and heuristics that exist in the AI itself, what’s playing out there. Being able to know that, and have some understanding of what’s going on inside that black box when it’s non-deterministic, is hugely beneficial for anticipating what the outcome is going to be and whether or not it’s being used correctly and in the correct manner. I think that’s probably one of those first principles to look at. Besides that, for me, ethics have to be at the beginning and always in play: ethics-by-design principles within any AI development or usage that’s taking place.

Prerna Singh [00:14:04] Yeah, I agree, and I think the transparency is huge, right? In any system that is developed, whether it’s software or hardware, and now as we move into this world of artificial intelligence, we need transparency in how it’s working and what’s going into it as well. The ways that these models are being trained: how are we ensuring that we know what those are, and that there’s equity created within the data that these models are being trained upon? So that’s obviously a huge part of it. And I think there’s also some level of what we talk a lot about in the data world (I’ve done data product management for many years now), this idea of governance. How are we ensuring that if something isn’t up to the standard that we hopefully will collectively define at some point, it’s enforced? Who’s accountable for these things? These are all conversations that I think are taking place. But there are so many more open questions about the roles of different entities: what the role of OpenAI is, and what the role is of everyone who is using these models to power their own systems and features and products. How do we all collectively hold ourselves accountable to this?

Paul Gebel [00:15:26] Everyone, I just want to take a quick break in today’s conversation to share some exciting news about the upcoming Product + Design Conference, taking place Thursday and Friday, June 27th and 28th, right here in Rochester, New York. It’s going to be held at the Memorial Art Gallery, and it’s going to be spread over two days. Day one features a half-day design and product workshop series with Prerna Singh, John Haggerty, Ryan Rumsey, and Patricia Reiners. Day two is going to be a fantastic day of keynotes headed up by John Maeda, VP of Design and AI at Microsoft; Denise Tilles, coauthor of the new book Product Operations; and Ryan Rumsey, CEO of SecondWaveDive and author of Business Thinking for Designers. Sprinkled in throughout the day of keynotes, you’ll have the option to choose your own adventure. You can sit in on some live recordings of podcasts, you can network with some fantastic product and design professionals throughout the day, or you can catch one of our three fireside chats discussing some of the important themes and topics in our space that we’ll be touching on throughout the day. To reserve a seat for you and a friend, or maybe treat your whole team to two amazing days of learning and networking, head on over to itx.com/conference2024. That’s ITX dot com slash conference 2024. Looking forward to seeing you there. And let’s get back to the show.

Paul Gebel [00:16:46] I’ll editorialize only briefly. I don’t want to inject too much of my own, but I do worry a bit that we’re following the same patterns. For example, with Airbnb, it was an illegal hotel service until it wasn’t. With Uber, it was an illegal taxi service until it wasn’t. So I feel like we’ve followed a pattern of pushing into these disruptive areas and just taking the territory, and it becomes useful, almost indispensable as a utility, a commodity, and then we can’t get rid of it. But at that point, it’s too late to unwind and put the ethics back into it. So that does keep me up at night a little bit.

Prerna Singh [00:17:27] I think we’ve seen this over and over again, right? Even Facebook has long struggled with this: “are you a platform, or are you an actual news media organization that has some responsibility to their members and to the people on the platform about what news they decide to show?” It’s a Pandora’s box. John was right: we need to be leading a lot of these conversations with ethics as the primary motivator, rather than it coming after the fact, after we’ve found ourselves in situations that are very, very gray. And who’s to say that the person deciding is actually the most impartial person to make decisions about where things are ethical and where they’re not?

John Haggerty [00:18:16] Yeah, that’s why at the last two organizations I was at, one of my principles was doing pre-mortems for product initiatives to understand what the worst-case scenario could be. How do we fail? What do we do? That type of doomsday pre-planning. But what I also brought into that at the last couple of organizations, with my AI products, was looking at it from the perspective of what ethical problems could come up. So not just looking at it as a product failure or a business failure or a technology failure, but looking at it from an ethical failure: whether it’s the collection of data, the usage of data, or bad press or media relationships because of what we’re doing or how we’re doing it. Bringing those ideas into that pre-mortem discussion becomes extremely important. Then we know we’re having that ethics-by-design from the beginning, from inception, moving forward.

Paul Gebel [00:19:21] We’ve been living in the stratosphere for the past 15 or 20 minutes or so, and I want to bring us back down to something practical. Out there, there’s a product owner on a scrum team trying to figure out, what does this even mean for me and my day to day? I learned something from you yesterday, John, and I’m hopeful that you might be able to unpack it a little bit. You shared a way to prompt engineer, and I’ve used it since you shared it with me yesterday to surprising success. It’s a very small tweak, but it’s changed the way that I’ve started to think about prompt engineering. So just to take some of these ideas and make them practical: could you share some of the prompt engineering techniques you mentioned a few minutes ago? How can people take some of these big ideas back to their teams and do something with them in their day to day?

John Haggerty [00:20:11] Well, first of all, my recommendation for everyone is to just play. Just have fun, go out, use it, whatever it is. If you can’t get your data in there, there are places to go find data. Kaggle is a great resource for getting generic data that you can play around with, and just explore and try different things with it. What I learned last week, which was so interesting, from talking to an ex-Google engineer who’d worked on some of their AI products, is that the words you choose have different meanings for the system. ChatGPT’s one underlying goal is to answer your question. No matter what you propose, it wants to get you an answer. That’s why it’s almost impossible to get an “I don’t know” or an “I’m sorry, I can’t answer that,” unless you’re pushing the envelope on permissibility and things like that. With just a generic question, it’s always going to try to answer it. So the important thing, and what I’ve been teaching, is: choose your words. Asking it to summarize actually frees the system up to make up more information, whereas asking it to extract information or details from whatever you’ve presented to it is different. They’re similar words, but they have two different meanings for the system.

So you want to be specific. You want to be goal-oriented. You want to provide relevant context. You want to be clear on how you format what you’re asking and the results, the outcome, that you want out of it. The other thing that’s really important, and one of the things I’m going to be teaching on when we get together in June with ITX, is giving the system room to think. With ChatGPT or Claude 3 Opus or any of those large language models that are text-based, when it starts putting words on screen, it’s locked in. It’s not going to go back and rewrite that sentence thinking, oh, I could word that better. Once it’s there, it’s done. So give the system room to think: ask it to do an outline first and then create the text from it, which gives it room to lay out the full argument. If it creates the title of something first, the rest of the text is going to conform to that title; it’s never going to go back and rewrite that title once it’s gone through. So asking for the title afterwards is something that’s important for getting a better flow and more overall context in the answer you get from it. And the other thing is, I always say, just play with it. Just keep going through it. Start with a zero-shot prompt: quick, easy, simple. Move on to a few-shot prompt, where you give it more details, more things to think about. Personally, I like using chain-of-thought prompting, which is where you actually lay out specific tasks.
So, like earlier when I was talking about doing different types of sentiment analysis, or the different tools for breaking down that qualitative data: ask it to do those specific things, lay it out, and be very tactical in how you ask, step by step, to get to your end result. It’s going to give that system time to think and do the work the way you would do it, which allows for better, more accurate responses.
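John’s progression from zero-shot to few-shot to step-by-step prompting can be sketched as plain prompt-building functions. This is an illustrative sketch only: the function names and the sample feedback are invented, and real chain-of-thought prompts would be tuned to the model you’re using.

```python
def zero_shot(task, data):
    """Quick and simple: state the task and hand over the data."""
    return f"{task}\n\nFeedback:\n{data}"

def few_shot(task, examples, data):
    """Add a few worked examples so the model can infer the expected format."""
    shots = "\n".join(f"Feedback: {inp}\nLabel: {out}" for inp, out in examples)
    return f"{task}\n\n{shots}\n\nFeedback: {data}\nLabel:"

def chain_of_steps(steps, data):
    """John's 'room to think': lay out explicit, ordered sub-tasks
    (extract, then label, then conclude) instead of asking only for
    the end result."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return (
        "Work through the following steps in order, showing your work "
        f"for each step before the final answer:\n{numbered}\n\nFeedback:\n{data}"
    )

prompt = chain_of_steps(
    ["Extract the product aspects mentioned",   # 'extract', not 'summarize'
     "Label the sentiment of each aspect",
     "Only then propose one improvement"],
    "Checkout is fast, but the receipt email never arrives.",
)
print(prompt)
```

Note the choice of “extract” over “summarize” in the step list, echoing John’s point that the two words steer the system very differently.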

Prerna Singh [00:23:23] Yeah, I was going to say, to build on John’s answer, the idea of giving context is so critical. And that’s again one of those things that we as humans have so much rich information around, which makes us capable of doing our jobs in the manner that we do them. So one thing that I’ve actually found really helpful is approaching ChatGPT with, okay, I want you to build the same context that I have. One of the prompts that I’ve been using has been really helpful. It takes time, and this is one part where I think people like to use ChatGPT for shortcuts, like, oh, I just want a shortcut to blah blah blah. But actually, you get the most utility if you spend time with the system; you’ll get quality results back. So the prompt I’ve been using is like: hey, I’m a product manager working on XYZ product at this company, and I’m looking at this. What are ten questions you would ask me to build context? And then ChatGPT prompts me with the questions it needs to build the context. After we’ve done that exercise, because it understands and retains knowledge, I can go back and actually get it to do the thing I wanted to do: okay, now that we have this shared context and you understand it, here’s the task that I really need to get done. So it’s the same principles John was talking about: you need the context, and you need to give the system some time to think. It takes time; it’s an investment. Using it like a Google search bar, it can be used in that way, but the results you’re going to get are not going to be the quality you would want until you actually spend time with the system.
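Prerna’s two-phase context-building flow can be sketched as a message list in the role/content shape most chat APIs use. The function name and wording here are illustrative, not taken from any specific tool; in practice the model’s ten questions and your answers would fill the gap between the two user turns.

```python
def context_building_conversation(role_desc, task, num_questions=10):
    """Assemble the two-phase chat Prerna describes: first invite the
    model to interview *you* for context, then hand it the real task."""
    return [
        # Phase 1: ask the model to ask the questions it needs.
        {"role": "user", "content": (
            f"{role_desc} Before we start, ask me {num_questions} questions "
            "you would need answered to build context. Wait for my answers."
        )},
        # ...the model replies with its questions and you answer them here...
        # Phase 2: only now give it the actual job.
        {"role": "user", "content": (
            "Now that we have this shared context, here is the task: " + task
        )},
    ]

messages = context_building_conversation(
    "I'm a product manager working on the checkout flow at an e-commerce company.",
    "Draft a five-question survey about cart abandonment.",
)
print(messages[0]["content"])
```

The investment is front-loaded: the first turn costs you a round of question-answering, but every later task in the same conversation benefits from the shared context.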

Paul Gebel [00:25:03] Yeah. And Prerna, I want to stay with you for just a minute. John mentioned his workshop, and you’re going to be here in June in Rochester as well, talking about the ideas of customer research at any level. Specific to customer research: for a product owner or product manager on an agile team trying to get through roadmapping and backlogging and develop deeper insights, what comes to mind when you think of how we can help that early-to-mid-career product professional really level up their game in the areas you’re thinking about?

Prerna Singh [00:25:35] Totally. So, where I actually see the problem is that people think of user research as this ginormous, intensive effort that’s going to cost them time and money and resources. What I like to do is figure out how to make user research more accessible, because it doesn’t actually have to take that much time and that much setup. And it turns out that you most likely have more data than you think you do about your customers. So the workshop I’m going to be doing in Rochester is really intended to help you understand what data you have, and how you might be able to leverage it to make better customer decisions, so that it isn’t this extensive and intensive effort but really something that you should be doing on a continuous basis to make higher-quality decisions.

Paul Gebel [00:26:26] Yeah, I love that. I think the kinds of ideas we’ve been talking about, summarizing all the way back to the beginning, come down to balancing the need to do more with less: embedding these technologies in our products and really helping the people whose problems we’re trying to solve get through their frustrations, enhancing the delighters, creating a more streamlined set of table stakes so that people can get more out of their work and more value from the products that we’re developing for them. We just have a few minutes left, and I want to close on a bit of: what do you think about the future, and where can we say we’ve come in the past year or two since AI really went mainstream? We know it’s been around for much longer than that, but I think the average person, the average thought worker, is now using AI as part of their everyday vernacular. I’m going to ask you to look into your crystal balls for just a second and peek, maybe not years into the future, but: what are you hopeful for about what we’re looking at? I know we’ve talked a lot about the practicality and the philosophy and strategy behind it, but what’s exciting you about what might come next for product professionals and the careers that we’re trying to help people level up in?

Prerna Singh [00:27:45] I think what I get really excited by is helping uncover more blind spots, especially in the product management field. There are so many different things you’re constantly having to juggle and balance. So to have a system that sits as almost its own little platform layer, looking at everything across the board, looking at the analytics, and able to alert you and say, hey, actually, x, y, z is trending down, I’m seeing a combination of data factors — that’s a kind of prediction we can’t do today unless we’ve built some really robust data systems. But the everyday product manager who might be working at a startup or some early-stage company doesn’t have the robust data systems to alert them. I’m hoping we can find a layer of AI that sits within our systems to help predict what trends and patterns are happening, so we can jump on them in much sooner fashion.

Paul Gebel [00:28:49] And about you, John, what’s exciting you about this?

John Haggerty [00:28:52] So what’s exciting is being able to move beyond some of the repetitive tasks that are required of a product manager. For myself, when I was in IC roles and more recently as a leader, there was the expectation of spending 30 to 60 minutes every day in the data doing data exploration, what I call data spelunking: just digging around, getting curious, and going deep. Can I do that quicker, easier, and more effectively with AI? On the flip side, there’s getting into the qualitative data, knowing where to spend my time to get to the whys of what is happening quicker, easier, faster; updating stakeholders and keeping them informed; the communication side of it; helping me create the stories from the data and from the users. If I can not offload those things but augment them with AI, making it easier and more effective for me to do my job, that frees up my time to do that deep thinking, that maker work that requires uninterrupted time to go deep, to do the analysis, to do the work. I just think that’s going to help PMs bring more value to their teams and their organizations.

Paul Gebel [00:30:18] Yeah. I think you’re both doing an amazing job of helping the current cohort in the product community, the people starting now, level up their game over the next five, ten years. I think the product leaders who are going to be at the front of organizations ten years from now are going to have superpowers that today’s later-stage product managers can’t even imagine. I want to close with just a few opportunities that are on the horizon. You’ve both mentioned your workshops coming up here in Rochester, New York at the end of June, but later in the year we’re going to be hitting the road together to INDUSTRY Global in Cleveland. Besides those two events, where else can people find you if they want to take a peek into your work, your writing, and the other kinds of initiatives you’re spending your time on? Prerna, where can people find you if they want to learn more about what you’re up to these days?

Prerna Singh [00:31:10] Totally. Well, certainly find me on LinkedIn; that’s kind of where I spend my time these days. But I’m really into community building, so if you’re based here in New York, I host a product breakfast once a month for product leaders to come together and discuss topics exactly like what we talked about today. We’ve covered things like AI in product management, and how to bring up and train the next generation of product managers through initiatives like APM programs. So that is where you can also find me IRL. I also work with early-stage founders and high-growth teams, helping them bring customer discovery to the forefront of their processes and figure out how to either go from 0 to 1 if they haven’t found product-market fit, or figure out what comes next beyond the initial success they might have found. So I love working with those founders and am always open to having conversations with them.

Paul Gebel [00:32:05] Love it. John, where can people find you to learn more about what you’re up to?

John Haggerty [00:32:09] The funny thing is, it’s a pretty similar answer here. LinkedIn is the place I hang out. However, I’m not in New York, and Prerna, I did see the most recent picture of your product breakfast, and I was super jealous because some of my favorite product people were at that breakfast. So anyone in New York, I would highly recommend it. I’m in Minneapolis, and it’s the same thing: I’ve got a breakfast spot once a month, and we kind of alternate between breakfast and happy hour. There are a couple of other product groups here in town that I’m involved in where we get together IRL. Other than that, my focus right now is one of two things. I’ve got a little side project I’m working on, as I’ve mentioned, our little AI project focused on the ethical side of things and understanding behaviors from a bias-and-heuristics point of view. Beyond that, I’m looking for my next startup to join, someone post-seed maybe up through a B round who’s looking to grow up and grow out. That’s my wheelhouse: products a little later-stage than Prerna’s, where I like to help them mature and expand their product org.

Paul Gebel [00:33:15] You’re both adding incredible voices to the product conversation. It’s been a privilege to spend the last little while chatting with you and hearing both of your perspectives. Like I said, I’ve learned a lot just in the prep and in the conversation today. So thanks so much for taking the time to chat; I’m really grateful for the opportunity.

Prerna Singh [00:33:34] Thanks, Paul. Awesome to be here.

John Haggerty [00:33:36] Thanks for having us.

Paul Gebel [00:33:37] Absolutely.

Paul Gebel [00:33:40] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get. So please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.
