
Special Edition / Taking an AI-first Approach to Product Development, with Yochai Konig

Hosted by Sean Flaherty & Kyle Psaty


About


Yochai Konig

Ada

Yochai Konig is the Vice President of Machine Learning and AI at Ada. He has over 30 years of experience in Machine Learning research, conversational AI, and Deep Learning. 

Prior to joining Ada, Yochai led the Genesys Innovation group as well as Genesys Advanced Research. Yochai was the co-founder and CTO of Utopy (acquired by Genesys), where he created the first Speech Analytics product “SpeechMiner” which started the Speech Analytics market. He started his career at SRI International as a Research Scientist where he served as Principal Investigator for Department of Defense (DoD) research projects in the areas of Speech Recognition, Speaker Verification, and Natural Language Processing (NLP). 

Yochai is the inventor of over 100 patents and the author of numerous academic papers. Yochai holds a Ph.D. in computer science from Berkeley, and a bachelor’s degree in computer engineering, summa cum laude, from the Technion, Israel Institute of Technology. 

Yochai Konig, Vice President, Machine Learning & AI at Ada, has worked in product and specialized in AI-enabled capabilities for more than two decades. All of a sudden, it seems, the rest of the world is catching on.

Yet despite the recent buzz, many remain confused about how AI truly works. There was a lot of effort to create AI by learning from how humans do things and somehow transferring those methods to a machine, Yochai explains.

“But AI is not about this. AI is not functioning as humans do…. Instead, it’s about ‘let’s make as few assumptions, and insert as little bias, as possible.’” In other words, let’s provide the raw data and the objective function and let the best AI in the world find the best way to do it.

“The purpose of AI is to serve the application,” Yochai Konig says, “meaning you have to set a measurable objective for the use of AI in the product.” It’s a theme that echoes comments from Pendo’s Todd Olson and Trisha Price in our earlier Live from Pendomonium episode, Entering the Age of Intelligence.

Yochai describes the field of AI as emerging – or, more precisely, emergent. But in doing so, he applies a unique approach to the definition. “In its most simplistic form, emergent describes what happens as the model gets to a different scale; and suddenly, new capabilities emerge.”

And that’s where we are with AI, large language models, and the like. As a community, we are making progress to understand this, he adds. “But some of these emergent capabilities just happened, the model developed this concept and has outputted it. Now, we’re trying to reverse engineer and explain how it is doing all of this stuff.”

Subscribe to Product Momentum now and be notified when the next episode drops!

Paul [00:00:17] Hello and welcome to Product Momentum, where we hope to entertain, educate, and celebrate the amazing product people who are helping to shape our community’s way ahead. My name is Paul Gebel and I’m the Director of Product Innovation at ITX. Along with my co-host, Sean Flaherty, and our amazing production team and occasional guest host, we record and release a conversation with a product thought leader, writer, speaker, or maker who has something to share with the community every two weeks.

Sean [00:00:41] Good morning, Kyle. How are you doing?

Kyle [00:00:43] Morning, Sean. We’re recording Live from Pendomonium 2023. Big shout out to our friends over at Pendo for having us out to the conference this year.

Sean [00:00:51] What a great conference this has been for us.

Kyle [00:00:53] Yeah. Really awesome. Really excited for this next conversation with Yochai Konig, the VP of Machine Learning and AI at Ada. This guy has been doing this stuff forever. Like, it’s super hot right now, but he’s been doing it forever.

Sean [00:01:07] Yeah, and he answered a bunch of burning questions that I have about AI. Like, why should we even bother with search bars anymore? Shouldn’t we all be moving towards chat?

Kyle [00:01:15] Right.

Sean [00:01:16] It’s a great episode. There’s so much good material.

Kyle [00:01:18] A lot of depth, like, practical stuff, but also philosophical stuff that I thought was really interesting.

Sean [00:01:24] Twenty-six years working in AI. Can you imagine?

Kyle [00:01:25] Yeah. Wow.

Sean [00:01:27] All right, let’s get after it.

Kyle [00:01:28] Let’s do it.

Kyle [00:01:31] Welcome to the Product Momentum Podcast. Really excited to be joined today by Yochai Konig. He’s the VP of Machine Learning and AI at Ada, the customer experience automation platform, where he’s been conducting research and leading the AI practice for almost three years. Before that, he founded Utopy, which pioneered the speech analytics market years ago and was acquired by Genesys. He was there for a long time after the acquisition.

Kyle [00:01:56] Large language models are the little black dress of product in 2023, but Yochai has been embedded in this space for over 25 years since he attended Berkeley to pursue a Ph.D. in comp sci in the early nineties. That’s when he got into Automatic Speech Recognition, ASR, and Natural Language Processing, NLP, which are now described as conversational AI. So a leading thinker in this space. Listen, we’re really excited to have you here today, Yochai.

Yochai [00:02:23] Excited to be here. Usually it's all work and build; finally I get to speak to people and try to share my lessons from actually building products.

Kyle [00:02:32] Yeah, it's awesome to have you. And I'm Kyle Psaty, I head up marketing for ITX, and we're joined today, of course, by Mr. Sean Flaherty, EVP of Innovation at ITX, who you all recognize from every episode of the podcast going back a hundred-something episodes. So thanks for being here, Sean.

Sean [00:02:49] Thanks, Kyle.

Kyle [00:02:49] So let’s jump into it. I think it might help to sort of start simple and say, can you talk to us a little bit about conversational AI and what is the space from your perspective today?

Yochai [00:03:02] This field is very hot given the large language models that are slowly becoming, what I call, multimodal. Mainly, we all know the chat form: we type something, we get an answer back. Now there is GPT-4 with vision, so we can insert visuals, but eventually we will be able to speak to these LLMs in any modality we want, and they'll give us their generation in any modality as well. So today it's separated: there are models doing images, there is some video generation, and there is obviously ChatGPT for text. But eventually it will be one model that you can give any modality and it will output any modality.

Kyle [00:03:39] Right. So we’ll be saying, “Can you draw me a diagram of that?” That kind of thing and it’ll be spitting it back out for us.

Yochai [00:03:44] Yes. So you can say, you can make a silly face and say, “Please describe my face.”

Sean [00:03:50] But I know very few people that have spent so much time working in a single domain. I mean, this is two and a half decades of your life. What do you think is the most important thing you learned about AI in 25 years, 25-plus years?

Yochai [00:04:04] It's a very good question. That AI is not functioning as humans function. There was a lot of effort to create AI, like expert systems in the past, by saying: let's learn from how we do things and try somehow to transfer that to a machine. It's not about this. It's: let's make as few assumptions as possible, let's insert as little bias as possible. Give the raw data and the objective function, and the best AI in the world will find the best way to do it. That way may not be applicable to human beings, given the constraints of our physical brains. So to summarize: predicting the text is the objective function. Give enough data, and don't bring any other human assumptions into the picture.
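The recipe Yochai describes, raw data plus an objective function and little else, is concrete in language modeling, where the objective is simply scoring the true next token. A toy sketch (the vocabulary and scores are invented for illustration):

```python
import math

def next_token_loss(logits, target_id):
    """Cross-entropy for one next-token prediction: -log softmax(logits)[target]."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target_id]  # equals -log p(target)

# Toy vocabulary: the model assigns a raw score to each candidate next token.
vocab = ["the", "cat", "sat"]
logits = [0.5, 2.0, 0.1]
loss = next_token_loss(logits, target_id=vocab.index("cat"))
print(f"{loss:.4f}")  # training means adjusting the model to push this down
```

Everything beyond this objective, which capabilities emerge and how, is left to the data and the optimizer, which is exactly the "minimal assumptions" point.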

Sean [00:04:53] Hmm. So injecting minimal bias and…

Yochai [00:04:56] Assumptions, yes.

Sean [00:04:58] And answering questions that really can’t be answered by a single human brain, right?

Yochai [00:05:03] Yes. If you think about it: can we memorize the whole internet in our brain? It's a little bit too big for our brains, even if you go to the most optimistic assumption about the size of our brain. That's where AI will excel, and we can go from there.

Sean [00:05:17] Yeah, I love it. You were just on stage and you talked a lot about all things AI. And I’m always curious when there’s someone on stage that has so much depth of knowledge as someone like you. Like, what’s the one question you wish people would ask you?

Yochai [00:05:31] This question, by the way. It's the question I always ask in interviews when I'm interviewing someone, and when I'm in the opposite position they ask me; you learn a lot from it. As a builder, I would ask more practical questions. People say, "Okay, you described the principle and the lesson you learned from applying it," and I would expect a fuller follow-up: which model do you use, how do you collect the data, more of the how-to. Maybe the audience here is more focused on the what, on what can be done, and wants engineering to handle all of the how. I'm still a scientist, still an engineer, and I'm still shipping products, so I think a lot about how to do it. And the questions were not about how; they were about what. This happened only 15 minutes ago.

Sean [00:06:28] No, it’s great that’s a great insight. You’re hearing a lot of the whats in a lot of the end products. Because you know, I think to be completely honest, I think we still don’t really, most of the world, even most of technology doesn’t really understand AI and how it works. It’s somewhat of a black box for most of us.

Yochai [00:06:45] Yeah, look at this term "emergent." That's basically how the industry speaks about how the model can do things without anyone really knowing how. So what does emergent mean? In the most simplistic form: as the model gets to a different scale, suddenly there are new capabilities. So what happens if the current models, let's say, are [inaudible]? What emergent capability will be out there, and can a human even predict it or understand it? That said, there is some effort around interpretability, around understanding why the AI is functioning the way it does. Some of the leading research there is from Anthropic. They will group neurons: "We think these groups of neurons are responsible for different dimensions and different functions." So we as a community are making some steps to understand this, but some of these emergent capabilities just happened; the model developed a concept and outputted it, and we try to reverse engineer and explain how it is doing all of this.

Sean [00:07:58] Yeah. How is it working? Figure it out.

Yochai [00:08:00] Yeah. And I think we'll make a lot of progress on it in the next few years, because that will be our answer for how we can control it, or how we can predict it, the moment we actually understand how it works.

Sean [00:08:14] Yeah, you know, you mentioned emergent properties, like new capabilities you haven't thought of yet. And if you're trying to solve a single problem with AI, you have one large language model. But it'd be better if you had 50 and you could compare those 50, and then if you could use another AI to look at those 50 responses and compare those, it's like a never-ending…

Yochai [00:08:38] So this is called mixture of experts.

Sean [00:08:40] It’s like the Russian doll.

Yochai [00:08:42] Mixture of experts, it’s a very common technology. And also there is an underlying question in what you’re saying that the industry, both the practitioner and maybe beyond this, are looking at, is it one giant model, or should we bring a very specific model and some are combining it in an optimal way, both for practical considerations of latency, cost, and so forth.
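The mixture-of-experts pattern Yochai names can be sketched in a few lines: a gating function scores how relevant each specialized model is to the input, and the final output is the gate-weighted combination of the expert outputs. Everything below is illustrative toy code, not Ada's system or any real library:

```python
import math

def softmax(scores):
    """Normalize gate scores into weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Two toy "experts": each maps a query to a prediction (here, just a score).
experts = {
    "billing":  lambda q: 0.9,   # pretend this model is strong on billing
    "shipping": lambda q: 0.2,   # and this one on shipping
}

def gate(query):
    """Toy gating scores: how relevant each expert looks for this query."""
    return [2.0 if "invoice" in query else 0.0,   # billing expert's score
            2.0 if "package" in query else 0.0]   # shipping expert's score

def mixture(query):
    """Combine expert outputs, weighted by the softmaxed gate scores."""
    weights = softmax(gate(query))
    preds = [fn(query) for fn in experts.values()]
    return sum(w * p for w, p in zip(weights, preds))

print(round(mixture("where is my invoice"), 3))  # leans on the billing expert
```

Real mixture-of-experts layers learn the gate and the experts jointly, but the routing-and-combining shape is the same, and it is one way to trade off the "one giant model versus several specific models" question he raises.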

Sean [00:09:10] Now, until we have quantum.

Yochai [00:09:12] Yes, but still, at the end of the day, you may want the best AI on a wearable device, without sending data out. How can we compress it down to that? So people do all these engineering tricks to make it more computationally efficient. But conceptually, there are all these questions. Is our brain one big model, or is it a lot of distributed pieces that somehow work in concert to do generation? It's not like I know the answer to this, but these are interesting questions.

Kyle [00:09:41] Yeah, yeah, absolutely. Let’s pull it back from some of that philosophical though. You’ve worked in product for a long time with a specialization in AI-enabled capabilities. So what’s an AI-enabled product for you? All these products are trying to implement AI capabilities, right? What’s an AI-enabled product?

Yochai [00:09:59] So there will be some repetition with what I said in the talk, but I'll try to take the relevant part. I think AI is there to serve the application. Meaning, don't deploy AI just for, "Hey, we have AI in the product." To serve the application means you have to set a measurable objective: "This is my application performance; that's how I measure the value that my application or product will deliver." And the moment you insert AI in the first place, or when you upgrade the AI, you have to see a noticeable difference in that measure.

Yochai [00:10:29] The second property of AI is the ability to create, as automatically as possible, continuous improvement. So today we are here at Pendo; we talk about how to collect NPS and customer feedback and all of that. The product manager, semi-automatically, clusters it and summarizes it, but still a human has to look at it, has to get the insight to create a new product. AI will hopefully also get to a level where it can self-optimize: get data on how users are using it, what works, what doesn't work, and create a continuous improvement loop that is as automatic as possible and reduces the time it takes to improve things.

Yochai [00:11:11] If customers prefer this button on top and not that one, why does a product manager have to be involved in the first place? We see in the data that they always go to this button; can we just change the page? I'm giving a very simplistic example. For us at Ada, we look at a knowledge base and guidance and we do analytics on which content is useful for resolving customer issues. Some of it is less useful, some more useful, so we can take all of this into account. Or we can see which customer issues are not resolved, automatically generate, let's say, an answer for that resolution, and give it to the product manager: here is what we think should be done; do you approve it? So there is still a human in the loop, but it shortens the distance to an actionable change. From the moment you know how to optimize the product, how long does it take you to ship it?
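The loop Yochai sketches, observe which candidate answer actually resolves issues and shift traffic toward it, can be illustrated with a tiny epsilon-greedy selector. All names and resolution rates here are invented, and a real system would keep the human-approval step he describes:

```python
import random

class AnswerSelector:
    """Epsilon-greedy: mostly serve the best-resolving answer, sometimes explore."""

    def __init__(self, answer_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.shown = {a: 0 for a in answer_ids}      # times each answer was served
        self.resolved = {a: 0 for a in answer_ids}   # times it resolved the issue

    def resolution_rate(self, a):
        return self.resolved[a] / self.shown[a] if self.shown[a] else 0.0

    def pick(self):
        if random.random() < self.epsilon:                 # explore occasionally
            return random.choice(list(self.shown))
        return max(self.shown, key=self.resolution_rate)   # otherwise exploit

    def record(self, a, was_resolved):
        self.shown[a] += 1
        self.resolved[a] += was_resolved

random.seed(0)
sel = AnswerSelector(["answer_1", "answer_2"])
true_rates = {"answer_1": 0.5, "answer_2": 0.8}  # unknown to the selector
for _ in range(2000):
    a = sel.pick()
    sel.record(a, random.random() < true_rates[a])
print(sel.shown)  # traffic shifts toward the better-resolving answer
```

The point is not the algorithm but the shape of the loop: the product collects its own outcome data and closes the distance from insight to change, with a person approving anything user-visible.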

Kyle [00:12:00]  Right. All in pursuit of that north star, essentially the product vision, right?

Yochai [00:12:04] Yeah. For us, it's automated resolution. We want to solve the customer's issues. Automatically resolved; not just making [inaudible], but actually resolving it so that you'll be happy, so it's actually done. So this is our north star, and we are getting to a level where we deploy the product, we learn from the data, and the automated-resolution percentage, our north star, is increasing. Because the software optimizes, because [inaudible] the example of [inaudible]. The AI has a few answers inside; maybe it will give answer one. But if that's not helping you resolve, maybe it should give answer two, which is also inside, because it has all the data of the company. And this kind of continuous improvement loop, self-optimization, I think is an additional promise of AI.

Sean [00:12:53] Yeah, by the way, I loved your framework that you put, the triangle, the way you know your AI is successful. It’s got to be accurate, it’s got to be relevant, it’s got to be safe. You know, and that’s the thing that you always hear the buzz about in the press about the safety of AI.

Yochai [00:13:08] Yeah. We chose our own words, but people sometimes call it the three H's: honest, helpful, and harmless.

Sean [00:13:17] Yeah, honest, helpful, and harmless. I like that. Three H’s.

Yochai [00:13:21] Yeah. So that’s another acronym. We chose this because I think it’s more relevant to what we are doing, but…

Sean [00:13:26] Yeah, I thought that was a good analogy. It’s a good window through which to look, a lens to look at how you’re leveraging any technology, really. But AI in particular, because we don’t know what’s inside the black box in a lot of cases.

Yochai [00:13:39] Yeah, we have to put the safety mechanism to make sure it’s behaving appropriately.

Sean [00:13:44] You also talked about, in the talk, you talked about pricing and you struck a chord with me on that. Like how, you know, as a service, like any sort of AI-driven service, how do you anticipate pricing it in the future? It’s a hard problem to solve because of the value that it adds.

Yochai [00:14:01] So we adopted outcome-based pricing.

Sean [00:14:04] Outcome-based pricing. I was fascinated by that. Wow.

Yochai [00:14:08] We sell to customers today based on outcome-based pricing. What it means, for us: every conversation that we automate and that automatically resolves the issue, you're happy as a customer, we're happy, and we get paid based on this. If we're not bringing you value, why should you pay us? If we bring you more value, pay us more. Now there is a gentle dance of calibration: make sure that what we consider resolved is what you consider resolved. There is a mechanism in the product to do this calibration.
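Outcome-based pricing as described reduces to a simple invoice: bill only for conversations both sides would agree were resolved, at an agreed per-resolution rate. A toy calculation, where the rate, the fields, and the conservative confidence threshold are all invented for illustration:

```python
def monthly_invoice(conversations, rate_per_resolution):
    """Bill only conversations marked resolved with high confidence; the
    conservative threshold mirrors under-claiming so there are no disputes."""
    billable = [c for c in conversations
                if c["resolved"] and c["confidence"] >= 0.9]
    return len(billable) * rate_per_resolution

conversations = [
    {"id": 1, "resolved": True,  "confidence": 0.95},
    {"id": 2, "resolved": True,  "confidence": 0.70},   # too uncertain: not billed
    {"id": 3, "resolved": False, "confidence": 0.99},   # not resolved: not billed
]
print(monthly_invoice(conversations, rate_per_resolution=2.50))  # bills only #1
```

The calibration "dance" Yochai mentions is essentially both parties agreeing on the classifier and the threshold behind that `billable` filter.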

Sean [00:14:39] Yeah, I heard you say you’re using AI to watch the AI.

Yochai [00:14:42] Yes, but it's calibrated also by humans. I didn't go into all the details, but basically after onboarding, both we and the customer review it, and actually I'm surprised there's no dispute when customers review it, because we also tuned it to be very conservative. Say we report that we resolve 50% when in practice it's closer to 60%; that 50% we know for sure we resolved. Pay us on that, we'll be happy, and we'll be aligned; our incentive is to increase it. Your goal as a company, and our goal: don't pay until you see it. Pay us by value.

Sean [00:15:17] I think that’s a really interesting model.

Kyle [00:15:19] Talk about commitment to your vision, right?

Sean [00:15:21] Yeah.

Yochai [00:15:21] We are all in, we are all in.

Kyle [00:15:23] Yeah, if we're achieving our vision, pay us for exactly how much we did that. Yeah, that's amazing.

Sean [00:15:28] One of the things you talked about on stage was search versus chat. Somebody asked the question, search versus chat. And I think, who wouldn’t want to just have a dialog? Like if you had the choice, like, do you want to search and have to filter through a bunch of results, or would you want those results curated for you in a more…?

Yochai [00:15:45] You want a specific answer to your question. Why do I have to look at ten documents to figure out the answer if the AI can do it for me? And it can look at more than ten; maybe the answer isn't in document two, it's in document 37 or whatever. So.

Kyle [00:15:58] Right. Page 17 of Google.
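The pattern behind that exchange, retrieve candidate documents and have a model synthesize one answer instead of returning a results list, is retrieval-augmented generation in miniature. A toy sketch with an invented corpus and a trivial overlap-based retriever standing in for a real search index and LLM:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query (a toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(query, corpus):
    """Instead of returning a list of links, synthesize one reply from the top hit."""
    hits = retrieve(query, corpus)
    return "Based on our docs: " + " ".join(hits)

corpus = [
    "Refunds are issued within 5 business days.",
    "The red light means the water filter needs replacing.",
    "Our office is closed on public holidays.",
]
print(answer("why is the red light on", corpus))
```

A production system would use embeddings and a generative model rather than word overlap and string concatenation, but the user-facing difference Yochai describes is the same: one answer instead of ten documents.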

Sean [00:16:01] That was a light bulb moment for me, looking at the products of the future, why wouldn’t you start from the perspective of using AI to handle your search function?

Yochai [00:16:10] I'm not sure. Again, it's about the input that's convenient to us. Chat is one input. Let's say my refrigerator, I don't know, the light isn't working. I can just take a picture and say, "Why is this red light on?" I don't even have to speak. Or sometimes video is the best way for me to explain what's happening. And AI will bring it to a level that's most convenient to you. Do you prefer to speak? Do you prefer to chat? Which language, which vocabulary, which slang is convenient to you as a user to express it? So I think we are moving away from consumers having to adapt to companies, because we know for this company it's better to call them, for that one it's better to chat. I want the interface that's convenient to me. The company will adapt.

Sean [00:17:01] Yeah. Or I’ll go somewhere else.

Yochai [00:17:04] This will be most convenient to me. Maybe for me, for instance, as someone with a thick accent, it’s better to write English. My writing in English is better than my speaking.

Sean [00:17:13] Well, that’s going to be solved with AI pretty soon.

Yochai [00:17:15] Yeah, but I'm just saying, for me it's very nice to write it because I keep it short and to the point. When I start speaking, I don't get some of the words and numbers right. So, again, it's what's convenient to me; some people just prefer to speak. So whatever works for you.

Sean [00:17:33] All right. Well, for a person who has been pretty much in innovation their entire lives, how do you define innovation? What does innovation mean to you?

Yochai [00:17:42] So we go into a more philosophical question, like, is mathematics discovered?

Sean [00:17:48] Or is it just there and exposed?

Yochai [00:17:50] Yeah, or we just found it?

Sean [00:17:52] That’s funny. I just read On Writing by Stephen King. This came up a couple of episodes ago as well. But in it, he says, the idea is that he has to… Do you know who Stephen King is? Like an unbelievable fiction writer, horror writer.

Yochai [00:18:05] Yeah, yes.

Sean [00:18:06] The idea is already there. He’s like an anthropologist or a paleontologist dusting off the bones. Like he’s just crafting the story that’s already there. Is it the same thing for you?

Yochai [00:18:18] I think, and again, this is my speculation, I think the AI is chewing on the current level of human knowledge. And there is corollary and derivative of this human knowledge that we didn’t articulate because we didn’t make the connection yet. But I want to believe there is something behind the current capture of human knowledge that we just didn’t discover, that we as humans will discover the AI cannot discover because it’s not in the data that it trained on.

Sean [00:18:47] Fascinating.

Kyle [00:18:49] So innovation will always belong to the human, do you think?

Yochai [00:18:51] That’s my hope as a human.

Sean [00:18:57] All right.

Kyle [00:18:57] Well said. One question we always like to end with. Is there anything you’re reading right now that you really enjoy, you’d recommend to our audience, or anything you’re learning about?

Yochai [00:19:06] Actually, the last book that I read is called Smart Brevity.

Sean [00:19:11] Smart Brevity.

Yochai [00:19:14] It's actually related to this, because I tend to write very detailed Slack messages and email; when I communicate, partly I know too much about some things and partly it's to show off. But when you read Smart Brevity, the first thing they say is that human attention goes down after the first six words. It doesn't matter how motivated you are, how much they pay you, how much coffee you had; we slow down. So say the most important stuff at the beginning and say it in the simplest way, and don't use vague words like "about": if you write 3 million, it's understood as approximate; it's not identical anyway.

Yochai [00:19:54] So it actually made a lot of impact on my communication, increasing my bandwidth. So that's a book. In terms of papers, I'm very interested these days, from a machine learning perspective, in fine-tuning and distillation. Let me go a little bit scientific for a second. Currently there are these big models, and the question is how to create only what you need. Why do we need a model that knows about astrology or archeology when I'm doing customer service, or any other specific application? So there is this technology that's currently available for fine-tuning: OpenAI offers fine-tuning for GPT-3.5 Turbo; Google, their models offer fine-tuning and another form of supervised fine-tuning, another [inaudible].

Yochai [00:20:43] But eventually there will be a technology, and I think an example of it is distillation step-by-step. It's the ability to take a very large model and say, "I'm only interested in this problem; take this large model and derive from it a smaller model that has all the power of the big model, but only in this specific domain." So I see it coming and I'm reading a lot about it, because it will be very relevant to us. And, I'm kind of off topic now, but Ada, and I think every company that's developing AI, is wondering how to create defensibility. What's unique about your AI that people cannot copy? Part of it is creating your own custom model based on your first-party application data. That's part of my focus and my reading these days.

Sean [00:21:29] Wow. Is there a concept yet of like a minimum viable model to be able to solve all the problems you want to solve in a given domain?

Yochai [00:21:36] Good thought. Let me think about it, it’s a good thought. Probably there is.

Sean [00:21:40] Because that’s what you’re saying, essentially. What is the minimum viable model that will solve all the problems in the domain you want to solve?

Yochai [00:21:45] Like Occam's razor: the model you need, but nothing more. It's a good way to think about it; I didn't think about it this way. The only thing I can say concretely: if you read the papers about distillation, still academic, they're speaking about reducing a model by an order of magnitude.

Sean [00:22:02] Yeah.

Yochai [00:22:03] By constraining the problem, and increasing speed and reducing cost and all of this good stuff, while maintaining very good performance in the domain. So we're currently talking about at least an order of magnitude off the best model out there.
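Distillation in the sense Yochai describes trains a small student model to match a large teacher's output distribution, not just its hard labels. A minimal sketch of the core loss term, using KL divergence between temperature-softened distributions; all numbers are toy values, and a real setup would add the usual hard-label loss:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass out."""
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) over softened distributions: the student is
    pushed to reproduce the teacher's full output distribution."""
    p = softmax(teacher_logits, T)   # teacher's soft targets
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [3.0, 1.0, 0.2]   # big model's scores for three classes (toy numbers)
student = [2.5, 1.2, 0.4]   # small model's scores
print(f"{distillation_loss(teacher, student):.4f}")  # approaches 0 as they agree
```

Restricting training to one domain is what lets the student be an order of magnitude smaller while keeping the teacher's performance there.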

Sean [00:22:16] That’s cool.

Yochai [00:22:17] And that’s still in the research. We have to see it in our domain.

Sean [00:22:19] You still have a lot of work to do. You better get back to work, sir.

Kyle [00:22:23] You heard it here first. Minimum viable model.

Yochai [00:22:26] Yeah. That’s a good innovation by Sean.

Sean [00:22:29] Yes.

Kyle [00:22:31] Yochai Konig, thank you so much for being here with us on the Product Momentum Podcast, coming at you live from Pendomonium 2023.

Sean [00:22:38] Good to meet you, sir. Thank you.

Yochai [00:22:39] My pleasure.

Sean [00:22:40] Thanks for all the work you’re doing in the industry that we all get to take advantage of.

Yochai [00:22:43] No, I'm compensated! I appreciate it; the conversation was more fun than expected. So thank you.

Sean [00:22:52]  Thank you.

Kyle [00:22:53] Thanks.

Paul [00:22:57] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get. So please leave a comment or rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.
