
128 / Trusting Data Quality: The Key to AI’s Future, with Scott Ambler

Hosted by Paul Gebel & Sean Flaherty


Scott Ambler

Agile Data Strategist

Trust is the glue that sustains personal relationships. Likewise, trust in AI’s source data holds the key to its future and our confident use of it, says Scott Ambler, Agile data strategist, consulting methodologist, author, and keynote speaker. Trust takes years to build, seconds to break, and forever to repair.

In this episode of Product Momentum, Scott joins Sean and Paul to dig into the importance of data quality in AI applications, understanding and managing bias in AI, and the essential role humans play in harnessing AI’s potential – and its risks.

“If you’re trying to use AI to make data-driven decisions, it becomes a garbage in, garbage out situation,” Scott offers. “It’s really that straightforward. A lot of organizations have let their data debt increase over the years. As AI ingests low-quality data, you’ll get a low-quality answer.” That’s when fractures appear in your hard-earned trust.

Scott also explores the issue of pervasive bias in AI systems and pinpoints its source, underscoring the need for us humans to develop ethical practices that ensure fairness and equity in AI-driven outcomes.

“There will always be bias in your data,” Scott adds. “Humans are biased; it is what it is. And your data will reflect that bias in your business processes. So when you train your AI on that, part of the training process has to be to detect whatever biases are there.”

The key, Scott says, is to understand how humans can effectively leverage AI technologies. While AI offers tremendous potential for augmenting human capabilities and streamlining processes, it is not a panacea.

“When you look at it at a high level, AI is magical. Some of these Gen AIs are just incredible,” Scott concludes. “We want to think it’s magic. But it’s not magic. It’s just hard work.”

Scott cautions against blind reliance on AI-generated outputs and emphasizes human oversight and judgment in validating and contextualizing AI-driven insights. Before AI, there was a human in that “last mile” who could filter out the garbage from the good stuff. The problem now is that AI can’t do that.

This human-centric approach may be the key to AI’s future. If we acknowledge the complementary relationship between AI and human intelligence, maybe we’ll also recognize – and trust – that AI technologies will enhance human endeavors rather than replace them.

Paul Gebel [00:00:19] Hello and welcome to Product Momentum, where we hope to entertain, educate, and celebrate the amazing product people who are helping to shape our community’s way ahead. My name is Paul Gebel and I’m the Director of Product Innovation at ITX. Along with my co-host Sean Flaherty, our amazing production team, and occasional guest hosts, we record and release a conversation with a product thought leader, writer, speaker, or maker who has something to share with the community every two weeks.

Paul Gebel [00:00:44] Hello and welcome to the pod. Today we’re delighted to be joined by Scott Ambler. Scott lives in Toronto, Ontario, Canada. In addition to being a father and husband, he holds several roles: Consulting Methodologist, Agile Data Coach, author, keynote speaker, and advisory board member. As a Consulting Methodologist, Scott helps people and teams improve their ways of working and their ways of thinking. In particular, he’s focused on helping people apply Agile Data and Agile Modeling strategies. In 2010, Scott co-created Disciplined Agile with Mark Lines, which led to the development of both the Agile Modeling and Agile Data methods. When he works with teams, Scott often takes on the role of Agile Data Coach, helping them understand how to apply Agile Data and Agile Modeling strategies. Scott has co-written 28 books and hundreds of articles and whitepapers. Most of his writing has been about information technology methods and practices. Scott, thanks so much for taking the time to be with us today. We appreciate having you on the show.

Scott Ambler [00:01:39] Thanks for having me.

Paul Gebel [00:01:43] And as a prolific writer, your contribution to the field is really appreciated. I’ve been excited about this conversation for a while. Just to open it up with a broad question to help us dig in here: the topic on the table today is AI, and primarily the challenges that we’re encountering in data quality. So to get us started, maybe you can help lay out the field as you see it for this Gen AI revolution, or discovery for the many folks who haven’t been exposed to it in this way before. How does data quality figure into your outlook on the next steps in this field?

Scott Ambler [00:02:17] Yes, data quality is absolutely critical. For a lot of things, if you’re trying to make data-driven decisions, whether that’s data warehousing, business intelligence, and now AI, it becomes a garbage in, garbage out situation. It’s really that straightforward. And I think what’s happened is a lot of organizations have let their technical debt, or their data debt, increase over the years. They keep pushing it off and pushing it off, and now it’s just built up. Frankly, it’s time to pay the data piper. You’ve got to fix the data quality problems, because the challenge with AI is that it ingests this data, and if it’s low quality, then you get a low-quality answer. And because it’s working at the speed of technology and the scope of the internet, you can have very serious, very large problems very, very quickly with AI. And this, I think, is a challenge for a lot of organizations.
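
To make the garbage in, garbage out point concrete, here is a minimal sketch of the kind of data-quality profiling an organization might run before feeding historical records to a model. It is illustrative only; the column names, sample data, and checks are assumptions, not anything Scott describes on the show.

```python
# A minimal data-quality profile over a hypothetical pandas DataFrame of
# historical records. Column names and sample values are illustrative only.
import pandas as pd

def profile_data_quality(df: pd.DataFrame) -> dict:
    """Summarize common 'data debt' signals before any model training."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst first.
        "missing_pct": df.isna().mean().sort_values(ascending=False).round(3).to_dict(),
    }
    # Flag numeric columns with implausible values (e.g., negative ages or premiums).
    numeric = df.select_dtypes("number")
    report["negative_values"] = {col: int((numeric[col] < 0).sum()) for col in numeric.columns}
    return report

if __name__ == "__main__":
    df = pd.DataFrame({"age": [34, -1, 52, None], "premium": [1200.0, 900.0, None, 450.0]})
    print(profile_data_quality(df))
```

The point is not the specific checks; it is that a report like this makes the size of the data debt visible before an AI pilot burns six months discovering it.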

Paul Gebel [00:03:12] And you’re in the middle of it right now; you’re literally halfway through your master’s degree in AI. Both Sean and I are in academia as well, and we see AI becoming more and more, sort of, AI-washed across many of the disciplines. How crucial do you think it is for us to separate the hype from the substance of AI’s impact on these fields? It seems like every time you turn around, there’s product management with AI, or any other respective field with AI. How far do you think the pendulum needs to swing before there’s some correction, some oscillation, in our understanding of this intersection we’re at right now?

Scott Ambler [00:03:50] I think we’re still climbing the hype curve, and maybe we’re pretty much hitting the peak. But it’s a little frustrating for me. Just before ChatGPT came out, at least the 3.5 version last fall, I had decided to go back to school, and then a month later it comes out with all this ridiculous hype. So it’s a bit frustrating, because it looks like I’m jumping on the bandwagon when I’m not. So I think the challenge is that there is a lot of hype. There are a lot of unrealistic expectations. I keep telling people, I love Gen AI. I think there’s some cool stuff going on and I use it for what I can. But fundamentally, the purpose of Gen AI is to make stuff up. So when I ask ChatGPT something, yeah, it’ll make up a good answer. And this is the thing that drives me nuts; some people give me hassle about it. And I keep saying, wait a minute: give it a prompt, get an answer, and then if you don’t like that answer, you can ask for another answer and it’ll give you a very different one. And then again and again and again, so you can see it make up different things to the exact same prompt. Dall-E, which I use a lot for generating images, makes it even more obvious. You give it a prompt, it gives you four different images, and then you can re-prompt it, or select an image and say, give me something similar to this, and it generates four things that are sort of similar to it. So yeah, it’s just constantly making stuff up. So you need to go in eyes wide open. And as a professional, if you’re using ChatGPT to write part of your report or part of a marketing strategy, whatever it is you copy-paste, you darn well better read it and update it and make it your own. Having said that, though, I think ChatGPT is great for just bouncing ideas around. You can effectively have a brainstorming session very, very quickly: give me ten ideas about blah, blah, blah, and it’ll come up with six or seven fairly decent ones and a few just nonsense ones. But that’s okay, as long as you’re able to sift the nuggets out of the garbage; then I think you’re good to go. It all depends on how you’re using it. But like I said, go in eyes wide open and use it appropriately. And I think there’s a lot of value in generative AI and other forms of AI too. Gen AI is the big elephant in the room right now, but I’m hoping that once the hype curve comes down, we’ll see people start dealing with the reality of the other forms of AI.

Sean Flaherty [00:06:17] So you talked about garbage in, garbage out, and data debt. I love that concept; we’ve all experienced it if we’ve been working in a given domain for any period of time. Do you have any good examples of where this garbage in, garbage out has manifested?

Scott Ambler [00:06:32] I can’t name names, but I’ve worked with a couple of organizations now that were trying to do pilots in AI, machine learning things. And what happened was they started out with what they fairly believed to be a six-month pilot in AI, to figure out what they could do and save the universe, or whatever the goal was. And they very quickly realized they had a multi-year data quality cleansing problem, because their production data just had too many problems with it: bias and just basic quality problems. It derailed them. And I think this is something we’re going to see more and more. So even though there’s a lot of hype around professionals like us using Gen AI to generate text or images or music or whatever you’re into, which is great, that’s augmentation of humans. I think the real bang for your buck is going to be in organizations using machine learning, and in some cases large language models like we see in Gen AI, to train their own models, particularly narrow Gen AI. But to do that they need better quality data, so they’re going to have to clean up their data. So it’s going to be several years for a lot of organizations, particularly the Fortune 500 companies, the insurance companies of the world, and the banks; they’re all suffering from these problems. And I think it’s come to a head now. So over the next couple of years, we’re going to start seeing more and more business processes, or portions of business processes, automated by AI. Hopefully that will improve the life of the workers and truly augment what they do. But it’ll be a couple of years. I think there are going to be a lot of easy wins, like I said, writing reports and improving the small little activities that we do day to day, but not full end-to-end automation. That’ll be a few years away for most companies, because they either don’t have the data or the data they have is just not where it needs to be.

Sean Flaherty [00:08:35] Seems to me a great use of AI would be to clean up historical data.

Scott Ambler [00:08:40] Yeah, it is. We’re seeing some tools that can do that, or at least point out where the problems are. But to be fair, I used to do a lot of work in the data warehousing community, and they’ve been pointing out data quality problems for a long time. A very common practice for a data warehouse team, because they’re ingesting data from all these legacy sources just like the AIs would, is to feed back logs saying, here are the data quality problems we’re running into; it’d be really nice if you fixed them, because it’s not the warehouse team’s responsibility, right? And rightfully so. But the problem is, whatever team or group in your organization owns those data sources, often they’re not fixing the data quality problems, because the problems are big, it’s expensive, and it takes time. Fair enough. But I think what’s happening now is this: you could get away with that in the data warehousing and business intelligence world, because fundamentally the end consumer of the information is a human, and they can make a decision based on the report or the widgets on the screen or whatever. The challenge now with AI is that the AI is the end consumer of the data, and it’s making decisions. If the data doesn’t make sense, or the information or the report is nonsense, it won’t be able to detect that. It’ll make a decision based on what it’s being told by the data, and then it’s garbage in, garbage out at that point. I think this is why the data quality issue has come to such a head with AI. It’s the equivalent of the last mile from telecom, right? The last mile in the data space, before AI, was a human who could make a decision and filter out the garbage from the good stuff. The problem now is that the AI can’t do that. And what happens is, you build the AI, you test it, you get nonsense coming out of it, and you say, well, we can’t deploy that. So, glad that it took six months to figure that out. And this is why we’re seeing such a huge failure rate in AI projects. The success rates are between 15 and 20%, if you believe what’s coming out of some of the consulting companies. And that seems pretty reasonable to me. Well, it seems horrendous, but it seems like an accurate number, a very unfortunate but accurate number.
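
The feedback loop Scott describes, where ingestion logs data-quality issues back to whoever owns the source system, is easy to picture in miniature. A rough sketch, assuming hypothetical field names, rules, and a source called "legacy_claims":

```python
# A minimal sketch of an ingestion-time quality feedback loop: quarantine bad
# rows and log which rule they broke, so the team that owns the source system
# can fix them upstream. Field names and rules are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class QualityLog:
    issues: list[dict] = field(default_factory=list)

    def record(self, source: str, row_id, rule: str) -> None:
        self.issues.append({"source": source, "row_id": row_id, "rule": rule})

def ingest(rows: list[dict], log: QualityLog, source: str = "legacy_claims") -> list[dict]:
    clean = []
    for row in rows:
        if row.get("amount") is None:
            log.record(source, row.get("id"), "missing amount")
            continue  # quarantine instead of silently loading garbage
        clean.append(row)
    return clean

if __name__ == "__main__":
    log = QualityLog()
    loaded = ingest([{"id": 1, "amount": 120.0}, {"id": 2, "amount": None}], log)
    print(len(loaded), "rows loaded;", len(log.issues), "issues fed back:", log.issues)
```

With a human consumer, a report built on quarantined rows might still get read critically; an AI consuming the same feed just learns from whatever got through, which is Scott’s last-mile point.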

Paul Gebel [00:11:03] I want to use a phrase that you mentioned in our chat before the show. It was around the social aspect and the zeitgeist: everybody and their dog has access to ChatGPT and the others now. Google’s coming out with theirs, Elon has his, Meta has their own large language models, and it’s a quickly saturating space. And I don’t think we even have the words to define the things that we’re experiencing. One example that comes to mind: I feel like I can kind of tell when a social media post has been written by ChatGPT, and I can’t quite put my finger on why or how I know it. But you use it enough and you can kind of tell it’s an AI thing. And this sense of misplaced trust is, I think, a core element to either driving that success rate from 15 to 20% up, or just saying that’s as good as it’s going to get and the last mile is where we need to focus as human beings, because that’s where the actual value add comes. This is the crux of what I’ve been looking forward to digging into with you. How do we, as people, rightly understand trust in this AI space? And I realize that’s a very broad question, but I can’t think of how to narrow it down. Maybe you can help me refine the question as you answer it.

Scott Ambler [00:12:18] That is a great observation, because the unfortunate thing is that people trust the information coming out of the AIs more than they trust similar information coming out of people. It’s just this weird sociological thing. Which is not the answer you’d be looking for, but it is. So I think even when you get this gut feeling of, oh, that’s not quite right, well, it came from the computer, so it must be right. And that’s the level of thinking for most people. That is a horrible, horrible thing, but it’s the reality on the ground. So what that tells me is we need to get a lot better at determining whether or not something is ready to deploy, and then once it is deployed, monitoring it. I always like to use the example of a few years ago, when Microsoft released a chatbot into the wild, and after 19 hours they had to pull it from production because it had become a raving racist. It was recommending that we need to go out and kill certain groups of people and stuff like that. What had happened was they had pointed it at some right-wing discussion groups and it did its thing, at the speed of the internet, right? So luckily they pulled it; luckily they were keeping an eye on it, because they inherently knew that this could go poorly, so they’d better keep an eye on it. And sure enough, it went poorly really quickly, and they pulled it. But it might not always go that well. So, you could put an insurance fraud AI in place, right? And I’ve been involved with stuff like this. There will be bias in your data. Whatever data you’ve got, it reflects the realities of your business processes, which are implemented by humans. And humans are biased; it just is what it is. Regardless of what you might think, there is always bias. So your data will reflect that bias in your people following your business processes. Then when you train your AI on that, the biases will come out. So part of the training process has to be to detect whatever biases are there. You won’t get them all, but you’ll get some of them. And you want to have diverse teams and all that sort of stuff. But the people training these AIs are only human, so they’re also going to make mistakes. Hopefully you’ll do better; you’ll get rid of some of the bias, but not all of it. And the remaining bias has the potential to get out of hand really quickly, or really slowly, which is hard to detect. Your models can start making questionable decisions: there could be certain small groups of people that your insurance fraud system is saying, oh no, they’re one of those people, they’re committing fraud, let’s go investigate them, because they live in a certain neighborhood or whatever. And that sort of stuff can be really hard to detect. There’s a wonderful book on this, Weapons of Math Destruction by Cathy O’Neil. It just contains horror stories of well-intentioned people building what they believed to be good systems, and then the side effects were unpalatable. Anybody doing AI needs to read that book, or pretty much anything Cathy writes. But certainly that book.
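
One way to make the detect-the-bias step concrete is to compare a model’s flag rate across groups before deployment and keep watching it afterward. The sketch below is a minimal, hypothetical example; the group labels, predictions, and the 15% tolerance are assumptions, not a standard Scott cites.

```python
# A minimal demographic-parity style check on model outputs: compare the rate
# of positive (e.g., 'fraud') flags per group against the overall average.
# Group labels, predictions, and the tolerance are illustrative assumptions.
from collections import defaultdict

def flag_rates_by_group(predictions, groups):
    """Return the fraction of positive flags per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / counts[g] for g in counts}

def disparity_warnings(rates, tolerance=0.15):
    """List groups whose flag rate exceeds the overall average by the tolerance."""
    overall = sum(rates.values()) / len(rates)
    return [g for g, r in rates.items() if r > overall * (1 + tolerance)]

if __name__ == "__main__":
    preds = [1, 0, 0, 1, 1, 0, 1, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    rates = flag_rates_by_group(preds, groups)
    print(rates, disparity_warnings(rates))  # group B is flagged noticeably more often
```

A check like this will not catch every bias, which is Scott’s point, but it turns “keep an eye on it” into something a team can actually run on every model version.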

Paul Gebel [00:15:31] Outstanding. One of the phrases that keeps ringing in my ears is the signal-to-noise ratio, regenerating response after response. There’s also a recursion that starts to take place, where there’s such a corpus of AI-generated content, both visually with Dall-E and in the written word, that it now seems the models are feeding themselves. There’s sort of this cannibalistic tendency toward ingesting previous AI art to make new AI art, and previous AI writing to make new AI writing. It seems to be a slippery slope that’s dangerous on its face. But again, I don’t know that I have the words to define what it is we’re even talking about yet.

Scott Ambler [00:16:09] Frankly, with the output from GPT-4, it’s virtually impossible to detect that it’s been written by AI, from what I can tell. It’s very, very difficult. So if you’re publishing a web page based on that output, how do you detect it? I’ve been thinking deeply about this. We need some sort of way to indicate that content was produced solely by people, and we’re seeing more and more of that: when you submit to certain journals and certain known publishers, now you’ve got to state, yes, I did not use AI to generate any of this content. But we need a general international strategy, and that’s where it’s all going to fall apart, doing it internationally. There will be multiple ways of doing this, and there will be great arguments, but we need to be able to indicate what is human-generated content and what isn’t. But then you’ve still got the problem that some of the human-generated content is crap as well. There are a lot of very questionable writings produced by humans that are also being ingested by these AIs, which is why that chatbot became a raving racist: it was ingesting racist opinions published by people, and it just spiraled from there.

Sean Flaherty [00:17:23] You said earlier that people trust AI more than they trust other people. I’m just curious, in that study, did the people know it was AI?

Scott Ambler [00:17:36] What we’re looking at, and it’s not just AI, is information coming out of a computer. The computer must be right; that’s the level of thinking. So there’s greater trust in what a computer produces, in general, than in what people produce, because that number must be right. Take your telecom billing, pretending you still get it on paper: your telecom bill comes every month, so it’s got to be right, because it came from the computers at AT&T or whoever your provider is. That’s the level of thinking, unfortunately.

Sean Flaherty [00:18:08] So we’ve essentially been trained to believe that computers don’t make mistakes, and it’s kind of bled over into our ethos about AI. That’s an interesting concept.

Paul Gebel [00:18:19] So I want to dig into how a product manager looking at a platform, a piece of software, a new product they’re trying to release into the world can reconcile the signal-to-noise ratio, for lack of a better term. The project manager, product manager, or delivery team that’s trying to ship software is in a position where it’s very tempting to rely on tools to generate requirements or to adapt user interfaces in more dynamic ways. The appeal is strong, and if it were to work out in an ideal world, it really is a force multiplier to be reckoned with. It is 100% a productivity boon, and I think it’s difficult or impossible to ignore at this point in time. But we have to integrate it into our existing workflows, whether that’s Agile or Waterfall or whatever framework you may be working in. The product manager entering the field or maturing their career is looking at: how do I deal with these new tool sets, where the rules haven’t been written? Regulations are still amorphous. People don’t know what to trust, or who’s writing a line of code or a fact or figure that my UI is displaying. It brings us back down to terra firma from the Strategy Mountain we’ve been living on for a minute. What does a product manager do at this nexus we’re looking at, where a new technology is brand new and being built before our eyes, and we still have to get work done? And now the expectations from people who maybe don’t understand it are, ‘Well, now you should be able to do this ten times faster, or ten times better.’ How do you arm a person like that with the knowledge they need to handle these brand-new questions?

Scott Ambler [00:20:02] I think people need to be open about what they’re doing. For example, you mentioned the use case of generating requirements from ChatGPT or something. That’s a great idea. But at the same time, if a product manager is doing that and just copying and pasting straight in, they deserve what they get. It’s that simple. But if they’re treating it like a brainstorming session between them and a computer, great, because it’s still fundamentally their decision. All they’re doing is getting some extra feedback very quickly. They could have done a Google search, but the thing is, you do a Google search and then you have to read all these articles or whatever comes up. Whereas if you use one of the AIs, like Bing or whatever, it’ll start giving you a fairly decent answer. Now, sometimes all it’s doing is regurgitating an existing article, so you’d better click through and see where it came from. Other times you’re asking it to give you 25 ideas around something, and it’ll come up with them. But you still have to process them yourself and choose the ten that are good ideas for you in your situation. Effectively it’s still yours, and you’re going to tweak it anyway; point number seven will be something, and then you’ll reword it to make sense for your situation. So it’s a lot faster. That easily becomes a force multiplier compared to having even just a conference call where you get ten people together and then have to sift through all their ideas. So there are opportunities for doing that. What I would do is look for opportunities where I could use AIs to speed up one task or one part of the process and use it as input into whatever it is I’m trying to do. But I’m still making the decision. It’s still my material, my ideas, after seeing what came out of the AI. So you’ve got to own it. And I think that’s where you’re going to see a lot of force multiplication. To your point, though, this is changing so quickly. This is a highly competitive field. It’s like any tech fad: you get this huge burst of semi-innovation, I guess you would say, with hundreds or thousands of potential options, and then five years later you’re down to the top five. And that’s the way this is going to play out as well. We’re at that point of several hundred options to choose from, so you’ve got to keep an eye on it, because it does change. Talk to your friends. I’ve seen a few professional groups now that are having meetups along the lines of, let’s share what you’re doing; let’s talk about the tools and strategies you’re using for applying AI in your job. I think that’s a beautiful idea, and for the next couple of years that’ll be absolutely critical for a lot of groups.
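
The brainstorm-then-own workflow Scott describes is essentially one prompt plus a human selection pass. Here is a rough sketch using the OpenAI Python client; the model name, prompt wording, and topic are assumptions, and any LLM client would work the same way.

```python
# 'AI as brainstorming partner': ask for candidate ideas, then a human reviews,
# keeps, and rewords the few that fit their context. Assumes the openai>=1.0
# Python client and an OPENAI_API_KEY in the environment; the model name and
# prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def brainstorm(topic: str, n_ideas: int = 10) -> list[str]:
    prompt = f"Give me {n_ideas} short, distinct product requirement ideas about: {topic}"
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    text = response.choices[0].message.content or ""
    # One idea per line; the human still filters and rewords whatever survives.
    return [line.strip(" -0123456789.") for line in text.splitlines() if line.strip()]

if __name__ == "__main__":
    for i, idea in enumerate(brainstorm("onboarding flow for a claims portal"), 1):
        print(i, idea)
    # The product manager now keeps the handful worth using and rewrites them.
```

The decision stays with the person running it; the model only widens the pool of raw ideas to sift.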

Paul Gebel [00:23:02] You used a phrase when we were chatting before the show: ‘There’s no magic. It’s just hard work.’ Isn’t that true? I think that’s my top takeaway, Sean.

Scott Ambler [00:23:11] There it is. You want to think it’s magic, and I guess at the very high-level surface it is magical. Some of these Gen AIs are just incredible. Like I said, I use Dall-E a lot because I don’t have any artistic talent whatsoever, so I’ll use it to generate a few images that I’ll use in presentations and whatever. But back in the day when I had staff, I had a full-time graphic artist, and for the images I’m now getting out of Dall-E in like five minutes, I would have spent several days going back and forth with her. This is what I want; can you do this? No, that’s not what I want; what about this, or like that? But I really like this part. We’d go back and forth for several days until finally we came to the idea of, oh, that’s it, okay, make a clean version of that, and she’d go off and do a great job. Now I can do pretty much the same thing in Dall-E in five minutes. So there’s some very interesting stuff going on there, but there’s no magic, right? It’s just that Dall-E ingested millions of pictures, which is why it can draw like that. And I think we really need to understand that this is happening, and there are some very interesting IP challenges there. We may have to give up on copyright. I was in a conversation earlier today about that very issue. I’m pretty generous with my IP, which has always seemed strange to a lot of people, but I do have a lot of copyrighted material, and I can’t fight that battle. I just don’t have the resources. I know it’s been stolen; I know without a doubt it’s been ingested by all these AIs, and there’s just nothing I can do about it. I’ve got to give that up, say yeah, whatever, and move on. I just don’t have the resources to fight those battles. And I think we need to understand that.

Sean Flaherty [00:25:07] That’s interesting for the creators out there. So along this theme of there’s no magic, it’s all hard work, I have a few takeaways I’d like to share with the audience from all the things you’ve said here. The first is that one of the top engineering use cases I haven’t heard a lot about, and that is often overlooked, is this concept of using AI to cleanse data. You mentioned we’re still years out from it being super useful in the context of large data sets because of the health of the data, and I think there’s a huge opportunity there. There’s a gap and maybe an opportunity; at the very least, it has the ability to show us where the data problems are, where they’re congregating, or maybe even where they’re coming from. That’s a big opportunity. The second big takeaway I captured, and I think this is cool, I might actually post about this one: as a society, we’ve built a cognitive bias that didn’t exist before, because technology has caused this bias toward believing that the technology is probably more right than the people we interact with. I think that’s a cool insight and something that, especially as product leaders, we need to be very careful of. That bias is real, it’s legit, and it’s scary. As I look back, I’ve fallen prey to it; I’m sure we all have. So we need to be aware of it, or we can’t do anything about it. The third is that we can use AI to speed up a part of the process, which is a first principle in the software industry, right? Any software endeavor breaks problems down into smaller parts so that you can solve them. Well, we could, and should, be doing that with AI: what are the small parts of the problems where we could apply AI to help solve them? That applies to design; you talked a lot about using Dall-E to solve design problems. It applies to engineering; we can break down some of these engineering problems and maybe think differently about how to apply AI to solve components of the problem. I thought that was brilliant. And then the last one is that AI is not going to be the easy ticket, but it can help us significantly speed up our decision-making; it can help us refine information and spark ideas. It’s not a tool, yet anyway, that can fully create for us, but it can certainly help us make better decisions and guide us in ways we haven’t been able to take advantage of before. It makes us more efficient. Any reflections on those takeaways?

Scott Ambler [00:27:40] I think they’re all solid. One challenge, though, is that AI can in fact tackle bigger problems, but the problem is you don’t want to trust it. So you really want to use AI to address a small problem or a small issue, make sure it’s clean, and then go on to the next thing, and so on. Having humans in the loop, I think, is critical. For example, from a programming point of view, you can generate lots and lots of source code to build websites. I’ve seen demos of this; it’s absolutely fascinating. But do you really want to trust that code? Whereas I’ve also seen people use Copilot to say, give me a small function to do blah, blah, blah, and the code is solid. Now, you still need to test it and know what you’re doing, but I would trust small little snippets of code after I’ve eyeballed them and made sure they actually work. I certainly wouldn’t trust several hundred lines of code written by some Copilot. Yeah, it might work, but there could be some very interesting security problems that would slip through, because it’s hard to review and fix a big thing; it’s pretty straightforward to review and fix a small thing. So keep the humans in the loop for the small things, at least for now, maybe for the next five or ten years. I could be completely wrong; we could be building entire systems in five minutes. Who knows? I doubt it, but you never know. But certainly for now, only use it for small things and make sure humans are in the loop.
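
Scott’s trust-only-small-reviewed-snippets advice maps naturally onto ‘generate a small function, then write the test yourself.’ A minimal sketch, where the function stands in for a hypothetical Copilot-style suggestion and the test is the human-owned part:

```python
# Human-in-the-loop for AI-generated code: the function below stands in for a
# small, hypothetical Copilot-style suggestion; the tests are written and owned
# by the human who has to live with the code.
import unittest

def normalize_postal_code(raw: str) -> str:
    """Hypothetical generated snippet: uppercase and strip spaces and hyphens."""
    return raw.upper().replace(" ", "").replace("-", "")

class TestNormalizePostalCode(unittest.TestCase):
    def test_canadian_format(self):
        self.assertEqual(normalize_postal_code("m5v 3l9"), "M5V3L9")

    def test_already_clean(self):
        self.assertEqual(normalize_postal_code("90210"), "90210")

if __name__ == "__main__":
    unittest.main()
```

At this size the whole thing can be eyeballed and tested in minutes, which is exactly the scope where Scott says the generated code is worth trusting.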

Paul Gebel [00:29:18] Sound advice. Scott, there are two questions that we ask all of our guests at the end of our show. The first is actually interesting because you’ve used the word a couple of times already. The first question we ask is what is the definition of innovation to you? When you hear the word innovation, especially germane to the space that we’ve been in for the past half hour or so? What does the word innovation mean when you hear that?

Scott Ambler [00:29:37] Great question. I think innovation, to me, is new ideas, or new combinations of ideas, in a context. Very few innovations are truly new, right? It’s usually the application of existing ideas, and combinations thereof, in a new context where those ideas haven’t been applied before. So I think it’s a standing-on-the-shoulders-of-giants type of thing.

Sean Flaherty [00:30:08] Awesome. The second question is, what are you reading these days? What’s intriguing to you? What kind of books are you reading?

Scott Ambler [00:30:16] Well, because I’m in school, I’ve been doing a lot of reading in deep learning and machine learning and papers around AI. But on the side, for fiction, I’m rereading The Lord of the Rings, going through each book one at a time, and I’ve forced myself not to watch each movie until after I’ve read the book. So I’m most of the way through The Two Towers right now; I reread The Lord of the Rings probably every ten years or so, and I’m looking forward to watching The Two Towers probably between Christmas and New Year’s. I’ve also got a book, I think it’s something along the lines of 25 Possible Uses of AI; I think it’s on the HBR AI books list this year, and it seems to be pretty interesting. The authors are the godfathers, and sometimes godmothers, of the AI field, writing about their various ideas. So a lot of good stuff.

Sean Flaherty [00:31:15] I saw Paul light up when you said Lord of the Rings. I think it’s required reading in our domain for sure.

Scott Ambler [00:31:20] It is.

Paul Gebel [00:31:21] Deep, rich goodness. Well, we’ll be sure to list all those books in the show notes, as well as the other one you mentioned, Weapons of Math Destruction, which I’m looking forward to cracking open. But Scott, it’s been a pleasure getting a peek inside your mind and your thoughts on the space: a really pragmatic, hands-on approach to dealing with these very new and sometimes intimidating ideas. So thanks for breaking this down for us and approaching it with a really level-headed set of ideals and principles. It’s been a blast. Cheers. Cheers.

Paul Gebel [00:31:51] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get. So please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.
