99 / Overcoming the ‘Fragility of AI’ to Improve User Outcomes

Hosted by Paul Gebel & Roberta Oare

Dipanwita Das

Sorcero

Dipanwita Das is an award-winning global technology entrepreneur and a Sir Edmund Hillary Fellow. As CEO and Co-founder of Sorcero, Dipanwita has worked to improve lives through effective use of the world’s knowledge. She developed Sorcero’s vision of applying advanced analytics to medical and scientific language to improve patient outcomes and helped author the company’s five patented advances in biomedical AI.

Dipanwita previously founded and led 42 Strategies, which built the digital platform infrastructure for the largest private-sector global public health policy initiative in history, partnering with the Gates Foundation, Bloomberg Philanthropies, and the International Red Cross. An Atlas Corps Fellow and later Board Member, Dipanwita designed the Global Leadership Lab and trained global leaders from over 60 countries.

Dipanwita graduated from three of the world’s top startup accelerators for high-growth entrepreneurs: Plug & Play, MassChallenge, and Y Combinator’s Female Founders program. She completed the Stanford Graduate School of Business’s Executive Program for Social Entrepreneurship, earned an M.A. from the Institute of Development Studies at the University of Sussex, and a B.A. from St. Stephen’s College, Delhi.

In addition to her many professional endeavors, Dipanwita actively supports gender equality, climate action, and criminal justice reform. She is passionate about science fiction, education, travel, and making the world better.

Make no mistake. Artificial intelligence and machine learning are super-powerful tools; their benefits seem endless. But let’s not confuse them with superpowers. AI possesses a fragility, says Dipanwita Das, co-founder and CEO of Sorcero, who is working to improve patient outcomes through advanced analytics.

More blind spot than flaw, the fragility of AI is the nuance that algorithms cannot yet account for. Ironically, even paradoxically, AI requires interaction with humans to reveal its true power. Thought of in this way, AI quickly becomes more approachable: it’s really about the people who build the tool, those who interpret its suggestions and predictions, and all the lives impacted by the outcomes down the road. Dipanwita shares some examples relating to human health.

Dipanwita continues: “So it goes right back to us in how we’re collecting and organizing the data, how we’re designing the products, how we are applying this AI, and then what we are doing with its suggestions that will determine the end outcome.”

Anything we have not factored in, she explains, we have to account for somewhere else. If we don’t, we invite uncertainty as to whether the AI-driven product feature will perform as it needs to.

“AI is neither the silver bullet nor is it a demon,” Dipanwita concludes. “It is, at the end of the day, a tool, like anything else in software, to help us do our jobs better.”

Catch the entire podcast with Dipanwita Das as she joins ITX co-hosts Paul Gebel and Roberta Oare to discuss:

  • The role of bias in data collection and interpretation, and the gaps it creates
  • The impact of nuance on user experience design
  • Criteria for finding the right balance of AI + human interaction
  • Sorcero’s “human in the loop” approach, a built-in touchpoint where an expert can give active feedback

Paul [00:00:19] Hello and welcome to Product Momentum, where we hope to entertain, educate, and celebrate the amazing product people who are helping to shape our community’s way ahead. My name is Paul Gebel and I’m the Director of Product Innovation at ITX. Along with my co-host, Sean Flaherty, and our amazing production team and occasional guest host, we record and release a conversation with a product thought leader, writer, speaker, or maker who has something to share with the community every two weeks.

Paul [00:00:43] Hey, folks, the conversation you’re in for today is so timely. We’re sitting down to chat with Dipanwita Das. D founded Sorcero, which helps bring data in the healthcare space from mere access of data into the realm of true usability of data. Her passion is inspiring because so much of what AI and data-driven products have done lately is elevate data visibility, but as product leaders, we need to drive better decision-making through usability and functionality. Without proper thought and care, our systems only become more and more fragile over time. I hope the warnings that D shares are helpful and that you’ll be as inspired as we were to get beyond the access of data and think first about how we can help people be better versions of themselves with decisions they can trust. All right, that’s enough from me. Let’s get into our conversation together and hear it straight from the source.

Paul [00:01:28] Well hello, everyone, and welcome to the show. Today, we’re really excited to be joined by Dipanwita Das. She’s an award-winning global technology entrepreneur, a Sir Edmund Hillary Fellow, and CEO and Co-founder of Sorcero. Dipanwita has worked to improve lives through effective use of the world’s knowledge. She has developed Sorcero’s vision of applying advanced analytics to medical and scientific language to improve patient outcomes and she helped author the company’s five patented advances in biomedical AI. In addition to her many professional endeavors, Dipanwita actively supports gender equality, climate action, criminal justice reform, and making the world a better place for us all. D, thanks for joining us. So happy to have you.

Dipanwita [00:02:05] Thank you, Paul. That was very flattering.

Paul [00:02:09] Outstanding. So I’d love to get it started at a high level and dig into some of the problem statements that I think you’ve tried to address. So digging into some of the things that you’ve encountered, the problems that you’re solving at Sorcero, you know, I was struck in speaking to you before the show because most articles and essays that I read about AI focus on how much of a silver bullet is it, how resilient it makes you, how scalable it makes your products. But you use a phrase called “the fragility of AI” in our chat before the show, and you’ve started by focusing on some of the blind spots that you’ve identified and sort of just painting with broad brushes about how AI and machine learning and data science can solve all your problems. Can you begin our conversation just by unpacking, what do you mean by that?

Dipanwita [00:02:54] Ah, I love that topic. Let me start by first talking about, why are you using AI? Which is often, you know, the right place to start when you decide that a product or a solution should have AI-powered components or ML-powered components or any sort of these computational components in it. The first question to ask is why. In our case, in the case of Sorcero, it started off because we wanted to be able to analyze very large volumes of complex content that were coming from different sources, something a human being, even the smartest of us, cannot handle at scale. Usually, that’s a starting point for saying, “I should have some sort of a cognitive tech component to it.” It’s a lot of it. And the minute you start going there, then a lot of, “what kind of content?” Is it an image? Is it video? Is it text? What kind?

Dipanwita [00:03:43] In our case, it’s language and it’s text, which is inherently complex for lots of reasons, but let me start by first saying nuance. We don’t even all speak English the same way, and that kind of nuance, then when you combine it with science, again, in the case of Sorcero, in our case, biomedical science, you’re adding these layers of complexity that are very difficult to hard-code into an algorithm. So all of the background knowledge that we come with when we’re reading a piece of text is entirely absent from any sort of an algorithm, and we have to feed that all in. And some of it will always remain impossible to feed in. So this concept of drawing conclusions from previous lived experience, et cetera, how do you factor that into an algorithm?

Dipanwita [00:04:29] So when I talk about the fragility of AI, what I mean is this: anything you have not factored in, you have to account for somewhere else in order to have the certainty that you can depend on this product feature that has an AI component to do exactly what it needs to do, in a way a human expert can just take from it and move on. That’s really what I mean by it: the nuance, the language, and all of the stuff that we know and come to the table with.

Paul [00:05:00] That’s really well-put. I think the thing that is most striking to me is just how human that is. It’s not an inaccessible ivory tower approach to AI. It’s really thinking about the people impacted by it: those building it, those interpreting it, and those affected by the medical and biomedical services that are going to become health outcomes for patients down the road. So it’s really thinking about not just, “how can we make this technology bigger, better, faster?” It’s really about, how can we improve patient outcomes? How can we make people understand data in ways that they might not have been able to understand it before? Am I recapping sort of what that core problem statement is? Did I miss anything?

Dipanwita [00:05:39] I think the one other thing I would like to add (it’s not so much that you missed it) is that this is also the right place to understand the word bias. And, you know, we talk about it a lot when you talk about patients and health outcomes. But if you ask the question about where does bias come from, it usually comes in at the data collection level; an algorithm in and of itself does not have bias. It’s the data on which it is trained, and even the data to which it is being applied, that is absent certain things. And that’s, again, where the responsibility falls on the human beings. If you’re designing a trial, a clinical trial, and you’ve omitted certain patient populations, you can’t then later on say the AI is biased; it’s that the data set is absent certain components, so when you are trying to make decisions based on that data set, you’re missing them. So this is really where AI is neither the silver bullet nor is it a demon. It is, at the end of the day, a tool, like anything else in software, to help us do our jobs better. But it goes right back to us in how we’re organizing the data, how we’re designing the products, how we are applying this AI, and then what we are doing with its suggestions that will determine the end outcome.
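
To make that concrete in code: one practical way to catch this kind of data-level bias is to evaluate a model per subgroup rather than only in aggregate. Below is a minimal sketch of that check; the groups, labels, and predictions are invented for illustration.

```python
# Minimal sketch: aggregate accuracy can hide subgroup gaps that trace back to
# data collection. All groups, labels, and predictions below are invented.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (subgroup, y_true, y_pred) tuples."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        correct[group] += int(y_true == y_pred)
    return {g: correct[g] / totals[g] for g in totals}

# Hypothetical predictions from a model trained on a skewed data set.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

overall = sum(t == p for _, t, p in records) / len(records)
print(f"overall accuracy: {overall:.2f}")   # 0.71 -- looks tolerable in aggregate
for group, acc in subgroup_accuracy(records).items():
    print(f"{group}: {acc:.2f}")            # group_a 1.00 vs. group_b 0.33
```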

Roberta [00:06:47] That’s fantastic, D. I’d be interested to hear what your impressions are on how accessibility is impacting the work that you do. Are there strategies that you have in place right now that you’re looking at to further accessibility for this data?

Dipanwita [00:07:01] Sure. Roberta, for the purposes of my response, could you please define accessibility in this context for me?

Roberta [00:07:08] Sure. Accessibility in the context of the digital space whereby we make information equitably available to all consumers so folks that have disabilities or need to consume data using other assistive devices.

Dipanwita [00:07:22] Right. I’m going to give you a somewhat tangential answer, partly because accessibility isn’t inherently core to how Sorcero solves problems. We do deal quite a bit with health equity analytics and DEI in these bodies of data. But in terms of accessibility, there is a set of standards for software design, of course, that take into account various kinds of abilities and account for that in the user experience, everything from color to speed. And so we definitely follow all of those standards and those are evolving all the time. So that is one big way of doing it.

Dipanwita [00:07:57] I think there is a lot of interesting conversation happening around data privacy and personal health data where accessibility is not exactly that definition, but literally who can access it for how much? Is that even a good way to go? That I think we could maybe speak about a little bit later. But to really respond to that question in Sorcero’s context, one of the things that we’re supporting our customers with is health equity analytics, and this is particularly prevalent in clinical trials. For example, if a company is running a clinical trial to treat a particular disease state, disease states can be more or less prevalent in different kinds of patient populations, by geography, by age, by race, by gender, and so forth. And that is inherent in our genetics.

Dipanwita [00:08:43] If that representation and that distribution isn’t fully covered in a clinical trial, what happens is when that product or that drug is taken to market, physicians are unable to prescribe that properly and then reimburse it properly because the data doesn’t explicitly call out the right patient population or the right dosage and so on and so forth. So one of the things we have developed is an analytics package to help folks figure out where there are gaps in the data, even in a product that is currently in-market, particularly as it pertains to currently available data on the propensity of that disease in different population groups. So that’s, like I said, sort of adjacent to your question on accessibility, but that’s how we deal with it outside of software design standards.

Roberta [00:09:30] And when you’re considering all of the nuances, right, and all of these tangential topics, how does that influence the design?

Dipanwita [00:09:38] So there are lots of parts of the user experience in the design that I think are important. One is of course the UI itself. So I’ll start surface down. And UI, as you know, is not UX. So from the UI standpoint, the first thing I think about is, “there are so many buttons, what do I want my user to first look at? If they kind of freeze out 70% of the screen, are they still going to get to the most important thing?” So using, again, everything from colors to shapes to positioning of different fields, we have a lot of filters. We’re dealing with a lot of text. There are cognitive items. There are callouts. There are ontologies, there are analytics. There’s so many valuable things in that one screen.

Dipanwita [00:10:21] A big part of the UI is drawing attention to the thing that truly matters, or the three things that truly matter. And we try and do that as much as possible through extensive testing, getting the customers to beat it up, getting our own internal experts to beat it up. And it is really interesting. We have a massive product release coming out next week and I was just going through the war room recording videos, and they’re actually calling that out and saying, “you know, I kind of miss that button; I think we need to change where it’s positioned, or the color, or the callout.” So one is just that, and doing that constantly. Because at the end of the day, you’re designing for a user persona, but it will never cover all the edge cases. So how many of the edge cases can you encase in the UI? So that’s level one.

Dipanwita [00:11:03] In the user experience, one of the tricky things is scale. So one of the reasons people are using AI at all is because there’s so much content and I want to know all of it and look at all of it. But that also brings in a lot of trickiness in how do you scalably represent that kind of text, or that volume of text, without completely overwhelming the user. And that’s really an art and a science. And I want to say that most of us are still figuring that out, and it could be, you know, calling out the five most important things to look at, using generative auto summarization to again help the users, kind of bundling and rolling up a lot of, let’s say, observations under a single insight so they can get to the lineage and the raw data, but they’re not overwhelmed with it. So there is a lot of trickiness there in handling data at scale. So that’s a whole ball of yarn in a design framework: text at scale, not just data at scale.
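
One way to picture that roll-up idea in code: a single insight carries a summary up front while keeping every underlying observation, and its lineage, reachable underneath. This is a toy sketch; the class and field names are assumptions, not Sorcero’s data model.

```python
# Toy roll-up structure: one insight summarizes many observations while keeping
# the lineage of each one reachable. Class and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Observation:
    source_doc: str   # lineage: where this observation came from
    excerpt: str      # the raw text the user can drill down to

@dataclass
class Insight:
    summary: str      # e.g., produced by generative auto summarization
    observations: list = field(default_factory=list)

insight = Insight(
    summary="Dosage guidance varies across three recent publications.",
    observations=[
        Observation("paper-A", "recommended 10 mg daily..."),
        Observation("paper-B", "the trial arm used 20 mg..."),
        Observation("paper-C", "no consensus on titration..."),
    ],
)
print(insight.summary)                        # the user sees the roll-up first
print(len(insight.observations), "sources")   # drill-down preserves lineage
```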

Dipanwita [00:11:55] The third thing that we look at is the sort of AI-powered features, which are generally either predictive or suggestive or they’re highlighting data points or they’re doing auto summarization. The first thing we need to take into account is trust. How are we building in trust so a subject matter expert who knows the subject can look at the screen and go, “Yep, I could work with this.” That happens in a few ways. One is algorithmically, everything is transparent and [inaudible]. That’s one. Number two is representing data lineage clearly in the user experience. But number three, the thing that we found to be most interesting to our customers is having a human-in-the-loop approach where they’re able to reject, to suggest, to give active feedback very easily while using the product and knowing that what we’re doing isn’t going to impact the end patient.
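
And a toy sketch of that human-in-the-loop touchpoint: the model only proposes, a subject-matter expert accepts, rejects, or corrects, and every verdict is logged as feedback. Again, the structure here is an illustrative assumption, not Sorcero’s implementation.

```python
# Toy human-in-the-loop sketch: a suggestion is never applied until an expert
# reviews it, and every verdict is logged as feedback. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    doc_id: str
    label: str          # the model's proposed annotation
    confidence: float
    provenance: str     # data lineage shown to the reviewer

@dataclass
class ReviewLog:
    entries: list = field(default_factory=list)

    def review(self, suggestion, verdict, correction=None):
        """verdict is 'accept', 'reject', or 'correct'; returns what ships."""
        assert verdict in {"accept", "reject", "correct"}
        self.entries.append((suggestion.doc_id, suggestion.label, verdict, correction))
        # Only accepted or expert-corrected output moves downstream;
        # a rejection ships nothing (None).
        return suggestion.label if verdict == "accept" else correction

log = ReviewLog()
s = Suggestion("PMID-123", "adverse event: headache", 0.82, "table 2, row 4")
print(log.review(s, "correct", "adverse event: migraine"))  # expert output wins
```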

Dipanwita [00:12:44] So there’s lots of different pieces. And then the last, which is probably its own podcast, is data visualization: storytelling with data. You know, [inaudible] has done a bunch of stuff with it, but you will tell the story you want to tell. So it’s very important there, again, to make sure that the way we are representing data and data highlights is done in a way that benefits the end user, and in this case, is really helping our customers understand how to treat patients better.

Roberta [00:13:09] Amazing. So going back to the why, right? It’s very much a human-centered approach. I completely appreciate that. Can you talk a little bit about how not to make the AI more precious than the problem?

Dipanwita [00:13:24] Ah. That is such a good question. So we actually debated this a lot because the tendency is to say, “the AI will!” Ah, there is no such thing, right? The way we keep ourselves sober and not drink our Kool-Aid, or drink the AI Kool-Aid, is to remind ourselves that at the end of the day, we’re solving a problem and we’re designing a software product to solve a problem. And that software product could have one AI-powered feature, could have ten, could have, you know, a little fraction, something happening here. But as long as it solves the problem, it is a product worth building. So that’s kind of the grounding philosophy: it isn’t about the AI, it is about solving the problem. That’s one.

Dipanwita [00:14:08] Number two is workflow is great and workflow is, again, a very sobering space because you have to get into the nitty-gritty of how your customer is actually spending their day, their minutes, their hours, and then make sure that you’re making it not incrementally better, but often 10x better. And that’s really sobering also. Kind of going back to the fragility of the AI and the inherent limitations of it, when you’re at that workflow level, every little piece matters and the AI may only pop up in three or four or five different ways, and that’s your wow factor. But you need this person to log into this workstation every day and do their work. So again, it’s very sobering to stay at that level and not drink the AI Kool-Aid.

Dipanwita [00:14:52] And last, but certainly not the least, is the process of benchmarking and operationalizing algorithms, which is a beast. And if you talk to anyone who has had to operationalize models, one or more, you will know immediately that it is not fun. It’s not a solved problem. And keeping these things stable, not over-tuned, working all the time, and benchmarked, will definitely keep you off the Kool-Aid.
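
A minimal sketch of the benchmarking discipline she is describing: before a retrained model ships, its score on a frozen benchmark set is compared against the production model, with a small tolerance so regressions and over-tuning get caught. The metric and thresholds below are illustrative assumptions.

```python
# Minimal deployment gate: a retrained candidate model must hold its own on a
# frozen benchmark set before it replaces production. Figures are illustrative.

def passes_benchmark(candidate_score, production_score, tolerance=0.01):
    """Reject candidates that regress more than `tolerance` on the benchmark."""
    return candidate_score >= production_score - tolerance

# Hypothetical F1 scores on a frozen, versioned benchmark set.
prod_f1, cand_f1 = 0.87, 0.84
if passes_benchmark(cand_f1, prod_f1):
    print("promote candidate to production")
else:
    print("block release: candidate regresses on the benchmark")  # fires here
```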

Paul [00:15:19] So we’ve covered a lot of ground and I want to tie this all together. I do want to focus on one thing that kind of sparked this whole thread, and that’s the concept of bias, right? So if we have more data than humans are realistically able to process and we have AI-enabled products that are able to essentially be our superpower, you know, AI plus humanity is greater than the sum of the parts. If we can build a platform, if we can build an experience where people are able to understand and visualize data in ways that can be greater than the sum of the parts, how do we understand what those gaps are? Because if they’re blind spots to us and we’re the ones building the technology, then won’t the technology have those blind spots too? How can we be sure that we’re rooting out bias at the root and making sure that our data samples are valid and cover all the people that we’re trying to help?

Dipanwita [00:16:10] So here’s the bad news. There is not a bias-free data set that we’re going to start out with. Right. And first is to accept that a lot of the data we’re working with, let’s say if you’re doing longitudinal data on population health that’s been collected over 20 years. 20 years ago, the world was a very different place. So what we need to really do right now is two things. One is figure out where it is encoded and hard-coded in the data, and we know where some of those things are already. Like we as human beings know that there were geographical biases around North and South. There’s definitely gender biases. We know that there are racial biases, there are even age biases.

Dipanwita [00:16:46] There’s an incredible project I know of called Data for Health. I think that it was happening in Eastern Africa where they were just trying to collect birth dates and death dates because there weren’t birth certificates and death certificates, so you don’t actually know what the lifespan of that population is on average. And there is very unequal data collection on health across the world by geography, by, you know, rich nations, poor nations, and so forth. So starting out with we don’t actually have very clean, well-fit data sets. Now that we’re applying a lot of cognitive tech, we need to identify those biases and then start new data collection projects across the world in different spaces so we have better data sets. That’s number two.

Dipanwita [00:17:26] Number three is that medicine historically has not been pure science. You know, it’s populated and littered with all sorts of superstition and bias and prejudice and all of those things. So medicine in and of itself is not a hard binary sort of, “this is how we treat this person.” And we see this even today. So anything to do with medicine, I’m going to push us to think about how we are using AI, because that’s how we protect ourselves: how far is this unsupervised algorithm going to touch the end user? And there are spaces where it’s been demonstrated that an algorithm could outperform a human being, you know, it could be in processing images. But that doesn’t mean that we just replace a doctor. It means you give more aids to the radiologist.

Dipanwita [00:18:09] So I like to separate the use of AI into sort of three buckets: decision support, decision augmentation, and decision automation. Right? So decision automation obviously is where AI is kind of doing all of it. So at Sorcero, we sort of shift the focus from results-oriented (yes or no) to enhancing a customer’s workflow. So the way to do this really, really well is to identify the right place to apply the right kind of AI use. Decision support in medical diagnoses is where an AI is sort of suggesting, based on a bunch of images, that this could be what’s happening with patient one, and then a radiologist maybe comes in and looks at it and says, “Yay, that’s great.” And there’s a human in the loop, and at the end of the day, the human is responsible. And that’s the least amount of machine, the most amount of human, because the decision is being made by a human based on principles and ethics and experience and logic and reasoning and all of those things. And of course, their degree.

Dipanwita [00:19:12] Next is decision augmentation, which could happen in a financial investment, right. Like you plug in all your details and they tell you, “yeah, you’re kosher for this loan.” And that’s almost like a 50-50. There is still a human being that is looking at your data, but the machines are generating recommendations based on previous behaviors and then the machine suggests, human decides. Also a place for bias, right. We have heard and we’ve read about race-based and other issues with loans and credit. But again, human-in-the-loop.

Dipanwita [00:19:45] Next is decision automation, which could be a much more low-risk environment in next best action for digital ordering, right? “You like these shoes? Here are seven others.” That’s a low-risk space where if a human did not intervene and you bought, you know, a teal shoe instead of a black shoe, it’s not going to materially hurt you. So that’s where you can increase AI. So that’s how I would suggest we look at it. What is the risk in the space in which it’s being applied? What is the core data set? And does a human being’s involvement materially improve the outcome for the end user?

Paul [00:20:23] Putting this in a frame of reference that might be a bit more generic than the specific use case at Sorcero… There is an implicit bias in science itself. I don’t think that any scientist who takes themself seriously would say this out loud, but there is an implicit bias in science that says anything that is will continue to be, and, you know, what we see is all there is. But at some point, you do have to use your imagination. You have to say, “there is a cure for this disease that doesn’t exist today that we don’t know, but we believe it can be achieved.” And we have to figure out how to put pieces together of what is known but doesn’t exist today to turn it into this thing that we have in our imagination, this strategy, this vision, and get us there. So maybe I might be getting into the philosophical weeds here, but…

Dipanwita [00:21:05] Oh, I like the thread on which you are going. It is something I noodle on a lot myself. And if I may…

Paul [00:21:11] Yeah, please.

Dipanwita [00:21:11] I would say that science fiction is reality because we imagine a thing, and that’s maybe the easiest, most relevant thing that everybody can relate to. You see something in The Jetsons and then you get working on, “How do I build a flying car?” Right. And even Sorcero’s products have a touch of that in them, the inspiration from science fiction, of what could be done. But also the essence of science is recognizing that what you know to be true today may not be true tomorrow because you do not know everything. And so science inherently is ever-changing. And that’s also the beauty of it. Even in the hard sciences, you could theoretically discover something so fundamental that it changes the shape of the map. Is it rare? Yes, it is, but it is possible. And that’s really important. And that’s particularly true of medicine. And you see this, you know, particularly as we’re still working through a pandemic. How many times did what we know about COVID-19 change? Does that mean the scientists were wrong? No, we did not know. Now we have more data, and now we know more, and more. And similarly, you know, that’s how we do AI; it works a certain way now, the data changes, and it works a different way. So it’s also an evolving space, which is why we work in it, because you learn new things.

Roberta [00:22:25] I love that energy, D. It’s amazing to see your spirit. In just doing some background research on you, I find that you’ve got such an activist spirit and I’m so curious, like, what are you passionate about right now? What’s the most important thing to you right now to accomplish?

Dipanwita [00:22:44] The most important thing for me, above all else, is I want my customers to come back and tell me that their lives are better because they use our products. That’s it. And if they come and tell me that and their lives are even a little bit better, awesome.

Roberta [00:23:00] That is an honorable quest.

Paul [00:23:03] Well, I can’t think of a better way to wrap things up. I do have two last questions to close out our time together and it’s a couple of questions that we ask of all our guests. The first being, how would you define innovation in your own words? What does innovation mean to you?

Dipanwita [00:23:17] Innovation to me means that science fiction North Star, combined with Kaizen. It’s continuously improving, better today than we were yesterday, on our path to that North Star. That, to me, is really innovation in action.

Paul [00:23:31] Last question, where would you point our listeners to find inspiration? Obviously, your writings and your activism. Any books that have stood out that an aspiring product manager should have on their shelf, or even a TED Talk or a blog post that has stuck in your mind?

Dipanwita [00:23:47] So there are two books that have been stuck in my mind for a little bit, and they’re a little different. One is Ways of Seeing by John Berger, which is about art and kind of disabusing some of our notions about art being for the elite versus for everyone, and you’re going to like it. It’s argumentative. It’s a little bit activist. It’s very cool. The other book, which I’m rereading after a while, is Gödel, Escher, Bach, the Hofstadter book on AI. And it’s brilliant because, at the end of the day, it’s what we do and what we live with. I think product managers should really read it because it grounds you in the philosophy behind sort of these self-learning cognitive systems that we want to build. I would say those two are probably most top-of-mind at the moment.

Paul [00:24:34] Awesome. We’ll link those in the show notes, for sure. D, thanks so much for taking the time today. I really appreciate you spending a minute with us and chatting through some of these ideas. I’ve certainly learned a ton and I’m really fortunate to have spent the time chatting, so thanks again.

Dipanwita [00:24:47] Thank you very much for having me. It’s been an honor.

Paul [00:24:50] Cheers.

Paul [00:24:53] Well, that’s it for today. In line with our goals of transparency and listening, we really want to hear from you. Sean and I are committed to reading every piece of feedback that we get, so please leave a comment or a rating wherever you’re listening to this podcast. Not only does it help us continue to improve, but it also helps the show climb up the rankings so that we can help other listeners move, touch, and inspire the world just like you’re doing. Thanks, everyone. We’ll see you next episode.
