Episode 75

The Regulation of AI in Healthcare

with Suchi Saria, Ph.D.
Episode hosted by: Julie Yoo

August 1, 2023

Suchi Saria, Ph.D.
Founder & CEO, Bayesian Health

Suchi Saria is the John C. Malone Associate Professor of computer science at the Whiting School of Engineering and of statistics and health policy at the Bloomberg School of Public Health. She directs the Machine Learning and Healthcare Lab and is the founding research director of the Malone Center for Engineering in Healthcare.


Find projects that are aligned with your core.

Transcript


Julie Yoo: Alright, Suchi. It's great to be in conversation here, as you are obviously one of the OGs of the AI space, specifically as it applies to the healthcare domain, and it's obviously a very dynamic time right now. So I'm very keen to get your perspectives. One thing that was notable was that you recently announced participation in a consortium of folks who are thinking about the AI safety dimension of everything that's going on right now. I think there have been a lot of consortium activities spinning up left and right in the healthcare world, many of which go back many years even, but certainly in the last 12 months we've seen a lot of that. There's a ton going on on the broader tech stage, with the big tech companies taking a stance on this, and then obviously at the federal level a lot of hand-wringing, let's call it, on how to address this domain. Now, since you are involved in at least one, maybe multiple, and you should talk about all the different groups that you're involved in, I'm curious to hear: what does success look like for you for these consortia efforts? There are skeptics out there who say, how much teeth do these efforts have, and what's the real tangible value that might come of these initiatives? And then on the flip side, industry players who might not be involved in some of these initiatives might feel like they're missing out or not on the boat. What are ways that folks in the broader ecosystem might be able to get involved or contribute to those efforts as well?

Suchi Saria: So Julie, obviously you wanna dive right in.

Julie Yoo: Right in.

Suchi Saria: Talking about AI safety, a very interesting and important topic. Maybe it'll be helpful for me to give a little bit of background so people listening can see where I'm coming from. I've worked in the field of machine learning and artificial intelligence for a little over 20 years. Most of my early work was on the pure tech side of the house, starting in robotics. I got into healthcare around 2007, 2008. I've since held various positions, mostly within academia as a researcher, have been on many scientific advisory boards, and have spun out companies. I'm also now the CEO and founder of Bayesian Health, a company we spun out with six years of IP out of Johns Hopkins, really focused on bringing real-time AI to the point of care to augment care teams. The trendy way people think about it these days is co-pilots for care teams. Safety is an angle I'm deeply passionate about; over the last five years on the research side of the house at Hopkins, we've done a huge amount of work on AI safety. In particular, I think there's this Twitter notion of safety, which they call alignment: if you were to start building AI agents or bots that have emergent properties or become sentient, how do you get them to align with human motives? I actually think among experts that isn't considered to be one of the more important core areas of safety. Instead, coming back to a regulatory perspective, the FDA, for example, has approved 500-plus AI tools to date in the last few years, and many of these are basically what I would call software as a medical device. It's software in radiology, software in pathology, software in clinical care. And what the software is doing is essentially using data, synthesizing it in some way to produce insights that then inform a diagnosis or treatment or dosage. In that scenario, I think there's enormous opportunity for thinking carefully about safety: as we deploy these kinds of AI tools, what is our end-to-end rubric? In the scenario where AI falls under the rubric of software as a medical device, that rubric becomes very firm and clear under the FDA. In other scenarios, maybe it doesn't fall under software as a medical device, but the same principles and guardrails hold. In the last two to three years, a number of different grassroots groups have formed, including the Coalition for Health AI, which I'm one of the co-founding members of. When we started that group, the idea was to help health systems self-regulate, because we're building tools and we're deploying them, and to come up with some kind of end-to-end rubric or framework for understanding what great looks like, what responsible, safe ML deployment looks like. In my work with the FDA, wearing my researcher hat, we've been working on novel tools that they can use to modernize the FDA's framework for overseeing these kinds of tools. And then most recently, the National Academy of Medicine, rather than a specific agency, has brought together a group through the perch of the National Academies, a cross-agency effort, to reconcile the many different guidelines and rubrics and come up with something that is usable very broadly. And the ideas are pretty simple, all the way from: can we identify what the key risks are?
And sometimes half the risks are that we are deploying software without fully understanding what it is we're trying to accomplish. You've seen this in digital health. I almost feel like all of digital health could benefit from some kind of discipline and regulation, right? Because there are all these glitzy objects where we take something and we want to deploy it, but we don't fully understand what it was we were trying to do in the first place. What was the goal? How are we gonna measure success? How do we know we're succeeding? What are the metrics? And then when we deploy in the real world, there's also this notion of: it's learning on that site-specific data. Is it tuning correctly? Is it learning correctly? Are there drifts and shifts we need to worry about? Those are the kinds of ideas these rubrics will, in the end, turn into a checklist that makes it very easy, whether the product is regulated or not, to have some semblance of best-practice guardrails that people can implement, either within the system or anytime they're choosing to use AI.
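
To make the "drifts and shifts" idea concrete, here is a minimal sketch of one common check: comparing a feature's live distribution at a deployment site against its training-time reference with a two-sample Kolmogorov-Smirnov test. The feature name, thresholds, and data here are hypothetical, and numpy and scipy are assumed; this illustrates the general concept, not Bayesian Health's actual pipeline.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(reference: np.ndarray, incoming: np.ndarray,
            alpha: float = 0.01) -> bool:
    """Flag drift when the incoming feature distribution differs
    significantly from the training-time reference distribution."""
    _, p_value = ks_2samp(reference, incoming)
    return p_value < alpha  # True -> distributions likely differ

# Hypothetical usage: compare a recent window of lactate values
# against the training cohort.
rng = np.random.default_rng(0)
train_lactate = rng.normal(1.8, 0.6, size=10_000)  # reference cohort
live_lactate = rng.normal(2.4, 0.9, size=500)      # shifted live population
if drifted(train_lactate, live_lactate):
    print("Drift detected: review calibration before trusting model scores.")
```

In practice a check like this would run per feature and per site on a schedule, with alerts feeding whatever governance checklist the rubric defines.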

Julie Yoo: Yeah, and probably most importantly, creating a common language so that people are saying the right words in the right context and able to understand each other. So that's really interesting. You mentioned the FDA, and it's interesting: in my conversations with folks in non-healthcare industries thinking about how AI should be regulated, many folks actually point to healthcare, for once, as sort of the tip-of-the-spear example of how we actually do have a regulatory framework for assessing and approving AI-based products, in the form of what you described, whether it be SaMD or the 510(k) pathway. In what ways do you believe the FDA model could be a template for other industries? And, specific to healthcare, what do you think needs to evolve or be further addressed even within the healthcare domain when it comes to the way the FDA has operated to date?

Suchi Saria: Yeah, great question. I actually think you're absolutely right that because of the history of needing to be careful, needing to measure, and needing to manage risks, there's a lot more discipline here and understanding already in place around safety. A very simple way to think about this is, when the FDA looks to regulate a product, the first thing they start with is the intended use. For any piece of technology, what are you hoping to accomplish with it? Can you specify who's to use it, how they're to use it, and what it is we're trying to do? The next thing is to think about the risk-benefit: what are the risks it poses, and what is the benefit? And only if the benefit is dramatically higher than the risk is the device approved. In walking through this framework, you then do tests and trials and studies to understand the benefit and the risk, and you're able to demonstrate that you first understood what it was supposed to do in the first place, you clearly articulated the risk, you clearly measured the benefit, and you have a framework in place to do the risk-benefit trade-off. That's it in a nutshell, and I think it's a very useful framework for introducing new technologies to market. As to where there is opportunity to modernize: practically speaking, the framework was written and built a long time ago, decades ago. To me, one of the principal gaps is basically the slowness with which the FDA moves. There's so much opportunity for high-quality new solutions that are beneficial to come to market. For instance, at Bayesian, we've spent years of research developing a solution for early detection, sepsis being one example among a number of other areas. We are applying this platform technology that takes data that's already collected in the electronic health record. Every health system has already paid millions of dollars to collect that data, and you can use it intelligently to identify patients at risk for life-threatening complications. Now you can make those signals available within the workflow, making it very easy for providers to identify, treat, close gaps, improve outcomes, and cut financial waste. The challenge remains, though, that in some of these areas the FDA has expanded oversight and now wants to regulate, but under our current process many of these are de novo devices, where it could take years for the product to come to market. So I think there's an enormous opportunity to say, can we modernize the infrastructure so we can do these studies in a much faster fashion and evaluate them? Another key gap is this notion of post-market surveillance. When a device is brought to market, you wanna make sure that whatever performance you showed in the lab holds when you're deploying in the real world; there has to be an opportunity for the device to stay performant. And the reason performance can drift is because, maybe, the population changes. Maybe something like COVID happens. In our own studies, we did this beautiful study that came out in Nature Medicine, going back to our work in sepsis. We started in one site, and then we took the device to a second site and a third site, and a fourth and a fifth site. And through the course of these sites, COVID happened, which was the biggest surprise.
What we were able to show is that, essentially because we built the tool and the device in a way that was adaptive, which you can do with AI in an intelligent fashion, the device stayed performant over the course of these deployments across different sites. Even as COVID brought a population the system had never seen before, it was able to tune and improve and stay performant. And under the FDA rubric, in December of last year, a new provision was passed called predetermined change control plans, PCCP for short, which allows you to continue to update devices in the real world in order to maintain and improve performance. I think that's brilliant news, because it now allows companies to deliver tools that are gonna be truly performant in the real world, and to do this in a way that has good oversight over it. Now, obviously, questions remain around implementing and operationalizing all of this, but I'm exceptionally excited about both the overarching framework, where I think there's a very clear line of sight to how this can happen, and the ability for AI to actually be deployed very responsibly at the point of care.
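
As a rough illustration of what "staying performant" under a predetermined change control plan might involve, here is a hedged sketch: monitor discrimination on a window of predictions from a new site and, if it holds, refit only the score-to-probability mapping (a Platt-style recalibration) rather than the underlying model. All names, thresholds, and data are hypothetical, scikit-learn is assumed, and this is a sketch of the general technique, not the method from the Nature Medicine study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def monitor_and_recalibrate(scores, labels, auc_floor=0.75):
    """If windowed discrimination stays acceptable, refit only the
    output mapping to the new site; otherwise escalate for review."""
    auc = roc_auc_score(labels, scores)
    if auc < auc_floor:
        raise RuntimeError(f"AUC {auc:.2f} below floor; escalate for human review")
    recalibrator = LogisticRegression()
    recalibrator.fit(np.asarray(scores).reshape(-1, 1), labels)
    return recalibrator  # maps raw scores to site-calibrated probabilities

# Hypothetical window of predictions and outcomes from a new site:
scores = np.array([0.2, 0.7, 0.4, 0.9, 0.1, 0.6, 0.8, 0.3])
labels = np.array([0, 1, 0, 1, 0, 1, 1, 0])
recal = monitor_and_recalibrate(scores, labels)
print(recal.predict_proba(scores.reshape(-1, 1))[:, 1])
```

The design point is the one she makes: the prespecified, bounded update (here, the one-dimensional recalibration) is what can be reviewed and authorized in advance, while anything outside that envelope triggers human review.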

Julie Yoo: And you just articulated what I think is such a challenge for companies building in this space: you've got the speed with which regulation and regulatory frameworks are moving, and then you also have the speed with which the health systems you're working with are able to build the appropriate infrastructure and have the appropriate practices in place to let you even take advantage of the fact that you have access to all this dynamic data. And we know from our many conversations that pretty much every health system in the country at this point is trying to come up with its quote-unquote AI strategy. As I've seen it, there are multiple pillars to that. One is, in some cases, literally AI, yes or no, right? We've actually seen a few health systems say, we're gonna shut down access to ChatGPT for all of our employees until we figure out whether there's a there there, which has been, you know, remarkable to see. Those who are leaning into an AI strategy are saying, okay, what are the use cases for which AI should and can be applied, with responsibility and safety obviously a huge dimension of that? And then I would say a third pillar is, what is the actual IT and data infrastructure, and the policies and procedures that go along with it, that we need to put in place to fully take advantage of these cutting-edge technologies now being brought to us by companies like Bayesian? You have probably participated in so many of these conversations yourself. Is this a productive exercise for organizations to be doing? What's the right framework, the most productive way, to resolve those open strategic conversations that health system executives are having these days?

Suchi Saria: I think it's hard to execute without a strategy, so it's very important to have a strategy in place. It's also important to be humble about having a strategy in place, because what's also happening is that by doing, you're going to learn a lot. So there are a couple of different open items I see. I'm part of a couple of different large national collaboratives, with leaders from health systems but also insurance companies, around planning an AI strategy: what should it look like, what the use cases are, and so on and so forth. The first thing is, I think, steer away from just doing pilots, the distractions, the glitzy objects. A lot of what's happening is, in some sense, AI has been growing for a long time, but the excitement for AI in healthcare feels like it's two quarters old, because ChatGPT was released in Q4 of last year, and that suddenly opened up consumer excitement and interest, board-level interest, which then percolated down to leadership interest, so they're getting bombarded from all sides. It's starting from a consumer-level experience, on the back of how many advances had occurred over the last decade. Interestingly, I was in a National Academies meeting earlier last week where a colleague made the observation that suddenly they can't keep AI out. They can't keep AI out because essentially all the consumer-led effort is making it so. Generally in enterprises, decisions are made at the center, at the top, and there they're more reluctant to try out new ideas, to take on new things. But consumers are curious, and essentially our clinicians, our care teams, our back-office people have all become consumers, and they're bringing ideas. So I think it's an interesting time. It's clear AI is necessary. It's hard for me to imagine a system that's going to stay live and stay around five to ten years from now if their strategy started with AI, no. It's definitely a question of AI, yes, and then of how and what and where, given where we are. So number one, the simple thing people can align on is: let's find experienced teams that really know what they're doing, that have deep know-how in an area, and then evaluate whether, for your system, that area, that thing they're doing, is productive for you to take on. Rather than just engaging with AI as a technology, trying to do something with AI, find projects that are aligned with your core, the core of your business. In healthcare, ultimately you're delivering care; that's why you exist. And Bayesian is focused on the core of care delivery. Perhaps you're in an area where back office is your issue; whatever the issue areas are, identify them, find teams that are deeply focused in those issue areas with deep expertise, and make sure you're solving problems you deeply care to solve, not just exploring the technology.
The other thing I see, a common misconception, is this notion of seeking out teams that are just pure tech experts rather than focusing on teams with deep expertise in the domain, the problem, and the technology. Over and over again in the last decade, we've seen this from tech entrants coming into healthcare with very limited experience in healthcare, and there's just a huge gap in what it takes to get AI to work within a problem setting; that knowledge of the problem setting is very crucial for being able to operationalize it. Just as an example, with Bayesian, I've spent over a decade now thinking about how we make AI work at the point of care, work for our care teams. Actually, to be fair, the others, like claims, denials, back office, those problems are simpler, they're easier. On the flip side, it pains me that we're still practicing like we were in the Flintstones era. Literally, a patient still comes in, you hear what they have to say, and from anecdotes, you respond. Important life decisions are getting made, tons of data is getting collected, but there's so little use of it. Even from a system perspective: one in three nurses is leaving the workforce. We know about staffing shortages. We know about declining margins. We know that the patients coming in are higher acuity, and our care teams are constantly being asked to do more with less. Nothing I'm saying here is news; it's just that we're in denial. We don't want to solve the problems that are in front of us, and any system that doesn't know how to think about this and doesn't put a strategy in place, I don't know what they're planning to do. So I think there's such an opportunity here to use AI, with data and the right infrastructure, to do this. Going back to your point: is every system gonna stand up a team that does monitoring and reporting and drift and shift and needs all this expertise? Yeah, we'll definitely see large academic medical centers that want to and can hire a team of people who do things like this. But I also feel like we can borrow a page from other sectors, where we've brought AI to life with fully managed, end-to-end services. So for instance, the way Bayesian is doing this is we're bringing our deep expertise on reimbursement, regulatory, what it takes, and point-of-care delivery, so that essentially the platform plugs into your workflow and delivers a service, and you don't need to worry about things like drift and shift and managing and measuring end-to-end performance. It's much like when you bought an ultrasound machine: it works, and in the same way this kind of software has to work, and what you care about is getting guarantees. What is the thing I need to know to show it's really working? Can I measure that? Can you show me it's working? And then what it takes to make the sausage is your problem, not my problem.

Julie Yoo: Yeah, and what you're talking about, Suchi, is basically creating that as a platform. So Bayesian has its own products that it has developed itself and brought to market, and I know you've been very thoughtful about not just the clinical rigor and the technical rigor but also the business model, the justification that this is an application area that makes sense from a value-proposition perspective for the customers I'm serving. And then similarly, if a health system has internally developed AI algorithms or other such assets, those same governing principles, the justification for why you've brought your own homegrown solutions to market, you can apply to those third-party algorithms, and help that health system both make the decision about what is a viable product to bring to market and potentially help them commercialize it. Is that an accurate description of what you were just saying?

Suchi Saria: It's a platform where we really are partnering with the health system to bring AI to life in the care delivery setting, and we really think end to end. This means a number of solutions we've developed, but also a number of solutions they might have developed that they want us to operationalize, or solutions that partners have developed that we're helping operationalize responsibly.

Julie Yoo: yeah. And you also mentioned something that I think is another sort of top of mind question that I hear from a lot of executives in the market is, there’s like a lamenting of the fact that the majority of startups that are building in that space are not going after the clinical domain. And I think we all have our hypotheses about why largely that it’s the, probably the hardest domain to go after. If you think about the common two by two matrix of, on the one hand you’ve got your non-clinical use cases the ones that you mentioned, back office, et cetera. And then on the other end you have clinical. And then the second axis is consumer patient facing versus professional facing. You’ve made a very deliberate decision on where to shoot the arrow from a patient perspective, which is. Piece and then the provider facing piece, which again, arguably has, the highest stakes in many ways. What gave you conviction? But you probably could have also packaged your IP in many other ways, right? In, in other quadrants. What gave you conviction to go after that particular quadrant? And then do you foresee a future in which Beijing moves into other quadrants? And what would need to be true for that to happen?

Suchi Saria: Yeah, absolutely. Again, if you go back to the business of care delivery, when you look at the opportunity set, it's massive. Being able to use data at the point of care to identify, in real time, patients at risk, or moments where there's an opportunity to change or improve the trajectory by intervening in a timely, more proactive fashion: it's something we all know we should do, even under a fee-for-service model. One thing people will often complain is, oh yeah, that is exactly the right thing to do, but are we gonna get paid for it? The reality is, even under a fee-for-service model, when you look at the DRG code, you're getting paid a certain amount for providing care to a patient. If a certain set of complications happens, suddenly the cost of taking care of that patient is way higher. So it behooves you: it makes sense not just from a patient-outcome perspective, but also from the point of view of total cost of care. When a health system implements ways to provide proactive care in these DRG-based episodes, there's an opportunity to manage the care episodes much more tightly. You're getting paid more correctly, you're providing, most importantly, high-value care, and you're improving your frontline clinicians' experience. Today, medicine is very much built on a CYA framework. We wanna make sure there's documentation, so let's hire some humans to make sure there's documentation on every single patient, every single day. Simple example: pressure ulcers. Twice a day, they do head-to-toe assessments on every patient. Why? Because they need to show documentation in order to mitigate malpractice risk. It turns out only a small pool of these patients are really, truly at risk. If you could focus your attention on those cases, you could identify them early, proactively do something to actually improve outcomes, mitigate risk, reduce total cost of care, and not be imposing painful work on your care teams, who are just going around twice a day doing work they don't believe in in the first place. And in trying to do it all, we often miss doing the things that are most important. It's a simple example of how the right use of AI at the point of care can dramatically streamline, save time, save cost, and improve outcomes. So from my point of view, some of the other tasks are very much at the fringes. They're important, they're valuable, they can be done. But really at the core of healthcare delivery is care delivery, which today is still very much the way we practiced about a hundred years ago. It's high time. We're flying helicopters on Mars, but it sucks that we're still killing patients today when they didn't need to die, and if we were just using data better at the point of care, we wouldn't have to. I think this is an enormous opportunity, and I believe five years from now we'll look back and say, holy crap, I can't believe we were doing this in the first place.
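
As a toy illustration of the pressure ulcer example, focusing assessments on the small pool a risk model flags rather than on every patient, here is a hedged sketch. The patient IDs, risk scores, and threshold are all hypothetical, and any real deployment would sit behind the kind of validation and monitoring discussed earlier in the conversation.

```python
def patients_to_assess(risk_scores: dict[str, float],
                       threshold: float = 0.15) -> list[str]:
    """Return patient IDs whose modeled pressure-injury risk exceeds
    the threshold, highest risk first, so that twice-daily assessments
    target the small at-risk pool instead of the whole census."""
    flagged = [(pid, r) for pid, r in risk_scores.items() if r >= threshold]
    return [pid for pid, _ in sorted(flagged, key=lambda item: -item[1])]

# Example: only 2 of 6 patients get the full head-to-toe assessment.
print(patients_to_assess({"p1": 0.02, "p2": 0.31, "p3": 0.07,
                          "p4": 0.18, "p5": 0.04, "p6": 0.09}))
# -> ['p2', 'p4']
```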

Julie Yoo: Let alone layering on top of that all of the labor constraints and challenges that you mentioned. So the time is nigh. Last question here, Suchi. Another hot topic of debate these days is around whether it's incumbents or upstarts who have the bigger advantage when it comes to AI, and in particular, I think this has been surfaced in the context of generative AI, where things like compute infrastructure, access to proprietary data assets, and the ability to integrate into workflows are of paramount importance. What is your point of view, specifically when it comes to health systems, on where those incumbents might have a leg up versus areas where they might not? And the same question for upstarts: where are the areas where they're advantaged versus needing to partner with the incumbents that we're talking about?

Suchi Saria: I think what I'm seeing in the last couple of quarters is that suddenly a lot of companies are taking a huge amount of interest in AI, right? They're all starting, they're all learning, they're all excited, they all wanna participate, which is fascinating and great. In healthcare, there's a very small number of dollars; there's not a whole lot to go around. And what we have to make sure is that we're deploying them responsibly, and part of that means clearly identifying what problems you're looking to solve. How do you know you've solved them? What are you gonna use to measure success? And then knowing, are you working with a team where you can succeed? Do you have confidence you will succeed? In terms of advantage, I feel productization in healthcare is such an important issue, and people often miss this. An example: look at the iPhone and the experience of the iPhone. It really brought smartphones to market, but there were other people who could have said they were building phones, people who could have said, yes, I have a messaging tool too, yes, I have a phone. But the experience from a frontline perspective was very different. Productization takes time. Productization requires deep experience. So my point of view is that teams that have deep experience in the technology and deep experience in the domain, that really know how to productize to solve problems, are the ones that are going to see the most benefit. And then to really do this well, obviously you have to have all the ecosystem know-how to do it right. In the case of Bayesian, for instance, it took us years to be able to do what we're doing. If we were starting today and saying, hey, I wanna come in and actually help you improve the point of care, you should be skeptical; any company that's doing this and waking up to it overnight is gonna struggle. In our scenario, it's been almost a decade in the making: years to do the deep integrations with EMRs, to build the safety infrastructure that allows us to do drift-and-shift monitoring, et cetera, and to deeply understand users and clinical workflows, and the variation in clinical workflows across sites, so we understand how to splice in in a way that feels natural, seamless, and easy. So I think teams that know how to think about things end to end, from a problem perspective and a solution perspective, will have the advantage, and obviously they have to have the technical know-how, right? AI is not an easy piece of technology, so you have to have deep experience in it. A very simple example is your electronic health record: amazing, enormous software that is now the backbone of delivering care. Yet the backbone you need to build that kind of software is very different from what you need to build AI. So really deep experience in the technology is a must. And yes, Nvidia will benefit, because ultimately a lot of compute goes back to Nvidia, and the cloud providers will benefit, because ultimately theirs are the servers where the compute happens. But I think there's great need for the people who will sit on top of these and build the actual productized services that really work and pay attention to detail from a user's point of view.

Julie Yoo: Yep. What's very clear, Suchi, especially from where I sit, meeting with hundreds of entrepreneurs building in this domain, is that there is really still a very finite number of humans on the planet who I think truly understand this game and lean into the intersection of AI, clinical, business, and commercialization while having such a comprehensive view of the broader ecosystem. Thank you, as always, for sharing your perspective. I'm sure it's been helpful for those who are listening, to create some frameworks and some language around how they need to be thinking about this and where there's opportunity for partnership.

Suchi Saria: Lovely to chat. Julie, as always, thank you for having me.
