Episode 86

AI Revolution

with John Glaser, Ph.D.

November 10, 2022



John Glaser, Ph.D.
Executive-in-Residence, Harvard Medical School

John Glaser, Ph.D. is an Executive-in-Residence at Harvard Medical School and recently published “Advanced Introduction to Artificial Intelligence in Healthcare.” Previously, Dr. Glaser joined Cerner Corporation as an Executive Senior Advisor following its acquisition of Siemens Health Services, where he had been CEO. Prior to that, he served as the CIO of Partners HealthCare, now known as Mass General Brigham. Dr. Glaser sits on multiple boards of directors, including the Scottsdale Institute, NCQA, and the Forbes Health Advisory Board.


It may take several years. But nonetheless, the world will be different.

Transcript


[00:00:00] Gary Bisbee, Ph.D.: Good morning, John, and welcome.

[00:00:02] John Glaser, Ph.D.: Thank you, Gary. It’s a pleasure to be here.

[00:00:04] Gary Bisbee, Ph.D.: We’re pleased to have you at this microphone, and congratulations on your newly published, co-authored book, Advanced Introduction to Artificial Intelligence in Healthcare. It’s a terrific reference guide, and it needs to be on the desk of everyone who is a leader in healthcare. Let’s dig into how leaders could be thinking about it.

[00:00:26] John Glaser, Ph.D.: Yeah.

[00:00:26] Gary Bisbee, Ph.D.: We had a conversation with each other about a year ago on this show, and one piece of advice you gave to up-and-coming leaders was to pay attention to AI: you need to really learn about it. How does that advice look today, John?

[00:00:44] John Glaser, Ph.D.: Well, Gary, I think it still applies today. It’s maybe almost timeless in some ways. The book was written from the perspective of: let’s presume you’re in front of the C-suite or executive committee of a health system. What would you say if they asked, what should we do about AI? That’s the framing that led to the book and its contents. And I think, Gary, about every 10 years a technology arrives that truly changes the world. I’ve been around long enough to have seen a number of them: the networked PC, the internet, the mobile device, sensors everywhere and connected. AI falls into that category. It may take several years, but nonetheless the world will be different, just as the world is different because of the internet and because of mobile devices. So it will factor into the activities, the care delivery, the operations of any health system in a material way. You’d better begin to understand it and begin to learn about it. And I think one way to start is to ask: what do we mean by AI? Within that, there are really two broad categories. One is referred to as deep learning, and basically what happens here is you feed the computer an awful lot of data and it classifies what it finds. You might feed it a bunch of MRI images and it says, I think there’s a tumor, and this is the type of tumor that I see. Or you feed it a bunch of data on kids with asthma, and it says, I perceive this kid is well managed and this kid is not. And it gets pretty detailed. It can tell you, for example, not only that this is a car, but the make and model of the car. Facial recognition is an example. We came back from a vacation in Bermuda last week, and I went through Global Entry. It took a picture of my face and knew exactly who I was, based on facial recognition. So deep learning is one category. The second is what they call, an odd term, frankly, robotic process automation. There, the computer is watching a process. The process could be prior authorization; it could be coding a medical record. As long as that process is working like it should, the system is silent. But if it detects that the process is off course, or likely to be off course, it intervenes and suggests what we ought to do to bring the process back to where it should be. These are often administrative processes; care coordination, readmissions, et cetera, are examples here. So those are the two major categories. And broadly, Gary, they fall into the overarching family, so to speak, of statistical pattern recognition, like regression, like cluster analysis or factor analysis, things that you and I might have learned in college or graduate school. AI is an example of that. But nonetheless, it is going to be, and in some ways already is, quite profound, changing industries, and it will continue to do so for years to come.
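
To make the deep-learning category concrete, here is a minimal, illustrative sketch in Python of the "feed it labeled data and it classifies what it finds" idea. The dataset (scikit-learn's built-in 8x8 digit images) and the small network are our assumptions for demonstration only; a real radiology model would use much deeper networks and vastly more data.

```python
# Illustrative only: a tiny classifier learns labels from examples,
# the same supervised pattern Dr. Glaser describes for MRI images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

X, y = load_digits(return_X_y=True)          # 8x8 grayscale images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A small multilayer perceptron stands in for the "deep learning" stage.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                    # learn patterns from labeled examples

print(classification_report(y_test, clf.predict(X_test)))
```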

[00:03:18] Gary Bisbee, Ph.D.: John, how does AI in healthcare compare to other industries? Sometimes those of us in healthcare think we’re behind in certain technology applications. Is that the case with AI, do you think?

[00:03:33] John Glaser, Ph.D.: In some ways it’s really hard to say where we are, because it’s so diverse. We’re ahead here, we’re behind there. In terms of image recognition, we’re quite sophisticated. A lot of the work is: how do I know, in a radiology finding, or an ultrasound finding, or a video of somebody, whether they have Parkinson’s or not? Can I detect that? So we’re quite sophisticated there. We’re also pretty sophisticated at linking together instruments and devices and saying, by the way, I’m in the ICU and this is the pattern that I’m seeing. We’re not as sophisticated at other kinds of things, such as managing: what’s the likelihood that Gary’s going to be healthy in the year ahead? We’re still crude in some ways.

[00:04:11] Gary Bisbee, Ph.D.: Are we in danger of overhyping AI at this point? Sometimes we hear it can be used in certain ways that sound like Star Wars.

[00:04:21] John Glaser, Ph.D.: Yeah. You get these examples where you won’t need humans as doctors anymore; you’ve got a robot doc. So there is hype, and there’s some fascination with that. And I get it, it’s pretty exciting, Matrix-type movies and things along those lines. An example was a study that looked at predictive algorithms for COVID early in the pandemic. The algorithms said: are you likely to get sick, and if you do get sick, how severe is it likely to be? There were 200 algorithms that came out. None of them were clinically useful. So it was overhyped, and it set things back a bit; people said, what was that all about? Just not very useful. So yes, and I think one of the challenges for leadership is: okay, it’s overhyped, I get it, and we’re also at times inappropriately dismissive about it, but it’s going to mature. So how do I navigate my way through this period of a lot of noise, frankly? I don’t want to bet the farm, but I also don’t want to undersell this stuff, because it will be important.

[00:05:16] Gary Bisbee, Ph.D.: Let’s talk about data and the importance of data. Can you talk about the difference between data and what I might call clean data? That seems to be a big problem here.

[00:05:27] John Glaser, Ph.D.: Two examples here. One is a pretty famous study that said: I’m going to look at individuals and see to what degree they’re going to be really sick in the year ahead, and if they are, my algorithm says I’m going to focus on them. It looked at a lot of data, but one of the pieces it looked at was cost: how much did it cost to take care of you last year? And it assumed it would cost that much to take care of you next year. What it forgot is that if you’re poor or in a rural area, your costs are really low because you don’t have access to care. That doesn’t mean you’re not really sick. So the algorithm was biased toward people who had access to care and people who were covered by insurance. The data was clean, but it was wrong in certain ways. The other example: you say, all right, I want the AI to look at a bunch of images and tell me this is a tumor and this is not. How will I know? I’m going to ask a bunch of physicians, do you see a tumor or not, and I’m going to use that to train the algorithm. At times, physicians don’t agree, so the way you label the data differs. The data is not wrong, it’s just interpreted differently. So there’s all kinds of stuff you have to wade through on the data. Sometimes it’s a really big issue and sometimes it isn’t.
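
The cost-as-proxy pitfall can be shown with a toy simulation. Everything below is fabricated for illustration (the numbers, the 50/50 access split); it is not the actual study, but it reproduces the mechanism: when spending tracks access rather than need, a "focus on the costliest" rule misses sick people with poor access.

```python
# Toy simulation of the cost-as-proxy bias. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
illness = rng.normal(size=n)                 # true (unobserved) health need
good_access = rng.random(n) < 0.5            # half the population has good access

# Last year's spending tracks illness only for people who can reach care.
cost = np.where(good_access, 1000 + 800 * illness, 200 + 100 * illness)

# "Focus on the costliest 10%" -- the proxy target such programs used.
flagged = cost >= np.quantile(cost, 0.9)

# Among the truly sickest decile, who actually gets flagged?
sickest = illness >= np.quantile(illness, 0.9)
for label, group in [("good access", good_access), ("poor access", ~good_access)]:
    hit = np.mean(flagged[sickest & group])
    print(f"{label}: {hit:.0%} of the sickest decile flagged")
```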

[00:06:36] Gary Bisbee, Ph.D.: Yep. John, the book makes the point that there are two broad uses for AI: one you call administrative and payment, and the other is clinical. On the administrative and payment side, what is the low-hanging fruit? Where has AI been used effectively?

[00:06:57] John Glaser, Ph.D.: By and large, what you find, and one of the points in the book, is that it’s probably further ahead in the administrative arena than in the clinical. One reason is that the ROI tends to be clearer on the administrative side. If I can improve coding, that’s a clear return to me. If I can reduce the cost of prior authorizations and not get them wrong, there’s real return there. If I have a doctor who’s better at diagnosing images, what’s that worth to me? It’s a good thing, but it’s a little harder to put your finger on the return. The second reason is that mistakes are less consequential in the administrative area than in the clinical, and because of that, administrative uses are less likely to be regulated than clinical ones, which means leadership will tend to be less hesitant to go after them; they’re not asking what the regs are and whether we’re stepping into something. So a lot of what’s going on is in administrative uses. It’s very diverse, and it has tended to move faster than what we see on the clinical side.

[00:07:55] Gary Bisbee, Ph.D.: You used examples of predictive modeling on the clinical side. How important is predictive modeling on the administrative side?

[00:08:02] John Glaser, Ph.D.: It can be, depending on the type of use. Let me give you an example, Gary, and a lot of this sits at the boundary between clinical and administrative. I have a colleague who works for a large health plan, and they were trying to reach out to their subscribers to encourage them to get vaccinated for COVID. They found that if you take the sample, roughly a third are going to get vaccinated no matter what; you don’t have to do anything, they’ll go off and do it. Another third aren’t going to get vaccinated no matter what you do, for whatever set of reasons; they don’t believe it’s true. And then there’s a third that are persuadable, and they’re trying to figure out who that persuadable third is. I’m trying to predict whether you are amenable to a conversation or not. So that’s in this boundary between clinical and administrative activity.
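
One simple way such a "who is persuadable?" prediction can be framed is a two-model uplift sketch: fit one model on members who were contacted in a past (randomized) outreach effort, another on those who were not, and score the difference in predicted vaccination probability. The data below is entirely invented for illustration; a real plan would use its own outreach data and richer features.

```python
# Toy two-model uplift sketch. All data and behavior are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
age = rng.uniform(18, 90, n)
contacted = rng.random(n) < 0.5              # randomized past outreach
# Fabricated behavior: middle-aged members respond most to outreach.
base = 1 / (1 + np.exp(-(age - 55) / 10))
lift = 0.3 * np.exp(-((age - 45) ** 2) / 200)
vaccinated = rng.random(n) < np.clip(base + contacted * lift, 0, 1)

X = age.reshape(-1, 1)
m_contact = LogisticRegression().fit(X[contacted], vaccinated[contacted])
m_no_contact = LogisticRegression().fit(X[~contacted], vaccinated[~contacted])

def persuadability(ages):
    """Estimated gain in vaccination probability from outreach."""
    a = np.asarray(ages, dtype=float).reshape(-1, 1)
    return (m_contact.predict_proba(a)[:, 1]
            - m_no_contact.predict_proba(a)[:, 1])

print(persuadability([25, 45, 85]).round(2))  # mid-range ages score highest here
```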

[00:08:45] Gary Bisbee, Ph.D.: What about cybersecurity? Does AI help, or does it open the door to more cyber issues?

[00:08:52] John Glaser, Ph.D.: It can be striking, Gary. Take a large health system: how many people try to get into its network on any given day? It’s tens of thousands of times, hundreds of thousands. Just sorting through all of that at times requires AI to know when I, as a security person, need to intercede. So AI per se, I don’t think, creates additional security risk; it helps you combat it. The major risk with AI, some of which you just talked about, is what I call algorithm risk: you think you’ve got the right set of data, but you don’t; you’re skewed in some ways, or you forget that data decays. If you had the cat’s pajamas of COVID algorithms in 2020, even if it was great and really predictive and all that, it isn’t anymore. Why? Because the variants are different, the treatment patterns are different. The world changes, and because of that, the data you trained on is no longer reflective of the reality of today.
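
The "data decays" point is something deployment teams routinely monitor for. One common, simple check is the population stability index (PSI), sketched below with made-up numbers; the 0.1/0.25 thresholds are industry rules of thumb, not anything from the conversation.

```python
# Sketch of a drift check: PSI compares a feature's training-time
# distribution with its distribution today. Data here is invented.
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between baseline and current samples."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full range
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(2)
train_feature = rng.normal(50, 10, 5000)           # e.g., patient age at training time
today_feature = rng.normal(58, 12, 5000)           # the population has since shifted

print(f"PSI = {psi(train_feature, today_feature):.2f}")
# Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 consider retraining.
```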

[00:09:42] Gary Bisbee, Ph.D.: Let’s turn to the clinical side for a moment. You indicated that imaging is perhaps the most prominent clinical use at the current time. Is that true, John?

[00:09:53] John Glaser, Ph.D.: If you look at the FDA’s blessings of AI use in pieces of diagnostic equipment, the vast majority are radiology. And the thing about radiology is that it plays to one of the conditions under which AI works well. If I want to train my deep learning algorithm to recognize this finding or that one, I have to be able to say: did it get it right? And if it didn’t get it right, how do I tune it? One of the neat things about imaging is that if the AI says there’s a tumor, you can inspect: are you right or not? You’ve got a way of determining truth, or fact. If I say you’re likely to live through next year, it’s a little harder for me to know whether that’s going to turn out to be fact or not.
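
The "did it get it right?" step is, at its core, a comparison of model output against expert-labeled ground truth. A minimal sketch, with invented labels:

```python
# Compare an imaging model's calls with expert ground truth.
# The labels below are invented for illustration.
from sklearn.metrics import confusion_matrix

truth = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = tumor present (expert reading)
preds = [1, 0, 0, 1, 0, 1, 1, 0]   # model output on the same studies

tn, fp, fn, tp = confusion_matrix(truth, preds).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}")   # tumors the model caught
print(f"specificity = {tn / (tn + fp):.2f}")   # healthy studies cleared
```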

[00:10:28] Gary Bisbee, Ph.D.: What do we know about how doctors use the output of predictive modeling? You can see that in some cases they might say, hey, predictive modeling is trying to override my decision making as a doctor, and others would say that’s just more information that’s useful to me. Where do we stand generally on that?

[00:10:49] John Glaser, Ph.D.: You’ve got to make sure the clinical staff is involved in the creation of these tools and is taught why they matter. You’ve got to make sure they fit into the workflow, so they’re easy to listen to. And you’ve got to make sure you don’t overdo it. It’s the same thing with AI algorithms: clinicians are quite willing to have someone suggest there may be something going on here that you should pay attention to, but you’ve got to introduce it well, you’ve got to make sure it doesn’t take five extra steps to act on it, and you’ve got to make sure you don’t badger them to the point where they say, enough already, I’m just trying to take care of patients.

[00:11:17] Gary Bisbee, Ph.D.: What about liability, in terms of the vendor, the physician, the institution, and so on? Where does liability fall in this whole area?

[00:11:26] John Glaser, Ph.D.: It’s like writing a textbook. You’d better make sure that if someone says, wait a minute, this is wrong, you can say, yes, but I did all the things I’m supposed to do: talked to experts, checked references, et cetera. I want to show professional diligence. So liability can rest with the supplier to the degree that they didn’t exercise it. On the other hand, liability can also rest with the consumer of the textbook, or of the algorithm. That is, you can say: listen, you’re a doctor, you should have said, wait a minute, this doesn’t make sense for this particular patient; you just followed it robotically. So those are the two places liability can lie: with the producer, but also with the consumer. In some ways, it’s the same kind of liability equation we’ve seen with medical textbooks.

[00:12:06] Gary Bisbee, Ph.D.: So ethics is another area, and you can see that there could be a variety of ethical concerns. Have we done a good job preparing for those kinds of ethical concerns, John?

[00:12:18] John Glaser, Ph.D.: Oh, I think we’re getting our hands around it, Gary. And you’re right, there’s a range of them. There’s a range that says your data is undersampling certain populations, so you’re missing real issues confronting African Americans or Hispanics, or people in rural settings versus urban ones. You’ve steered the care in ways that disadvantage them, and that’s an ethical problem. There can also be ethical problems you see broadly in digital health. Digital therapy is an example: I can give you a device that helps you work with an AI bot on your anxiety or your depression. But that assumes you’ve got a device, and it assumes you’re in an area with good internet coverage. Neither might be true. So I’ve got an ethical problem of access: certain technologies that provide value are out of reach of certain populations, and we’ve got a problem.

[00:13:10] Gary Bisbee, Ph.D.: So you’ve indicated that AI is really in the early innings, if I can use that term, and maybe a bit ahead on the administrative side versus the clinical side. From the standpoint of a leader, should we be thinking that for the rest of this decade there’s just going to be continued evolution, continued applications for AI?

[00:13:32] John Glaser, Ph.D.: I do think there are two general things I would say if, again, you and I were in front of a C-suite. One is: nobody buys AI; they buy something else that performs better because of AI. So you say, I want some software to help me with utilization management or appointment scheduling or diagnostic imaging, and you’re the vendor, and you say, ah yes, I have AI. I say, why does that do any better? What makes the performance gains worthwhile? Beyond that, I’d ask the same questions I’ve always asked of a supplier trying to sell me a software solution: how do you know that it works? Show me that it does, all that kind of stuff. So nobody buys AI. And related to that: an algorithm is not a solution, and a solution is not a company. Someone might say, I’ve got the greatest algorithm on the planet. I’d say, I’m happy for you. On the other hand, that doesn’t mean it fits into the workflow well, so it may not be a solution, because the algorithm isn’t the only thing that has to happen. And even if it is a solution, it doesn’t mean you’ve got a company that has legs. There are hundreds of AI algorithms related to radiology. The poor radiologist says, I’m overwhelmed here, and so none of them get real traction because there are just too many. So remember, you buy something else that is better: you buy a car that is safer because of AI, you buy an MRI that’s more reliable because of AI. And the question is: what am I buying, why, and what does it do? The second thing, and I remember this, Gary, from my tenure as a CIO: when the web first came out, we had an internet strategy. It was: what are we doing on the internet? What are we learning about its uses in healthcare, and also, broadly speaking, in other industries? We had that for about three years, and then we stopped having it. Why? Because we really understood how to apply it, even though it was still early in some ways. So what I would do is say: I’ve got to start learning about this stuff. When you look at organizations that are successful at leading digital transformation, why is that? One reason is that they’re great at learning. They learn all the time; they’re out there absorbing what’s going on, inside and outside their industry. I’d say the same is true with AI. So I’d start that. You may disband the committee three years later, but fair enough; for right now, you’re learning, and that’s the right thing to do, to know when and how AI will take off and when and how it will provide material advantage to you. And frankly, some of these things are just flashes in the pan. Fair enough. Let’s learn about them, cut our losses, and move on.

[00:15:50] Gary Bisbee, Ph.D.: John, you’ve addressed this in several different ways, but let me ask the question directly: should we be looking for AI innovations from larger companies or from earlier-stage startups?

[00:16:03] John Glaser, Ph.D.: So I think when you look at AI, there will clearly be stuff that comes through the major suppliers of your HR, your revenue cycle, or population health, whatever. But there will also be upstarts. And when you see the upstarts, your job is to do the homework. Do they really have a material leap here? Because it happens; established companies disappear because of players like this who come through. Is it really material? And do I think they’ve got the backing and the leadership talent to make it? If so, I’ll take a shot. So you’ll see it from both sides. You don’t want to say that, no matter what, a certain established vendor is all I’m going to deal with, because you might be missing some great opportunities. Nor do you want to be overly anxious when an upstart comes along, because not all of them will make it.

[00:16:45] Gary Bisbee, Ph.D.: Who in the organization, who in the health system, is responsible for AI: for the pilot testing, for the committee you made reference to, and so on? Who’s the focal point of interest and responsibility in the house?

[00:17:01] John Glaser, Ph.D.: Fundamentally, the CEO and the C-suite are accountable, and the CIO can lead the group. I do think the pilots should be held accountable by the department or area that is conducting them, just to make sure that we’re learning. So it’s like a lot of things we do in healthcare, Gary: the accountability is shared, because so much of what we do traverses all aspects of the health system.

[00:17:23] Gary Bisbee, Ph.D.: If you were the CIO again, as you were for many years at Partners, what would you want the board to know?

[00:17:31] John Glaser, Ph.D.: What I’d want the board to know is a couple of things. One is that it’s early, but it’s a big deal. So what we’re going to do is create a series of pilots and experiments to go off and learn about these technologies. We’ll give you updates as appropriate, and when it’s time to swing for a home run, we’ll talk about that. The other thing, because you know these folks, is that there’s some serious talent around the board table. I’d turn to the person who might be the CEO of a local bank, for example, and say, I’d like to talk to your person who’s working with AI, to learn from them about what’s going on. Yes, banking isn’t healthcare, but there’s a lot I can learn about how you think about it and how you do it. So I’d like to leverage your organizations as appropriate. But again, we’re in a learning exercise. We’re not ready to bet the farm on this, and I’m not ready to sit here and say I want a million dollars, or $10 million, or $20 million, to go full bore. I may come to that, but I’m not ready yet. That’s what I’d want them to know.

[00:18:23] Gary Bisbee, Ph.D.: Okay, John, this has been a terrific interview. As expected, you are so knowledgeable in this space. I’d like to ask one last question, if I could, which we asked a year ago, and that is for up-and-coming leaders: what advice would you have for them about AI?

[00:18:44] John Glaser, Ph.D.: Yeah, I think the main thing is to take the time to understand it at a level where you can have a conversation like the one we’re having now, and to take the time to think through: how might I apply it here? Those may be ideas they have themselves, or it might be, golly, I went to a conference and I saw what other people in operating room management are doing about this stuff; how might I think about doing that? I’d also say part of your job is this: technology can really change things. We’ve seen industries transform and organizations transform. When these technologies arrive, they can truly change the nature of your work and what you do, and you ought to be on top of that. So I’d like you to learn enough about it, learn how to apply it, and then, if you think there’s some ability to really improve what your group does, begin to put those ideas on the table.

[00:19:31] Gary Bisbee, Ph.D.: John, thanks for being with us today. Advanced Introduction to Artificial Intelligence in Healthcare is a good one. It should be on every leader’s desk. Thanks, John.

[00:19:41] John Glaser, Ph.D.: All right, Gary, you take care and I’m glad to spend time with you.

[00:19:44] Gary Bisbee, Ph.D.: Cheers.
