February 10, 2022
[00:00:48] Gary Bisbee, Ph.D.: Dr. John Halamka is President of Mayo Clinic Platform and a noted expert in digital health and information technology. We dove into the promise and current practice of artificial intelligence and algorithms in the clinical setting, from predicting patient outcomes to synthesizing cutting edge research in real time. Dr. Halamka described the importance of Mayo Clinic Platforms’ investment in data. Work is underway to explore new ways to gather and clean data, analyze models and deliver clinical results all while maintaining patient privacy. We discussed the state of technology, policy and culture. The research and potential of AI technology is robust. However, we currently lack policy guidelines for how AI should be used in healthcare. And while algorithms can be effective, they’re hard to explain, causing lingering suspicion among some patients and providers. We learn about Dr. Halamka’s background, why he purposely majored in a subject he wasn’t good at, what drew him to emergency medicine, and how he and his wife, Kathy, have rescued over 300 farm animals at their sanctuary in Massachusetts. For young leaders, Dr. Halamka notes that there is no innovation without risk.
Well, good morning, John. And welcome.
[00:02:12] John Halamka, M.D., M.S.: Hey, great to be here.
[00:02:14] Gary Bisbee, Ph.D.: We’re pleased to have you at the microphone. Of course, you’re an acknowledged expert in a lot of things, but digital health, information technology. You’re well-published. You’re an entrepreneur in your own right. Congratulations on a terrific career to date. What we’d like to do today is to spend time on the state of digital health and particularly how AI and ML fit into clinical decision support. With that as a backdrop, your exciting new position as president of Mayo Clinic Platform. Can you describe the Mayo Clinic Platform for us, please, John?
[00:02:53] John Halamka, M.D., M.S.: Well, sure. So in 2019, Gianrico Farrugia, the CEO of Mayo Clinic, had this notion. If we’re going to transform healthcare globally, we better put in place the digital technology policies and processes so that we can do that with less friction, and not just for Mayo Clinic, but actually for the entire healthcare ecosystem. I joined Mayo Clinic in January of 2020 and have worked on a series of capabilities to ingest high velocity data, create longitudinal records and new AI models, validate those models, and deliver results in workflow. And collaborations, joint ventures, and partnerships have now sprouted from that, over 30 of them. So it has been an extraordinary time. And of course, during the middle of COVID. So, we can talk all about that.
[00:03:43] Gary Bisbee, Ph.D.: Well, while we talked about you growing up in Southern California during the glory years, and Beach Boys, and malls, and so on. And now here you are in the glory years of digital health, I’d say, and doing a terrific job there at Mayo. Can I ask, what’s the business model of the Mayo Clinic Platform?
[00:04:03] John Halamka, M.D., M.S.: Really two-fold. Because we have so many partnerships, joint ventures and collaborations, you might imagine that if we co-develop products, that there are equity investments in companies and those have equity growth over time. So in that respect, yeah, the balance sheet is benefiting from the value created in creating spin-out companies and co-development. But there also must be ongoing revenue streams. The notion of Platform is, if we’re going to expand Mayo Clinic’s reach and the diversity of its products, there need to be services and solutions sold into the marketplace. We don’t sell data, right? That is just not a business that we’re in. But we will sell a product such as an algorithm that has been validated as fit for purpose. And we have found that the marketplace is willing to fund not only the algorithm purchase, but let’s say an external algorithm provider wants to run that algorithm and refine it against large de-identified datasets. So they don’t take the data, but they take a refined algorithm. They’re also willing to pay for that. So as you could guess, combination equity and sales of products and solutions.
[00:05:25] Gary Bisbee, Ph.D.: When they lease or buy an algorithm, does that come with the Mayo brand on it? So is that part of what they’re getting?
[00:05:33] John Halamka, M.D., M.S.: We don’t sell the brand, right? Again, you have to be very careful about this. The brand of Mayo Clinic and its reputation are treasured. Now, Mayo Clinic Platform is a sub-brand of Mayo Clinic. And for example, we’ve used the designation “Powered by Mayo Clinic Platform” or “A Verified Solution of Mayo Clinic Platform”. So it’s not meant to imply an endorsement, but it is to say, we actually have evaluated this and it’s fit for purpose. And we’ve had a, what I’ll call, quantitative approach of saying we can actually measure whether the technology works or not.
[00:06:12] Gary Bisbee, Ph.D.: So when you have an algorithm like that, does part of it come with updating? Periodic updating, is that part of the program?
[00:06:20] John Halamka, M.D., M.S.: As you’ve wisely just said, there’s a phenomenon called data shift. And what happens? So imagine, oh, I dunno. I’ll make this up. It’s January of 2020, we develop an algorithm. And then in April of 2020, we’re running the same algorithm. By the way, it has something to do with telehealth. Well, do you think the data shifted between January and April of 2020? Just a bit. So unless you are continuously updating that algorithm for either changes in society, going from 3% telemedicine to 95% telemedicine in one quarter, or locally tuning it to a population, its performance is going to degrade. So Gary, you and I have some somewhat similar geographic backgrounds. So imagine I develop a cool algorithm in Duluth, Minnesota. I use a million Scandinavian Lutherans. And then we put that algorithm, say, into a Boston hospital. It’s potentially going to underdiagnose or overdiagnose. So it’s not only data shift, but it’s local tuning to differences in demographics.
[00:07:37] Gary Bisbee, Ph.D.: You mentioned in a previous discussion, I think, that Mayo Clinic has 10 million patients worth of data. Do I remember that correctly, John?
[00:07:47] John Halamka, M.D., M.S.: Right. So Mayo Clinic is fascinating in that it has a breadth and depth of data that exceeds most other healthcare systems because it’s multimodal. And here’s what I mean by that. So we have, of course, the EHR data, structured and unstructured. We have telemetry, all of the various kinds of monitored data. We have images. We have omics, right, over a million genomes and 30 million digital pathology slides. So, the key is combining in one longitudinal lifetime record the entire patient experience across all those modalities. Now, de-identifying it, protecting privacy, and using the data of the past to derive care plans for the future. And that’s what we’ve done in collaboration with our partnerships and joint ventures.
[00:08:40] Gary Bisbee, Ph.D.: So, I think you mentioned you had 30 partnerships, what kind of categories of investments or future partners are you looking for, John?
[00:08:49] John Halamka, M.D., M.S.: So let me give you, let’s say, four examples. So one example, the ability to gather high velocity, novel data types. Mayo Clinic spun out a company called Lucem Health in collaboration with General Catalyst, and that company is laser-focused on what are we going to do with the data from your Apple Watch, your Fitbit, and other kinds of high velocity, continuous monitors. And then here’s the challenge. The data is sometimes noisy. Its provenance is not well understood. So Gary, your Fitbit just told me your heart rate’s 20. Should I call an ambulance? Well, the answer is, with a Fitbit, because of the nature of how the data is gathered on your wrist, maybe you’re exercising. Maybe the sensor’s not picking up. Its provenance suggests, well, maybe if I have five successive measurements at 20, sure. But one, maybe not. As opposed to a Medtronic sensor implanted in your chest. That says your heart rate’s 20, it’s probably 20. So Lucem does that. Second, nference, which is a data analytics company in Cambridge, Massachusetts, specializes in de-identification of data. So how do I take that not only structured, but unstructured, data and remove the proper nouns and identifiers, using knowledge graphs and natural language processing? The problem is about 80% of what I’ll call the knowledge in the medical record is in the text. And how do you turn that into something you can analyze? Then, validation. We’ve worked with a company called Diagnostic Robotics in Israel. And how do you measure algorithmic performance against a given data set? And we worked with a company called TripleBlind to keep that data and those algorithms safe and private. And then as we look to delivering results, we’ve partnered with Medically Home to deliver serious and complex care at a distance in non-traditional settings. So those are just some examples of the kind of partnerships we have.
[00:10:56] Gary Bisbee, Ph.D.: One question on home use is sensors. You mentioned sensors in the sense of implantable sensors before, but are sensors ready for prime time and use for home care and monitoring and tracking patients at home?
[00:11:12] John Halamka, M.D., M.S.: Well, yes, with a caveat. And so we do a significant amount of remote patient monitoring today, and you can imagine the kinds of monitors you’d use. Heart rate, pulse ox, blood pressure. But here, I’ll just tell you this amusing story. So I happen to live in the Boston area, but I work in Minnesota. And so I have an apartment near Mayo Clinic. So you can imagine I’m gone about two weeks a month as I travel from Boston to Rochester. I have a sleep sensor in the mattress in Boston. Now that sleep sensor happens to not only measure sleep patterns, but it also measures things like sleep apnea. And do you know when I’m gone in Rochester, I have sleep apnea in Boston. Why is that? Because as part of Unity Farm Sanctuary, we have a series of rescue dogs. And we have a bulldog with sleep apnea that sleeps in my place whenever I go to Rochester. So, this is the challenge, is we have a lot of sensors in the home, but understanding the identity of the individual who’s being measured can be noisy.
[00:12:33] Gary Bisbee, Ph.D.: As in barking noisy in that case, right? How about predictive modeling? What’s the state, John, of predictive modeling, particularly in terms of what you’re doing with the Mayo Clinic Platform?
[00:12:46] John Halamka, M.D., M.S.: So predictive modeling has been surprisingly effective, and we have to talk for a moment about the testability, explainability, and appropriateness of AI models. What we have found is we have 14 cardiology algorithms that we have validated at Mayo Clinic. We’ve done randomized controlled trials. And we can predict your ejection fraction. We can predict future atrial fibrillation, hypertrophic cardiomyopathy, or pulmonary hypertension, and do that with an AUC of 0.92, 0.93, meaning sensitivity, specificity, good. False positives, false negatives, few. But again, here’s the challenge. We’ve tested this in a randomized controlled trial on 60,000 patients and shown that we can predict with great efficacy, but can you explain what the algorithm is actually doing? AI, convolutional neural networks, deep learning, often a black box. So predictability, good. Explainability, not quite so much.
[00:13:55] Gary Bisbee, Ph.D.: If the use of AI machine learning, deep learning, whatnot, if the use of that for clinical decision support was a baseball game, what inning would we be in today?
[00:14:09] John Halamka, M.D., M.S.: Wow. So you know that, whenever I’m asked a question like that, I divide it into technology, policy, and culture. So technology. Do we have robust cloud-based AI factories? Absolutely. TensorFlow, a whole variety of approaches. With the curated data of Mayo Clinic and these AI factories, can we produce algorithms? Without question. Do we have the policy guardrails and guidelines for the use of AI algorithms in healthcare? Not quite yet. Mayo Clinic is very careful to use algorithms in a non-closed-loop fashion, meaning the AI isn’t making any decision. An AI may say, hey clinician, have you thought of this diagnosis? Or maybe you should consider taking a look at the patient’s condition because it looks like it’s changed. But you can imagine the industry is going to set a whole variety of best practices and implementation guidance in place over the next six quarters. And FDA will certainly start to issue its guidance for the use of AI. Culture. So I’m going to ask you an interesting question, Gary. I am going to use an algorithm today to analyze your heart rate and it will make me a 30% better doctor if I do that. How do you feel about that? Well, I will just tell you that some people say, wow, that’s really good. And others, and just, let’s be honest about the bad sci-fi we’ve all seen, say, oh my God, you released the AI. The next thing you know, the robots take over humanity. So culturally, I think we’re still at, like, inning one or two of deciding that AI should be a part of our daily lives.
[00:16:00] Gary Bisbee, Ph.D.: Well, moving to kind of an underlying part of what you’re saying there is physicians use, and that gets to explainability. What is your experience with physicians and how likely are they to adopt these AI driven models?
[00:16:17] John Halamka, M.D., M.S.: Our department of family medicine has 550 physicians and those physicians tested our cardiology algorithms in practice. And what they said was, I may not be able to explain how the algorithm works, but I actually used it in my practice and it provided a pleasing result. In fact, my experience was, I was able to get patients diagnosed faster, treated better, and get to the right outcome faster. So I feel very good about using these cardiology algorithms. So that’s been done in a formal IRB, human subjects controlled clinical trial of 500 plus clinicians. So I think this is going to be our challenge culturally as we chatted about. If I can test an algorithm and get a pleasing result, I’m likely to trust it, even if I can’t explain it.
[00:17:15] Gary Bisbee, Ph.D.: So you’re dealing with a pretty high level population of physicians and long history of best practices at the Mayo Clinic. You look at all of the healthcare being practiced across the country. What’s your sense of, well, I don’t want to call them the average doctor, but the normal doctor, what’s your sense that they would be willing to pick up some kind of AI model driven data?
[00:17:42] John Halamka, M.D., M.S.: You have to align incentives. And no matter who you are or where you are as a clinician, you can imagine you might have certain incentives such as, I’m having a hard time getting through my practice day, right? It’s the time of the great resignation and the great realignment. A time of anxiety, a time of COVID. Oh, you mean the AI algorithm can help me get through my day faster? Well, of course they’ll say that’s a good thing. Or, you know what I’m worried about? Loss of reputation because of a quality challenge or maybe a malpractice assertion. Oh, the AI algorithm can help me with quality and reduction of risk. Oh, well, that’s good. So I think though, there is one caveat we should talk about, which is that AI algorithms should not be blindly accepted as always being right. And in the case of one of our algorithms, a wonderful breast cancer oncology algorithm, it helps patients predict future breast cancer and it’s right 90% of the time. My wife’s a breast cancer survivor. And so I asked her, I said, if you had an algorithm before you had breast cancer that predicted with 90% accuracy you would develop breast cancer, would you be willing to take a therapy that would cause a chemical menopause today to avoid breast cancer, even if it was wrong 10% of the time? And she said, my values are absolutely, that would have been fine, but not every patient will feel that way. And I was chatting with Eric Schmidt, who is on the board of Mayo Clinic, former CEO of Google and Alphabet. And he proposed the following conundrum: if we create an algorithm that’s 10 times better than a human, but yet it makes a mistake on occasion, of course, humans make mistakes all the time, how will it be judged? Does it need to be a hundred times or a thousand times better? And as a society, we haven’t decided on the answer to that question.
[00:19:51] Gary Bisbee, Ph.D.: Yep. Yeah, that’s well said. I could imagine physicians would be asking the privacy security question. How do you answer that, John?
[00:20:00] John Halamka, M.D., M.S.: So Mayo Clinic, and I’m not here of course, to endorse any product or service, co-led a Series A investment in a company called TripleBlind. TripleBlind is a set of cryptographic algorithms that enables data to stay behind the firewall of one organization and algorithms to stay behind the firewall of another organization. And over the wire, all you transmit is math back and forth. Neither the data nor the algorithm is transferred between parties. Early, early days, but as I look to these next six quarters, I have a feeling we’re going to have a new set of tooling for privacy-protected, IP-protected data-algorithm interaction that will have credibility from the security and privacy community and trust from its users.
[00:20:58] Gary Bisbee, Ph.D.: Hmm, that’s very exciting. Your blog, Geek Doctor, one thing you pointed out there in terms of lab testing is personalized reference ranges. I found that to also be pretty exciting. What’s the kind of state of that, John?
[00:21:15] John Halamka, M.D., M.S.: So here’s this fascinating question. So if I imagine, so Gary, you have a cholesterol test. Now that cholesterol test is based on a 30 year old average male, whatever that is. And compared to a 30 year old average male, your cholesterol is 10% too high. Should you take a statin? I don’t know. And so this is our challenge, is so much of the normal values are based on an average patient, not taking into account everything from your age to your exposome, right? I’m a vegan. Maybe my LDL and HDL should be actually completely different from a non-vegan’s. And so this is the kind of modeling, again, looking at the next six quarters, we’re going to have to be more precise about what comparisons we’re using for any lab values or diagnostic testing.
[00:22:15] Gary Bisbee, Ph.D.: Your use of the word precise, I think, is really important here, isn’t it, because the volume of data, the precision of the analytics, is such that the human just cannot do that. Most of them don’t have access to the data anyway, and I don’t think the human mind can process data that quickly. So precision might come down to the most compelling reason for a physician and a patient to want to embrace AI.
[00:22:44] John Halamka, M.D., M.S.: Right, yeah, so if I ask a patient, is it that you want high technology? The answer is, I just want good care. Right? And so if you say, oh, well, we’re going to combine the best of human empathy, high touch, and listening with the best of algorithms based on evidence to get you to wellness, they say, that’s what I want.
[00:23:14] Gary Bisbee, Ph.D.: Yup. Yup. I’ll take it. You mentioned the FDA earlier, and another one of your blogs in Geek Doctor talked about a role for the FDA, an expanded role, I think you were talking about, for the FDA in terms of quality and standards and whatnot. Where does that stand? Is the FDA moving down this path or is this a longer term initiative for them?
[00:23:39] John Halamka, M.D., M.S.: Sure, so, Bakul Patel is the leader at the FDA who has oversight for all healthcare AI and software as a medical device. He’s very engaged in this national discussion with the idea that, if we have an algorithm that has an AUC of 0.33, which you know, for the non-statisticians, that’s bad, right? A coin toss has an AUC of 0.5. You probably shouldn’t be using that algorithm in practice. Or if you do, you should be using it for entertainment use only, right? And so the FDA is working today with a whole variety of national experts on creating initial guidelines and guardrails, which will ultimately lead to transparency. As an algorithm is purchased, you’ll understand its performance characteristics.
[00:24:44] Gary Bisbee, Ph.D.: So your book, “Reinventing Clinical Decision Support”, I found to be a terrific view of the landscape in this space. Let’s get back to the predictive modeling for a moment. Is there an example today of a clinical decision support system that uses predictive modeling that physicians are actually acting on?
[00:25:06] John Halamka, M.D., M.S.: So at Mayo Clinic, when you get an ECG, a 12 lead heart test, we print on the ECG itself the results of five AI models. So what’s important about this is it doesn’t increase cognitive burden to the clinician. It’s not a pop-up in Epic. It’s not yet another alert, reminder, or an inbox entry. What it is is, oh, I’m looking at the rate, the rhythm, the usual text interpretation. Oh, and there on it are five predictive AI models. That is in production today. And I’ll just tell you for fun, right, I wanted to test it out myself. And so, I got a 12 lead ECG at Mayo Clinic in the basement of the Gonda Building. My lifetime likelihood of afib, 3%. My likelihood of having congestive heart failure, 2%. My likelihood of valve disease, 2%, right? So what’s great is that it was able to tell me, predictively, these are disease states I’m unlikely to have.
[00:26:16] Gary Bisbee, Ph.D.: That’s pretty exciting as well. Another one of your books, “The Digital Reconstruction of Healthcare”, basically asked the question, why do we need digital reconstruction of healthcare? How do you answer that, John?
[00:26:31] John Halamka, M.D., M.S.: So, you know that I went to medical school in the 1980s and a few things have changed since then. I mean, just to give your listeners a context, I was in San Francisco, right, at UCSF. And these young male patients were showing up with reduced T-cell counts and strange infections. And no one could explain why. So think about it. I was educated at a time before HIV was even understood. So here’s the question. I’m an emergency physician. There are approximately 800 papers in my field published every day. I’m just slightly behind on my reading, right? So if you can imagine that I can’t read 800 papers a day, but an algorithm can. And then when I go to see a patient, and that patient has a phenotype, a genotype, and an exposome, and the algorithm says, oh, well actually, based on the 800 papers I read yesterday, you might want to consider this disease or this diagnostic test. It’s exactly as you say. The human mind is no longer capable of integrating all the data streams and helping with probabilistic assessment.
[00:27:57] Gary Bisbee, Ph.D.: So let’s turn to you, John, fascinating background that you have. What was life like growing up for you?
[00:28:04] John Halamka, M.D., M.S.: Well, okay, Gary, this is going to be a very peculiar story. So, I was born in Dumont, Iowa, but in the sixties, moved to Southern California. And back then there was such a thing as a free range child. I know, in 2021, 2022, no one’s going to even understand what that means, but it was completely normal in the 1960s. Hey, go ride your bike. Go out and play. Come home for dinner, maybe. There was no pre-structured day with violin lessons and karate. And so what did I do? I rode my bike to the dumpsters of defense contractors like TRW, now Raytheon, and pulled from those dumpsters the various electronics they had discarded as not passing military spec. And in my childhood, in the sixties, I taught myself analog and digital electronics, that early microprocessor technology, and started designing functional systems by the age of 14. Ultimately I started selling, as a teenager, these early microprocessor based systems into healthcare, and sold a system to UCLA when I was 14. The embarrassing thing about that is, do you know that when I came back as a resident at UCLA, the system that I designed at age 14 was still in production.
[00:29:32] Gary Bisbee, Ph.D.: Oh, that’s a great story. I love that story, the staying power.
[00:29:39] John Halamka, M.D., M.S.: Yeah, but I’ll tell you this. So I got my Malcolm Gladwell 10,000 hours of competency in healthcare IT by the age of 14.
[00:29:48] Gary Bisbee, Ph.D.: That’s terrific. So when you went to Stanford, you majored in both microbiology and public policy. Very interesting combination. What led to that, John?
[00:29:59] John Halamka, M.D., M.S.: This is going to be the strangest thing you’ve probably ever heard. So, why didn’t I major in something I was good at, engineering? I decided my undergraduate career should be all the things I’m bad at, because why learn more about what I’m good at? So I took on those areas: public policy, political science, economics. And medical microbiology at that time was what today would be called biochemistry and genetics. It was just very early. And so I was able to achieve a mastery of material for which I knew very little. And my whole career has been working at the margins, the edges of multidisciplinary science and technology. And that background helped me understand not only how to build things, but then how to change society if you’re going to get adoption of innovation.
[00:30:55] Gary Bisbee, Ph.D.: What led to medical school and then to emergency medicine, John?
[00:31:01] John Halamka, M.D., M.S.: I don’t know if you interview many emergency physicians, but you’ll find that emergency medicine is amazing in that it’s multidisciplinary, right? So it’s pediatrics, OB-GYN, surgery, medicine. But it’s also very predictable. You have an eight hour shift, followed by 16 hours off, followed by an eight hour shift, which means emergency physicians tend to be explorers, risk-takers. They tend to say, oh, well, I worked my eight hour shift. Now I’ve got 16 hours free to do something else. So, I went into medicine and emergency medicine because I felt like that background would give me a foundation to work on innovation. And the two have been remarkable accelerants of this digital health IT area where I live.
[00:31:53] Gary Bisbee, Ph.D.: Well, I think the last ER doc I interviewed was also a Stanford grad in information technology. And that was Don Rucker.
[00:32:01] John Halamka, M.D., M.S.: Well, you know Don Rucker and I were hired the same day at Beth Israel Deaconess. And we served together for a decade in the emergency department. So as Don became head of ONC, yeah, he and I were close colleagues thinking about policy around standards.
[00:32:20] Gary Bisbee, Ph.D.: So moving then to become the CIO at Beth Israel Deaconess is kind of an interesting move for you. What led to that, John?
[00:32:30] John Halamka, M.D., M.S.: So it’s a bit of an unusual story. When you interviewed Kevin Tabb, Kevin Tabb said, sometimes you just fall into things. So I had just finished a fellowship at MIT working in a variety of advanced informatics areas. And the CEO of Beth Israel Deaconess at the time, a guy named Jim Reinertsen, said, you know what we really need is a CIO who’s a clinician and understands workflow and understands data. Well, here’s this emergency doc who’s just done a fellowship in informatics and has a combination of public policy, economics, engineering, biochemistry. Let’s make John the next CIO. At the time, to be quite honest with your audience, the auditors of Beth Israel Deaconess said this is an act of administrative malpractice, to move a 30-ish year old person into the leadership role of a multi-hundred person, multi-million dollar organization. And as you and I talked about earlier, the joy of doing that was I had the Zen Buddhist concept of the beginner’s mind. I had never been a CIO. And yet I had this rich background that enabled me to combine clinical need and technology. And so those first few years, it was, hey, let’s migrate the whole institution to the web. Let’s think of federated data across multiple sites. Let’s think of new ways of embracing mobile technology. And it just worked by happenstance. So I have to thank Jim Reinertsen for launching that opportunity.
[00:34:19] Gary Bisbee, Ph.D.: Yeah, for sure. Well, good for Jim. Chief Digital Officers, the incidence of that in health systems seem to be growing dramatically. Is that your sense?
[00:34:29] John Halamka, M.D., M.S.: Oh, without question. So when I became the CIO of Beth Israel Deaconess in 1997, the CIO was the CMIO, the Chief Digital Officer, the Chief Information Security Officer. And my buddy Darren Dworkin, who was at Cedars and now at Press Ganey, and I were chatting last week and we said, the role of the modern CIO is actually impossible. You can’t possibly succeed given all the pressures on you. And there’s only one answer. Divide the CIO role into the Chief Digital Officer and the Chief Medical Information Officer and the Chief Information Security Officer, so that one role is now five.
[00:35:13] Gary Bisbee, Ph.D.: Well, turning to, I believe it’s 15 acres in Massachusetts. You have Unity Farm, which is a producer of organic vegetables and fruits, and Unity Farm Sanctuary, which is basically a resident house for abandoned farm animals. Can you tell us, one, why did you get into that, John, and two, how’s it going?
[00:35:35] John Halamka, M.D., M.S.: Sure. So I mentioned, my wife is a breast cancer survivor. And in December of 2011, when she was diagnosed with breast cancer, she said, I’m not sure how this is going to end. I think it’s really important to give something back to the community. And of course, she’s fine today. Completely fine. 10 years out actually from cure.
[00:35:55] Gary Bisbee, Ph.D.: Oh, good.
[00:35:57] John Halamka, M.D., M.S.: But what we started with was this idea of, well, let’s have a bit of an organic farm, a small rescue for, oh, chickens and ducks and geese and guinea fowl. It turns out that in Massachusetts, a lot of towns don’t allow roosters. And so there’s this huge problem of all these abandoned roosters. So you know, we started with that. And then we heard about some alpaca that had a challenge in Maine. And we took in the alpaca. And people then said, oh, you’re a large animal rescue. I’ve got a horse, I’ve got a cow, I’ve got a pig. So we started with 15 acres. Today we’re a hundred acres. 22 barns, 300 animals, 500 volunteers. And here’s the interesting story. We have become a community benefits startup. And what I mean by that is we are now a destination for the metro west of Boston. As people, especially during a time of COVID, are feeling more anxiety and depression and stress, they’ll come just to spend an hour grooming a horse, walking a donkey, or just spending time watching the cows play with their soccer ball. So it has really become this extraordinary community organization over the 2011 to 2021 decade.
[00:37:25] Gary Bisbee, Ph.D.: Well, good for you and best wishes to your wife, of course. Being 10 years post is just awesome. Give her our best. This has been a very engaging, entertaining, and informative interview. We thank you very much. I have one last question if I could, and that is, about a third of our audience are up and coming leaders in healthcare. What advice would you have for an up and coming leader in healthcare?
[00:37:52] John Halamka, M.D., M.S.: So you already heard me say that I’ve always worked at the margins, right? How do you take an emerging technology in one field, could be FinTech, bring it into healthcare. So work in the margins. But also there is no innovation without risk. So I was meeting with the health minister of a large European country recently. And that person said, well, our population only wants innovation without risk. There’s no such thing. Even if you look at Steve Jobs, Elon Musk, pick your hero. 40, 50% of their ideas failed. Walt Disney went bankrupt five times. You’ve got to embrace risk if you’re going to have a breakthrough innovation.
[00:38:41] Gary Bisbee, Ph.D.: Well said, thank you, sir. This has been a great interview.
[00:38:44] John Halamka, M.D., M.S.: Hey, thanks so much.