Chasing Life
All over the world, there are people who are living extraordinary lives, full of happiness and health, and with hardly any heart disease, cancer or diabetes. Dr. Sanjay Gupta has been on a decades-long mission to understand how they do it, and how we can all learn from them. Scientists now believe we can even reverse the symptoms of Alzheimer's dementia, and in fact grow sharper and more resilient as we age. Sanjay is a dad of three teenage daughters, a doctor who operates on the brain, and a reporter with more than two decades of experience who travels the earth to uncover and bring you the secrets of the happiest and healthiest people on the planet, so that you, too, can Chase Life.

AI Is in Your Healthcare Now. Here’s What to Know
Chasing Life
May 8, 2026
More people are turning to AI for health advice, but how reliable is it? Dr. Sanjay Gupta sits down with Dr. Bob Wachter, author of "A Giant Leap," to unpack how AI is already helping patients and doctors, where it can go wrong, and how to use these tools safely without over-relying on them. Dr. Gupta is a practicing neurosurgeon and a member of the National Academy of Medicine's AI Code of Conduct Steering Committee. He has previously provided limited consulting to OpenEvidence as a medical advisor.
Our show was produced by Kyra Dahring with support from Jennifer Lai and Elizabeth Corallo
Medical Writer: Andrea Kane
Senior Producer: Dan Bloom
Technical Director: Dan Dzula
Episode Transcript
Dr. Sanjay Gupta
00:00:02
Hey there, welcome to Chasing Life. Tell me if this sounds familiar. You notice a weird symptom. Maybe you have a rash or a headache that won't go away, some strange stomach pain even. And before you call your doctor, you type it into ChatGPT or another AI tool. What could this be? Should I be worried? Honestly, I get it. AI is quickly becoming part of everyday life, and now it's becoming a part of healthcare as well. Patients are using it at home. Doctors are using it in hospitals and clinics. This is happening. But I think the big question is this: when it comes to your health, how much should you actually trust AI? Can you trust it? Can chatbots help you ask better questions, understand your symptoms, maybe make faster decisions? Or could they lead you in the wrong direction, make you panic, or maybe reassure you when they shouldn't, or even prevent you from getting care when you actually need it? Well, today I'm talking to Dr. Bob Wachter from the University of California, San Francisco. He's the author of a new book called A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future. We're gonna talk about what AI gets right, where it could go wrong, and how to use these tools safely without over-relying on them. I'm Dr. Sanjay Gupta, CNN's Chief Medical Correspondent, and this is Chasing Life.
Dr. Sanjay Gupta
00:01:33
I feel like every sidebar conversation in hospitals now among my colleagues and medical students is around AI. And I kept thinking someone like you is going to write a book like this, but then I thought maybe they won't because by the time they write it and it gets published, it may already be outdated. Things seem to be moving that fast. So I'm just curious, before we talk about the specifics, how did you approach that part of things?
Dr. Bob Wachter
00:01:58
I had exactly the same fear, Sanjay. In fact, my wife, who's an author and journalist, suggested I should write a book about it. I said, no, I don't think so. And then the publisher contacted me and I said, oh, good idea, which made my wife very unhappy. But my concern was exactly that. And my editor said something quite smart, which is: if you've written a book that's out of date the minute it comes out, you've written the wrong book. The question is, can you helicopter up a level and ask the fundamental questions that arise when a technology like this enters our world of medicine? How does it change the nature of being a patient, being a doctor, being a health care system? How do we decide what tools to use? That's not GPT-5.2 versus GPT-5.7. Those are really big cosmic questions.
Dr. Sanjay Gupta
00:02:42
That reduction of cognitive burden on healthcare providers, doctors, nurses, whoever is using the AI, what is that worth? I mean, does that lead to better care for patients? Ultimately, are we a healthier society because of it?
Dr. Bob Wachter
00:02:56
I don't think we know yet. I would frame AI coming into healthcare as being the biggest experiment in the history of medicine.
Dr. Sanjay Gupta
00:03:03
Oh.
Dr. Bob Wachter
00:03:03
I think these are the kinds of questions.
Dr. Sanjay Gupta
00:03:05
So we're very much in the experimentation phase.
Dr. Bob Wachter
00:03:07
Yeah. I mean, you know, AI was a dead issue in healthcare until November 30th, 2022, when ChatGPT was released to the public.
Dr. Sanjay Gupta
00:03:15
I mean, in terms of AI and healthcare, you're optimistic about what this could mean.
Dr. Bob Wachter
00:03:21
I am, and I caveat that in a couple of ways. One is, I wrote a book 10 years ago called The Digital Doctor, which was about healthcare going from paper to digital. The lead character was basically the electronic health record. It's a very grumpy book. I mean, it is like, why did this go so badly? How did we all get it wrong? So I am fully capable of writing a book about technology and healthcare that is not optimistic. I came out of this one quite optimistic for healthcare. And I guess the other caveat is, on AI and the rest of our lives, I'm quite worried. I mean, I think if we hit 20% unemployment, I don't know what happens to our society. I think there are massive fundamental questions, and the climate issues and distribution-of-wealth issues, those are very real. So I'm talking just about healthcare. But in some ways, a happier doctor, a more effective doctor, because some of the bureaucratic burdens that I used to have, the paperwork and filling out forms and faxing requests to insurance companies, will be taken care of to a large degree by AI. I think patients will get better care, in part because their doctors or other clinicians will be better, but in part because they'll get some of the answers to their questions without having to access the traditional health system, which will save them time and money. So yes, in health care I am pretty optimistic. I think it's going to be good for patients, for the system, and for clinicians.
Dr. Sanjay Gupta
00:04:45
So, there are some core principles which I really want to talk about that I think you've applied in this book, but let me just ask you personally, how are you using AI now as a physician?
Dr. Bob Wachter
00:04:56
Yeah, I use it all the time. And the main example is the electronic health record. You know, we digitized our records 20 years after most industries did. But this technology has entered our world over the last three years and is now ubiquitous. So for me, everybody at my place at UCSF has access to an AI scribe. So I now look patients in the eye when I talk to them and my note is created or at least drafted by AI. The AI will summarize my patient's past medical record. One out of five patients has a chart longer than Moby Dick. So the idea that I'm gonna be able to review that in three minutes before I see them is ridiculous, impossible. And most of us now are using a tool, one we're using is called OpenEvidence, which is essentially GPT for doctors, which keeps up with the literature, which suggests diagnoses. It's not right 100% of the time, but it's actually quite helpful.
Dr. Sanjay Gupta
00:05:52
You know, there seems to be this tension between the trust in the platform and the expectations. If an autonomous vehicle were to get in some sort of accident, that would feel like a significant blow to the autonomous vehicle industry. Despite the fact that humans get in accidents all the time, when a machine does it, it feels different. We have high expectations and low trust. How do you weigh that when you're, you know, maybe making medical decisions or recommendations based on an AI platform?
Dr. Bob Wachter
00:06:28
Yeah, it's a spectacular question. I spend a lot of time in the book talking about Waymo and autonomous vehicles, because I think the example is relevant, in part because when we talk about autonomous AI in medicine, we sometimes say, you know, we can't have that, because what we do is too risky, and if we get it wrong, someone can die. And then I hop in the back of a Waymo about once a week, sometimes take a nap, and it makes a left turn across Divisadero Street in San Francisco during rush hour. I mean, that is high trust. And the thing is, it is merited. There's no question, the data are quite clear now that it is safer than a car with a driver. But you're right, the standards are higher. That's probably appropriate in the beginning, until we learn whether to trust it. I do think a bad error made by an AI will resonate more with people than the same error made by a human. But one of the lines I use in the book is Biden's old line: don't compare me to the Almighty, compare me to the alternative. And, you know, we're seeing a little version of this with the mental health chatbots, where, yes, there have been some really egregious cases where you read what the bot said to a kid and the kid harmed him or herself. It's awful. It needs to be better. It needs to be regulated. And yet there are millions of people today getting mental health help from these tools for 20 bucks a month, and try to find a mental health professional in San Francisco or Atlanta or New York. And if you can, try to find one for less than $300. So it's real, and we're having to figure that out, but it's not a slam dunk that the human and the technology will work perfectly together. I think these are things that we have to test and try out.
Dr. Sanjay Gupta
00:08:07
Will AI in healthcare ever get to that level of autonomy, being able to act alone, do you think?
Dr. Bob Wachter
00:08:12
It's unquestionable that it will. I think the first big experiment is in the state of Utah. You may have heard that Utah relaxed its rules and is now allowing an AI tool built by a San Francisco company to refill medicines by itself. It's about 400 medicines, so it's blood pressure medicines, cholesterol medicines, things like that. I think that's exciting. No doctor really wants to spend their time refilling meds. No patient wants to have to schlep to the doctor's office to go get a refill. And so I think this is the first step in trying to navigate what can and should be autonomous. In primary care, for example, there was a study a couple of years ago that said that for the average primary care doctor, if all they were doing was the preventive stuff they're supposed to do, that's 27 hours a day. So the only way to make our primary care system viable is to ask: what are the tasks that a primary care doc does that really have to be done by a human, someone who's trained at that level and costs that much and whose time is that valuable? And what are the things that maybe are not that?
Dr. Sanjay Gupta
00:09:19
I do want to ask your views on patients using AI, and, you know, sort of draw these parallels with Dr. Google and some of the stuff that we've seen before, and people have strong views on this. But one of the things in your book: I mean, you interviewed more than a hundred people who are pioneers in medicine and policy and technology. Were there any surprising disagreements that they had?
Dr. Bob Wachter
00:09:47
I think there's a fair amount of disagreement on the use of patient-facing tools. You can easily see the potential benefits of patients being able to go to GPT or Gemini and ask a question and not have to go to urgent care. On the other side, the question is how good these tools are, and how often will the tools tell them the wrong diagnosis or to do the wrong thing? And the early studies of patients using these tools are not that positive. I think they're a little bit sobering. And one of the things I came to learn that I didn't fully understand: the exact same tool might perform really well for me or for you using it. You know what to put in, and you can interpret the results and say, that's really smart, I hadn't thought of that. Patients really don't have an ability to do that. So the same tool that will work really well for a doctor might not work well at all for a patient. Does that mean that patients shouldn't be using them? No, I think it's better than Dr. Google. I think it's often better than the alternative, which may be waiting a week to see your doctor, or a month to see a doctor. But I think the tools have to be better, and the tools are gonna have to get more doctor-like. And by that I mean, rather than just give an answer when the patient says, I woke up today with a headache, do what you and I would do, which is to say: tell me more about it, what part of your head, are you a headache person, does your neck hurt, do your eyes hurt in the bright light? The patient-facing tools of the future really have to be crafted to understand that patients don't have deep medical knowledge. They have deep knowledge about their own situation, but not deep medical knowledge. The tools essentially have to do a little bit more handholding than the current ones do.
Dr. Sanjay Gupta
00:11:14
Do you think the platforms will get better in that regard? Let's say we're talking about the patients here: you're at home, you have a headache, it's unusual, and you go to ChatGPT or whatever platform you choose. Right now, what you've just outlined is a scenario where they may not know the right questions to ask, and therefore they may not be getting great information. To me, that makes it quite suspect, because the concern is maybe they don't actually end up following up with the doctor, you know, for something that could be significant. Does that part of it get better over time, do you think?
Dr. Bob Wachter
00:11:47
I can't see how it doesn't. I mean, in some ways, and this is quite sobering, if you can do it and I can do it, then there's no reason the AI can't do it. And when you look at some of the demos that I've seen from some of the tools, for example, that Google is rolling out, or when you look at how good OpenEvidence is for doctors, you say, well, couldn't you build a tool specifically for patients that recognizes that now you're not dealing with an expert user, you're dealing with a regular person? I don't see any fundamental reason the tools couldn't do that. And I think there are two aspects there. One is understanding what the patient knows and doesn't know, and prompting them in the right way. And maybe it prompts them based on their own medical literacy, and it may have ways of knowing that: what is the right question to ask? The other important point is not just the prompting and the kind of back and forth, but in order to give the right answer, knowing the context, knowing the patient's past history, is really quite important. The big tech companies, OpenAI and now Anthropic, have built a GPT for health, a Claude for health. And what they're doing is saying, you can load your entire past record into our system. And that's not trivial, because your past record, again, might be 500 pages long, and sifting through that to find the salient points is a tricky thing for the AI to do. I guess the final thing I'd say about today: you know, do I want to take my entire record and port it into GPT or port it into Claude? Maybe that's a choice, although there are some concerns about privacy. But where this really is going to get good is that most patients have a patient portal. Wherever you get your health care, that system has an electronic health record, and you have a patient portal.
I think what you're going to see is these tools built into the patient portal, so you're not going to have to take your medical record and bring it anywhere. It's going to already be in your patient portal. And therefore it asks you a bunch of questions, and if the answer is you need to see a doctor, you can see a screen pop up saying, you know, would you like to be seen today at three o'clock in your health system? So the integration of these tools into the existing health system, I think, will be a real leap forward. I think that's gonna happen within a couple of years.
Dr. Sanjay Gupta
00:14:02
You know, you've mentioned OpenEvidence a couple of times, and I've had a chance to talk to Daniel Nadler about this, who's the founder of OpenEvidence. One of the things that's interesting, for people who don't know: first of all, to use OpenEvidence, you have to submit what is known as an NPI, a National Provider Identifier. So the only people who are using this are healthcare providers. Also, the aperture, if you will, of the content from which they pull is, you know, peer-reviewed journals and things that are legitimate scientific publications, as opposed to everything. You know, opening up the aperture to everything would mean every small case report, but also a Reddit stream and social media and things like that. And I use OpenEvidence all the time. And I gotta say, what's really wild for me, and maybe you've seen this as well, is I'll see my residents as we're in clinic, walking into the patient room, and in the hallway ahead of time they're talking to their phone. They're getting this even as they're walking to the room. It really is wild.
Dr. Bob Wachter
00:15:06
It's wild. It's a challenge in some very fundamental ways to us, because it's smarter than I am. For many doctors, our persona, our sort of professional view of ourselves, was defined by the fact that I know more than my patient does. And so we're now going to have tools where the patient has access to information that's as smart as, or maybe smarter than, I am, and then there are fundamental questions about what our value is, what we bring to the table. But I also have noticed... I remember when Google first came out, I'm old enough for that...
Dr. Sanjay Gupta
00:15:37
Me too, me too.
Dr. Bob Wachter
00:15:38
I feel badly about this, but I would say to a patient in the exam room, sorry, my beeper just went off, to go out and do a Google search, because it was embarrassing that I might not know. And now I think the residents don't feel that. They feel like it's okay: the fact is, I'm a human, and I'm gonna use a knowledge tool that's in my pocket to get smarter. And I think that's healthy.
Dr. Sanjay Gupta
00:15:59
Up next, Dr. Bob Wachter is going to share what using AI in healthcare might mean for you as the patient, what happens in the years ahead. That's after the break.
Dr. Sanjay Gupta
00:16:16
If someone asks you directly, should I trust an AI platform with my health questions? What do you tell them?
Dr. Bob Wachter
00:16:23
I say you should trust it. In some ways, it's: what is the alternative? Is the alternative Google? I think it's better than Google. It allows you to ask the question and put in the context: I am a 42-year-old woman, I woke up today with a sore throat, and I'm on these medications. What do you think is going on? I think the caveats would be, first of all, they're not perfect. I like asking a question after it gives you an answer, saying: what's the worst thing that this could be, and how do you know it's not that? I think asking for a second AI opinion is a good idea. I can't prove this, but it feels right. They use different ways of getting information. So if you put it into GPT and then put the exact same question into Gemini or Claude, and they give you the exact same answer, I'd feel a little more comfortable. And then there are just red flag symptoms. If you wake up and you have bad chest pain, or you're newly short of breath, or you are confused, I think you need to see a doctor, no matter what the tool says. But yeah, I think in many cases it beats the alternative.
Dr. Sanjay Gupta
00:17:25
You referred to this idea of uploading your medical record to Claude. You said ultimately it'll probably happen through things like MyChart, but right now, is that something you'd recommend?
Dr. Bob Wachter
00:17:35
First of all, all the companies that are making this available, GPT and Claude so far, and Perplexity, are promising that your data will be kept totally private, that they're not going to use it for training, and that it will not be sold to anyone. You have to believe them, because they have no legal obligation to do that, whereas your medical record in your doctor's office or your hospital is protected, as you know, by something called HIPAA, meaning there are huge penalties if that data leaks. So you have to decide how much to trust this company with data that might be quite personal for you. I think that's a choice. For many patients, putting their data into these tools will give them better, more customized answers. Would I do it? Yeah, I think so. If I wasn't a doctor and I wanted to be able to talk to this AI tool and get good answers about my health status, which might not be just, today I woke up with a headache, it might be, what would you recommend should be my exercise regimen or my diet? I think the benefit of that for many patients will outweigh what I see as a very small privacy risk, because the companies have a tremendous interest in not screwing this up. They know how big a part of their world healthcare is, and when they say we're gonna keep your data private, there are a lot of people looking at them to be sure that that's real. So I think I would trust it. If I had something that would be absolutely horrible if it got leaked, then maybe I'd be careful about that piece of data.
Dr. Sanjay Gupta
00:19:03
You know, you hear these stories, and I just saw a news report today, that 15 to 20% of white-collar jobs may be at risk as a result of AI. I'm curious what you think of that in terms of the impact on jobs. Would you encourage medical students to go into fields like radiology at this point, or how worried are you about that?
Dr. Bob Wachter
00:19:23
Yeah, I mean, radiology is the most instructive case in medicine, because it's the area that seems most amenable to replacement. After all, it really is a human looking at a collection of digital dots and matching it up against a pattern. What could be more perfect for AI? And yet I tell the story in the book of Geoff Hinton, one of the founding fathers of AI, who in 2016 gave an infamous speech where he said, we should stop training radiologists today, because it's obvious that by 2021 we won't need any. He's a very smart guy, won the Nobel Prize a year or two ago, but because of that, for a couple of years, med student applications for radiology residencies tanked. And then a couple of years later, the med students woke up and said, wait a second, it looks like there are help-wanted ads in radiology departments all over the country. And all of a sudden, applications skyrocketed. So doctor replacement turns out to be much harder than the technologists tend to think. You know, I asked one of our AI radiologists, what did Hinton get wrong? And he said he thought that reading a CAT scan of the chest or brain was the same as looking at an image on YouTube and differentiating, is this a cinnamon danish or a dog? And it turns out that it's a massively more complex task. And I said to the head of neuroradiology at the Mayo Clinic, how is it that I get in a Waymo once a week and trust that this thing can drive me with no driver, and yet we can't hire radiologists fast enough?
Dr. Sanjay Gupta
00:21:06
Yeah.
Dr. Bob Wachter
00:21:06
And he said, I can teach a 16-year-old to drive in a day. I cannot teach them to do neuroradiology in a day. It's the complexity of the human condition, the fact that there are a thousand different things that can be going on in the brain, the fact that the same thing on an MRI in a certain patient might mean cancer, and in another patient is a ditzel, don't worry about it. Those are really complicated. But where does this go over time? I do think diagnostic radiology and diagnostic pathology are fields that are going to be compromised, but probably not over the next 10 years, because the volume of what they have to do is growing so fast that even if these tools make them 50% more productive, and we still need a radiologist to do the final reading, we're still probably going to need as many radiologists as we have now. I landed in a place in the book where I think, for the next 10-ish years, we're not going to need fewer nurses or doctors. The unmet needs are such that the tools will help them do their jobs. Would I tell a kid to go into medicine? I did. I told my daughter, yes, I think medicine's a great field. I think there are going to be jobs for doctors for the foreseeable future. I think the AI will make us better, and maybe replace a portion of what we do, but leave the parts that are uniquely human. And, this is sort of the most cynical read of it, if the jobs for doctors are gone, it means the jobs for lawyers, accountants, journalists, and almost everything else have been gone for five or 10 years. We're probably the last field to go.
Dr. Sanjay Gupta
00:22:36
The idea, again, with the radiologists in particular: there was a study that came out, I think it was on mammography, showing how the AI platform actually did a better job at finding breast cancer than human radiologists. Even if it's not replacing doctors, do you think society gets to the point where people say, hey man, I think that AI platform is actually better at reading my scans than a human. I want the AI platform. I actually don't want the human to be doing this.
Dr. Bob Wachter
00:23:03
There was a very important study that came out about a year and a half ago. It was a study of diagnostic reasoning: you know, how well the doctors thought through a complicated case. And the punchline was, the doctor by themselves got it right about 65 percent of the time. The doctor using AI got it right about 70 percent of the time. So, yeah, it helped a little bit as a co-pilot. But the best performance, about 90 percent, was when the AI did its own thing. In other words, the human intervention served only to muck things up. And I could see that happening: the AI gets to a point where it's so good that having the human co-pilot actually degrades the performance. I think we've got to be open to that possibility. And, you know, I would say right now, when I decide whether to get in an Uber or a Waymo, I trust that I'm going to be safer in the Waymo. And I think that's not wrong, based on the data. So right now, for example, in the world of medical malpractice, you will see some lawsuits if something bad happens, and the lawsuit will be about, you used AI. I can imagine a world five to 10 years from now where the lawsuit will be, why didn't you use AI, doctor? You should have used AI; the standard of care is that the AI reads the mammogram, and maybe you look over its shoulder, but if you veto what it said, you're gonna have to be pretty sure that you're smarter than it is. I don't think we should go into it with the mindset that the human intervention is always necessary, always helps. I'd like to think that, but I'm a human and I'm rooting for the humans. I think the answer has to be empirical: what do we find is the best way to organize care to deliver the best outcomes at the lowest cost?
Dr. Sanjay Gupta
00:24:38
Yeah, this is such a fascinating point. In neurosurgery, we have these technologies, for example, where we use image guidance in the operating room. It's not a robot, but basically we're impregnating all the tools in the operating room with these LED sensors, and we have these cameras, so we have a visual representation of the patient in the room at all times. But sometimes, Bob, I'll say to my residents, hey, let's pretend that the technology failed.
Dr. Bob Wachter
00:25:06
Right.
Dr. Sanjay Gupta
00:25:06
And now you've got to do it, quote unquote, the old-fashioned way.
Dr. Bob Wachter
00:25:11
Right.
Dr. Sanjay Gupta
00:25:11
And 10 years ago, when I would say that, you know, it kind of made sense. And now when I say that to them, I'll get some version of, why don't we just do the case tomorrow, then, when the technology is working?
Dr. Bob Wachter
00:25:22
Haha.
Dr. Sanjay Gupta
00:25:22
And I think their point is a little bit like, hey man, if it were me, if I were the patient on that table, I would want the technology. So if you're not utilizing it, you may be giving me less than what is currently the best standard of care.
Dr. Bob Wachter
00:25:38
Yeah, yeah.
Dr. Sanjay Gupta
00:25:38
And maybe AI falls into that same bucket.
Dr. Bob Wachter
00:25:40
Absolutely fascinating. I think you're right. For the time being, there are certain kinds of de-skilling, which is really what this is about, where it's really dangerous if the person loses the skill: if the AI is good at diagnosis but not perfect, and I'm the final arbiter, I need me, or my students, or my residents to still be very good at diagnosis. You know, I will say, before my resident pulls out OpenEvidence to search something, I will push them and say, tell me what you think is going on before we use the crutch, and then use that to get smarter.
Dr. Sanjay Gupta
00:26:15
Look, you're the guy, you know, I think people come to you for thoughts on big issues like this, and I certainly enjoyed reading the book very much. We've talked about so many optimistic things today, Bob, right? The benefits for physicians in terms of what they can provide for patients, and for patients, being able to have access to things that they maybe didn't have access to before. I really enjoyed the conversation. I love your books, I always learn so much, and it's a real privilege to be able to spend this time with you. Thank you.
Dr. Bob Wachter
00:26:45
Sanjay, it's a pleasure and thank you. It means a lot coming from you. Really appreciate it.
Dr. Sanjay Gupta
00:26:51
That was Dr. Bob Wachter from the University of California, San Francisco. He's got a new book out. It's called A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future. Thanks so much for listening.
Dr. Sanjay Gupta
00:27:06
We did reach out to some of the companies behind these tools. A spokesperson at Google, which is the parent company of Gemini, told us, quote, we work in close consultation with medical professionals to build safeguards in the Gemini app. We've always been transparent that Gemini should be for informational use, and we build reminders directly into the app, prompting users to double-check information and consult with qualified professionals when asked about medical matters. Other companies, including OpenAI, the parent company of ChatGPT; Anthropic, the parent company of Claude; and Perplexity, did not respond to our request for comment.





