Can AI fundamentally improve patient care?

Microsoft exec and Learn Serve Lead speaker James Weinstein, DO, says the technology could eventually expand access and quality, but privacy concerns remain.

By Patrick Boyle, Senior Staff Writer
Oct. 15, 2024

While artificial intelligence (AI) tools are drawing attention for their potential to improve doctor-patient dynamics - taking notes during doctor-patient visits, communicating with patients at home, and organizing electronic medical records - the vice president of Microsoft Health Futures looks beyond the "cool stuff." James Weinstein, DO, MS, sees AI transforming the very foundation of how medical care is delivered in the United States.

"I've been studying variation in the U.S. health care system for decades," Weinstein says. "Very little has changed in those decades. Geography remains destiny.

"The communities that are poor generally have poorer health and higher mortality rates," he continues. "The AI tools that are being used now are interesting, but how do we use AI to effect real change?"

Weinstein - former CEO and president of Dartmouth Health and former director of the Dartmouth Institute, both in New Hampshire - will talk about generative AI as "an ecosystem transformation" for medicine at Learn Serve Lead: The AAMC Annual Meeting, on Nov. 10.

Weinstein is gung ho on AI in medicine, with caveats.

"Channeled correctly, AI really can be a rising tide that lifts a great many boats," noted a Harvard Business Review article that Weinstein coauthored last year. "And like a tide, it is already coming in."

Microsoft's role in AI includes well-known ventures (such as its collaboration with OpenAI) and an array of AI health tools focused on population health, real-world evidence, imaging analytics, and genomics.

AAMCNews spoke with Weinstein about the potential for AI to transform medical care, and the risks that the technology brings.

This interview has been edited for length and clarity.

Let's start by talking about why medicine even needs AI. What problems can AI solve beyond relatively simple functions, like tools that listen to a doctor-patient visit and produce written notes?

I like to think about AI as actionable intelligence, not artificial intelligence. How do we turn it into actionable things that change the lives of everyday people?

It's not just for cool stuff, like taking notes, which I call technology substitution. Instead of the doctor writing the notes, the note can be written by AI. That frees me to listen to my patient and do my job as a physician better: to actually focus on the problem and make a patient-centric decision.

But I can't know everything that AI can know. Its ability to consume information in so many forms and in such large quantities, and to communicate it back to you in seconds as useful information, has changed everything. How do we use that potential to solve everyday health-related problems? These tools can effect tremendous ecosystem changes, which then affect individuals.

Editor's note: Weinstein is talking about generative AI, which creates original content and analysis by learning from information that it is fed, including text and data.

You've written about ecosystem disruption in the medical system. What does that mean?

People talk about technology as being disruptive. Disruption generally refers to a substitution of technology to make a specific thing easier to do, like organizing the electronic health record.

But that doesn't necessarily change the patient experience for the better. That doesn't get the patient an earlier appointment. That doesn't help the person pay for the medication they need. With AI-enabled technologies, we begin to put those pieces together in a more seamless manner. How do we use technologies like AI across the systems of payers, providers, patients, pharma, and the supply chain to connect the dots, to spend money more efficiently and effectively? AI can help with diagnosing, testing, and treatment choices like never before.

What would that look like for patient care?

Can we access "multimodal" information from these communities to intervene with patients before the problems start, before they become less amenable to treatment? Before someone has the manifestations of breast cancer or lung cancer, before they develop prostate cancer. Diagnose in the earliest stages, when simpler treatments may be much more effective. We haven't had this much upstream predictive capability before.

When you talk about rural health care, you're talking about millions of Americans who don't have regular access to health care, especially specialists and subspecialists. With all the population- and environment-based data coming together in this AI brain of supersized capacity, we can say, "I need to target this county/community, this zip code in Mississippi that I know is at risk for many health ailments. Do they [a particular resident there] have food insecurity, environmental risk? Do they have housing problems? How many people live in the house? Are they smokers? Are they drinkers? What medications do they take? What is their family history? Do they wish to share genomic information?"

Clinical care accounts for only 20-30% of someone's health. Behaviors - smoking, exercise, alcohol use, sexual activity - account for another 30%. Social and economic factors, 40%. Physical environment, 10%. AI can look at all those pieces - 100% - so we can take much more targeted and more upstream approaches.

The issue of patient privacy comes up a lot when discussing AI in medical care. What you've described, if I understand correctly, is breaking down silos that hold information about patients, so that information is shared. I get the value of that.

But what do you tell people who say, "Now you've gathered information about me from all sorts of places into one system, and I don't understand that system. And even though you're Microsoft, I don't believe anybody can guarantee that my data doesn't get violated somehow, like by hackers. Or be used for research."

Patients must own their own information. Systems that use AI must respect that. A lot of people talk about implied consent: Because I'm here participating in a medical process, you're implying that I'm consenting to share information. I don't agree with implied consent. I believe in informed choice versus traditional informed consent.

You've told me you're going to use my data in some large database to discover a cancer drug. I agree with that as informed choice. But if you're going to use it for something you haven't told me about, I don't agree with that.

Good, accurate data are essential to the future of artificial intelligence, if it is to be actionable. But withholding who owns the data and what you're going to do with it is not fair.

But if I'm a patient who needs care urgently, I feel pretty vulnerable. I'm probably going to waive my rights to withhold my personal information in order to solve my problem.

You just said something I don't agree with. I don't think you should accept a risk that we haven't discussed, even if you're in a vulnerable situation.

We lost a daughter to cancer at age 12. She went through chemotherapy and radiation for 11 years. I wasn't happy with the discussions I had with the doctors about the risks. I didn't feel totally informed on some choices. But we were vulnerable and stuck in the unknown. We couldn't not treat our daughter.

So, I understand that people are vulnerable when they're in a situation where they're compromised by some illness or disease. That's even more reason for doctors and researchers to respect their rights as humans, to honor their privacy and to share information only with their permission.

This is a good place to discuss oversight of AI. The book The AI Revolution in Medicine quotes your observations about data and safety monitoring boards, which operate under the National Institutes of Health to oversee clinical trials to ensure patient safety and the reliability of data collection. You worked with a monitoring board when you led a 15-year trial [the Spine Patient Outcomes Research Trial] at the Dartmouth Geisel School of Medicine, on the effects of back surgery.

Would oversight boards like that work for AI in medicine?

The governance of this is important. Transparency is key. I found that external group of subject matter experts to be extremely helpful. It's an independent body of non-interested parties, with no financial stake in the outcome but with real expertise in the subject.

For AI, what kind of expertise do we bring to the table so that patients and institutions alike feel that they [the institutions] are doing the best they can to avoid potential compromise of patient information, to do no harm?

I appreciate how thoughtful you are about this.

I want to tell you one thing. I don't know if you know Adi Ignatius, editor in chief of the Harvard Business Review. The new edition just came out, and he wrote about producing work with the help of AI: "Times have changed. What once felt like cheating is now a way to be more productive."

I think what Adi said is that our professional success in dealing with humans is going to be much better with these tools. If you don't use them, I fear that you're going to be leaving people behind and hurting people. As a colleague puts it, "Physicians won't be replaced by AI, but physicians who don't use AI will be replaced by those who do."

Patrick Boyle, Senior Staff Writer

Patrick Boyle is a senior staff writer for AAMCNews whose areas of focus include medical research, climate change, and artificial intelligence. He can be reached at [email protected].