Northwestern University

01/09/2026 | Press release | Distributed by Public on 01/09/2026 16:22

ChatGPT Health’s pros and cons from an AI-in-medicine expert

Platform can help patients make sense of their health data, but that data is not protected by HIPAA

Media Information

  • Release Date: January 9, 2026

Media Contacts

Kristin Samuelson

CHICAGO --- OpenAI this week introduced ChatGPT Health, "a dedicated experience in ChatGPT designed for health and wellness," as a response to the more than 40 million people who ask ChatGPT a health care-related question every day, the company said.

Northwestern University AI-in-clinical-medicine expert Dr. David Liebovitz can speak to media about the pros and cons of the new platform, including how it is "a significant step forward from patients showing up with Google searches" but also how "patients must understand that health data shared with ChatGPT is not protected by HIPAA," unlike in conversations with physicians or therapists. He also can speak to what true democratization of health AI looks like, and to the Northwestern University research underway to make these advances practical for patients.

Contact Kristin Samuelson at [email protected] to schedule an interview.

Liebovitz is the co-director of the Institute for Artificial Intelligence in Medicine's Center for Medical Education in Data Science and Digital Health at Northwestern University Feinberg School of Medicine. He has been teaching clinical informatics for several decades, incorporating new methods for education and applications of AI within clinical patient care. Liebovitz has been a chief medical information officer at two organizations where he actively implemented AI in clinical medicine.

On the opportunity:

Liebovitz: "The 21st Century Cures Act now requires health care systems to give patients complete access to their medical records through standardized application programming interfaces (APIs) that electronic health record vendors like Epic must support. AI tools like ChatGPT Health can help patients make sense of that data. For essentially zero incremental cost, a patient can get help understanding lab results, preparing questions for appointments and identifying gaps in their care that might otherwise be missed."
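To illustrate the kind of data these patient-access APIs expose, the sketch below parses a small, hypothetical FHIR-style Bundle of lab Observations and flags values above their reference range, the sort of contextual synthesis an AI assistant would automate at scale. The bundle contents and the helper function name are invented for illustration; real responses from a patient-access API are far larger and richer.

```python
import json

# Hypothetical example: a tiny FHIR R4-style Bundle of lab Observations,
# shaped like what a Cures Act patient-access API might return.
bundle_json = """
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation",
      "code": {"text": "Hemoglobin A1c"},
      "valueQuantity": {"value": 6.9, "unit": "%"},
      "referenceRange": [{"high": {"value": 5.7, "unit": "%"}}]}},
    {"resource": {"resourceType": "Observation",
      "code": {"text": "LDL cholesterol"},
      "valueQuantity": {"value": 96, "unit": "mg/dL"},
      "referenceRange": [{"high": {"value": 100, "unit": "mg/dL"}}]}}
  ]
}
"""

def flag_out_of_range(bundle):
    """Return (name, value, unit, above_range) for each lab Observation."""
    results = []
    for entry in bundle.get("entry", []):
        obs = entry["resource"]
        if obs.get("resourceType") != "Observation":
            continue  # skip non-lab resources in the bundle
        name = obs["code"]["text"]
        qty = obs["valueQuantity"]
        high = obs["referenceRange"][0]["high"]["value"]
        results.append((name, qty["value"], qty["unit"], qty["value"] > high))
    return results

for name, value, unit, flagged in flag_out_of_range(json.loads(bundle_json)):
    marker = "ABOVE reference range" if flagged else "within range"
    print(f"{name}: {value} {unit} ({marker})")
```

In this toy bundle the A1c of 6.9% exceeds its 5.7% reference high and would be flagged, while the LDL value would not.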

On 'a significant step forward':

"More than 25 years after the Institute of Medicine report 'To Err is Human: Building a Safer Health System' documented tens of thousands of preventable deaths from diagnostic errors and care gaps, we still haven't solved this problem. AI assistants that can review a patient's full history and flag potential concerns represent a significant step forward from patients showing up with Google searches. These tools synthesize information in context rather than generating alarm from isolated symptoms."

On concerns:

"Patients should understand that health data shared with ChatGPT is not protected by HIPAA. Unlike conversations with physicians or therapists, there's no legal privilege. This data could potentially be subpoenaed in litigation or accessed through other legal processes. For sensitive health matters, particularly reproductive or mental health concerns, that's a real consideration."

On the bigger picture:

"The question isn't whether patients will use AI for health information; 40 million people already ask ChatGPT health questions daily. The question is whether we can help them do so more effectively and safely, with appropriate guardrails and realistic expectations about what these tools can and cannot do."

On local/on-device models:

"There's an alternative approach that sidesteps the privacy concerns entirely: running AI models locally on a patient's own device. Modern smartphones now have sufficient processing power to run capable language models without any data ever leaving the phone. No cloud storage, no corporate servers, no subpoena risk."

On the technical trajectory:

"On-device AI, which runs models directly on local hardware such as phones and wearables instead of sending data to the cloud, is advancing rapidly. Apple's own approach with Apple Intelligence validates that sophisticated AI can run locally. Open-source models optimized for mobile hardware are improving month over month. Within a year or two, a patient could have a highly capable health assistant running entirely on their phone, analyzing their downloaded medical records with complete privacy."

On the democratization angle:

"Here is what true democratization of health AI looks like: A patient downloads their records using the APIs health care systems are now required to provide, runs them through an AI model on their own phone and gets personalized insights without their data ever touching a third-party server. No subscription fees, no privacy tradeoffs, no dependence on any company's policies or terms of service."

On what Northwestern is exploring:

"Our research group is actively exploring how to make this practical for the public. The technical pieces are falling into place: access to standardized health records, powerful mobile hardware and increasingly capable open-source models. The goal is giving everyone access to meaningful second opinions on their health data while keeping that data entirely under their control."
