AMA - Australian Medical Association Ltd.


AI can save lives, but needs safeguards to protect patients


Published 14 August 2025

AI has gained significant momentum over the past decade, promising to revolutionise medical practice and transform patient outcomes. However, healthcare is a high-risk sector requiring tailored oversight.

AI has the potential to save lives, but there must be guardrails to manage risks, protect patient safety and ensure privacy.

Releasing our report on AI this week, AMA President Dr Danielle McMullen said the use of AI in healthcare must be clinically led, safe and patient-centred, with its sole purpose to advance the health and wellbeing of patients and the broader community.

While AI has the potential to transform Australian healthcare and research, it also introduces new risks for patients and the medical profession if it is introduced without robust safeguards.

Dr McMullen told the Medical Republic: "There has been lots of focus on AI scribes because it's a tangible thing that lots of our member doctors are using and that patients are hearing about.

"But AI is and could be much broader than that and it's important that doctors and the general public do have an awareness of where could AI add value and what sort of guardrails we need in place to make sure that there's safe and appropriate use of AI."

New AI guidance

Meanwhile, the Australian Commission on Safety and Quality in Health Care has released guidance for clinicians, offering a complementary framework to the AMA's policy advocacy for safe and responsible AI integration in Australian healthcare.

The guidance aligns with our own position on the safe and responsible use of AI, particularly the principle that AI must never replace clinical judgment and that final decisions must always rest with medical practitioners. The guidance reinforces this by warning against automation bias and emphasising the clinician's responsibility to critically evaluate AI outputs.

On data governance and privacy, we have opposed the sale of patient data and highlighted re-identification risks. The commission's guidance instructs clinicians to comply with the Privacy Act and confirm how data is stored and used, especially for AI training and third-party sharing.

The commission's publication offers pragmatic, clinician-level guidance for assessing, using, and monitoring AI tools in practice. However, given healthcare's high-risk nature, we continue to advocate for structural reforms, including a dedicated AI health advisory body integrated with Ahpra and the TGA, to ensure AI tools meet the highest standards of safety, reliability, and data integrity.

We also welcome the commission's recognition of "scope creep" in evolving AI tools. The AMA maintains that functionality - not origin - should determine whether an AI tool is regulated. General Purpose AI (GPAI), such as large language model scribes, must be subject to the same scrutiny as purpose-built clinical AI.
