Rutgers, The State University of New Jersey

11/16/2025 | Press release | Distributed by Public on 11/16/2025 22:32

Unregulated and Unsafe: Expert Warns of Risks in Substance Use Reduction Apps

A Rutgers Health researcher says untested mobile and artificial intelligence health tools are misleading users with false claims

In a commentary published by the Journal of the American Medical Association, researchers at Rutgers Health, Harvard University and the University of Pittsburgh discuss the impact of unregulated mobile health and generative artificial intelligence (AI) applications that claim to assist in substance use reduction.

Jon-Patrick Allem, a member of the Rutgers Institute for Nicotine and Tobacco Studies, an associate professor at the Rutgers School of Public Health and senior author of the commentary, focuses on the need for greater oversight of new and untested technologies, such as mobile health and generative AI applications, and why public marketplaces need better rules to manage them.

Allem argues that greater transparency and stricter regulation could protect people from being misled by content falsely presented as verifiable public health information.

What are the issues with substance use reduction mobile health apps?

Research shows that some mobile health apps can help people cut back on substance use, like alcohol, at least in controlled studies. But in the real world, their impact is limited.

App stores often promote products that generate ad revenue rather than those backed by science, so the most visible apps are sometimes untested or misleading.

As a result, evidence-based apps may be harder to find. Systematic reviews of substance use reduction apps consistently show that most fail to use proven evidence-based approaches. Instead, they often make bold claims about their effectiveness and use scientific-sounding language to appear more credible than they are.

How do you know if an app is evidence-based?

To know whether an app is evidence-based, consumers can look for specific signs that the app is built on proven research, not just marketing claims. A few indicators:

- The app cites scientific research, such as a peer-reviewed study.
- The app was developed by experts in the area, such as in collaboration with a university, a licensed clinician or a professional organization.
- The app has been independently evaluated, with evaluations published in scientific journals.
- The app follows strict data standards, with a clear explanation of how data is stored and compliance with regulations like HIPAA.
- The app is free from exaggerated promises, such as guaranteed results.

What is the current landscape of regulation and enforcement in the app marketplace?

Right now, enforcement is largely absent. Because so many health-related claims made by mobile applications are unsubstantiated, large populations are left vulnerable to misinformation, which can hinder treatment and the chances of recovery for individuals with a substance use disorder.

What are your concerns with using generative AI for substance use reduction apps?

The integration of generative AI into mobile health apps is flooding the marketplace with unregulated and untested products, because generative AI tools allow developers to build and release apps rapidly.

Although general-purpose models such as ChatGPT can potentially broaden access to accurate health information, major safety lapses remain. These lapses range from providing inaccurate health information, to failing to respond appropriately in crisis situations, to normalizing unsafe behaviors.

What can consumers do to best protect themselves from unregulated health apps?

Consumers should avoid apps that use vague phrases like "clinically proven" without specific details or references, or apps that use methods that seem overly simple or too good to be true.

In what ways could we strengthen oversight of generative AI?

One potentially promising way to regulate today's health app marketplace is to make Food and Drug Administration approval a requirement. This means apps should go through randomized clinical trials and meet a defined standard before becoming available to the public.

Until then, clear labeling is key: people need to know which apps are backed by evidence and which are not. With the right safeguards and enforcement mechanisms in place, such as fines, suspensions or removal of noncompliant products from app stores, we can make sure that mobile health apps are accurate, safe and responsible.
