Stony Brook University

January 21, 2026 | News release

Who Decides How AI Is Used in Mental Healthcare? A Stony Brook Psychologist Weighs In

As the use of artificial intelligence tools in healthcare increases at a dramatic rate, the conversation has focused on efficiency and innovation. But in a new article published in Nature Mental Health, Stony Brook University's Briana Last argues that a more pressing question has been largely overlooked: who gets to decide how AI is used in mental healthcare, and in whose interests are those decisions made?

Last, an assistant professor in the Department of Psychology, co-authored the article, "Empowering service users, the public, and providers to determine the future of artificial intelligence in behavioral healthcare," with Gabriela Kattan Khazanov, assistant professor at Yeshiva University's Ferkauf Graduate School of Psychology. Citing research from psychology, public health and technology ethics, Last and Khazanov argue that AI cannot fix the mental health crisis without addressing deeper structural inequities, and that those most affected by the system must have a seat at the table.

Briana Last. Photo by Luis Pedro Castillo Pictures for PRIDEnet.

"Much of my research examines how the U.S. mental health system does not currently meet the needs of most people who depend on it," Last said. "The people who need care the most are often the least likely to receive it, and the clinicians who deliver care to underserved communities are often underpaid and overworked."

That imbalance has shaped her interest in patient and clinician perspectives. "Based on my and others' scholarship, the message is fairly consistent," Last said. "People want care to be more accessible, affordable and agency-promoting. They want more autonomy and decision-making power over how care is distributed and delivered, with most people wanting more human connection, not less."

Her concerns grew as generative AI tools, particularly mental health chatbots, began to gain public attention. "Tech CEOs and even some researchers began to tell the public that these chatbots would help solve the mental health crisis," she said. "Though that may be an effective sales pitch for investors, it is both unlikely and inconsistent with what most people seeking and delivering treatment want or need."

In the article, Last and Khazanov emphasize that AI development reflects human choices and power dynamics, particularly the influence of private companies.

"There's a tendency to think that the proliferation of AI in mental healthcare is inevitable, as if the current use-cases of AI are just the natural course of history," Last said. "That kind of technological determinism fails to recognize that there are humans and powerful private interests behind these decisions."

While the private sector has driven much of the recent innovation, Last argues that the public has already invested heavily in AI through computing research and infrastructure. "The public has spent decades funding AI's development, and they are currently paying for many of its costs," she said. "They should have a say in how these technologies are used."

The article warns that when AI tools are designed primarily to reduce costs or increase profits, they risk worsening inequities in care. "I am concerned that technology companies, employers and insurers will use these technologies to cut human-delivered mental healthcare and clinical training in the name of cost-cutting," Last said. "We are already seeing this happening."

In that scenario, she added, access to human clinicians could become increasingly stratified. "Human-delivered mental healthcare might become a luxury good," she said.

Rather than rejecting AI outright, the authors argue for a different model of development and governance. One of their central recommendations is greater public investment and public ownership of AI technologies used in mental healthcare.

Last expressed skepticism that regulation alone can keep pace with rapid technological change. "While regulation is necessary, I don't think it's sufficient," she said. "Public investment and ownership can shift technological investments to prioritize care for the neediest - care which may not always be profitable, but will always be essential."

The article also calls for participatory research methods that actively involve service users, clinicians and communities throughout the AI development process. That involvement, Last said, must go beyond surface-level consultation.

"The people who will be routinely using these technologies should have a seat at the table at every stage of the research process, from idea generation to implementation," she said. "Right now, there's a huge disconnect between what most people think and feel about AI and how AI is actually being deployed in mental healthcare."

Providers have raised concerns about how AI could affect training, supervision and relationships with patients. "People have very serious concerns about how AI is and will be used in mental healthcare," Last said. "When it comes to chatbots, there are real questions about safety and efficacy, especially for vulnerable individuals."

Last believes that public universities like Stony Brook have a critical role to play in determining a more ethical path forward and ensuring that technologies serve the public's interests. "They exist to produce knowledge that benefits society, not just private investors."

She said that mission is already evident at Stony Brook. "Whenever I open the SBU newsletter, I'm amazed by the innovative work happening here," Last said. "It's a testament to what publicly oriented research can achieve."

While the article outlines policy recommendations, Last cautions against looking for quick fixes. "Our mental healthcare system is plagued by a lot of problems, and I don't think they can be fixed by a few top-down policies or technological innovations," she said. Instead, she advocates for what she calls a more democratic approach to technology.

"The public already pays for many of the costs of AI technologies," she said. "We need to start feeling empowered to have a say in how AI is developed and deployed."

"It's easy to feel like the current ways AI is being used in mental healthcare are inescapable," Last said. "But if we remember that humans, not the technologies themselves, are the ones making decisions about how AI is designed and deployed, we can begin to reimagine how AI could actually promote public mental health and well-being. Service users, the public and providers deserve a real voice. The future of mental healthcare should be shaped by the people it is meant to serve."

- Beth Squire
