Stichting VU


Mark Hoogendoorn on NPO Radio1 about ChatGPT in mental health care

13 June 2025
What guarantees do we have that ChatGPT will come up with good advice? And what are the risks?

ChatGPT is trained on a large amount of mental health data. We don't know exactly which sources were used, but we do know there were very many of them.

Many companies now offer help with mental health issues; however, these services have not yet been properly evaluated.

Before we integrate AI into mental health care, and before we recommend it to people with serious issues, proper research needs to be done.

One of the risks is that a lot of sensitive information is involved.

This data is also used for training, so we need to be very careful: if everything is shared with ChatGPT, there is a high risk that the information will end up in the USA.

So what are the guarantees, and what are the risks? All of this is still unknown.

ChatGPT does have a built-in safety net, for instance when someone explicitly uses the word suicide.

However, less explicit signals may not be recognized by ChatGPT. It can most likely only intercept a clearly expressed intent.

ChatGPT also misses non-verbal expression. Wearables (such as a smartwatch measuring heart rate) and cameras may offer help here in the future.

However, the question is whether we should leave this to the system.

I'm therefore very much in favour of treatment by a therapist in combination with ChatGPT, combining the best of both worlds.

To watch and listen (in Dutch), click here.
