06/13/2025 | Press release
ChatGPT is trained on a large amount of mental health data. We don't know exactly which sources were used, but we do know there are a great many of them.
Many companies now offer help with mental health issues, but no proper evaluation of these services has been carried out yet.
Before we integrate AI into mental health care, and before we recommend it to people with serious problems, proper research needs to be done.
One of the risks is that a lot of sensitive information is involved.
Data is also used for training, so we need to be very careful: if everything is shared with ChatGPT, there is a high risk that this information will end up in the USA.
What guarantees do we have that ChatGPT will give good advice? And what are the risks? All of this is still unknown.
ChatGPT does have a built-in safety net: suppose someone explicitly uses the word suicide.
However, less explicit signals may not be recognized by ChatGPT; it can most likely only intercept clearly expressed intent.
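To illustrate why explicitly stated intent is much easier to intercept than implicit distress, here is a minimal sketch of a naive keyword-based check. It is a hypothetical illustration only, not ChatGPT's actual safety mechanism; the function name flags_explicit_crisis and the term list are assumptions made for this example.

# Hypothetical sketch, not ChatGPT's actual safety mechanism: a naive
# keyword-based check that only catches explicitly stated crisis terms.

EXPLICIT_CRISIS_TERMS = {"suicide", "kill myself", "end my life"}  # assumed term list

def flags_explicit_crisis(message: str) -> bool:
    """Return True only if the message contains an explicit crisis term."""
    text = message.lower()
    return any(term in text for term in EXPLICIT_CRISIS_TERMS)

# An explicit statement is intercepted...
print(flags_explicit_crisis("I keep thinking about suicide"))       # True
# ...but an implicit signal of the same distress slips through.
print(flags_explicit_crisis("Lately I see no point in anything"))   # False

This is exactly the limitation described above: anything short of a clearly expressed intent passes such a filter unnoticed.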
ChatGPT also misses non-verbal expression. Wearables (such as a smartwatch that measures heart rate) and cameras may help with this in the future.
However, the question is whether we should leave that to the system.
I'm therefore very much in favour of treatment by a therapist in combination with ChatGPT, combining the best of both worlds.