05/07/2026 | Press release
AI algorithms and psychological vulnerabilities can interact in ways that increase the risk of violent extremism, according to a new theoretical model developed by an international team of researchers.
How are ordinary people drawn into extremist circles - and what role can artificial intelligence play in that process?
This question is addressed by a new study which, for the first time, combines psychological theories of radicalisation with knowledge of modern AI technologies such as recommendation algorithms, generative AI and botnets.
'We have developed a comprehensive model that shows how digital systems can exploit - or amplify - people's social and psychological needs in ways we do not yet fully understand,' explains Milan Obaidi, associate professor at the Department of Psychology at the University of Copenhagen.
Radicalisation rarely begins as a sudden upheaval. Instead, individuals move gradually through a process in which digital technologies and psychological vulnerabilities can influence one another.
The study divides this process into four key phases.
According to the researchers, AI systems can be seen as a kind of accelerator: they can identify psychologically vulnerable individuals, tailor content and create synthetic communities that resemble human interactions.
'We are seeing an environment where users are not only exposed to extreme content, but also have it reflected back to them by algorithms in ways that can amplify their sense of meaning, anger or injustice,' says Milan Obaidi, adding:
'It is the combination of the technology's scalability and people's psychological needs that makes this development particularly worrying.'
Whereas recommendation algorithms primarily control what content the user sees, generative models such as large language models add a new layer: they can create the content that radicalises.
'This development may make it harder to distinguish between human and non-human influences - and thus amplify radicalisation processes that were previously limited by human labour,' highlights Milan Obaidi.
The study emphasises that not all users are equally vulnerable. AI particularly affects people who are already experiencing social isolation, identity insecurity, injustice or marginalisation - or a need for clarity, order and strong group affiliations.
Precisely because AI systems are designed to maximise engagement, they may inadvertently exploit these very vulnerabilities - without any ideological intent.
'It is important to emphasise that AI does not create radicalisation out of the blue. But the technology can amplify known psychological mechanisms and make it easier for extreme ideas to gain a foothold among those who are already at risk,' says Milan Obaidi.
The study 'Intelligent Systems, Vulnerable Minds: A Framework for Radicalisation to Violence in the Age of AI' has been published in the journal Personality and Social Psychology Review.
Milan Obaidi, Associate Professor
Department of Psychology
Email: [email protected]
T: +45 35 32 91 76
Simon Knokgaard Halskov
UCPH Communications
Email: [email protected]
M: +45 93 56 53 29