Københavns Universitet

Artificial intelligence may accelerate the path to radicalisation

7 May 2026

Extremism

AI algorithms and psychological vulnerabilities can interact and increase the risk of violent extremism. This is demonstrated by a new theoretical model developed by an international team of researchers.

Photo: Saradasish Pradhan, Unsplash

How are ordinary people drawn into extremist circles - and what role can artificial intelligence play in that process?

This question is addressed by a new study which, for the first time, combines psychological theories of radicalisation with knowledge of modern AI technologies such as recommendation algorithms, generative AI and botnets.

'We have developed a comprehensive model that shows how digital systems can exploit - or amplify - people's social and psychological needs in ways we do not yet fully understand,' explains Milan Obaidi, associate professor at the Department of Psychology at the University of Copenhagen.

Anger grows step by step

Radicalisation rarely begins as a sudden upheaval. Instead, individuals move gradually through a process in which digital technologies and psychological vulnerabilities can influence one another.

The study divides this process into four key phases:

  1. Exposure - algorithms present users with polarising or extreme content, often without the user actively seeking it out.
  2. Reinforcement - repeated exposure and algorithmic personalisation create echo chambers and entrench newly formed attitudes.
  3. Group integration - online communities and even AI-generated 'peers' can create strong bonds of identity reminiscent of group membership.
  4. Violent acts - in rare cases, this development can culminate in violent extremism.

According to the researchers, AI systems can be seen as a kind of accelerator: they can identify psychologically vulnerable individuals, tailor content and create synthetic communities that resemble human interactions.

'We are seeing an environment where users are not only exposed to extreme content, but also have it reflected back to them by algorithms in ways that can amplify their sense of meaning, anger or injustice,' says Milan Obaidi, adding:

'It is the combination of the technology's scalability and people's psychological needs that makes this development particularly worrying.'

Generative AI introduces entirely new risks

Whereas recommendation algorithms primarily control what content the user sees, generative models such as large language models add a new layer: they can create the content that radicalises.

AI can:

  • Produce vast amounts of personalised propaganda.
  • Simulate communities via swarms of bots.
  • Act as 'AI companions' that reinforce users' extreme beliefs.
  • Create highly convincing deepfakes and manipulated material.

'This development may make it harder to distinguish between human and non-human influences - and thus amplify radicalisation processes that were previously limited by human labour,' highlights Milan Obaidi.

Psychological vulnerability plays a crucial role

The study emphasises that not all users are equally vulnerable. AI particularly affects people who are already experiencing social isolation, identity insecurity, injustice or marginalisation - or a need for clarity, order and strong group affiliations.

The researchers behind the study
  • Jonas R. Kunst, University of Oslo
  • Milan Obaidi, University of Copenhagen
  • Anton Gollwitzer, BI Norwegian Business School and Max Planck Institute
  • Petter B. Brandtzæg, University of Oslo
  • Yannic Hinrichs, University of Oslo
  • Neha Saini, University of Oslo
  • Daniel T. Schroeder, SINTEF Digital

Precisely because AI systems are designed to maximise engagement, they may inadvertently exploit these very vulnerabilities - without any ideological intent.

'It is important to emphasise that AI does not create radicalisation out of the blue. But the technology can amplify known psychological mechanisms and make it easier for extreme ideas to gain a foothold among those who are already at risk,' says Milan Obaidi.

The study 'Intelligent Systems, Vulnerable Minds: A Framework for Radicalisation to Violence in the Age of AI' has been published in the journal Personality and Social Psychology Review.

Contact

Milan Obaidi, Associate Professor
Department of Psychology
Email: [email protected]
T: +45 35 32 91 76

Simon Knokgaard Halskov
UCPH Communications
Email: [email protected]
M: +45 93 56 53 29
