03/06/2026 | News release | Archived content
Some deepfake videos and photos are amusing, but the technology is also widely abused. So what exactly is a deepfake? What impact does it have on insurers? And how can deepfake fraud be prevented? Parya Lotfi, CEO of DuckDuckGoose, explains in three articles.
Part 3: about how insurers can tackle deepfake fraud.
She studied Systems Engineering and Policy Analysis at TU Delft. During her studies, she researched new technologies that can be fascinating on the one hand, and harmful to society on the other. For individuals. And for companies and governments. This led to the establishment of DuckDuckGoose in 2020. Since then, Lotfi, co-founder and CEO of the tech company, and her team members have been developing deepfake detection for companies and governments.
In the first and second articles of this triptych, Lotfi emphasised that insurers must be aware of the large scale on which photos, videos and documents are manipulated and traded. "Fraud has always existed, of course, but fraud with deepfake technology is of a different order. Firstly, the technology is becoming more and more accessible. And it can be scaled up quickly, something that is already happening on the dark web. It is shifting from individual incidents to large-scale, repeatable attacks in which the same techniques are used simultaneously at multiple organisations."
Lotfi still finds it difficult to estimate the exact consequences of this type of fraud in the longer term. But according to the AI expert, the impact will go well beyond unjustified payouts. "Also think about the psychological impact on employees. What does it do to people when they can no longer tell the difference between real and fake? How does that affect their trust in themselves, in colleagues and in customers? And how do people and processes remain efficient when volumes increase? For insurers, this can mean that traditional checks come under pressure."
If AI is able to generate convincing fake images, documents and voices, a counterforce is needed that is at least as scalable and smart. Not to imitate fraudsters, but to recognise and understand their techniques. The starting point of Lotfi and her colleagues is therefore: to catch a thief, you have to think like a thief. "Everything created by generative AI has some kind of AI DNA in it. And that can be recognised with the help of AI. So fraud committed with AI can also be combated with AI."
But technology alone is not enough. Effective protection only emerges when detection is integrated into processes early on and combined with clear working arrangements and human review. That doesn't mean employees have to learn to spot deepfakes themselves; that will simply become unfeasible. More important is that employees learn when extra verification is needed, and how to factor signals from detection systems into their decision-making.
Lotfi: "Train employees in awareness, risk thinking and in collaboration with, for example, detection systems. The future is an interplay in which AI provides signals and interpretation. And in which employees learn how to translate that into well-founded decisions and actions. And because development is going so fast, continuous training is needed more than ever," she concludes.
Lotfi will lead a sub-session on March 26 during De Boef de Baas, an event of the Association that this year focuses entirely on technology and AI as both weapon and opponent in the fight against fraud and crime. It is relevant for strategists and policymakers, security managers, fraud and incident investigators, data analysts, cybersecurity specialists, risk officers and professionals working on digital resilience.
In 2025, the Resilient and Vigilant vision on tackling fraud and (cyber) crime in the insurance sector will be published.