MIT - Melbourne Institute of Technology

04/08/2025 | News release

MIT Research Graduate Publishes Groundbreaking Study on DeepFake Detection

Melbourne Institute of Technology (MIT) proudly announces the publication of a significant research paper by Ramcharan Ramanaharan, one of the first graduates of MIT's Master of ICT (Research) program. The article, titled "DeepFake video detection: Insights into model generalisation - A Systematic review", was published in the Elsevier Q2 journal Data and Information Management in March 2025. The project was supervised by Dr. Deepani B. Guruge and Professor Johnson Ihyeh Agbinya of MIT's School of IT & Engineering.
This peer-reviewed study represents a major milestone for both Ramcharan and MIT, contributing to the global discourse on artificial intelligence and video manipulation detection.

Systematic Review of DeepFake Video Detection

As DeepFake technology continues to evolve, detecting AI-generated videos becomes increasingly complex, and undetected fakes pose significant risks to information integrity, privacy, and security. This paper investigates the current landscape of DeepFake video detection techniques, with a particular focus on their generalisability across different datasets and manipulation methods. Through a rigorous screening and selection process, 108 relevant research articles published between January 2018 and February 2024 were included in the review, providing a comprehensive analysis of the strengths and limitations of existing detection approaches.

This systematic review reveals that while DeepFake detection techniques ranging from commonly employed Convolutional Neural Networks (CNNs) and hybrid architectures such as CNN-RNNs and CNN-Transformers to advanced models such as Vision Transformers, Generative Adversarial Networks (GANs), and Capsule Networks have achieved high accuracy in controlled experimental settings, their ability to generalise effectively remains a critical challenge.
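As a concrete illustration of the CNN-based family that dominates the surveyed literature, the minimal sketch below builds a frame-level real-versus-fake classifier on a pretrained ResNet-18 backbone. It is our own PyTorch/torchvision example with placeholder inputs, not code from the paper or from any model it reviews.

    import torch
    import torch.nn as nn
    from torchvision import models

    class FrameDeepFakeDetector(nn.Module):
        """Frame-level detector producing one real-vs-fake logit per frame."""
        def __init__(self):
            super().__init__()
            # Pretrained ResNet-18 backbone; its classifier head is replaced
            # with a single-logit layer for the binary decision.
            self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

        def forward(self, frames):
            # frames: (batch, 3, 224, 224) normalised video frames
            return self.backbone(frames).squeeze(-1)

    detector = FrameDeepFakeDetector()
    frames = torch.randn(4, 3, 224, 224)         # placeholder frames
    fake_prob = torch.sigmoid(detector(frames))  # per-frame fake probability

Hybrid CNN-RNN designs extend this pattern by feeding per-frame CNN features into a recurrent layer to capture temporal inconsistencies across a clip.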

The review identifies several limitations within current detection approaches:

  1. Overfitting to Specific Datasets: Many models achieve high accuracy on the benchmarks they were trained on but fail to maintain that performance on unseen datasets or novel manipulation methods (a sketch of how this gap is measured follows this list).
  2. Lack of Standardisation: Variability in evaluation metrics and dataset characteristics limits reproducibility and objective performance comparison across models.
  3. Computational Complexity: Many of the most accurate detection methods require high computational resources, restricting their usability in real-time or mobile applications.
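The overfitting limitation is commonly quantified through cross-dataset evaluation: a detector is trained on one benchmark and then scored on another it has never seen. The sketch below is our own hedged illustration of that protocol; the trained detector, the benchmark names (FaceForensics++ and Celeb-DF are given only as well-known examples), and the data loaders are assumptions, not artefacts of the paper.

    import torch
    from sklearn.metrics import roc_auc_score

    @torch.no_grad()
    def auc_on(detector, frames, labels):
        """AUC of a detector's fake-probability scores on one dataset."""
        scores = torch.sigmoid(detector(frames)).cpu().numpy()
        return roc_auc_score(labels, scores)

    # Hypothetical usage: train on one benchmark, test on an unseen one.
    # in_domain_auc  = auc_on(detector, ff_test_frames, ff_test_labels)   # FaceForensics++
    # cross_data_auc = auc_on(detector, celebdf_frames, celebdf_labels)   # Celeb-DF
    # A large drop from in_domain_auc to cross_data_auc is the
    # generalisation gap the review identifies.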

Recommendations for Future Research

The paper outlines a series of recommendations to address these challenges:

  1. Develop standardised datasets accompanied by an appropriate scoring system that reflects the types of DeepFakes within the dataset and their associated complexities, enabling consistent benchmarking.
  2. Explore hybrid models and attention-based mechanisms to improve robustness and interpretability.
  3. Develop a well-defined framework to support cross-study comparisons, assist in identifying best practices, and improve reproducibility in DeepFake research.

The authors emphasise the need for generalisable, real-time solutions that are capable of operating effectively outside controlled laboratory settings. This includes strategies that incorporate explainability, resilience to adversarial attacks, and the ability to detect previously unseen DeepFake techniques.

Contribution to the Field

By consolidating and critically assessing existing DeepFake detection models, this systematic review provides a valuable foundation for future advances in generalisable DeepFake detection. It synthesises the strengths and limitations of current approaches and identifies key gaps in the literature, such as the need for diverse datasets that more accurately reflect the wide range of manipulations encountered in real-world applications, and for standardised datasets with a scoring system that captures the types and complexities of DeepFakes.

The findings underscore the urgency of establishing a framework to enhance cross-study comparisons and improve reproducibility in DeepFake research, ultimately contributing to robust detection methods that can mitigate the risks DeepFakes pose to security, media, and public trust. This publication stands as a testament to the calibre of research undertaken at MIT and highlights the importance of academic inquiry in addressing emerging technological threats.

Read Full Paper Here

Authors:

Ramcharan Ramanaharan

Graduate, Master of ICT (Research)

Dr. Deepani B. Guruge

Senior Lecturer, School of IT & Engineering

Professor Johnson Ihyeh Agbinya

Professor of Artificial Intelligence & Machine Learning

About the Master of ICT (Research) at MIT

The Master of ICT (Research) at Melbourne Institute of Technology is designed to equip students with advanced knowledge and skills in ICT, while also fostering independent research capabilities. The program allows students to undertake a significant research project under the supervision of experienced academics, contributing to knowledge creation in fast-evolving areas such as artificial intelligence, cybersecurity, and data analytics.
Ramcharan's success reflects the program's commitment to cultivating research excellence and industry-relevant innovation.

Learn more about this course