04/08/2025 | News release | Distributed by Public on 04/07/2025 19:27
Melbourne Institute of Technology (MIT) proudly announces the publication of a significant research paper by Ramcharan Ramanaharan, one of the first graduates from MIT's Master of ICT (Research) program. The article, titled "DeepFake video detection: Insights into model generalisation - A Systematic review", was published in the Elsevier Q2 journal Data and Information Management in March 2025. This project was supervised by Dr. Deepani B. Guruge and Professor Johnson Ihyeh Agbinya from MIT School of IT & Engineering.
This peer-reviewed study represents a major milestone for both Ramcharan and MIT, contributing to the global discourse on artificial intelligence and video manipulation detection.
As DeepFake technology continues to evolve, the challenge of detecting AI-generated videos becomes increasingly complex, fuelling misinformation and posing significant risks to privacy and security. This paper investigates the current landscape of DeepFake video detection techniques, with a particular focus on their generalisability across different datasets and manipulation methods. Through a rigorous screening and selection process, 108 relevant research articles published between January 2018 and February 2024 were included in the review, providing a comprehensive analysis of the strengths and limitations of existing detection approaches.
This systematic review reveals that, while a wide range of DeepFake detection techniques have achieved high accuracy in controlled experimental settings, their ability to generalise effectively remains a critical challenge. These techniques include commonly employed Convolutional Neural Networks (CNNs), hybrid architectures such as CNN-RNNs and CNN-Transformers, and advanced models like Vision Transformers, Generative Adversarial Networks (GANs), and Capsule Networks.
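The generalisation gap described above can be illustrated with a toy sketch. Everything here is illustrative rather than drawn from the paper: `toy_detector` is a hypothetical stand-in for any trained model, reduced to thresholding a single "artifact score", mimicking a detector that has overfit to the artifacts of one manipulation method.

```python
# Hypothetical sketch: high in-dataset accuracy can mask poor
# cross-dataset generalisation for a DeepFake detector.

def toy_detector(artifact_score: float, threshold: float = 0.5) -> bool:
    """Predict 'fake' when the artifact score exceeds the threshold."""
    return artifact_score > threshold

def accuracy(samples) -> float:
    """samples: list of (artifact_score, is_fake) pairs."""
    correct = sum(toy_detector(score) == label for score, label in samples)
    return correct / len(samples)

# In-dataset: fakes exhibit the artifact the detector learned (high scores).
in_dataset = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]

# Cross-dataset: a newer manipulation leaves weaker artifacts, so fake
# samples score low and slip past the fixed threshold.
cross_dataset = [(0.4, True), (0.3, True), (0.2, False), (0.1, False)]

print(accuracy(in_dataset))     # 1.0
print(accuracy(cross_dataset))  # 0.5
```

The detector looks perfect on the data it was tuned to, yet drops to chance on the unseen manipulation, which is exactly the failure mode the review's cross-dataset evaluations expose.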
The review identifies several limitations within current detection approaches and outlines a series of recommendations to address them.
The authors emphasise the need for generalisable, real-time solutions capable of operating effectively outside controlled laboratory settings, including strategies that incorporate explainability, resilience to adversarial attacks, and the ability to detect previously unseen DeepFake techniques.
By consolidating and critically assessing existing DeepFake detection models, this systematic review provides a valuable foundation for future advances in generalisable DeepFake detection. It synthesises the strengths and limitations of current approaches and identifies key gaps in the literature, such as the need for diverse datasets that more accurately reflect the wide range of DeepFake manipulations encountered in real-world applications. The review also highlights the importance of developing standardised datasets with a scoring system that reflects the types and complexities of DeepFakes.

The findings underscore the urgency of establishing a framework to enhance cross-study comparisons and improve reproducibility in DeepFake research, ultimately contributing to robust detection methods that can mitigate the risks DeepFakes pose across domains including security, media, and public trust. This publication stands as a testament to the calibre of research undertaken at MIT and highlights the importance of academic inquiry in addressing emerging technological threats.
The Master of ICT (Research) at Melbourne Institute of Technology is designed to equip students with advanced knowledge and skills in ICT, while also fostering independent research capabilities. The program allows students to undertake a significant research project under the supervision of experienced academics, contributing to knowledge creation in fast-evolving areas such as artificial intelligence, cybersecurity, and data analytics.
Ramcharan's success reflects the program's commitment to cultivating research excellence and industry-relevant innovation.