Authors: Sarvesh S. Gharat, Dr. Puja Padiya, Dr. Amarsinh Vidhate
Date of Publication: 5th July 2025
Abstract: Deepfake detection is the task of identifying synthetically generated or manipulated facial images that closely resemble real human faces. Such images, often produced by advanced generative adversarial networks (GANs) such as StyleGAN2, pose a significant challenge for traditional detection systems because of their high realism and subtle artifacts. Traditional machine learning models and shallow neural networks often fail to distinguish real from fake faces effectively, largely because they cannot capture intricate pixel-level features and contextual semantics within images. This study addresses these limitations by applying advanced deep learning techniques, including convolutional neural networks (CNNs) and several state-of-the-art pretrained models (VGG16, InceptionResNet, Xception, MobileNet, and EfficientNet-B2), leveraging transfer learning for improved performance. Extensive experiments were conducted on a robust image dataset containing both real and synthetic faces. Each model was fine-tuned for binary classification (Real vs. Fake) and evaluated using precision, recall, F1-score, accuracy, and the confusion matrix. Among all models, EfficientNet-B2 enhanced with an attention mechanism performed best, achieving 83% accuracy. The attention mechanism allows the model to focus more effectively on discriminative facial features, making it particularly robust against complex deepfakes. This research introduces a novel and efficient framework for real-time, high-accuracy deepfake detection.
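To make the described approach concrete, the following is a minimal sketch of how an ImageNet-pretrained EfficientNet-B2 backbone can be combined with a lightweight channel-attention (squeeze-and-excitation style) head for binary Real-vs-Fake classification via transfer learning. The input resolution, attention design, layer sizes, and training hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: EfficientNet-B2 + channel attention for Real-vs-Fake classification.
# All hyperparameters and the SE-style attention head are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_detector(input_shape=(260, 260, 3)):
    # ImageNet-pretrained backbone, frozen for the initial transfer-learning phase
    backbone = tf.keras.applications.EfficientNetB2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = backbone(inputs, training=False)          # spatial feature map (H, W, C)

    # Squeeze-and-excitation style channel attention: re-weight feature channels
    channels = x.shape[-1]
    se = layers.GlobalAveragePooling2D()(x)       # "squeeze": per-channel mean
    se = layers.Dense(channels // 16, activation="relu")(se)
    se = layers.Dense(channels, activation="sigmoid")(se)
    se = layers.Reshape((1, 1, channels))(se)
    x = layers.Multiply()([x, se])                # "excite": scale each channel

    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # Real (0) vs. Fake (1)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model

model = build_detector()
model.summary()
```

In a typical transfer-learning workflow of this kind, the head is trained first with the backbone frozen, after which some top backbone blocks may be unfrozen and fine-tuned at a lower learning rate; the exact schedule here is an assumption rather than the paper's reported procedure.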