Authors:
N. Prananya, Prathvi U. Shetty, Preethi Shenoy, R. Vaishnavi, T. Shreekumar
Addresses:
Department of Computer Science and Engineering, Mangalore Institute of Technology and Engineering, Dakshina Kannada, Karnataka, India.
Abstract: DeepFakes pose a serious threat in today's digital world, undermining privacy, authenticity, and the integrity of information in social, academic, and professional settings. This research introduces DeepGuard, an AI-based framework engineered to detect, identify, and mitigate DeepFake threats in images and videos before they are disseminated online. The proposed system combines Optical Flow Analysis with Generative Adversarial Network (GAN)-based detection to identify both spatial and temporal inconsistencies in manipulated media. DeepGuard comprises a TensorFlow-based deep learning model for risk assessment, a MySQL database for efficiently managing user interactions and data, and a Flask-based web application that provides users with personalised predictive insights and authenticity recommendations based on their input. The platform is designed to be user-friendly, scalable, and available for real-time verification. Experimental evaluation demonstrates a high detection rate, rapid identification of fake visual content, and reliable authenticity assessment across a wide range of datasets. The results show that the hybrid analytical method is highly effective at distinguishing real media from fabricated content. Overall, this study offers a thorough and practical approach to strengthening digital trust, improving media verification processes, and making communication safer at a time when AI-generated misinformation is becoming increasingly common.
Keywords: DeepFake Detection; Generative Adversarial Network (GAN); Media Authenticity; Optical Flow Analysis; Convolutional Neural Network (CNN); Image and Video Forensics.
Received on: 17/02/2025, Revised on: 24/04/2025, Accepted on: 08/07/2025, Published on: 03/01/2026
DOI: 10.69888/FTSIN.2026.000602
FMDB Transactions on Sustainable Intelligent Networks, 2026 Vol. 3 No. 1, Pages: 15–25