Abstract: Deepfake technology has emerged as a significant digital age challenge, allowing the generation of extremely realistic manipulated facial photographs that are difficult to identify. Such synthetic media can be used for disinformation, identity theft, and reputational damage, posing serious social and security risks. This paper proposes an Explainable Deepfake Detection System for Images that not only labels content as original or manipulated but also offers human-understandable justifications. The system is implemented as a Flask-based web application in which users upload face images for real-time analysis. A pre-trained XceptionNet model performs the classification, and Grad-CAM produces heatmaps of suspicious regions. These heatmaps are then projected onto semantically meaningful facial landmarks via MediaPipe Face Mesh to identify regions such as the eyes, mouth, cheeks, and jawline. The system provides both visual overlays and text-based explanations, making its results interpretable for experts and non-technical users alike. In contrast to black-box methods, the proposed system prioritizes transparency and ease of use. Although centered on images, the approach can be extended to video deepfakes through frame-level temporal analysis, making it a credible and explainable forensic tool.
Keywords: Deepfake detection, XceptionNet, Flask, Grad-CAM, MediaPipe, Explainable AI, Image forensics
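The abstract outlines a classify → Grad-CAM → landmark-mapping pipeline. Below is a minimal sketch of how those pieces could fit together, assuming a fine-tuned Keras Xception checkpoint; the file name xception_deepfake.h5, the sigmoid "fake" output head, the 0.5 region threshold, and the landmark-index subset are all illustrative placeholders rather than details from the paper. The Grad-CAM routine follows the standard gradient-weighted class activation mapping recipe.

```python
import cv2
import numpy as np
import tensorflow as tf
import mediapipe as mp

# Hypothetical fine-tuned checkpoint; the paper's actual weights are not specified.
model = tf.keras.models.load_model("xception_deepfake.h5")
LAST_CONV = "block14_sepconv2_act"  # final conv activation layer in Keras Xception

def grad_cam(img_batch, class_idx=0):
    """Standard Grad-CAM: gradient-weighted average of the last conv feature maps."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(LAST_CONV).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_batch)
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pool gradients
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)[0]
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)  # normalize to [0, 1]
    return cam.numpy()

# Illustrative subset of MediaPipe Face Mesh landmark indices per facial region.
REGIONS = {
    "left_eye": [33, 133], "right_eye": [263, 362],
    "mouth": [13, 14, 61, 291], "jawline": [152, 172, 397],
}

def explain(path):
    bgr = cv2.imread(path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (299, 299)).astype("float32")
    x = tf.keras.applications.xception.preprocess_input(x)[None]
    prob_fake = float(model.predict(x)[0][0])  # assumes a sigmoid "fake" output head
    h, w = bgr.shape[:2]
    cam = cv2.resize(grad_cam(x), (w, h))      # upsample heatmap to image size

    with mp.solutions.face_mesh.FaceMesh(static_image_mode=True) as mesh:
        result = mesh.process(rgb)
    hot = []
    if result.multi_face_landmarks:
        lms = result.multi_face_landmarks[0].landmark
        for name, idxs in REGIONS.items():
            # Average heatmap intensity sampled at the region's landmark pixels.
            vals = [cam[min(int(lms[i].y * h), h - 1), min(int(lms[i].x * w), w - 1)]
                    for i in idxs]
            if np.mean(vals) > 0.5:            # threshold is illustrative
                hot.append(name)
    return prob_fake, hot
```

A thin Flask endpoint of the kind the abstract describes could wrap explain() as follows; the route name, upload handling, and response fields are likewise hypothetical.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    # Save the uploaded face image and run the explainable pipeline on it.
    request.files["image"].save("upload.jpg")
    prob_fake, regions = explain("upload.jpg")
    return jsonify({"fake_probability": prob_fake, "suspicious_regions": regions})
```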