Egyptian Informatics Journal (Sep 2024)
Digital forensics for the socio-cyber world (DF-SCW): A novel framework for deepfake multimedia investigation on social media platforms
Abstract
With the rapid growth of social media platforms, the use of editing software tools has increased substantially. Posting media in social communication environments has become a common daily routine. Before posting, various editing tools are used to manipulate pixel values, for example to enhance brightness and contrast. Such software undoubtedly helps turn ordinary media into polished content, but this kind of editing crosses a line when it is used to create fakes: media that no longer retains its originality. This poses a series of issues for multimedia forensic investigation and the chain of custody. To restrict deepfake attempts and make the investigation hierarchy more effective, efficient, and reliable in the socio-cyber space (SCS), this paper presents a novel framework called DF-SCW: a digital forensics-enabled socio-cyber world that applies artificial intelligence (AI), particularly deep neural networks (DNNs), to detect and analyze deepfake media on social media platforms. The framework compares pixels with their neighboring values within the same media (such as images and videos) to infer information about the original content. A media flag is designed to filter out malicious and dangerous attempts, such as a fabricated video of a powerful leader declaring war; flagging such fakes helps digital investigators prevent the posts from being shared. A further aim of this research is to enable the digital forensics ecosystem to make qualitative judgments in real time as media is uploaded to social media platforms. The proposed DF-SCW is evaluated in simulation on three platforms: Instagram, Facebook, and Twitter. In the experiments, DF-SCW improved the detection, identification, and analysis of deepfake media by 3.77%.
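To make the pixel-neighborhood idea mentioned in the abstract concrete, the sketch below shows one simple way such a comparison could look. It is not the authors' DNN pipeline or the DF-SCW implementation; it is a minimal illustrative example assuming a 2-D grayscale NumPy array as input, with the window size, deviation threshold, and the `flag_suspect_regions` helper all hypothetical.

```python
import numpy as np

def neighbour_residual_map(image: np.ndarray, window: int = 3) -> np.ndarray:
    """Compare each pixel with the mean of its local neighbourhood.

    `image` is a 2-D grayscale array; `window` is a hypothetical
    neighbourhood size. Large residuals mark pixels that are
    inconsistent with their surroundings, a simple cue that a region
    may have been altered.
    """
    img = image.astype(np.float64)
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")

    # Mean of the local window around every pixel (naive sliding window).
    local_mean = np.zeros_like(img)
    for dy in range(window):
        for dx in range(window):
            local_mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    local_mean /= window * window

    return np.abs(img - local_mean)

def flag_suspect_regions(image: np.ndarray, threshold: float = 25.0) -> bool:
    """Hypothetical 'media flag': True if an unusually large share of
    pixels deviates strongly from their neighbourhood mean."""
    residual = neighbour_residual_map(image)
    return float((residual > threshold).mean()) > 0.05
```

In practice, a framework like the one described would feed richer neighborhood features into a trained DNN rather than rely on a fixed threshold; the sketch only illustrates the underlying intuition of comparing pixels against their surroundings.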