Jisuanji kexue yu tansuo (Aug 2023)

Cascaded Two-Stream Attention Networks for Traceability Analysis of Copy-Move Images

  • JI Yanqing, ZHANG Yujin

DOI
https://doi.org/10.3778/j.issn.1673-9418.2203118
Journal volume & issue
Vol. 17, no. 8
pp. 1981 – 1994

Abstract

Copy-move is a common form of image forgery. Traditional methods are devoted to locating the tampered regions of copy-move images, but accurately distinguishing the source region from the target region has become a bottleneck in the field of image forensics. At present, algorithms that locate and separate the tampering source and target regions in copy-move forged images still have shortcomings. Therefore, this paper proposes a cascaded two-stream attention network for traceability analysis of copy-move images. The network is divided into two stages. The first stage consists of an encoder, a feature analysis module, and a decoder. In the encoder, the lightweight MobileNetV2 is used as the backbone to extract low-level and deep features as the two outputs of the network. In the feature analysis module, tampered regions in the deep features are captured across multiple dimensions by a similar-feature attention mechanism and an atrous spatial pyramid pooling module, while the low-level features are used to improve the segmentation of the edges and fine details of the tampered regions. In the decoder, the feature map is predicted pixel by pixel and upsampled. In the second stage, the tampered regions detected by the first stage are distinguished as source or target. This stage is also a two-stream network: the inputs of the two branches are the original image blocks containing the source or target region and the corresponding image blocks after noise extraction. Multi-scale features are used to predict the category, and the final mask is produced by region mapping. Experimental results show that the proposed network can not only locate the tampered regions but also distinguish the source from the target. Compared with the latest algorithm, the first-stage performance on the test dataset and two public datasets is increased by 9.4, 2.6, and 2.5 percentage points respectively, and the end-to-end performance on the test dataset is improved by 12.03%. The network also shows better robustness to conventional image post-processing.
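The following is a minimal sketch of the first-stage layout described above (MobileNetV2 encoder, similar-feature attention plus atrous spatial pyramid pooling on the deep features, and a decoder that fuses low-level features), assuming a PyTorch implementation. Module names, channel sizes, and the low/deep fusion scheme are illustrative assumptions rather than the authors' exact design, and the similar-feature attention block is approximated by a plain non-local self-attention.

```python
# Illustrative sketch only; not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated convolutions."""
    def __init__(self, in_ch, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class SimilarFeatureAttention(nn.Module):
    """Stand-in for the similar-feature attention: non-local self-attention
    that correlates every spatial position with every other one, so that
    duplicated (copy-moved) regions reinforce each other."""
    def __init__(self, ch):
        super().__init__()
        self.query = nn.Conv2d(ch, ch // 8, 1)
        self.key = nn.Conv2d(ch, ch // 8, 1)
        self.value = nn.Conv2d(ch, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # B x HW x C'
        k = self.key(x).flatten(2)                     # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)            # B x HW x HW
        v = self.value(x).flatten(2).transpose(1, 2)   # B x HW x C
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out + x


class StageOneNet(nn.Module):
    """Encoder (MobileNetV2) -> feature analysis (attention + ASPP) -> decoder."""
    def __init__(self):
        super().__init__()
        backbone = mobilenet_v2(weights=None).features
        self.low_level = backbone[:4]     # early layers: low-level features (24 ch)
        self.deep = backbone[4:]          # remaining layers: deep features (1280 ch)
        self.attention = SimilarFeatureAttention(1280)
        self.aspp = ASPP(1280)
        self.reduce_low = nn.Conv2d(24, 48, 1)
        self.decoder = nn.Sequential(
            nn.Conv2d(256 + 48, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 1, 1),         # pixel-wise tamper-probability map
        )

    def forward(self, x):
        low = self.low_level(x)
        deep = self.deep(low)
        feat = self.aspp(self.attention(deep))
        feat = F.interpolate(feat, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        mask = self.decoder(torch.cat([feat, self.reduce_low(low)], dim=1))
        return F.interpolate(mask, size=x.shape[2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    net = StageOneNet()
    print(net(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```

The second stage would then crop the detected regions, feed the raw blocks and their noise residuals through two parallel branches, and classify each region as source or target; that part is omitted here for brevity.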

Keywords