IEEE Access (Jan 2023)

SCGAN: Extract Features From Normal Semantics for Unsupervised Anomaly Detection

  • Yang Dai,
  • Lin Zhang,
  • Fu-You Fan,
  • Ya-Juan Wu,
  • Ze-Kuan Zhao

DOI
https://doi.org/10.1109/ACCESS.2023.3339780
Journal volume & issue
Vol. 11
pp. 137957–137968

Abstract

Anomaly detection for industrial products aims to identify image regions whose semantics deviate from established normal patterns. Because anomalous samples are inherently difficult to collect, we extract features exclusively from normal semantics. We propose a Semantic CopyPaste-based Generative Adversarial Network (SCGAN) for unsupervised anomaly detection. To capture the semantic features of complex real-world images comprehensively, we adopt an encoder-decoder-encoder structure as the backbone network. Concretely, each input image first passes through the CopyPaste augmentation module, which copies N patches, each covering 1% of the image area, from normal samples and pastes them at random locations in the original image. A generative adversarial network is then trained to reconstruct the samples. To strengthen the network's channel attention, a multi-scale channel attention module is incorporated into the first encoder, emphasizing contextual features at multiple scales within the image. At test time, anomalous regions are detected by comparing the residual between the input image and its reconstruction. Our method is validated through extensive experiments on the challenging MVTec and BTAD public datasets, and the results confirm state-of-the-art anomaly detection performance.
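The two operations the abstract describes most concretely are the CopyPaste augmentation (copy N patches of 1% area from normal samples, paste at random positions) and the test-time residual comparison. A minimal NumPy sketch of both is given below; the square patch shape, the function names, and the self-copying variant (copying patches from the same normal image) are assumptions for illustration, since the abstract does not fix these details.

```python
import numpy as np

def copy_paste_augment(image, n_patches=4, area_frac=0.01, rng=None):
    """Copy n_patches patches, each covering ~area_frac of the image area,
    from the (normal) image and paste them at random locations.
    Square patches and self-copying are assumptions; the paper states only
    N patches of 1% area taken from normal samples."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    side = max(1, int(np.sqrt(area_frac * h * w)))  # side of a square 1%-area patch
    out = image.copy()
    for _ in range(n_patches):
        # random source location within the normal sample
        ys, xs = rng.integers(0, h - side), rng.integers(0, w - side)
        # random destination location in the augmented image
        yd, xd = rng.integers(0, h - side), rng.integers(0, w - side)
        out[yd:yd + side, xd:xd + side] = image[ys:ys + side, xs:xs + side]
    return out

def residual_anomaly_map(x, x_rec):
    """Per-pixel residual between input and reconstruction, averaged over
    channels; large values mark candidate anomalous regions."""
    return np.abs(x.astype(np.float32) - x_rec.astype(np.float32)).mean(axis=-1)
```

In a full pipeline, `copy_paste_augment` would feed the GAN during training, while `residual_anomaly_map` would be applied to the trained generator's output at test time, typically followed by thresholding to localize defects.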

Keywords