IEEE Access (Jan 2023)

Autoencoder-Like Knowledge Distillation Network for Anomaly Detection

  • Caie Xu,
  • Bingyan Wang,
  • Dandan Ni,
  • Jin Gan,
  • Mingyang Wu,
  • Wujie Zhou

DOI
https://doi.org/10.1109/ACCESS.2023.3314199
Journal volume & issue
Vol. 11
pp. 100622–100631

Abstract

Anomaly detection is a crucial research field in computer vision with diverse practical applications. Common anomaly detection methods include autoencoders, generative adversarial networks, and knowledge distillation (KD) models. However, because the teacher and student models in KD share a similar structure and data flow, they do not always yield sufficiently distinct representations to signal anomalies. This study proposes a novel autoencoder-like KD model based on the attention mechanism for anomaly detection. The pre-trained teacher model incorporates a dual attention module as the encoder, while the student model integrates the same dual attention module as the decoder; the teacher guides the student to learn the feature knowledge of the input image. To connect the teacher and student, a BottleNeck module converts the features extracted by the teacher into more compact latent codes for precise restoration by the student, thereby achieving anomaly detection. The proposed model outperforms existing anomaly detection models on the datasets evaluated. Experimental results demonstrate that it attains state-of-the-art (SOTA) performance on the public MVTec dataset, achieving average AUCs of 98.2% and 98.0% at the sample and pixel levels, respectively.
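In KD-based anomaly detectors of this kind, the anomaly score is typically derived from the discrepancy between the teacher's features and the student's restored features: on normal images the student reproduces the teacher closely, while on anomalous regions the restoration fails. The following is a minimal NumPy sketch of that scoring step only (not the paper's network); the function name, the cosine-distance choice, and the feature shape `(C, H, W)` are illustrative assumptions.

```python
import numpy as np

def anomaly_scores(teacher_feats, student_feats, eps=1e-8):
    """Compute a per-pixel anomaly map and a sample-level score
    from teacher/student feature maps of shape (C, H, W).

    Illustrative sketch: uses cosine distance between the channel
    vectors at each spatial location as the discrepancy measure.
    """
    # L2-normalize channel vectors at every spatial location
    t = teacher_feats / (np.linalg.norm(teacher_feats, axis=0, keepdims=True) + eps)
    s = student_feats / (np.linalg.norm(student_feats, axis=0, keepdims=True) + eps)
    # 1 - cosine similarity: 0 where the student restores the teacher exactly
    pixel_map = 1.0 - np.sum(t * s, axis=0)          # shape (H, W), pixel-level score
    sample_score = float(pixel_map.max())            # image-level score
    return pixel_map, sample_score
```

Pixel-level AUC would be computed over `pixel_map` against ground-truth masks, and sample-level AUC over `sample_score` across the test set; taking the maximum over the map is one common aggregation choice, not necessarily the paper's.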

Keywords