IET Image Processing (Nov 2024)

RDMS: Reverse distillation with multiple students of different scales for anomaly detection

  • Ziheng Chen,
  • Chenzhi Lyu,
  • Lei Zhang,
  • ShaoKang Li,
  • Bin Xia

DOI
https://doi.org/10.1049/ipr2.13210
Journal volume & issue
Vol. 18, no. 13
pp. 3815–3826

Abstract


Unsupervised anomaly detection, often approached as a one-class classification problem, is a critical task in computer vision. Knowledge distillation has emerged as a promising technique for improving anomaly detection accuracy, especially with the advent of reverse distillation networks that employ encoder–decoder architectures. This study introduces RDMS, a novel reverse knowledge distillation framework comprising a pretrained teacher encoding module, a multi-level feature fusion connection module, and a student decoding module consisting of three independent decoders. RDMS is designed to distill distinct features from the teacher encoder, mitigating the overfitting that arises when teacher and student share similar or identical structures. The model achieves an average image-level AUROC of 99.3% and pixel-level AUROC of 98.34% on the MVTec-AD dataset, and it demonstrates state-of-the-art performance on the more challenging BTAD dataset. RDMS's high accuracy in anomaly detection and localization underscores the potential of multi-student reverse distillation to advance unsupervised anomaly detection. The source code is available at https://github.com/zihengchen777/RDMS.
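
The sketch below illustrates the general idea described in the abstract: frozen teacher features are fused and then reconstructed by several independent student decoders, one per scale, with the teacher–student feature discrepancy used as a per-pixel anomaly score. It is a minimal, hedged illustration only; the module names, channel sizes, fusion scheme, and the cosine-distance scoring are assumptions for exposition, not the authors' implementation (see the repository linked above for that).

```python
# Minimal PyTorch-style sketch of multi-student reverse distillation.
# All names, scales, and the cosine-distance score are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecoder(nn.Module):
    """Hypothetical student decoder that reconstructs one teacher feature map."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class RDMSSketch(nn.Module):
    """Fuses frozen teacher features into a bottleneck, then three independent
    students each decode features at a different scale."""
    def __init__(self, chans=(64, 128, 256)):
        super().__init__()
        self.fuse = nn.Conv2d(sum(chans), chans[-1], 1)  # feature-fusion stand-in
        self.students = nn.ModuleList([TinyDecoder(chans[-1], c) for c in chans])

    def forward(self, teacher_feats):
        # teacher_feats: list of frozen encoder maps at three scales (coarsest last)
        h, w = teacher_feats[-1].shape[-2:]
        fused = self.fuse(torch.cat(
            [F.interpolate(f, size=(h, w), mode="bilinear", align_corners=False)
             for f in teacher_feats], dim=1))
        # Each student reconstructs the teacher map at its own resolution.
        return [F.interpolate(dec(fused), size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
                for dec, f in zip(self.students, teacher_feats)]

def anomaly_map(teacher_feats, student_feats, out_size):
    """Per-pixel anomaly score: 1 - cosine similarity, averaged over scales."""
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        d = 1.0 - F.cosine_similarity(t, s, dim=1, eps=1e-6)  # (B, H, W)
        maps.append(F.interpolate(d.unsqueeze(1), size=out_size,
                                  mode="bilinear", align_corners=False))
    return torch.stack(maps, dim=0).mean(dim=0)  # (B, 1, H, W)
```

In this reading, training would minimise the teacher–student feature discrepancy on anomaly-free images only, so at test time regions the students fail to reconstruct receive high scores in the anomaly map.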

Keywords