Zhejiang Daxue xuebao. Lixue ban (Jan 2025)

SegRD++: Revisiting RD++ for anomaly detection

  • 张浚然(ZHANG Junran),
  • 曹俊杰(CAO Junjie)

DOI
https://doi.org/10.3785/j.issn.1008-9497.2025.01.014
Journal volume & issue
Vol. 52, no. 1
pp. 133 – 145

Abstract


Recently, anomaly detection methods based on the reverse distillation architecture have generally achieved good results on various datasets, and among them the RD++ method stands out in both its design and its experimental results. However, the inference stage of RD++ is the same as in most previous knowledge distillation methods: it still summarizes the feature differences between the teacher network and its student counterpart in an empirical, hand-crafted way. We improve RD++ by adding a segmentation network to its model and training that network to replace empirical inference for anomaly detection and localization. To further improve the segmentation network's performance, an attention-based fusion (ABF) module and a hierarchical context loss (HCL) are added to the RD++ model, and the original pseudo-anomaly mechanism of RD++ is replaced by a new mechanism that generates more natural anomalies. Based on these four improvements, we call the resulting method SegRD++. Experiments on the MVTec AD dataset show that SegRD++ significantly improves performance compared with RD++, and that each of the added components contributes to this improvement. The source code for SegRD++ is available at https://github.com/JRZhang323/SegRRD.
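The "empirical inference" that the abstract says SegRD++ replaces can be illustrated with a minimal sketch. In reverse-distillation methods such as RD++, the anomaly map at test time is typically computed as the per-pixel cosine distance between teacher and student feature maps, averaged across scales, rather than by any trained decision module. The function below is a hypothetical NumPy illustration of that hand-crafted step (feature shapes and the equal-resolution assumption are simplifications, not the paper's implementation); SegRD++'s contribution is to learn this mapping with a segmentation network instead.

```python
import numpy as np

def empirical_anomaly_map(teacher_feats, student_feats, eps=1e-8):
    """Hand-crafted inference used by RD-style detectors (sketch):
    per-pixel cosine distance between teacher and student features,
    averaged over scales. SegRD++ replaces this empirical summary
    with a trained segmentation network."""
    maps = []
    for t, s in zip(teacher_feats, student_feats):
        # t, s: (C, H, W) feature maps from one backbone scale
        num = (t * s).sum(axis=0)
        den = np.linalg.norm(t, axis=0) * np.linalg.norm(s, axis=0) + eps
        maps.append(1.0 - num / den)  # large where features disagree
    # real pipelines upsample each map to image size first;
    # here all scales are assumed to share the same H x W
    return np.mean(maps, axis=0)

rng = np.random.default_rng(0)
teacher = [rng.normal(size=(8, 4, 4)) for _ in range(2)]
# a well-trained student mimics the teacher on normal regions,
# so the cosine-distance map stays close to zero
student = [f + 0.01 * rng.normal(size=f.shape) for f in teacher]
amap = empirical_anomaly_map(teacher, student)
print(amap.shape)  # (4, 4)
```

Because this aggregation is fixed rather than learned, it cannot adapt to which feature discrepancies actually indicate defects, which is the motivation the abstract gives for training a segmentation network in its place.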

Keywords