IEEE Access (Jan 2024)

Fast Explanation Using Shapley Value for Object Detection

  • Michihiro Kuroki,
  • Toshihiko Yamasaki

DOI
https://doi.org/10.1109/ACCESS.2024.3369890
Journal volume & issue
Vol. 12
pp. 31047–31054

Abstract


In explainable artificial intelligence (XAI) for object detection, saliency maps are employed to highlight regions that are important for a learned model’s prediction. However, a trade-off exists: the more accurate the explanation, the higher the computational cost, which poses a challenge for practical applications. This study therefore proposes a novel XAI method for object detection that addresses this challenge. In recent years, XAI research has widely adopted the Shapley value because it satisfies desirable properties for explanatory validity. A common drawback across these approaches, however, is their high computational cost, which has hindered broad adoption. Our proposed method uses an explainer model that learns to estimate the Shapley value and provides a reliable explanation for object detection in real time at inference. This framework can be applied to various object detectors in a model-agnostic manner. Through quantitative evaluation, we experimentally demonstrate that our method produces explanations faster than existing methods while delivering superior performance.
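For context, the Shapley value that the abstract refers to attributes a model's output to input components (e.g., image regions) by averaging each component's marginal contribution over all subsets of the others. The following is a minimal, illustrative sketch of the exact computation with a toy value function; the region names and the `score` function are hypothetical and are not the paper's implementation, which instead trains an explainer model to estimate these values quickly.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value_fn):
    """Exact Shapley values: for each player, sum its marginal
    contribution value_fn(S ∪ {p}) - value_fn(S) over all subsets S
    of the other players, weighted by |S|!(n-|S|-1)!/n!."""
    players = list(players)
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(n):
            for subset in combinations(others, r):
                s = set(subset)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[p] += weight * (value_fn(s | {p}) - value_fn(s))
    return phi

# Toy value function: a hypothetical detector "confidence" that depends
# on which image regions are kept visible -- purely illustrative.
def score(region_set):
    if {"head", "torso"} <= region_set:
        return 1.0
    return 0.3 if region_set else 0.0

phi = shapley_values(["head", "torso", "background"], score)
```

The exact computation enumerates all 2^(n-1) subsets per player, which is the exponential cost the paper's learned explainer is designed to avoid; by the efficiency property, the attributions sum to the value of the full coalition (here, 1.0).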

Keywords