IEEE Open Journal of Intelligent Transportation Systems (Jan 2024)

Global-Mapping-Consistency-Constrained Visual-Semantic Embedding for Interpreting Autonomous Perception Models

  • Chi Zhang,
  • Meng Yuan,
  • Xiaoning Ma,
  • Ping Wei,
  • Yuanqi Su,
  • Li Li,
  • Yuehu Liu

DOI
https://doi.org/10.1109/OJITS.2024.3418552
Journal volume & issue
Vol. 5
pp. 393 – 408

Abstract


From the perspective of artificial intelligence evaluation, the need to discover and explain the potential shortcomings of the evaluated intelligent algorithms/systems and the need to evaluate the intelligence level of such systems under test are of equal importance. In this paper, we propose a possible solution to these challenges: Explainable Evaluation for visual intelligence. Specifically, we focus on the problem setting where the internal mechanisms of AI algorithms are sophisticated, heterogeneous, or unreachable. In this case, a latent attribute dictionary learning method constrained by mapping consistency is proposed to explain how the performance of visual perception intelligence varies across different test samples. By jointly and iteratively solving the learning of latent concept representations for test samples and the regression from latent concepts to generalization performance, the mapping relationship between the deep representations, semantic attribute annotations, and generalization performance of test samples is established, allowing us to predict the degree to which each semantic attribute influences visual perception generalization performance. The optimal solution of the proposed method can be reached via an alternating optimization process. Through quantitative experiments, we find that global mapping consistency constraints make the learned latent concept representations strictly consistent with the deep representations, thereby improving the accuracy of the computed correlation between semantic attributes and perception performance.
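The alternating optimization described above can be illustrated with a minimal sketch. This is not the paper's implementation; it is a generic toy example, assuming synthetic deep features `X`, per-sample performance scores `y`, a latent concept dictionary `D`, codes `Z`, and a concept-to-performance regressor `w` (all names hypothetical). Each iteration alternates closed-form least-squares updates: codes given the dictionary and regressor, then the regressor given the codes, then the dictionary given the codes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: deep representations X (features x samples)
# and a generalization-performance score y for each test sample.
d, n, k = 20, 100, 5                # feature dim, test samples, latent concepts
X = rng.standard_normal((d, n))
y = rng.standard_normal(n)

D = rng.standard_normal((d, k))     # latent concept dictionary (atoms in columns)
w = np.zeros(k)                     # latent concept -> performance regressor
lam = 1.0                           # weight of the performance-regression term

for _ in range(50):
    # Z-step: per column z_i, minimize ||x_i - D z_i||^2 + lam*(y_i - w^T z_i)^2.
    # Setting the gradient to zero gives (D^T D + lam w w^T) z_i = D^T x_i + lam y_i w.
    A = D.T @ D + lam * np.outer(w, w)
    Z = np.linalg.solve(A, D.T @ X + lam * np.outer(w, y))
    # w-step: least-squares regression of performance on latent codes, y ~ Z^T w.
    w = np.linalg.lstsq(Z.T, y, rcond=None)[0]
    # D-step: least-squares dictionary update for X ~ D Z, then normalize atoms.
    D = X @ Z.T @ np.linalg.pinv(Z @ Z.T)
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-8)

# Recompute codes for the final dictionary so (D, Z, w) are mutually consistent.
Z = np.linalg.solve(D.T @ D + lam * np.outer(w, w), D.T @ X + lam * np.outer(w, y))
rel_err = np.linalg.norm(X - D @ Z) / np.linalg.norm(X)
```

After convergence, the magnitude of each entry of `w` indicates how strongly the corresponding latent concept is associated with performance; the paper's method additionally ties the latent concepts to semantic attribute annotations via the mapping-consistency constraint, which this sketch omits.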

Keywords