IET Computer Vision (Mar 2024)

StableNet: Distinguishing the hard samples to overcome language priors in visual question answering

  • Zhengtao Yu,
  • Jia Zhao,
  • Chenliang Guo,
  • Ying Yang

DOI: https://doi.org/10.1049/cvi2.12249
Journal volume & issue: Vol. 18, No. 2, pp. 315–327

Abstract

With the booming fields of computer vision and natural language processing, cross-modal tasks such as visual question answering (VQA) have become very popular. However, several studies have shown that many VQA models suffer from severe language-prior problems. After a series of experiments, the authors found that previous VQA models are unstable: when training is repeated several times on the same dataset, the distributions of the predicted answers differ significantly between runs, and the models also perform unsatisfactorily in terms of accuracy. This instability arises because some difficult samples seriously interfere with training, so the authors design a method to quantitatively measure model stability and further propose a method that alleviates both the imbalance and the instability. Precisely, question types are classified into simple and difficult ones, and different weighting measures are applied to each. By imposing constraints on the training process for both types of questions, the stability and accuracy of the model improve. Experimental results demonstrate the effectiveness of the method, which achieves 63.11% accuracy on VQA-CP v2 and 75.49% with the addition of a pre-trained model.
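The abstract does not give formulas for either the stability measure or the weighting scheme. The sketch below, in PyTorch, illustrates one plausible reading: stability_gap compares the predicted-answer distributions of two repeated training runs via total-variation distance, and weighted_vqa_loss applies different weights to the simple and difficult question groups. All function names and the weight values w_simple and w_difficult are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

def answer_distribution(pred_ids, num_answers):
    # Empirical distribution of predicted answer ids over a dataset.
    counts = torch.bincount(pred_ids, minlength=num_answers).float()
    return counts / counts.sum()

def stability_gap(dist_a, dist_b):
    # Total-variation distance between the answer distributions of two
    # training runs: 0 means identical predictions, 1 maximal disagreement.
    return 0.5 * torch.abs(dist_a - dist_b).sum()

def weighted_vqa_loss(logits, targets, is_difficult,
                      w_simple=1.0, w_difficult=0.5):
    # Per-sample cross-entropy, re-weighted by question difficulty so that
    # difficult questions interfere less with training (weights hypothetical).
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(is_difficult,
                          torch.full_like(per_sample, w_difficult),
                          torch.full_like(per_sample, w_simple))
    return (weights * per_sample).mean()

Under this reading, a lower stability_gap across repeated runs would indicate a more stable model, which is the property the paper's quantitative measure is meant to capture.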

Keywords