Big Data Mining and Analytics (Sep 2024)

Prompting Large Language Models with Knowledge-Injection for Knowledge-Based Visual Question Answering

  • Zhongjian Hu,
  • Peng Yang,
  • Fengyuan Liu,
  • Yuan Meng,
  • Xingyu Liu

DOI
https://doi.org/10.26599/BDMA.2024.9020026
Journal volume & issue
Vol. 7, no. 3
pp. 843 – 857

Abstract


Previous works employ Large Language Models (LLMs) such as GPT-3 for knowledge-based Visual Question Answering (VQA). We argue that the inferential capacity of an LLM can be enhanced through knowledge injection. Although methods that use knowledge graphs to enhance LLMs have been explored in various tasks, they have limitations, such as failing to retrieve the required knowledge. In this paper, we introduce a novel framework for knowledge-based VQA titled “Prompting Large Language Models with Knowledge-Injection” (PLLMKI). We use a vanilla VQA model to inspire the LLM and further enhance the LLM with knowledge injection. Unlike earlier approaches, we adopt an LLM for knowledge enhancement instead of relying on knowledge graphs. Furthermore, we leverage open LLMs, incurring no additional costs. In comparison to existing baselines, our approach yields accuracy improvements of over 1.3 and 1.7 points on two knowledge-based VQA datasets, namely OK-VQA and A-OKVQA, respectively.
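The pipeline the abstract outlines — a vanilla VQA model proposes candidate answers and an open LLM supplies supporting knowledge, both folded into a text prompt for the answering LLM — can be sketched as below. The template, field names, and confidence format are illustrative assumptions; the paper's actual prompt design is not given in the abstract.

```python
def build_knowledge_injected_prompt(question, caption, candidates, knowledge):
    """Assemble a knowledge-injected VQA prompt.

    Hypothetical format, not the paper's exact template:
    - caption: image description (LLMs cannot see the image directly)
    - candidates: (answer, confidence) pairs from a vanilla VQA model
    - knowledge: supporting text elicited from an open LLM
    """
    candidate_str = ", ".join(f"{ans} ({conf:.2f})" for ans, conf in candidates)
    return (
        f"Context: {caption}\n"
        f"Knowledge: {knowledge}\n"
        f"Question: {question}\n"
        f"Candidates: {candidate_str}\n"
        "Answer:"
    )

# Example usage with made-up inputs:
prompt = build_knowledge_injected_prompt(
    question="What sport is being played?",
    caption="A man swings a bat on a grass field.",
    candidates=[("baseball", 0.82), ("cricket", 0.10)],
    knowledge="A bat of this shape is typically used in baseball.",
)
```

The resulting string would then be sent to an open LLM, whose completion after "Answer:" is taken as the prediction.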

Keywords