Big Data and Cognitive Computing (Sep 2024)

QA-RAG: Exploring LLM Reliance on External Knowledge

  • Aigerim Mansurova,
  • Aiganym Mansurova,
  • Aliya Nugumanova

DOI
https://doi.org/10.3390/bdcc8090115
Journal volume & issue
Vol. 8, no. 9
p. 115

Abstract


Large language models (LLMs) can store factual knowledge within their parameters and have achieved superior results in question-answering tasks. However, challenges persist in providing provenance for their decisions and keeping their knowledge up to date. Some approaches address these challenges by combining external knowledge with parametric memory. In contrast, our proposed QA-RAG solution relies solely on the data stored in an external knowledge base, specifically a dense vector index database. In this paper, we compare RAG configurations using two LLMs, Llama 2 7B and 13B, systematically examining their performance in three key RAG capabilities: noise robustness, knowledge gap detection, and external truth integration. The evaluation reveals that while our approach achieves an accuracy of 83.3%, demonstrating its effectiveness against all baselines, the model still struggles significantly with external truth integration. These findings suggest that considerable work is still required to fully leverage RAG in question-answering tasks.
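To make the dense-vector-index setup described in the abstract concrete, the sketch below illustrates how a retrieval-augmented QA pipeline can ground an LLM prompt in passages retrieved from an external knowledge base. This is not the authors' implementation; the `embed` function is a hypothetical placeholder for whichever dense encoder a real pipeline would use, and the retrieved context is simply prepended to the question before it is sent to the LLM.

```python
import numpy as np

# Hypothetical embedding function standing in for a trained dense encoder
# (assumption, not part of the paper); returns unit-normalised vectors so
# that a dot product equals cosine similarity.
def embed(texts: list[str]) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    vecs = rng.normal(size=(len(texts), 384))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# External knowledge base: passages stored in a dense vector index.
documents = [
    "Retrieval-augmented generation grounds answers in retrieved passages.",
    "Dense indexes store document embeddings for nearest-neighbour search.",
]
doc_vectors = embed(documents)

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k passages whose embeddings are closest to the question."""
    q = embed([question])[0]
    scores = doc_vectors @ q              # cosine similarity (unit-norm vectors)
    top = np.argsort(-scores)[:k]
    return [documents[i] for i in top]

# The retrieved passages are placed in the prompt so the model answers from
# the external knowledge base rather than from its parametric memory alone.
context = "\n".join(retrieve("What does RAG ground its answers in?"))
prompt = f"Answer using only the context below.\nContext:\n{context}\nQuestion: ..."
print(prompt)
```

In an evaluation like the one described, noise robustness would correspond to the model ignoring irrelevant retrieved passages, knowledge gap detection to abstaining when no passage answers the question, and external truth integration to preferring the retrieved evidence over the model's parametric memory.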

Keywords