Applied Sciences (Mar 2024)

In-House Knowledge Management Using a Large Language Model: Focusing on Technical Specification Documents Review

  • Jooyeup Lee,
  • Wooyong Jung,
  • Seungwon Baek

DOI: https://doi.org/10.3390/app14052096
Journal volume & issue: Vol. 14, no. 5, p. 2096

Abstract

In complex construction projects, technical specifications have to be reviewed in a short period of time. Even experienced engineers find it difficult to review every detail of technical specifications. In addition, it is not easy to transfer the knowledge of experienced engineers to junior engineers. With the technological innovation of large language models such as ChatGPT, a fine-tuned language model is proposed as an effective solution for the automatic review of technical specification documents. Against this backdrop, this study examines in-house technical specification documents that are not publicly available. Then, two large language models, GPT-3 and LLaMA2, are fine-tuned to answer questions related to technical specification documents. The results show that the fine-tuned LLaMA2 model generally outperforms the fine-tuned GPT-3 model in terms of accuracy, reliability, and conciseness of responses. In particular, the fine-tuned LLaMA2 model suppressed hallucinations better than the fine-tuned GPT-3 model. Based on the results, this study discusses the applicability and limitations of a fine-tuned large language model for in-house knowledge management. The results of this study are expected to assist practitioners in developing a domain-specific knowledge management solution by fine-tuning an open-source large language model with private datasets.
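For readers unfamiliar with the workflow the abstract describes, the sketch below illustrates one common way to fine-tune an open-source model such as LLaMA2 on a private question-answer dataset using low-rank adapters. It is not the authors' code; the model checkpoint, file name, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: LoRA fine-tuning of an open-source LLM on private Q&A pairs.
# Assumed: a JSON Lines file "spec_qa.jsonl" with "question" and "answer" fields.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Llama-2-7b-hf"            # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters so only a small fraction of weights is trained.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

data = load_dataset("json", data_files="spec_qa.jsonl", split="train")

def to_features(example):
    # Format each specification Q&A pair as a single training prompt.
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-spec-qa", num_train_epochs=3,
                           per_device_train_batch_size=2, learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llama2-spec-qa-adapter")    # adapters only; base weights unchanged
```

Because only the adapter weights are stored, the private dataset and the resulting adapter can remain entirely in-house, which matches the knowledge-management setting the study targets.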

Keywords