Applied Sciences (Jan 2024)

Harnessing Generative Pre-Trained Transformers for Construction Accident Prediction with Saliency Visualization

  • Byunghee Yoo,
  • Jinwoo Kim,
  • Seongeun Park,
  • Changbum R. Ahn,
  • Taekeun Oh

DOI
https://doi.org/10.3390/app14020664
Journal volume & issue
Vol. 14, no. 2
p. 664

Abstract

Leveraging natural language processing models on the large volume of text data in the construction safety domain offers a unique opportunity to improve our understanding of safety accidents and our ability to learn from them. However, little effort has been made to date to utilize large language models for predicting accident types, a capability that could help prevent and manage potential accidents. This research aims to develop a model for predicting six types of accidents (caught-in-between, cuts, falls, struck-by, trips, and others) by employing transfer learning with a fine-tuned generative pre-trained transformer (GPT). Additionally, to enhance the interpretability of the fine-tuned GPT model, a method for saliency visualization of input text was developed to identify the words that most strongly influence prediction results. The models were evaluated using a comprehensive dataset comprising 15,000 actual accident records. The results indicate that the suggested model achieves 82% accuracy in detecting the six accident types. Furthermore, the proposed saliency visualization method can identify accident precursors in the unstructured free-text data of construction accident reports. These results highlight the improved generalization performance of large language model-based accident prediction models, supporting the proactive prevention of construction accidents.
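The abstract does not specify how the saliency scores are computed. A common approach for word-level saliency over free text is occlusion: score each word by how much the predicted probability of the target class drops when that word is removed. The sketch below illustrates this idea with a hypothetical bag-of-words classifier standing in for the fine-tuned GPT; the keyword weights, function names, and accident-report sentence are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical keyword weights per accident type (illustrative only).
# In the paper's setting, predict_proba would be the fine-tuned GPT classifier.
WEIGHTS = {
    "falls": {"ladder": 2.0, "roof": 1.5, "fell": 2.5},
    "struck-by": {"crane": 2.0, "beam": 1.8, "struck": 2.5},
}

def predict_proba(words, label):
    """Softmax probability of `label` given bag-of-words keyword scores."""
    scores = {lab: sum(w.get(t, 0.0) for t in words) for lab, w in WEIGHTS.items()}
    exps = {lab: math.exp(s) for lab, s in scores.items()}
    return exps[label] / sum(exps.values())

def occlusion_saliency(words, label):
    """Saliency of each word = drop in predicted probability when it is occluded."""
    base = predict_proba(words, label)
    return {
        w: base - predict_proba(words[:i] + words[i + 1:], label)
        for i, w in enumerate(words)
    }

report = "worker fell from ladder near roof edge".split()
saliency = occlusion_saliency(report, "falls")
# Accident-relevant words such as "fell" and "ladder" receive high saliency,
# while neutral words such as "worker" score near zero.
```

In the paper's pipeline, the same occlusion loop would instead call the fine-tuned GPT on each word-deleted variant of the accident report, and the resulting per-word scores would be rendered as a heatmap over the original text.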

Keywords