Scientific Reports (Aug 2022)

A pre-trained BERT for Korean medical natural language processing

  • Yoojoong Kim,
  • Jong-Ho Kim,
  • Jeong Moon Lee,
  • Moon Joung Jang,
  • Yun Jin Yum,
  • Seongtae Kim,
  • Unsub Shin,
  • Young-Min Kim,
  • Hyung Joon Joo,
  • Sanghoun Song

DOI
https://doi.org/10.1038/s41598-022-17806-8
Journal volume & issue
Vol. 12, no. 1
pp. 1–10

Abstract

With advances in deep learning and natural language processing (NLP), the analysis of medical texts is becoming increasingly important. Nonetheless, no research on Korean medical-specific language models has been conducted. Korean medical text is difficult to analyze because of the agglutinative characteristics of the language and the complex terminology of the medical domain. To address this problem, we collected a Korean medical corpus and used it to train language models. In this paper, we present a Korean medical language model based on deep learning NLP. The model was trained for the medical context using the pre-training framework of BERT, starting from a state-of-the-art Korean language model. The pre-trained model showed accuracy increases of 0.147 and 0.148 on the masked language modeling task with next sentence prediction. In the intrinsic evaluation, next sentence prediction accuracy improved by 0.258, a remarkable enhancement. In addition, extrinsic evaluation on Korean medical semantic textual similarity data showed a 0.046 increase in the Pearson correlation, and evaluation on Korean medical named entity recognition showed a 0.053 increase in the F1-score.
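
The pre-training objective described above, masked language modeling (MLM) combined with next sentence prediction (NSP), continued from an existing Korean BERT checkpoint, can be illustrated with a minimal sketch using the Hugging Face Transformers library. This is a hypothetical example, not the authors' actual setup: the checkpoint name klue/bert-base and the toy sentence pair stand in for the base Korean model and the medical corpus used in the paper, and the full 80/10/10 BERT masking scheme is simplified.

import torch
from torch.optim import AdamW
from transformers import BertTokenizerFast, BertForPreTraining

# Illustrative stand-in for the authors' base Korean checkpoint.
tokenizer = BertTokenizerFast.from_pretrained("klue/bert-base")
model = BertForPreTraining.from_pretrained("klue/bert-base")
optimizer = AdamW(model.parameters(), lr=5e-5)

# One toy medical sentence pair; next_sentence_label = 0 means "B follows A".
sent_a = "환자는 고혈압 병력이 있다."          # "The patient has a history of hypertension."
sent_b = "혈압 조절을 위해 약물을 투여했다."   # "Medication was administered to control blood pressure."
enc = tokenizer(sent_a, sent_b, return_tensors="pt")

# Build MLM labels: randomly select ~15% of non-special tokens to mask,
# and predict the original token at each masked position.
labels = enc["input_ids"].clone()
special = torch.tensor(
    tokenizer.get_special_tokens_mask(labels[0].tolist(), already_has_special_tokens=True),
    dtype=torch.bool,
)
mask = (torch.rand(labels.shape) < 0.15) & ~special
labels[~mask] = -100                             # ignore unmasked positions in the MLM loss
enc["input_ids"][mask] = tokenizer.mask_token_id # replace selected tokens with [MASK]

# One optimization step on the joint MLM + NSP loss.
optimizer.zero_grad()
out = model(**enc, labels=labels, next_sentence_label=torch.tensor([0]))
out.loss.backward()
optimizer.step()

In practice, such continued pre-training would iterate this step over batches drawn from the domain corpus; the joint loss is what drives the MLM and NSP accuracy gains reported in the abstract.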