BMC Medical Informatics and Decision Making (Nov 2021)

Combining data augmentation and domain information with TENER model for Clinical Event Detection

  • Zhichang Zhang,
  • Dan Liu,
  • Minyu Zhang,
  • Xiaohui Qin

DOI
https://doi.org/10.1186/s12911-021-01618-3
Journal volume & issue
Vol. 21, no. S9
pp. 1 – 12

Abstract


Background
In recent years, with the development of artificial intelligence, the use of deep learning technology for clinical information extraction has become a new trend. Clinical Event Detection (CED), as a subtask of clinical information extraction, has attracted attention from both academia and industry. However, directly applying advances in deep learning to the CED task often yields unsatisfactory results, for two main reasons: (1) the large number of obscure professional terms in electronic medical records leads to poor recognition performance, and (2) the scarcity of task-specific datasets leads to poor model robustness. Addressing these two problems is therefore essential for improving model performance.

Methods
This paper proposes a model for Clinical Event Detection that combines data augmentation and domain information with the TENER model.

Results
We use two evaluation metrics to compare the overall performance of the proposed model with existing models on the 2012 i2b2 challenge dataset. Experimental results demonstrate that our model achieves the best F1-score of 80.26%, a type accuracy of 93%, and a span F1-score of 90.33%, outperforming state-of-the-art approaches.

Conclusions
This paper proposes a multi-granularity information fusion encoder-decoder framework, which applies the TENER model to the CED task for the first time. It uses the pre-trained language model BioBERT to generate word-level features, addressing the poor recognition performance caused by the large number of obscure professional terms in electronic medical records. In addition, this paper proposes a new data augmentation method for sequence labeling tasks, addressing the poor model robustness caused by the scarcity of task-specific datasets.
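To make the "word-level features from BioBERT" step concrete, the sketch below shows one common way to obtain such features: encode a pre-tokenized clinical sentence and mean-pool the subword vectors belonging to each word. This is an illustrative assumption, not the authors' implementation; it assumes the Hugging Face transformers library and the public dmis-lab/biobert-base-cased-v1.1 checkpoint.

```python
# Minimal sketch (not the paper's code): word-level BioBERT features
# obtained by mean-pooling each word's subword embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_level_features(words):
    """Encode a pre-tokenized sentence and pool subword vectors per word."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (num_subwords, 768)
    features = []
    for idx in range(len(words)):
        # word_ids() maps each subword position back to its source word;
        # special tokens map to None and are skipped automatically.
        positions = [i for i, w in enumerate(enc.word_ids()) if w == idx]
        features.append(hidden[positions].mean(dim=0))
    return torch.stack(features)  # (num_words, 768)

feats = word_level_features(["Patient", "admitted", "with", "pneumonia", "."])
print(feats.shape)  # torch.Size([5, 768])
```

In the framework described by the abstract, such word-level vectors would then be fed into the TENER-based encoder alongside domain information; the pooling strategy shown here is only one reasonable choice.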

Keywords