Frontiers in Psychiatry (Jan 2021)

Deep Learning-Based Natural Language Processing for Screening Psychiatric Patients

  • Hong-Jie Dai,
  • Chu-Hsien Su,
  • You-Qian Lee,
  • You-Chen Zhang,
  • Chen-Kai Wang,
  • Chian-Jue Kuo,
  • Chi-Shin Wu

DOI
https://doi.org/10.3389/fpsyt.2020.533949
Journal volume & issue
Vol. 11

Abstract

The introduction of pre-trained language models in natural language processing (NLP) based on deep learning and the availability of electronic health records (EHRs) present a great opportunity to transfer the “knowledge” learned from general-domain data to the analysis of unstructured textual data in clinical domains. This study explored the feasibility of applying NLP to a small EHR dataset to investigate the power of transfer learning to facilitate patient screening in psychiatry. A total of 500 patients were randomly selected from a medical center database. Three annotators with clinical experience reviewed the notes to make diagnoses for major/minor depression, bipolar disorder, schizophrenia, and dementia, forming a small and highly imbalanced corpus. Several state-of-the-art deep learning-based NLP methods, along with pre-trained models based on shallow or deep transfer learning, were adapted to develop models to classify the aforementioned diseases. We hypothesized that the models relying on transferred knowledge would outperform the models learned from scratch. The experimental results demonstrated that the models with the pre-trained techniques outperformed the models without transferred knowledge by 0.11 and 0.28 in micro-averaged and macro-averaged F-scores, respectively. Our results also suggested that using the feature dependency strategy to build multi-label models is superior to problem transformation, considering its higher performance and simpler training process.
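To make the approach described in the abstract concrete, the following is a minimal sketch of multi-label psychiatric screening via transfer learning from a pre-trained transformer. It is not the authors' exact pipeline: the model checkpoint, label set, example note, and hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the authors' exact pipeline) of fine-tuning a
# pre-trained language model for multi-label screening of psychiatric
# diagnoses. Checkpoint, labels, and the example note are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["major/minor depression", "bipolar disorder", "schizophrenia", "dementia"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss per label
)

# One EHR note paired with multi-hot diagnosis targets; real notes would come
# from the annotated 500-patient corpus described in the abstract.
note = "Patient reports persistent low mood, anhedonia, and poor sleep."
targets = torch.tensor([[1.0, 0.0, 0.0, 0.0]])  # depression only

enc = tokenizer(note, truncation=True, return_tensors="pt")
out = model(**enc, labels=targets)  # transferred encoder, task head learned from scratch

out.loss.backward()                 # fine-tuning step: gradients flow into pre-trained weights
probs = torch.sigmoid(out.logits)   # independent probability per diagnosis
print({lbl: round(p.item(), 3) for lbl, p in zip(LABELS, probs[0])})
```

A single model with one sigmoid output per diagnosis keeps all labels in one training process rather than training a separate binary classifier per disease, which is roughly the distinction the abstract draws between the feature dependency strategy and problem transformation.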
