Journal of Medical Internet Research (Oct 2024)

Ascle—A Python Natural Language Processing Toolkit for Medical Text Generation: Development and Evaluation Study

  • Rui Yang,
  • Qingcheng Zeng,
  • Keen You,
  • Yujie Qiao,
  • Lucas Huang,
  • Chia-Chun Hsieh,
  • Benjamin Rosand,
  • Jeremy Goldwasser,
  • Amisha Dave,
  • Tiarnan Keenan,
  • Yuhe Ke,
  • Chuan Hong,
  • Nan Liu,
  • Emily Chew,
  • Dragomir Radev,
  • Zhiyong Lu,
  • Hua Xu,
  • Qingyu Chen,
  • Irene Li

DOI: https://doi.org/10.2196/60601
Journal volume & issue: Vol. 26, p. e60601

Abstract


Background: Medical texts present significant domain-specific challenges, and manually curating these texts is a time-consuming and labor-intensive process. To address this, natural language processing (NLP) algorithms have been developed to automate text processing. In the biomedical field, various text-processing toolkits exist and have greatly improved the efficiency of handling unstructured text. However, these toolkits emphasize different perspectives, and none of them offer generation capabilities, leaving a significant gap in the current offerings.

Objective: This study aims to describe the development and preliminary evaluation of Ascle. Ascle is tailored to biomedical researchers and clinical staff, offering an easy-to-use, all-in-one solution that requires minimal programming expertise. For the first time, Ascle provides 4 advanced and challenging generative functions: question answering, text summarization, text simplification, and machine translation. In addition, Ascle integrates 12 essential NLP functions, along with query and search capabilities for clinical databases.

Methods: We fine-tuned 32 domain-specific language models and evaluated them thoroughly on 27 established benchmarks. For the question-answering task, we developed a retrieval-augmented generation (RAG) framework for large language models that incorporates a medical knowledge graph with ranking techniques to enhance the reliability of generated answers. We also conducted a physician validation to assess the quality of generated content beyond automated metrics.

Results: The fine-tuned models and RAG framework consistently enhanced text generation tasks. For example, the fine-tuned models improved machine translation performance by 20.27 BLEU points. In the question-answering task, the RAG framework raised the ROUGE-L score by 18% over the vanilla models. Physician validation of generated answers showed high scores for readability (4.95/5) and relevancy (4.43/5), with lower scores for accuracy (3.90/5) and completeness (3.31/5).

Conclusions: This study introduces the development and evaluation of Ascle, a user-friendly NLP toolkit designed for medical text generation. All code is publicly available through the Ascle GitHub repository, and all fine-tuned language models can be accessed through Hugging Face.
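The abstract describes a RAG framework that couples retrieval with ranking before answer generation. As a minimal conceptual sketch of that retrieval-then-generate pattern (not the Ascle implementation, which uses a medical knowledge graph and more sophisticated ranking), the Python example below ranks candidate passages by simple token overlap and assembles a grounded prompt; all function names, variable names, and sample passages are hypothetical.

```python
# Minimal, self-contained sketch of a retrieval-augmented generation (RAG) step.
# This is NOT the Ascle implementation: passage ranking here is plain token
# overlap, standing in for the knowledge-graph and ranking techniques the
# paper describes. All names and sample data are hypothetical.

def tokenize(text: str) -> set[str]:
    """Lowercase whitespace tokenization, kept trivial for illustration."""
    return set(text.lower().split())

def rank_passages(question: str, passages: list[str], top_k: int = 2) -> list[str]:
    """Rank candidate passages by token overlap with the question."""
    q_tokens = tokenize(question)
    scored = sorted(passages, key=lambda p: len(q_tokens & tokenize(p)), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, evidence: list[str]) -> str:
    """Assemble a grounded prompt that a generative model would then answer."""
    context = "\n".join(f"- {p}" for p in evidence)
    return (
        "Answer using only the evidence below.\n\n"
        f"Evidence:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

if __name__ == "__main__":
    passages = [
        "Metformin is a first-line oral medication for type 2 diabetes.",
        "Hypertension is commonly managed with ACE inhibitors.",
        "Lifestyle changes are recommended alongside metformin for type 2 diabetes.",
    ]
    question = "What is the first-line medication for type 2 diabetes?"
    evidence = rank_passages(question, passages)
    print(build_prompt(question, evidence))  # the prompt would be passed to an LLM
```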