Journal of Personalized Medicine (Jun 2023)

Hybrid Value-Aware Transformer Architecture for Joint Learning from Longitudinal and Non-Longitudinal Clinical Data

  • Yijun Shao,
  • Yan Cheng,
  • Stuart J. Nelson,
  • Peter Kokkinos,
  • Edward Y. Zamrini,
  • Ali Ahmed,
  • Qing Zeng-Treitler

DOI: https://doi.org/10.3390/jpm13071070
Journal volume & issue: Vol. 13, No. 7, p. 1070

Abstract

The Transformer is the latest deep neural network (DNN) architecture for sequence data learning and has revolutionized the field of natural language processing. This success has motivated researchers to explore its application in the healthcare domain. Despite the similarities between longitudinal clinical data and natural language data, clinical data present unique complexities that make adapting the Transformer to this domain challenging. To address this issue, we have designed a new Transformer-based DNN architecture, referred to as the Hybrid Value-Aware Transformer (HVAT), which can jointly learn from longitudinal and non-longitudinal clinical data. HVAT is unique in its ability to learn from the numerical values associated with clinical codes/concepts, such as laboratory results, and in its use of a flexible longitudinal data representation called clinical tokens. We have also trained a prototype HVAT model on a case-control dataset, achieving high performance in predicting Alzheimer’s disease and related dementias as the patient outcome. The results demonstrate the potential of HVAT for broader clinical data-learning tasks.
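The abstract does not specify the architecture in detail. As a rough illustration of the "value-aware" idea it describes, namely clinical tokens whose embeddings incorporate an associated numeric value (e.g., a lab result) and a sequence representation fused with non-longitudinal features, the following is a minimal sketch in PyTorch. All class, layer, and parameter names (ValueAwareTransformer, n_static, value_proj, etc.) are assumptions made for illustration and are not the authors' implementation.

```python
# Minimal, illustrative sketch (not the published HVAT model): a Transformer
# encoder over clinical tokens whose embeddings are made "value-aware" by
# adding a projection of each token's numeric value, fused with static
# (non-longitudinal) features for a binary outcome such as ADRD.
import torch
import torch.nn as nn

class ValueAwareTransformer(nn.Module):
    def __init__(self, n_codes, d_model=128, n_heads=4, n_layers=2, n_static=10):
        super().__init__()
        self.code_emb = nn.Embedding(n_codes, d_model, padding_idx=0)
        # One simple way to inject numeric values into token embeddings.
        self.value_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        # Non-longitudinal (static) features are fused after sequence pooling.
        self.static_proj = nn.Linear(n_static, d_model)
        self.head = nn.Linear(2 * d_model, 1)  # logits for a binary outcome

    def forward(self, codes, values, static, pad_mask):
        # codes: (B, T) token ids; values: (B, T) floats; static: (B, n_static)
        # pad_mask: (B, T) bool, True at padded positions
        x = self.code_emb(codes) + self.value_proj(values.unsqueeze(-1))
        h = self.encoder(x, src_key_padding_mask=pad_mask)
        # Mean-pool over non-padded positions.
        pooled = h.masked_fill(pad_mask.unsqueeze(-1), 0).sum(1) / (
            (~pad_mask).sum(1, keepdim=True).clamp(min=1)
        )
        z = torch.cat([pooled, self.static_proj(static)], dim=-1)
        return self.head(z)

# Example usage with random inputs (shapes only; no clinical data implied):
model = ValueAwareTransformer(n_codes=5000)
codes = torch.randint(1, 5000, (2, 20))
values = torch.randn(2, 20)
static = torch.randn(2, 10)
pad_mask = torch.zeros(2, 20, dtype=torch.bool)
logits = model(codes, values, static, pad_mask)  # shape (2, 1)
```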

Keywords