Applied Sciences (May 2024)

A Historical Survey of Advances in Transformer Architectures

  • Ali Reza Sajun
  • Imran Zualkernan
  • Donthi Sankalpa

DOI: https://doi.org/10.3390/app14104316
Journal volume & issue: Vol. 14, no. 10, p. 4316

Abstract

In recent years, transformer-based deep learning models have risen to prominence in machine learning for a variety of tasks, such as computer vision and text generation. Given this increased interest, a historical outlook on the development and rapid progression of transformer-based models is imperative for understanding the rise of this key architecture. This paper presents a survey of key works related to the early development and implementation of transformer models in various domains, such as generative deep learning and as the backbones of large language models. Previous works are classified based on their historical approaches, followed by key works in the domains of text-based, image-based, and miscellaneous applications. A quantitative and qualitative analysis of the various approaches is presented. Additionally, recent directions of transformer-related research, such as those in the biomedical and time-series domains, are discussed. Finally, future research opportunities, especially regarding multi-modality and the optimization of the transformer training process, are identified.

Keywords