European Radiology Experimental (Mar 2024)

Shallow and deep learning classifiers in medical image analysis

  • Francesco Prinzi,
  • Tiziana Currieri,
  • Salvatore Gaglio,
  • Salvatore Vitabile

DOI
https://doi.org/10.1186/s41747-024-00428-2
Journal volume & issue
Vol. 8, no. 1
pp. 1–13

Abstract

An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians’ decision-making. Artificial intelligence encompasses much more than machine learning, which has nevertheless been its most cited and widely used sub-branch over the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to provide primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between “shallow” learning (i.e., traditional machine learning) algorithms, including support vector machines, random forests, and XGBoost, and “deep” learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of algorithm depends on the task and the dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the interpretability of machine learning algorithms is finally discussed, providing a future perspective on trustworthy artificial intelligence.

Relevance statement

The growing synergy between artificial intelligence and medicine fosters predictive models that aid physicians. Machine learning classifiers, from shallow to deep learning, offer crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads such systems toward integration into clinical practice.

Key points

  • Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics); a minimal training sketch is given after this list.
  • Deep classifiers implement automatic feature extraction and classification; see the second sketch below.
  • Classifier selection is based on data and computational resource availability, the task, and explanation needs.

Graphical Abstract
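The first key point can be made concrete with a short sketch. The code below is illustrative only, not the authors’ implementation: it assumes radiomics-style features have already been extracted from regions of interest into a tabular array (replaced here by synthetic data) and trains a random forest with cross-validation; a support vector machine or an XGBoost classifier would be a drop-in replacement in the same pipeline.

    # Minimal, illustrative sketch: training a "shallow" classifier on
    # tabular radiomics-style features. Synthetic features stand in for
    # values normally extracted from regions of interest.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical cohort: 200 patients x 50 features, binary label.
    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=10, random_state=42)

    # Scaling + random forest; sklearn.svm.SVC or an XGBoost classifier
    # could replace the forest without changing the rest of the pipeline.
    model = make_pipeline(StandardScaler(),
                          RandomForestClassifier(n_estimators=200,
                                                 random_state=42))

    # 5-fold cross-validated AUC, a common metric for binary clinical tasks.
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"Mean cross-validated AUC: {scores.mean():.3f}")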
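For the second key point, the sketch below shows, again as an illustration rather than the paper’s method, how a small convolutional network couples learned feature extraction (the convolutional layers) with classification (a final linear layer) in a single end-to-end model; the TinyCNN name and all hyperparameters are arbitrary choices for this example.

    # Minimal, illustrative sketch: a "deep" classifier where feature
    # extraction and classification are learned jointly from the image.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self, n_classes: int = 2):
            super().__init__()
            # Convolutional stages play the role of the hand-crafted
            # feature extraction step used by shallow pipelines.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x).flatten(1)
            return self.classifier(x)

    # A batch of four 64x64 single-channel images (e.g., grayscale slices).
    logits = TinyCNN()(torch.randn(4, 1, 64, 64))
    print(logits.shape)  # torch.Size([4, 2])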

Keywords