IEEE Access (Jan 2024)

An Experimental Survey of Incremental Transfer Learning for Multicenter Collaboration

  • Yixing Huang
  • Christoph Bert
  • Ahmed Gomaa
  • Rainer Fietkau
  • Andreas Maier
  • Florian Putz

DOI
https://doi.org/10.1109/ACCESS.2024.3431885
Journal volume & issue
Vol. 12
pp. 101210–101227

Abstract

Due to data privacy constraints, data sharing among multiple clinical centers is restricted, which impedes the development of high-performance deep learning models through multicenter collaboration. Naive weight transfer methods share intermediate model weights without raw data and hence can bypass data privacy restrictions. However, performance drops are typically observed when the model is transferred from one center to the next because of catastrophic forgetting. Incremental transfer learning, which combines peer-to-peer federated learning and domain incremental learning, can overcome the data privacy issue while preserving model performance through continual learning techniques. In this work, a conventional domain/task incremental learning framework is adapted for incremental transfer learning, and the efficacy of prevalent regularization-based continual learning methods for multicenter collaboration is surveyed. The influences of data heterogeneity, classifier head setting, network optimizer, model initialization, center order, and weight transfer type are investigated thoroughly. Our framework is publicly accessible to the research community for further development.
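To make the abstract's core mechanism concrete: model weights, not raw data, travel from center to center, and a continual learning regularizer penalizes drift from the transferred weights to mitigate forgetting. The sketch below illustrates this with a simple L2 penalty toward the previous center's weights, one representative regularization-based method rather than the authors' exact framework; the function names, loaders, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_center(model, loader, reg_lambda=1.0, epochs=1, lr=1e-3):
    """Fine-tune `model` on one center's data while penalizing
    deviation from the transferred (previous-center) weights."""
    # Snapshot the incoming weights as the regularization anchor.
    anchor = [p.detach().clone() for p in model.parameters()]
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            task_loss = criterion(model(x), y)
            # L2 penalty toward the anchor discourages forgetting
            # what was learned at previous centers (a simple
            # regularization-based continual learning term).
            reg = sum(((p - a) ** 2).sum()
                      for p, a in zip(model.parameters(), anchor))
            (task_loss + reg_lambda * reg).backward()
            optimizer.step()
    return model

def incremental_transfer(model, center_loaders, reg_lambda=1.0):
    """Peer-to-peer weight transfer: each center trains in turn on
    its own data, passing only model weights (never raw data) on."""
    for loader in center_loaders:
        model = train_center(model, loader, reg_lambda=reg_lambda)
    return model
```

With reg_lambda set to zero this reduces to the naive weight transfer the abstract describes, which is exactly the setting where forgetting-induced performance drops appear.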

Keywords