Scientific Reports (Jul 2024)

Progressive transfer learning for advancing machine learning-based reduced-order modeling

  • Teeratorn Kadeethum,
  • Daniel O’Malley,
  • Youngsoo Choi,
  • Hari S. Viswanathan,
  • Hongkyu Yoon

DOI
https://doi.org/10.1038/s41598-024-64778-y
Journal volume & issue
Vol. 14, no. 1
pp. 1–13

Abstract

To maximize knowledge transfer and reduce the data requirements of data-driven machine learning (ML) modeling, a progressive transfer learning framework for reduced-order modeling (p-ROM) is proposed. The key concept of p-ROM is to selectively transfer knowledge from previously trained ML models and effectively develop new ML models for unseen tasks by optimizing information gates in hidden layers. The p-ROM framework is designed to work with any type of data-driven ROM. For demonstration purposes, we evaluate p-ROM with specific Barlow Twins ROMs (p-BT-ROMs) to highlight how progressive learning applies to multiple topological and physical problems, with an emphasis on the small-training-set regime. The proposed p-BT-ROM framework has been tested on multiple examples, including transport, flow, and solid mechanics, to illustrate the importance of progressive knowledge transfer and its impact on model accuracy with reduced training samples. For both similar and different topologies, p-BT-ROM achieves improved model accuracy with much less training data; for instance, a p-BT-ROM with four parents (i.e., pre-trained models) outperforms its no-parent counterpart trained on nine times more data. The p-ROM framework is poised to significantly enhance the capabilities of ML-based ROM approaches for scientific and engineering applications by mitigating data scarcity through progressive knowledge transfer.
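The abstract's central mechanism is an "information gate" that controls how much knowledge flows from frozen, pre-trained parent models into a new child model. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of that idea under illustrative assumptions (random stand-in weights, a single hidden layer, one sigmoid gate per parent), not the authors' implementation:

```python
import numpy as np

# Hedged sketch (not the authors' code): progressive transfer via learnable
# "information gates" that blend hidden features from frozen parent models
# with a new child model's features. Shapes and weights are illustrative.

rng = np.random.default_rng(0)

def hidden_features(weights, x):
    """One hidden layer with tanh activation."""
    return np.tanh(x @ weights)

# Two frozen "parent" models (pre-trained weights; random stand-ins here).
parent_weights = [rng.normal(size=(4, 8)) for _ in range(2)]
# Trainable "child" weights for the new, unseen task.
child_weights = rng.normal(size=(4, 8))

def gated_features(x, gate_logits):
    """Blend parent and child hidden features via sigmoid gates.

    gate_logits: one scalar per parent; sigmoid(logit) in (0, 1) controls
    how much of that parent's knowledge flows into the child's features.
    In training, these logits would be optimized jointly with the child.
    """
    h = hidden_features(child_weights, x)
    for w_p, g in zip(parent_weights, gate_logits):
        alpha = 1.0 / (1.0 + np.exp(-g))  # information gate in (0, 1)
        h = h + alpha * hidden_features(w_p, x)
    return h

x = rng.normal(size=(5, 4))               # batch of 5 inputs
h = gated_features(x, gate_logits=[0.5, -1.0])
print(h.shape)                            # (5, 8)
```

A fully closed gate (a large negative logit) recovers the child-only features, so the optimizer can discard an unhelpful parent; a fully open gate transfers that parent's features at full strength.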