Data-Centric Engineering (Jan 2021)

Model order reduction based on Runge–Kutta neural networks

  • Qinyu Zhuang,
  • Juan Manuel Lorenzi,
  • Hans-Joachim Bungartz,
  • Dirk Hartmann

DOI
https://doi.org/10.1017/dce.2021.15
Journal volume & issue
Vol. 2

Abstract


Model order reduction (MOR) methods enable the generation of real-time-capable digital twins, with the potential to unlock various novel value streams in industry. While traditional projection-based methods are robust and accurate for linear problems, incorporating machine learning to handle nonlinearity has become an attractive option for reducing complex problems. Such methods are independent of the numerical solver used for the full-order model and keep the whole workflow nonintrusive. They typically consist of two steps: dimension reduction by a projection-based method, followed by model reconstruction with a neural network (NN). In this work, we apply modifications to both steps and investigate their impact by testing three different simulation models. In all cases, proper orthogonal decomposition (POD) is used for dimension reduction. For this step, generating the snapshot database with constant input parameters is compared against generating it with time-dependent input parameters. For the model-reconstruction step, three types of NN architectures are compared: the multilayer perceptron (MLP), the explicit Euler NN (EENN), and the Runge–Kutta NN (RKNN). The MLP learns the system state directly, whereas the EENN and RKNN learn the derivative of the system state and predict the new state in the manner of a numerical integrator. In the tests, the RKNNs show the advantage of a network architecture informed by a higher-order numerical integration scheme.
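The sketch below illustrates the idea described in the abstract, not the authors' implementation: a small network approximates the time derivative of the reduced (POD) state, and the next state is obtained by wrapping that network in an explicit Euler step (EENN) or a classical fourth-order Runge–Kutta step (RKNN). The class and function names, layer sizes, and toy dimensions are illustrative assumptions.

```python
# Minimal sketch of derivative-learning surrogates (EENN / RKNN idea).
# Assumptions: a reduced state of dimension 10, 2 input parameters,
# and a single hidden-layer MLP for the derivative; none of these
# values come from the paper.
import torch
import torch.nn as nn


class DerivativeNet(nn.Module):
    """MLP approximating d(state)/dt from the reduced state and input parameters."""

    def __init__(self, state_dim, param_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + param_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, params):
        return self.net(torch.cat([state, params], dim=-1))


def euler_step(f, state, params, dt):
    # Explicit Euler NN (EENN): x_{n+1} = x_n + dt * f(x_n, u_n)
    return state + dt * f(state, params)


def rk4_step(f, state, params, dt):
    # Runge–Kutta NN (RKNN): the same derivative network evaluated at the
    # intermediate stages of a classical fourth-order Runge–Kutta step.
    k1 = f(state, params)
    k2 = f(state + 0.5 * dt * k1, params)
    k3 = f(state + 0.5 * dt * k2, params)
    k4 = f(state + dt * k3, params)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)


if __name__ == "__main__":
    f = DerivativeNet(state_dim=10, param_dim=2)
    x = torch.zeros(1, 10)   # reduced (POD) state at time t_n
    u = torch.ones(1, 2)     # input parameters
    x_next = rk4_step(f, x, u, dt=0.01)
    print(x_next.shape)      # torch.Size([1, 10])
```

An MLP baseline would instead map (x_n, u_n) directly to x_{n+1}; the EENN and RKNN variants constrain the network to model the derivative, with the RKNN additionally embedding the higher-order integration structure.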

Keywords