Frontiers in Physics (Mar 2024)

Effect of network architecture on physics-informed deep learning of the Reynolds-averaged turbulent flow field around cylinders without training data

  • Jan Hauke Harmening,
  • Franz-Josef Peitzmann,
  • Ould el Moctar

DOI
https://doi.org/10.3389/fphy.2024.1385381
Journal volume & issue
Vol. 12

Abstract

Unsupervised physics-informed deep learning can solve computational physics problems by training neural networks to satisfy the governing equations and boundary conditions without labeled data. Parameters such as the network architecture and the training method determine the training success, but the best choice is not known a priori because it is case specific. Here, we investigated network shapes, sizes, and types for unsupervised physics-informed deep learning of the two-dimensional Reynolds-averaged flow around cylinders. We trained mixed-variable networks and compared them to traditional models. Several network architectures with different shape factors and sizes were evaluated. The models were trained to solve the Reynolds-averaged Navier-Stokes equations incorporating Prandtl's mixing-length turbulence model. No training data were used to train the models. The superiority of the mixed-variable approach was confirmed for the investigated high-Reynolds-number flow. The mixed-variable models were sensitive to the network shape. For the two cylinders, networks of different depths performed best. The best-fitting models captured important flow phenomena such as stagnation regions, boundary layers, flow separation, and recirculation. We also encountered difficulties when predicting high-Reynolds-number flows without training data.
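
To make the mixed-variable formulation concrete, the sketch below outlines in PyTorch how such a network can be trained from equation and boundary residuals alone, with no labeled flow data. For brevity it assumes a steady, incompressible two-dimensional flow with constant viscosity rather than the full Reynolds-averaged equations with Prandtl's mixing-length model studied in the paper; the class name, layer sizes, and training loop are illustrative assumptions, not the authors' implementation.

# Minimal sketch of a mixed-variable physics-informed network (PyTorch).
# Outputs are (psi, p, s11, s12, s22): velocities come from the stream
# function psi, so continuity holds by construction, and the stresses are
# separate outputs so the momentum residuals stay first order.
import torch
import torch.nn as nn

class MixedVariablePINN(nn.Module):
    def __init__(self, width=50, depth=6):
        super().__init__()
        layers, in_dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.Tanh()]
            in_dim = width
        layers += [nn.Linear(in_dim, 5)]
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        return self.net(xy)

def grad(out, xy):
    # d(out)/d(xy) with the graph kept for higher-order terms in the loss.
    return torch.autograd.grad(out, xy, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

def pde_residuals(model, xy, nu=1e-3):
    xy = xy.requires_grad_(True)
    psi, p, s11, s12, s22 = model(xy).split(1, dim=1)

    dpsi = grad(psi, xy)
    u, v = dpsi[:, 1:2], -dpsi[:, 0:1]           # u = dpsi/dy, v = -dpsi/dx

    du, dv = grad(u, xy), grad(v, xy)
    u_x, u_y = du[:, 0:1], du[:, 1:2]
    v_x, v_y = dv[:, 0:1], dv[:, 1:2]

    # Constitutive residuals: stresses must equal -p*I + 2*nu*(strain rate).
    r_s11 = s11 - (-p + 2.0 * nu * u_x)
    r_s22 = s22 - (-p + 2.0 * nu * v_y)
    r_s12 = s12 - nu * (u_y + v_x)

    ds11, ds12, ds22 = grad(s11, xy), grad(s12, xy), grad(s22, xy)

    # Momentum residuals written via the stress divergence (unit density).
    r_mom_x = u * u_x + v * u_y - (ds11[:, 0:1] + ds12[:, 1:2])
    r_mom_y = u * v_x + v * v_y - (ds12[:, 0:1] + ds22[:, 1:2])

    return r_mom_x, r_mom_y, r_s11, r_s12, r_s22

# Unsupervised training: only equation residuals at collocation points
# (boundary-condition residuals would be added to the same loss).
model = MixedVariablePINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
collocation = torch.rand(2048, 2)
for step in range(5):
    optimizer.zero_grad()
    loss = sum((r ** 2).mean() for r in pde_residuals(model, collocation))
    loss.backward()
    optimizer.step()

In the study, the eddy viscosity from Prandtl's mixing-length model would be added to nu in the constitutive relations, and the shape factor of the network (the ratio of depth to width explored in the paper) is controlled here by the illustrative width and depth arguments.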

Keywords