APL Machine Learning (Jun 2023)

Rotationally equivariant super-resolution of velocity fields in two-dimensional flows using convolutional neural networks

  • Yuki Yasuda,
  • Ryo Onishi

DOI
https://doi.org/10.1063/5.0132326
Journal volume & issue
Vol. 1, no. 2
pp. 026107 – 026107-19

Abstract

This paper investigates the super-resolution of velocity fields in two-dimensional flows from the viewpoint of rotational equivariance. Super-resolution refers to techniques that reconstruct a high-resolution image from a low-resolution one, and it has recently been applied in fluid mechanics. Rotational equivariance of super-resolution models is defined as the property by which the super-resolved velocity field is rotated according to a rotation of the input, leading to inferences that are covariant with the orientation of fluid systems. In physics, covariance is often related to symmetries. To better understand the connection with symmetries, the notion of rotational consistency of datasets is introduced within the framework of supervised learning; it is defined as the invariance of pairs of low- and high-resolution velocity fields with respect to rotation. This consistency is necessary and sufficient for super-resolution models to learn rotational equivariance from large datasets. Such a large dataset is not required when rotational equivariance is imposed on super-resolution models as prior knowledge in the form of equivariant kernel patterns. Nonetheless, even if a fluid system has rotational symmetry, this symmetry may not carry over to a velocity dataset, which can then fail to be rotationally consistent. Such an inconsistency can arise when the rotation does not commute with the generation of low-resolution velocity fields. These theoretical assertions are supported by the results of numerical experiments, in which two existing convolutional neural networks (CNNs) are converted into rotationally equivariant CNNs and the inferences of these CNNs are compared after supervised training.
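The two central definitions in the abstract can be summarized compactly as follows. The notation here (a super-resolution model f, a rotation operator R acting on both the grid and the vector components of the velocity field, a coarse-graining operator D that generates low-resolution fields, and a dataset 𝒟 of field pairs) is introduced only for illustration and is not taken from the paper itself.

```latex
% Rotational equivariance of a super-resolution model f:
% rotating the low-resolution input rotates the super-resolved output.
f(R\,u_{\mathrm{LR}}) = R\,f(u_{\mathrm{LR}})
  \quad \text{for all rotations } R.

% Rotational consistency of a supervised dataset \mathcal{D} of pairs:
% every rotated pair is again a valid low-/high-resolution pair.
(u_{\mathrm{LR}},\, u_{\mathrm{HR}}) \in \mathcal{D}
  \;\Longrightarrow\;
(R\,u_{\mathrm{LR}},\, R\,u_{\mathrm{HR}}) \in \mathcal{D}.

% If the low-resolution field is generated as u_LR = D(u_HR),
% consistency holds when D commutes with the rotation:
D(R\,u_{\mathrm{HR}}) = R\,D(u_{\mathrm{HR}}).
```

Under this sketch, the failure mode noted in the abstract corresponds to D(R u_HR) ≠ R D(u_HR), i.e., the coarse-graining that produces the low-resolution fields does not commute with the rotation, so the dataset is not rotationally consistent even if the underlying fluid system has rotational symmetry.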