AIP Advances (Dec 2019)

Exploiting locality and translational invariance to design effective deep reinforcement learning control of the 1-dimensional unstable falling liquid film

  • Vincent Belus,
  • Jean Rabault,
  • Jonathan Viquerat,
  • Zhizhao Che,
  • Elie Hachem,
  • Ulysse Reglade

DOI
https://doi.org/10.1063/1.5132378
Journal volume & issue
Vol. 9, no. 12
pp. 125014 – 125014-13

Abstract

Instabilities arise in a number of flow configurations. One such manifestation is the development of interfacial waves in multiphase flows, such as those observed in the falling liquid film problem. Controlling the development of such instabilities is a problem of both academic and industrial interest. However, this has proven challenging in most cases due to the strong nonlinearity and high dimensionality of the underlying equations. In the present work, we successfully apply Deep Reinforcement Learning (DRL) to the control of the one-dimensional depth-integrated falling liquid film. In addition, we introduce for the first time translational invariance in the architecture of the DRL agent, and we exploit the locality of the control problem to define a dense reward function. This allows us to speed up learning considerably, easily control an arbitrarily large number of jets, and overcome the curse of dimensionality on the control output size that would arise with a naïve approach. This illustrates the importance of the architecture of the agent for successful DRL control, and we believe this will be an important element in the effective application of DRL to large two-dimensional or three-dimensional systems featuring translational, axisymmetric, or other invariances.
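The two architectural ideas highlighted in the abstract, weight sharing across jets (translational invariance) and a per-jet reward computed from the local film state (locality), can be illustrated with a minimal sketch. All dimensions, the toy linear-tanh policy, and the quadratic local reward below are illustrative assumptions, not the paper's actual network or reward:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
n_jets = 10        # number of control jets along the film
obs_width = 8      # size of each jet's local observation window

# One shared set of policy weights reused for every jet: this weight
# sharing makes the agent translationally invariant and keeps the
# parameter count independent of the number of jets.
W = rng.normal(scale=0.1, size=(obs_width, 16))
w_out = rng.normal(scale=0.1, size=16)

def policy(local_obs):
    """Map one jet's local observation to one scalar jet action."""
    hidden = np.tanh(local_obs @ W)
    return np.tanh(hidden @ w_out)

def act(film_state):
    """Apply the same shared policy to each jet's local window."""
    windows = film_state.reshape(n_jets, obs_width)
    return np.array([policy(w) for w in windows])

def dense_rewards(film_state, target_height=1.0):
    """Locality: each jet receives its own reward from its own
    neighbourhood, here the negative squared deviation of the local
    film height from a flat target (an assumed toy reward)."""
    windows = film_state.reshape(n_jets, obs_width)
    return -np.mean((windows - target_height) ** 2, axis=1)

# Toy film state: a flat film with small random perturbations.
film = 1.0 + 0.05 * rng.standard_normal(n_jets * obs_width)
actions = act(film)
rewards = dense_rewards(film)
print(actions.shape, rewards.shape)  # (10,) (10,)
```

Because the same weights act on every window, the number of trainable parameters does not grow with the number of jets, and each jet contributes its own reward signal rather than one sparse global scalar.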