AIP Advances (Nov 2024)
Optimization of fluid control laws through deep reinforcement learning using dynamic mode decomposition as the environment
Abstract
The optimization of fluid control laws through deep reinforcement learning (DRL) presents a challenge owing to the considerable computational cost of the trial-and-error process. In this study, we examine the feasibility of deriving an effective control law using a reduced-order model constructed by dynamic mode decomposition with control (DMDc). DMDc is a modal-analysis method for flow fields that incorporates external inputs, and we utilize it to represent the time development of the flow in the DRL environment. We also examine how much computation time this approach saves. As the test problem, we adopt the optimization of a control law for managing lift fluctuations caused by Kármán vortex shedding in the flow around a cylinder. The deep deterministic policy gradient (DDPG) is used as the DRL algorithm. The external input for the DMDc model consists of a superposition of a chirp signal, spanning various amplitudes and frequencies, and random noise; this combination is used to represent the random actions applied during the exploration phase. With DRL in the DMDc environment, a control law that exceeds the performance of conventional mathematical control is derived, although the learning is unstable (does not converge). This lack of convergence is also observed with DRL in a computational fluid dynamics (CFD) environment. However, for the same number of learning epochs, a superior control law is obtained with DRL in the DMDc environment. This outcome could be attributed to the DMDc representation of the flow field, which tends to smooth out high-frequency fluctuations even when subjected to signals of larger amplitude. In addition, using DMDc reduces the computation time by up to a factor of 3 compared with CFD.
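To make the setup concrete, the following is a minimal sketch (not taken from the paper) of the two ingredients described above: fitting a DMDc model of the form x_{k+1} ≈ A x_k + B u_k from snapshot and input data, and generating a chirp-plus-noise exploration signal. All function names, parameters, and default values (fit_dmdc, chirp_plus_noise, f0, f1, amp, noise) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_dmdc(X, Xp, U):
    """Least-squares DMDc fit: given snapshots X (n x m), shifted snapshots
    Xp (n x m), and inputs U (p x m), recover A (n x n) and B (n x p) such
    that Xp ~ A X + B U. A plain pseudo-inverse is used here for brevity;
    the paper's model may use a truncated (reduced-order) variant."""
    Omega = np.vstack([X, U])          # stacked state/input data matrix
    G = Xp @ np.linalg.pinv(Omega)     # G = [A B]
    n = X.shape[0]
    return G[:, :n], G[:, n:]

def chirp_plus_noise(t, f0=0.1, f1=2.0, amp=1.0, noise=0.1, rng=None):
    """Exploration input: a linear chirp sweeping from f0 to f1 plus random
    noise, emulating random actions during exploration (parameters assumed)."""
    rng = rng or np.random.default_rng()
    k = (f1 - f0) / t[-1]                                # sweep rate
    u = amp * np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))
    return u + noise * rng.standard_normal(t.shape)

# Usage sketch: excite the CFD (or other full-order) simulation with the
# chirp+noise input to collect snapshots, fit (A, B), and then advance the
# surrogate environment as x = A @ x + B @ u inside the DRL training loop.
```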