IEEE Access (Jan 2024)
An Adaptive Sampling Method Based on Expected Improvement Function and Residual Gradient in PINNs
Abstract
In recent years, physics-informed neural networks (PINNs) have advanced significantly as a deep learning technique. In analogy to the selection of grid cells in traditional numerical methods, the distribution of sample points used for PINN training strongly affects solution accuracy. Building on the Residual-based Adaptive Refinement (RAR) algorithm, many improved adaptive sampling algorithms have been proposed. However, these algorithms rely on residuals alone as the error indicator and focus only on sample points in the interior of the solution domain. We therefore introduce a novel adaptive sampling algorithm, EI-RAR. This algorithm incorporates a new expected improvement (EI) function, which increases attention to sample points at the boundaries of the solution domain. Additionally, EI-RAR integrates an attention mechanism with a sample-point generation algorithm, aimed at reinforcing the connection between newly added and existing sample points. To increase accuracy on problems with sharp solutions, we build upon EI-RAR by incorporating the gradient of the residual as an additional criterion for sample-point selection, yielding a second adaptive sampling algorithm, EI-Grad. We adopt a residual neural network and combine it with the adaptive sampling algorithms in a series of numerical experiments, aiming to mitigate vanishing gradients during training. The experiments cover the diffusion equation, Burgers' equation, the Allen-Cahn equation, and the Navier-Stokes equations. Numerical results indicate that, with the same number of residual points, EI-RAR is more accurate than other sampling methods, and that EI-Grad can effectively solve partial differential equations with sharp solutions.
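To make the sampling strategies in the abstract concrete, the sketch below illustrates a plain RAR-style selection step (keep the candidates with the largest absolute residual) and a hypothetical residual-gradient-weighted variant in the spirit of EI-Grad. The function names, the finite-difference gradient estimate, and the score `|residual| * ||∇residual||` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def rar_select(residual_fn, n_candidates, n_new, bounds, rng):
    """RAR-style step: draw random candidate points and keep the
    n_new candidates with the largest |PDE residual|."""
    lo, hi = bounds
    cands = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    res = np.abs(residual_fn(cands))
    idx = np.argsort(res)[-n_new:]  # indices of the top-n_new residuals
    return cands[idx]

def grad_weighted_select(residual_fn, n_candidates, n_new, bounds, rng, eps=1e-4):
    """Hypothetical EI-Grad-style step (an assumption, not the paper's
    exact criterion): score each candidate by |residual| times a
    finite-difference estimate of the residual's gradient magnitude,
    so points near sharp features of the solution score higher."""
    lo, hi = bounds
    dim = len(lo)
    cands = rng.uniform(lo, hi, size=(n_candidates, dim))
    res = np.abs(residual_fn(cands))
    grad_sq = np.zeros(n_candidates)
    for d in range(dim):  # forward differences, one coordinate at a time
        shifted = cands.copy()
        shifted[:, d] += eps
        grad_sq += ((residual_fn(shifted) - residual_fn(cands)) / eps) ** 2
    score = res * np.sqrt(grad_sq)
    idx = np.argsort(score)[-n_new:]
    return cands[idx]
```

For example, with a toy residual `r(x) = x^2` on `[-1, 1]`, `rar_select` concentrates the new points near the domain boundary, where the residual is largest.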
Keywords