IEEE Access (Jan 2024)
Video Variational Deep Atmospheric Turbulence Correction
Abstract
This paper presents a novel variational deep-learning approach to video atmospheric turbulence correction. We adapt the Nonlinear Activation Free Network (NAFNet) architecture to video restoration, introducing a new transformer-based channel attention mechanism that exploits long-range, high-level relations among frames, while short-range, low-level relations are handled by 3D convolutions. We further boost the model's performance by embedding it in a variational inference framework, conditioning the model on features extracted by a variational autoencoder (VAE). We enrich these features with information about the image formation process through a new loss function that predicts the parameters of the geometric distortion, the spatially variant blur, and the noise responsible for the video degradation. Experiments on synthetic and physically simulated video datasets demonstrate the effectiveness and reliability of the proposed method and validate its superiority over existing state-of-the-art approaches.
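The abstract's core architectural idea can be illustrated with a minimal PyTorch-style sketch: a block that combines a 3D convolution for short-range, low-level spatio-temporal features with channel-wise self-attention across frames for long-range temporal relations. This is one plausible reading of the description above, not the authors' implementation; all names and hyperparameters (TemporalChannelAttentionBlock, num_heads, the spatial pooling step) are illustrative assumptions.

```python
# Hedged sketch, not the paper's code: 3D convolution for local
# spatio-temporal features, plus transformer-style attention over
# per-frame channel descriptors for long-range relations among frames.
import torch
import torch.nn as nn

class TemporalChannelAttentionBlock(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local (short-range) spatio-temporal mixing via 3D convolution.
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Self-attention: each frame's channel descriptor attends to all
        # frames, capturing long-range temporal dependencies.
        self.attn = nn.MultiheadAttention(embed_dim=channels,
                                          num_heads=num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        x = self.conv3d(x)
        # Pool spatial dims to one descriptor per (frame, channel).
        desc = x.mean(dim=(3, 4)).transpose(1, 2)   # (batch, frames, channels)
        attn_out, _ = self.attn(desc, desc, desc)   # frames attend to frames
        gate = torch.sigmoid(self.norm(attn_out))   # per-frame channel gates
        # Re-weight each frame's channels with the attended gates.
        return x * gate.transpose(1, 2)[..., None, None]

# Usage: an 8-frame clip of 64-channel features at 32x32 resolution.
if __name__ == "__main__":
    block = TemporalChannelAttentionBlock(channels=64)
    clip = torch.randn(2, 64, 8, 32, 32)
    print(block(clip).shape)  # torch.Size([2, 64, 8, 32, 32])
```

Pooling the spatial dimensions before attention keeps the attention cost independent of frame resolution, so the cross-frame gating stays cheap even for long clips; whether the paper uses this exact pooling is an assumption here.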
Keywords