IEEE Access (Jan 2021)
Performance Analysis of 3D Video Transmission Over Deep-Learning-Based Multi-Coded N-ary Orbital Angular Momentum FSO System
Abstract
Orbital angular momentum shift keying (OAM-SK), which relies on the rapid switching of OAM modes, is a vital technique but is seriously impeded by the limitations of OAM demodulation, particularly when videos are transmitted over the system. Thus, in this paper, 3D chaotic interleaved multi-coded video frames (VFs) are conveyed over an N-OAM-SK free-space optical (FSO) communication system to enhance the reliability and efficiency of video communication. To tackle the shortcomings of the OAM-SK-FSO mechanism, two efficient deep learning (DL) techniques, namely a convolutional recurrent neural network (CRNN) and a 3D convolutional neural network (3DCNN), are used to decode OAM modes with a low bit error rate (BER). Moreover, a graphics processing unit (GPU) is used to accelerate the training process with low power consumption. The OAM-state datasets are generated under different scenarios using a trial-and-error method. The simulation results indicate that low-density parity-check (LDPC)-coded VFs achieve the highest peak signal-to-noise ratios (PSNRs) and the lowest BERs with the 16-OAM-SK model. The 3DCNN and CRNN techniques deliver nearly the same performance, although this performance deteriorates as the number of dataset classes increases. Moreover, the GPU accelerates training by almost 67.6% and 36.9% for the CRNN and 3DCNN techniques, respectively. The two DL techniques also achieve classification accuracies approximately 10–20% higher than those of traditional techniques.
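For concreteness, the sketch below shows one way a 3DCNN classifier for N-OAM-SK mode recognition could be structured in PyTorch. It is a minimal, hypothetical illustration only: the layer widths, the 8-frame clip length, the 64x64 image resolution, and the 16-class output are assumptions for demonstration and do not reproduce the exact architecture or hyperparameters reported in the paper.

```python
# Minimal sketch (not the authors' exact network) of a 3D-CNN classifier for
# N-OAM-SK mode recognition. Assumed input: a stack of grayscale OAM
# intensity-pattern frames with shape (batch, 1, frames, height, width).
import torch
import torch.nn as nn

class OAM3DCNN(nn.Module):
    def __init__(self, num_classes: int = 16):  # 16 classes assumed for 16-OAM-SK
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),   # spatio-temporal features
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),          # downsample spatially only
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),                  # downsample time and space
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),                      # global pooling -> (B, 32, 1, 1, 1)
            nn.Flatten(),
            nn.Linear(32, num_classes),                   # one logit per OAM-SK state
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = OAM3DCNN(num_classes=16)
    device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU acceleration if available
    model = model.to(device)
    clips = torch.randn(4, 1, 8, 64, 64, device=device)      # 4 clips of 8 frames, 64x64 each
    logits = model(clips)
    print(logits.shape)  # torch.Size([4, 16])
```

Training such a classifier on the GPU (when `torch.cuda.is_available()` is true) is what would yield the kind of speed-up the abstract reports; a CRNN variant would instead combine 2D convolutions per frame with a recurrent layer across frames.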
Keywords