Atmosphere (Jul 2024)

Deep Learning for High-Speed Lightning Footage—A Semantic Segmentation Network Comparison

  • Tyson Cross,
  • Jason R. Smit,
  • Carina Schumann,
  • Tom A. Warner,
  • Hugh G. P. Hunt

DOI
https://doi.org/10.3390/atmos15080873
Journal volume & issue
Vol. 15, No. 8, p. 873

Abstract


We present a novel deep learning approach to a unique image processing application: high-speed (>1000 fps) video footage of lightning. High-speed cameras enable us to observe lightning with microsecond resolution, characterizing key processes that were previously analyzed manually. We evaluate several semantic segmentation networks (DeepLabv3+, SegNet, FCN8s, U-Net, and AlexNet) and give a detailed account of the image processing methods required for this unique imagery. Our system architecture comprises an input image processing stage, a segmentation network stage, and a sequence classification stage. The ground-truth data consist of high-speed videos of lightning filmed in South Africa, totaling 48,381 labeled frames. DeepLabv3+ performed best (93–95% accuracy), followed by SegNet (92–95%) and FCN8s (89–90%); AlexNet and U-Net achieved below 80% accuracy. Full-sequence classification accuracy was 48.1% and stroke classification accuracy was 74.1%, owing to the classifier's direct dependence on the quality of the segmentation output. We recommend using exposure metadata to reduce noise misclassifications and extending the CNNs to use tapped gates with temporal memory. This work introduces deep learning to lightning imagery and is among the first studies to apply deep learning to high-speed video footage.
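The abstract's three-stage architecture (input image processing, semantic segmentation, sequence classification) can be illustrated in code. The sketch below is a minimal Python/PyTorch approximation, not the authors' implementation: torchvision's DeepLabv3 stands in for their DeepLabv3+ model, and the class count, normalization constants, and pixel threshold are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch of the three-stage pipeline described in the abstract:
# (1) input image processing, (2) semantic segmentation, (3) sequence
# classification. The paper's code and trained weights are not public;
# everything below is an illustrative stand-in.
import torch
import torchvision.transforms.functional as TF
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # assumption: background vs. lightning channel

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES).eval()

def preprocess(frame: torch.Tensor) -> torch.Tensor:
    """Stage 1: resize and normalize one high-speed frame (C,H,W) in [0,1]."""
    frame = TF.resize(frame, [513, 513], antialias=True)
    return TF.normalize(frame, mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

@torch.no_grad()
def segment(frame: torch.Tensor) -> torch.Tensor:
    """Stage 2: per-pixel class labels for one frame."""
    logits = model(preprocess(frame).unsqueeze(0))["out"]  # (1,C,H,W)
    return logits.argmax(dim=1).squeeze(0)                 # (H,W) labels

def classify_sequence(frames: list[torch.Tensor],
                      min_pixels: int = 50) -> list[bool]:
    """Stage 3 (toy proxy): flag frames whose segmented lightning area
    exceeds a pixel threshold, approximating stroke detection."""
    return [(segment(f) == 1).sum().item() >= min_pixels for f in frames]

if __name__ == "__main__":
    video = [torch.rand(3, 480, 640) for _ in range(4)]  # dummy frames
    print(classify_sequence(video))
```

Note how the pipeline structure makes the abstract's point concrete: the stage-3 decision consumes only the stage-2 masks, so any segmentation error propagates directly into the sequence and stroke classification results.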

Keywords