Remote Sensing (Aug 2024)

Spatio-Temporal Pruning for Training Ultra-Low-Latency Spiking Neural Networks in Remote Sensing Scene Classification

  • Jiahao Li,
  • Ming Xu,
  • He Chen,
  • Wenchao Liu,
  • Liang Chen,
  • Yizhuang Xie

DOI
https://doi.org/10.3390/rs16173200
Journal volume & issue
Vol. 16, no. 17
p. 3200

Abstract


In remote sensing scene classification (RSSC), the power, performance, and resource constraints of real-time processing necessitate the compression of neural networks. Unlike artificial neural networks (ANNs), spiking neural networks (SNNs) convey information through spikes, offering superior energy efficiency and biological plausibility. However, the high latency of SNNs restricts their practical application in RSSC, so there is an urgent need for ultra-low-latency SNNs. As latency decreases, SNN performance deteriorates significantly. To address this challenge, we propose a novel spatio-temporal pruning method that enhances the feature-capture capability of ultra-low-latency SNNs. Our approach integrates spatial fundamental structures during training, which are subsequently pruned. We comprehensively evaluate the impact of these structures on classic network architectures such as VGG and ResNet, demonstrating the generalizability of our method. Furthermore, we develop an ultra-low-latency training framework for SNNs to validate the effectiveness of our approach. In this paper, we achieve, for the first time in RSSC, high-performance ultra-low-latency SNNs with a single time step. Remarkably, our single-time-step SNN achieves at least 200 times faster inference while maintaining performance comparable to that of other state-of-the-art methods.
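The following is a minimal, illustrative sketch (not the authors' code) of two ideas named in the abstract: a single-time-step spiking layer trained with a surrogate gradient, and an auxiliary spatial structure (here assumed to be a parallel 1x1 convolution branch) that is used during training and then folded away, i.e. pruned, for inference. All class and function names below are hypothetical.

```python
# Hypothetical sketch of single-time-step spiking training with a prunable
# spatial branch; not the method described in the paper, only an analogy.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in the backward pass."""

    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Pass gradients only near the firing threshold (|u| < 0.5).
        return grad_output * (membrane.abs() < 0.5).float()


class RepSpikingConv(nn.Module):
    """3x3 spiking conv with a parallel 1x1 branch that is merged (pruned) after training."""

    def __init__(self, in_ch, out_ch, threshold=1.0):
        super().__init__()
        self.conv3 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.conv1 = nn.Conv2d(in_ch, out_ch, 1)  # extra spatial structure, training only
        self.threshold = threshold
        self.merged = False

    def forward(self, x):
        current = self.conv3(x) if self.merged else self.conv3(x) + self.conv1(x)
        # Single time step (T = 1): the membrane potential is just the input current.
        return SurrogateSpike.apply(current - self.threshold)

    @torch.no_grad()
    def prune_branch(self):
        """Fold the 1x1 kernel into the centre of the 3x3 kernel so inference uses one conv."""
        self.conv3.weight += F.pad(self.conv1.weight, [1, 1, 1, 1])
        self.conv3.bias += self.conv1.bias
        self.merged = True


if __name__ == "__main__":
    layer = RepSpikingConv(3, 16)
    x = torch.randn(2, 3, 32, 32)
    before = layer(x)
    layer.prune_branch()
    after = layer(x)
    # Spike outputs agree (up to floating-point effects) after the branch is folded away.
    print(torch.allclose(before, after))
```

Because convolution is linear, the folded 3x3 kernel reproduces the two-branch output exactly, so the extra structure adds capacity during training but costs nothing at inference; this is the general pattern behind structural re-parameterization, offered here only as an assumed analogy for the spatial structures the abstract mentions.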

Keywords