Advanced Intelligent Systems (Sep 2024)
BrainQN: Enhancing the Robustness of Deep Reinforcement Learning with Spiking Neural Networks
Abstract
As the third generation of neural networks succeeding artificial neural networks (ANNs), spiking neural networks (SNNs) offer high robustness and low energy consumption. Inspired by biological systems, we address the limitations of deep reinforcement learning (DRL), namely low robustness and high power consumption, by introducing SNNs. We propose the Brain Q-network (BrainQN), which replaces the artificial neurons in the classic deep Q-network (DQN) algorithm with spiking neurons. BrainQN is trained using both surrogate gradient learning (SGL) and ANN-to-SNN conversion. Robustness tests with input noise reveal BrainQN's superior performance, with rewards 82.14% higher than DQN under low noise and 71.74% higher under high noise. These findings highlight BrainQN's robustness and superior performance in noisy environments, supporting its application in complex scenarios. SGL-trained BrainQN is more robust than its ANN-to-SNN-converted counterpart under high noise; differences in the correlation between network outputs on noisy versus original inputs, together with distinctions between the training algorithms, explain this phenomenon. BrainQN successfully transitioned from a simulated Pong environment to a ball-catching robot equipped with dynamic vision sensors (DVS). Deployed on the neuromorphic chip PAICORE, it shows significant advantages in latency and power consumption compared to a Jetson Xavier NX.
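The core idea summarized above, replacing DQN's artificial neurons with spiking neurons and training through a surrogate gradient, can be illustrated with a minimal sketch. This is not the paper's implementation: the leaky integrate-and-fire (LIF) dynamics, threshold, and fast-sigmoid surrogate below are generic textbook choices, and all names (`lif_step`, `surrogate_grad`, `tau`, `alpha`) are illustrative.

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One timestep of a leaky integrate-and-fire (LIF) neuron.

    The membrane potential v leaks toward the input x with time
    constant tau, emits a binary spike when it crosses the threshold
    v_th, and is hard-reset to zero after spiking.
    """
    v = v + (x - v) / tau
    spike = (v >= v_th).astype(float)
    v = v * (1.0 - spike)  # hard reset where a spike occurred
    return v, spike

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Surrogate derivative of the spike (Heaviside) nonlinearity.

    The true derivative is zero almost everywhere, so surrogate
    gradient learning substitutes a smooth approximation (here a
    fast sigmoid) during backpropagation.
    """
    return alpha / (2.0 * (1.0 + alpha * np.abs(v - v_th)) ** 2)

# Drive one neuron with a constant suprathreshold input: the
# potential charges, crosses threshold, spikes, and resets.
v = np.zeros(1)
spikes = []
for _ in range(10):
    v, s = lif_step(v, x=np.array([1.5]))
    spikes.append(int(s.item()))
```

In a BrainQN-style network, such spiking units would replace the ReLU activations of a DQN, with `surrogate_grad` standing in for the spike nonlinearity's derivative in the backward pass.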
Keywords