Scientific Reports (Nov 2023)

Deep reinforcement learning with significant multiplications inference

  • Dmitry A. Ivanov,
  • Denis A. Larionov,
  • Mikhail V. Kiselev,
  • Dmitry V. Dylov

DOI
https://doi.org/10.1038/s41598-023-47245-y
Journal volume & issue
Vol. 13, no. 1
pp. 1 – 10

Abstract

We propose a sparse computation method for optimizing the inference of neural networks in reinforcement learning (RL) tasks. Motivated by the processing abilities of the brain, this method combines simple neural network pruning with a delta-network algorithm to exploit correlations in the input data. The former mimics neuroplasticity by eliminating inefficient connections; the latter makes it possible to update neuron states only when their changes exceed a certain threshold. This combination significantly reduces the number of multiplications during neural network inference, enabling fast neuromorphic computing. We tested the approach on popular deep RL tasks, yielding up to a 100-fold reduction in the number of required multiplications without substantial performance loss (sometimes, the performance even improved).
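The combination described in the abstract can be illustrated with a minimal sketch (not the authors' implementation): a weight matrix is first pruned by magnitude, and at inference time each layer accumulates its pre-activation state incrementally, multiplying only by input deltas that exceed a threshold. All names and the threshold value below are illustrative assumptions.

```python
import numpy as np

def prune_by_magnitude(W, keep_fraction=0.2):
    """Zero out the smallest weights, keeping only the largest |W| entries.
    This is a generic magnitude-pruning step, assumed here for illustration."""
    k = int(W.size * keep_fraction)
    cutoff = np.sort(np.abs(W), axis=None)[-k]  # smallest surviving magnitude
    return np.where(np.abs(W) >= cutoff, W, 0.0)

def delta_step(W, x, x_ref, state, threshold=0.05):
    """One delta-network layer update.

    Instead of computing W @ x from scratch, accumulate the running
    pre-activation `state` using only input components whose change
    relative to the last transmitted value `x_ref` exceeds `threshold`.
    Returns the updated state, the updated reference input, and the
    number of multiplications actually performed."""
    delta = x - x_ref
    active = np.abs(delta) > threshold          # inputs worth updating
    # Multiply only the active columns of the (already pruned) weight matrix.
    cols = W[:, active]
    state = state + cols @ delta[active]
    x_ref = x_ref.copy()
    x_ref[active] = x[active]                   # remember transmitted values
    mults = int(np.count_nonzero(cols))         # skipped zeros cost nothing
    return state, x_ref, mults
```

With `threshold=0` and an unpruned matrix, the accumulated state reproduces the dense product `W @ x` exactly; raising the threshold and pruning trades a small approximation error for far fewer multiplications, which is the trade-off the paper quantifies on RL benchmarks.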