Neuromorphic Computing and Engineering (Jan 2025)

Distributed representations enable robust multi-timescale symbolic computation in neuromorphic hardware

  • Madison Cotteret,
  • Hugh Greatorex,
  • Alpha Renner,
  • Junren Chen,
  • Emre Neftci,
  • Huaqiang Wu,
  • Giacomo Indiveri,
  • Martin Ziegler,
  • Elisabetta Chicca

DOI: https://doi.org/10.1088/2634-4386/ada851
Journal volume & issue: Vol. 5, No. 1, p. 014008

Abstract

Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme that embeds robust multi-timescale dynamics into attractor-based RSNNs by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, each formed by the vector binding of an input with heteroassociative outer products between states. Our approach is validated in simulations with highly nonideal weights, on an experimental closed-loop memristive hardware setup, and on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embed robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable, representation-invariant language for cognitive algorithms in neuromorphic hardware.
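
For concreteness, the sketch below illustrates the kind of construction the abstract describes: random bipolar hypervectors stand in for the distributed state and input representations, and the recurrent weight matrix is the superposition of a symmetric autoassociative term and asymmetric, input-bound transition terms. The update rule, the toy three-state machine, the dimensionality, and the gain values are all illustrative assumptions; in particular, the simple sign-nonlinearity dynamics are a stand-in for, not a reproduction of, the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 2048  # hypervector dimensionality (illustrative choice)

def hv():
    """Random bipolar {-1, +1} hypervector."""
    return rng.choice((-1, 1), size=D)

# Hypothetical three-state machine: A --go--> B --go--> C --reset--> A.
states = {s: hv() for s in "ABC"}
inputs = {k: hv() for k in ("go", "reset")}
transitions = [("A", "go", "B"), ("B", "go", "C"), ("C", "reset", "A")]

# Symmetric autoassociative term: every state hypervector becomes a
# fixed-point attractor of the recurrent dynamics.
W_auto = sum(np.outer(v, v) for v in states.values()) / D

# Asymmetric transition terms: heteroassociative outer products from source
# to target state, with the input hypervector bound (elementwise product)
# into the target side. The exact placement of the binding is an assumption.
W_trans = sum(
    np.outer(inputs[sym] * states[dst], states[src])
    for src, sym, dst in transitions
) / D

lam = 0.5  # transition terms kept sub-threshold so stored states remain stable

def settle(x, steps=3):
    """Attractor dynamics with no input: the current state is a fixed point."""
    for _ in range(steps):
        x = np.sign(W_auto @ x + lam * (W_trans @ x))
    return x

def present(x, sym, gain=2.0):
    """Present an input symbol: binding with its hypervector 'unbinds' the
    matching transition term and pushes the activity toward the next state.
    Gating only the transition pathway is a simplifying assumption."""
    a = inputs[sym]
    x = np.sign(W_auto @ x + gain * a * (W_trans @ x))
    return settle(x)

def nearest(x):
    """Read out the symbolic state by similarity to the stored hypervectors."""
    return max(states, key=lambda s: states[s] @ x)

x = states["A"]
for sym in ("go", "go", "reset"):
    x = present(x, sym)
    print(sym, "->", nearest(x))  # expected: B, C, A
```

In this toy version the transition terms are scaled down (lam) so that each state stays a stable attractor in the absence of input, while presenting an input symbol unbinds the corresponding transition and drives the network into the next state's basin of attraction.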

Keywords