Frontiers in Neuroscience (Feb 2022)

Neuromorphic Engineering Needs Closed-Loop Benchmarks

  • Moritz B. Milde,
  • Saeed Afshar,
  • Ying Xu,
  • Alexandre Marcireau,
  • Damien Joubert,
  • Bharath Ramesh,
  • Yeshwanth Bethi,
  • Nicholas O. Ralph,
  • Sami El Arja,
  • Nik Dennler,
  • André van Schaik,
  • Gregory Cohen

DOI
https://doi.org/10.3389/fnins.2022.813555
Journal volume & issue
Vol. 16

Abstract

Neuromorphic engineering aims to build (autonomous) systems by mimicking biological systems. It is motivated by the observation that biological organisms—from algae to primates—excel at sensing their environment and reacting promptly to its perils and opportunities. Furthermore, they do so more resiliently than our most advanced machines, at a fraction of the power consumption. It follows that the performance of neuromorphic systems should be evaluated in terms of real-time operation, power consumption, and resiliency to real-world perturbations and noise, using task-relevant evaluation metrics. Yet, following in the footsteps of conventional machine learning, most neuromorphic benchmarks rely on recorded datasets that foster sensing accuracy as the primary measure of performance. Sensing accuracy is but an arbitrary proxy for the system's actual goal—making a good decision in a timely manner. Moreover, static datasets hinder our ability to study and compare closed-loop sensing and control strategies that are central to survival for biological organisms. This article makes the case for a renewed focus on closed-loop benchmarks involving real-world tasks. Such benchmarks will be crucial in developing and progressing neuromorphic intelligence. The shift toward dynamic real-world benchmarking tasks should usher in richer, more resilient, and robust artificially intelligent systems in the future.

Keywords