Frontiers in Neuroscience (Feb 2015)

Spatiotemporal Features for Asynchronous Event-based Data

  • Xavier Lagorce,
  • Sio-Hoi Ieng,
  • Xavier Clady,
  • Michael Pfeiffer,
  • Ryad Benjamin Benosman

DOI
https://doi.org/10.3389/fnins.2015.00046
Journal volume & issue
Vol. 9

Abstract

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but comparable definitions of features for the novel dynamic, event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced, based on a set of predictive recurrent reservoir networks competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
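To make the competitive-prediction idea in the abstract concrete, the sketch below shows a minimal bank of leaky echo state networks whose linear readouts each try to predict the next input vector, with a winner-take-all rule granting the best predictor the right to learn. This is an illustrative toy, not the authors' implementation: the class and method names (`Reservoir`, `learn`, `step`), the LMS readout update, and the synthetic input stream standing in for time-binned event data are all assumptions; the paper's actual reservoir dynamics, event encoding, and learning rules are specified in the full article.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """Leaky echo state network with a linear readout trained for one-step prediction."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, leak=0.3):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # rescale the recurrent weights so the largest eigenvalue magnitude
        # equals the desired spectral radius (standard ESN initialization)
        self.W = W * (spectral_radius / max(abs(np.linalg.eigvals(W))))
        self.W_out = np.zeros((n_in, n_res))  # readout: state -> predicted next input
        self.x = np.zeros(n_res)
        self.leak = leak

    def predict(self):
        return self.W_out @ self.x

    def learn(self, target, lr=0.05):
        # online LMS update of the readout toward the observed input
        err = target - self.predict()
        self.W_out += lr * np.outer(err, self.x)

    def step(self, u):
        # leaky-integrator state update driven by the current input
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(self.W_in @ u + self.W @ self.x)

# A bank of competing reservoirs: each tries to predict the input stream,
# and the one with the smallest prediction error wins the right to learn.
n_in, n_feat = 16, 4
bank = [Reservoir(n_in) for _ in range(n_feat)]

# Synthetic stand-in for time-binned event data: two alternating spatial patterns.
patterns = [np.sin(k * np.linspace(0, np.pi, n_in)) for k in (1, 3)]
stream = [patterns[(t // 50) % 2] + 0.05 * rng.standard_normal(n_in) for t in range(500)]

for u in stream:
    errors = [np.linalg.norm(u - r.predict()) for r in bank]
    winner = int(np.argmin(errors))      # winner-take-all selection
    bank[winner].learn(u)                # only the winner updates its readout
    for r in bank:
        r.step(u)                        # every reservoir observes the input
```

Because only the best predictor adapts on each input, different reservoirs gradually specialize on different recurring input dynamics; in the architecture described in the article, this kind of competition is what yields distinct, task-specific spatiotemporal features.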

Keywords