PLoS Computational Biology (Jul 2019)

Reservoir computing model of prefrontal cortex creates novel combinations of previous navigation sequences from hippocampal place-cell replay with spatial reward propagation.

  • Nicolas Cazin,
  • Martin Llofriu Alonso,
  • Pablo Scleidorovich Chiodi,
  • Tatiana Pelc,
  • Bruce Harland,
  • Alfredo Weitzenfeld,
  • Jean-Marc Fellous,
  • Peter Ford Dominey

DOI: https://doi.org/10.1371/journal.pcbi.1006624
Journal volume & issue: Vol. 15, No. 7, p. e1006624

Abstract

As rats learn to search for multiple sources of food or water in a complex environment, they generate increasingly efficient trajectories between reward sites. Such spatial navigation capacity involves the awake replay of hippocampal place cells, which generates short sequences of spatially related place-cell activity that we call "snippets". These snippets occur primarily during sharp-wave ripples (SWRs). Here we focus on the role of such replay events as the animal learns a traveling salesperson task (TSP) across multiple trials. We hypothesize that snippet replay generates synthetic data that can substantially expand and restructure the available experience, making learning faster and closer to optimal. We developed a model of snippet generation that is modulated by reward, which is propagated along experienced trajectories in the forward and reverse directions; this implements a form of spatial credit assignment for reinforcement learning. We use a biologically motivated computational framework known as 'reservoir computing' to model the prefrontal cortex (PFC) in sequence learning, in which large pools of prewired neural elements process information dynamically through reverberations. This PFC model consolidates snippets into larger spatial sequences that may later be recalled from subsets of the original sequences. Our simulation experiments provide neurophysiological explanations for two pertinent observations related to navigation: reward modulation allows the system to reject non-optimal segments of experienced trajectories, and reverse replay allows the system to "learn" trajectories that it has not physically experienced. Both contribute significantly to the TSP behavior.
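To make the two mechanisms in the abstract concrete, the sketch below illustrates one plausible reading of reward-modulated snippet generation: reward is diffused backward along a recorded trajectory, and short snippets are sampled in proportion to that propagated value, optionally reversed to mimic reverse replay. This is a minimal illustration, not the authors' code; the function names, the snippet length of 5, and the discount factor gamma=0.9 are assumptions.

```python
import numpy as np

def propagate_reward(rewards, gamma=0.9):
    """Diffuse reward backward along a trajectory with exponential decay,
    so positions leading toward a reward site gain replay weight
    (a simple stand-in for spatial reward propagation; gamma is assumed)."""
    value = np.zeros(len(rewards))
    running = 0.0
    for i in range(len(rewards) - 1, -1, -1):
        running = rewards[i] + gamma * running
        value[i] = running
    return value

def sample_snippet(trajectory, value, length=5, p_reverse=0.5, rng=None):
    """Draw one short contiguous subsequence ('snippet') of place-cell
    states, biased toward rewarded segments; with probability p_reverse
    return it reversed, mimicking reverse replay during SWRs."""
    rng = rng or np.random.default_rng()
    n_starts = len(trajectory) - length + 1
    w = value[:n_starts] + 1e-8          # avoid an all-zero weight vector
    start = rng.choice(n_starts, p=w / w.sum())
    snippet = trajectory[start:start + length]
    return snippet[::-1] if rng.random() < p_reverse else snippet
```

For the PFC model itself, a standard echo-state-style reservoir captures the idea of a large prewired recurrent pool in which only a linear readout is trained. Again a hedged sketch: the reservoir size, spectral radius, and ridge penalty below are illustrative defaults, not parameters reported in the paper.

```python
class Reservoir:
    """Minimal echo-state network: a fixed random recurrent pool whose
    dynamics 'reverberate' the input; only the readout is trained."""
    def __init__(self, n_in, n_res=300, spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        # Rescale so the largest eigenvalue magnitude equals spectral_radius.
        self.W = W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))
        self.n_res = n_res

    def run(self, inputs):
        """Drive the reservoir with a snippet and collect its states."""
        x = np.zeros(self.n_res)
        states = []
        for u in inputs:
            x = np.tanh(self.W_in @ u + self.W @ x)
            states.append(x.copy())
        return np.asarray(states)

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression readout mapping reservoir states to the next
    place-cell activation, consolidating snippets into sequences."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)
```

Training the readout on many snippets sampled as above exposes it to reward-biased, bidirectional fragments of experience; recalling a consolidated sequence then amounts to cueing the reservoir with a subset of a learned trajectory and iterating the readout's next-step predictions.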