IEEE Transactions on Neural Systems and Rehabilitation Engineering (Jan 2022)

Associating Latent Representations With Cognitive Maps via Hyperspherical Space for Neural Population Spikes

  • Yicong Huang,
  • Zhu Liang Yu

DOI
https://doi.org/10.1109/TNSRE.2022.3212997
Journal volume & issue
Vol. 30
pp. 2886 – 2895

Abstract

Recently, representation learning has been applied to obtain more identifiable and interpretable latent representations of spike trains, which helps analyze neural population activity and understand neural mechanisms. Most existing deep generative models rely on carefully designed constraints to capture meaningful latent representations. For neural data involving navigation in cognitive space, and drawing on insights from studies of cognitive maps, we argue that good representations should reflect this directional nature. Owing to manifold mismatch, models that use a Euclidean latent space learn a distorted geometric structure that is difficult to interpret. In the present work, we capture the directional nature in a simpler yet more efficient way by introducing the hyperspherical neural latent variable model (SNLVM). SNLVM is an improved deep latent variable model that models neural activity and behavioral variables simultaneously with a hyperspherical latent space, bridging cognitive maps and latent variable models. We conduct experiments on modeling a static unidirectional task. The results show that while SNLVM achieves competitive performance, its hyperspherical prior naturally provides more informative and significantly better latent structures that can be interpreted as spatial cognitive maps.
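The core geometric idea behind a hyperspherical latent space, constraining latent vectors to lie on the unit hypersphere rather than in unconstrained Euclidean space, can be sketched minimally as follows. This is an illustrative assumption about the general technique, not the authors' implementation; the function name is hypothetical:

```python
import numpy as np

def to_hypersphere(z: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Project raw latent vectors (rows) onto the unit hypersphere
    S^{d-1} by L2 normalization, so only direction is retained."""
    norms = np.linalg.norm(z, axis=-1, keepdims=True)
    return z / np.maximum(norms, eps)

# Toy example: raw 3-D latents mapped onto S^2.
rng = np.random.default_rng(0)
z_raw = rng.normal(size=(5, 3))
z_sph = to_hypersphere(z_raw)
print(np.allclose(np.linalg.norm(z_sph, axis=-1), 1.0))  # True
```

In practice, hyperspherical latent variable models typically pair such a constraint with a directional prior (e.g., a von Mises-Fisher distribution) so that the learned geometry matches the directional structure of the data.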

Keywords