Frontiers in Systems Neuroscience (Mar 2014)
A dynamical systems approach to characterizing the contribution of neurogenesis to neural coding
Abstract
In the mammalian brain, new neurons are born throughout adult life in two specific regions: the dentate gyrus (Eriksson et al., 1998) and the olfactory bulb (Lazarini and Lledo, 2011). Neurogenesis has been shown to play an important role in a number of memory tasks and learning behaviors (Aimone et al., 2011; Deng et al., 2010; Ming and Song, 2011; Sahay et al., 2011). In the olfactory bulb, impaired adult neurogenesis can also lead to a number of deficits in odor-guided behaviors (Lazarini and Lledo, 2011). Importantly, from a clinical standpoint, altered neurogenesis has been implicated in a number of cognitive disorders including early-onset Alzheimer’s disease (Mu and Gage, 2011), in the regulation of emotion, and in mediating some of the behavioral effects of antidepressants (Sahay et al., 2007; Sahay and Hen, 2007). However, despite the clinical importance and the fundamental biological questions that neurogenesis embodies, the specific mechanisms by which adult-born neurons contribute to memory and cognitive function remain a matter of intense debate (Aimone et al., 2011; Lazarini and Lledo, 2011; Ming and Song, 2011; Sahay et al., 2011). In fact, a recent study pointed out that young neurons may not have a pre-determined function, instead acquiring distinct responses depending on prior sensory experience and its behavioral context (Livneh et al., 2014). Here we use computational analyses to demonstrate how a relatively small number of newly added neurons can place a network in a regime where its ability to reproduce desired output signals, for example as part of pattern completion, is substantially enhanced. Specifically, we consider a recurrent firing-rate network model with balanced excitation and inhibition and study how the addition of neurons changes its computational capacity.
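The class of model described here can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: the network size, integration step, fraction of young neurons, and the scheme of giving "young" neurons larger outgoing-weight variance are illustrative assumptions based on the description in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 200                   # network size (illustrative)
f_young = 0.02            # assumed fraction of young neurons
g_mature = 1.5            # outgoing-weight scale of mature neurons
g_young = 4.0 * g_mature  # young neurons assumed hyper-excitable (ratio ~4)

n_young = max(1, int(f_young * N))
# Outgoing-weight std depends on the presynaptic neuron's type;
# young neurons occupy the first n_young indices.
sigma = np.full(N, g_mature)
sigma[:n_young] = g_young
J = rng.standard_normal((N, N)) * sigma[None, :] / np.sqrt(N)

# Euler integration of the standard rate equation dx/dt = -x + J*tanh(x),
# zero-mean Gaussian weights playing the role of balanced excitation/inhibition
dt, T = 0.1, 500
x = 0.1 * rng.standard_normal(N)
traj = np.empty((T, N))
for t in range(T):
    x = x + dt * (-x + J @ np.tanh(x))
    traj[t] = x
```

In a training experiment along the lines of Sussillo and Abbott (2009), a readout of `traj` would be fed back into the network and its weights adjusted online to reproduce the target pattern.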
The simulation results (Figure 1) yielded estimates of the optimal number of young neurons and of their hyper-excitability relative to mature neurons that agreed with experimental measurements (Cameron and McKay, 2001; Deng et al., 2010; Tashiro et al., 2007), with no adjustable parameters. It is also important to note that the optimal regime for encoding input signals is often poised near an instability associated with chaotic dynamics (Aljadeff et al., 2013; Sompolinsky et al., 1988). This observation could explain the frequent occurrence of seizures at the early stages of Alzheimer’s disease (Palop and Mucke, 2010a; Palop and Mucke, 2010b). To that end, we analytically derive conditions for observing chaotic dynamics in networks with an arbitrary number of neuron types. The analytical results accurately mirrored simulations in predicting the composition of the network (fraction of young neurons and the difference in their excitability and number of synapses) at which the network undergoes the transition from stable to chaotic dynamics (Figure 2). Overall, these results demonstrate how a small fraction of neurons can increase the representational capacity of the neural circuit as a whole in a distributed way, and they provide a quantitative framework for characterizing more heterogeneous networks composed of multiple types of neurons. Figure 1. The representational capacity of a heterogeneous network. Results are shown as a function of the fraction of young neurons (y-axis) and the ratio of their hyper-excitability relative to mature neurons (x-axis). The synaptic weights between neurons are initially set to random values drawn from a Gaussian distribution; for young neurons we used a distribution with larger variance than for mature neurons. The networks were tasked with encoding a desired input pattern; the connection weights were adjusted using the algorithm from Sussillo and Abbott (2009).
The average representation error divided by the average activity of the network defines the “learning capacity index” (color). Black lines are contours of equal magnitude. The learning capacity index reaches a maximum when young neurons with a hyper-excitability ratio of ~4 comprise ~2% of the population. This agrees with experimental estimates (Cameron and McKay, 2001; Deng et al., 2010; Spalding et al., 2013; Tashiro et al., 2007) without any adjustable parameters in the model. Figure 2. Computational analysis of distributed coding in heterogeneous networks. Networks can be efficiently trained to reproduce target input patterns only in the regime where, prior to training, they exhibit chaotic dynamics (Sussillo and Abbott, 2009). For a model network based on a single type of rate neuron (Sompolinsky et al., 1988), the transition to chaotic dynamics occurs when at least some modes in the network respond to perturbations with exponents (eigenvalues) whose real parts exceed 1 (purple line). Imaginary parts indicate oscillatory dynamics along the respective modes and are not relevant indicators of chaotic dynamics. Our analytic estimate for the limits of the exponents (blue circle) matches the numerical simulation (small open circles; each circle is a separate mode). In contrast, predictions based on average synaptic weights (red circle) are not accurate. This example network is in the chaotic regime prior to training, because some modes have exponents with real parts > 1. The corresponding neural responses over time for different types of neurons (group 1 and group 2) are shown on the right.
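The eigenvalue criterion above can be checked numerically. The sketch below, with illustrative parameters rather than the paper's, uses the standard random-matrix result that when each neuron's outgoing weights have a type-dependent variance, the eigenvalues of the connectivity matrix fill a disk whose radius is the square root of the population-weighted mean variance (cf. Aljadeff et al., 2013); the network crosses into the chaotic regime when that radius exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 1000
f = 0.02          # fraction of young neurons (illustrative)
s_mature = 0.9    # synaptic-weight std, mature type
s_young = 3.6     # synaptic-weight std, hyper-excitable young type

n_young = int(f * N)
sigma = np.full(N, s_mature)
sigma[:n_young] = s_young           # column j ~ outgoing weights of neuron j
J = rng.standard_normal((N, N)) * sigma[None, :] / np.sqrt(N)

# Empirical spectrum vs. the analytic radius sqrt(f*s_y^2 + (1-f)*s_m^2)
eigs = np.linalg.eigvals(J)
radius_pred = np.sqrt(f * s_young**2 + (1 - f) * s_mature**2)
print(eigs.real.max(), radius_pred)  # radius_pred > 1: chaotic regime
```

Note that a naive estimate based on the average weight scale alone (here `s_mature`, giving a radius below 1) would miss the instability contributed by the small hyper-excitable population, which is the point made by the red circle in Figure 2.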
Keywords