Frontiers in Computational Neuroscience (Oct 2009)
Dimensional reduction for the inverse problem of neural field theory
Abstract
Inverse problems in computational neuroscience comprise the determination of synaptic weight matrices or kernels for neural networks or neural fields, respectively. Here, we reduce multi-dimensional inverse problems to inverse problems in lower dimensions, which can be solved more easily, or even explicitly, through kernel construction. In particular, we discuss a range of embedding techniques and analyze their properties. We study the Amari equation as a particular example of a neural field theory. We obtain a solution of the full 2D or 3D problem by embedding 0D or 1D kernels into the domain of the Amari equation using a suitable path parametrization and basis transformations. Pulses are interconnected at branching points via path gluing. As instructive examples, we construct logical gates, such as the persistent XOR, and binary addition in neural fields. In addition, we compare results of inversion by dimensional reduction with a recently proposed global inversion scheme for neural fields based on Tikhonov-Hebbian learning. The results show that stable construction of complex distributed processes is possible via neural field dynamics. This is an important first step toward studying the properties of such constructions and analyzing natural or artificial realizations of neural field architectures.
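For orientation, the following is a minimal sketch of the forward and inverse problem referred to above, assuming the standard form of the Amari equation; the symbols u, w, f, tau and the domain Omega are illustrative notation and are not taken from the abstract itself.

% Amari neural field equation (standard form, assumed notation):
%   u(x,t)  membrane potential field,  w(x,y)  synaptic kernel,
%   f       firing-rate nonlinearity,  tau     time constant.
\begin{equation}
  \tau \, \frac{\partial u(x,t)}{\partial t}
  = -\,u(x,t) + \int_{\Omega} w(x,y)\, f\bigl(u(y,t)\bigr)\, \mathrm{d}y ,
  \qquad x \in \Omega \subset \mathbb{R}^{d}.
\end{equation}
% Forward problem: given the kernel w, compute the field dynamics u.
% Inverse problem (kernel construction): given a prescribed trajectory
% u(x,t), determine a kernel w(x,y) satisfying
%   \int_{\Omega} w(x,y) f(u(y,t)) dy = \tau \partial_t u(x,t) + u(x,t),
% typically with Tikhonov regularization, since the problem is ill-posed.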
Keywords