Frontiers in Computational Neuroscience (Aug 2014)
Finding and recognising objects in natural scenes: complementary computations in the dorsal and ventral visual systems
Abstract
Searching for and recognising objects in complex natural scenes is implemented by multiple saccades until the target falls within the reduced receptive fields of inferior temporal cortex (IT) neurons. We analyse and model how the dorsal and ventral visual streams both contribute to this process. Saliency detection in the dorsal visual system, including area LIP, is modelled by graph-based visual saliency, and allows the eyes to fixate potential objects to within several degrees. Visual information at the fixated location, subtending approximately 9 degrees and corresponding to the receptive fields of IT neurons, is then passed through a four-layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances, allowing approximately 90% correct object recognition in the model for 4 objects shown in any view across a range of 135 degrees anywhere in a scene. The model was able to generalise correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognised in complex natural scenes.
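To make the learning rule mentioned above concrete, the following is a minimal sketch of a trace learning update of the general kind used in VisNet-style models: the postsynaptic term is a short-term memory trace of recent activity, so connections strengthen onto inputs that co-occur with neurons that were active over preceding transforms of the same object. The function name, parameter values (alpha, eta), and the weight normalisation step are illustrative assumptions, not the published VisNet implementation.

```python
import numpy as np

def trace_learning_step(w, x, y, y_trace, alpha=0.1, eta=0.8):
    """One illustrative trace-rule update (sketch, not the published code).

    w        : weight matrix, shape (n_post, n_pre)
    x        : presynaptic firing rates, shape (n_pre,)
    y        : current postsynaptic firing rates, shape (n_post,)
    y_trace  : postsynaptic trace from the previous time step, shape (n_post,)
    """
    # Short-term memory trace: mix current firing with the previous trace.
    y_trace = (1.0 - eta) * y + eta * y_trace
    # Hebb-like update gated by the trace rather than the instantaneous rate.
    w = w + alpha * np.outer(y_trace, x)
    # Keep each neuron's weight vector bounded (an assumed normalisation).
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return w, y_trace
```

Applied over successive views and translations of the same object, such a rule tends to bind those transforms onto the same output neurons, which is the basis of the invariance learning described in the abstract.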
Keywords