PLoS ONE (Jan 2010)

Attribute pair-based visual recognition and memory.

  • Masahiko Morita,
  • Shigemitsu Morokami,
  • Hiromi Morita

DOI
https://doi.org/10.1371/journal.pone.0009571
Journal volume & issue
Vol. 5, no. 3
p. e9571

Abstract


BACKGROUND: In the human visual system, different attributes of an object, such as shape, color, and motion, are processed separately in different areas of the brain. This raises the fundamental question of how these attributes are integrated to produce a unified perception and a specific response. This "binding problem" is computationally difficult because all attributes are assumed to be bound together to form a single object representation. However, there is no firm evidence that such representations exist for general objects.

METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a paired-attribute model in which cognitive processes are based on multiple representations of paired attributes. In line with the model's prediction, we found that multiattribute stimuli can produce an illusory perception of a multiattribute object arising from erroneous integration of attribute pairs, implying that object recognition is based on parallel perception of paired attributes. Moreover, in a change-detection task, a feature change in a single attribute frequently caused an illusory perception of change in another attribute, suggesting that multiple pairs of attributes are stored in memory.

CONCLUSIONS/SIGNIFICANCE: The paired-attribute model can account for some novel illusions and for controversial findings on binocular rivalry and short-term memory. Our results suggest that many cognitive processes are performed at the level of paired attributes rather than integrated objects, which greatly simplifies the binding problem and admits simpler solutions to it.
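
The abstract describes the paired-attribute model only conceptually, so the following Python sketch may help fix the idea; it is not the authors' implementation. It assumes a three-attribute object (shape, color, motion), stores the object as its attribute pairs, and shows how a feature change in one attribute at test can be misattributed to the other member of a stored pair, yielding an illusory change report of the kind described above. The attribute names, the judge_changes routine, and the misattribution parameter are all illustrative assumptions.

    # Toy sketch (illustrative assumptions, not the authors' simulation):
    # memory holds the attribute PAIRS of a studied object; at test, each
    # stored pair is compared with the test object. A change in one attribute
    # corrupts every pair that contains it, so the mismatch can be blamed on
    # the other member of the pair -- an illusory change in an unchanged attribute.

    from itertools import combinations
    import random

    def attribute_pairs(obj):
        """All unordered attribute pairs of a multiattribute object."""
        return [dict(p) for p in combinations(sorted(obj.items()), 2)]

    def judge_changes(studied, test, misattribution=0.5, rng=random):
        """Return the set of attributes reported as changed.

        For each stored pair that no longer matches the test object, the
        mismatch is attributed to one of the pair's two attributes; with
        probability `misattribution` (an assumed free parameter) it is
        blamed on the member that did not actually change.
        """
        reported = set()
        for pair in attribute_pairs(studied):
            if all(test[a] == v for a, v in pair.items()):
                continue  # this pair still matches: no change signal
            changed = [a for a, v in pair.items() if test[a] != v]
            intact = [a for a in pair if a not in changed]
            if intact and rng.random() < misattribution:
                reported.add(rng.choice(intact))   # illusory change report
            else:
                reported.add(rng.choice(changed))  # veridical change report
        return reported

    if __name__ == "__main__":
        studied = {"shape": "circle", "color": "red", "motion": "up"}
        test = dict(studied, color="green")  # only the color changes at test
        random.seed(1)
        tallies = {}
        for _ in range(1000):
            for attr in judge_changes(studied, test):
                tallies[attr] = tallies.get(attr, 0) + 1
        print(tallies)

Running the script tallies, over 1000 simulated trials, how often each attribute is reported as changed; under these assumptions "shape" and "motion" receive non-zero counts even though only "color" actually changed, mirroring the illusory change-in-another-attribute effect reported in the abstract.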