Scientific Reports (Sep 2024)

Attentional templates for target features versus locations

  • Mikel Jimenez,
  • Ziyi Wang,
  • Anna Grubert

DOI
https://doi.org/10.1038/s41598-024-73656-6
Journal volume & issue
Vol. 14, no. 1
pp. 1–10

Abstract

Visual search is guided by visual working memory representations (i.e., attentional templates) that are activated prior to search and contain target-defining features (e.g., color). In the present study, we tested whether attentional templates can also contain spatial target properties (knowing where to look) and whether attentional selection guided by such location-specific templates is as efficient as selection based on feature-specific templates (knowing what to look for). In every trial, search displays were preceded by either semantic color or location cues, indicating the upcoming target color or location, respectively. Qualitative differences between feature- and location-based template guidance were assessed in terms of selection efficiency in low-load trials (one target color/location) versus high-load trials (two target colors/locations). Behavioral and electrophysiological (N2pc) measures of target selection speed and accuracy were combined for converging evidence. In line with previous studies, we found that color search was highly efficient, even under high-load conditions, when multiple attentional templates were activated to guide attentional selection in a spatially global fashion. Importantly, results in the location task almost perfectly mirrored the findings of the color task, suggesting that multiple templates for different target locations were activated concurrently when two possible target locations were task relevant. Our findings align with accounts that assume a common neuronal network during preparation for location and color search, but regard spatial and feature-based selection mechanisms as independent.

Keywords