Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom; UCL Institute of Ophthalmology, University College London, London, United Kingdom
Emily Hueske
Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, United States; RIKEN-MIT Laboratory at the Picower Institute for Learning and Memory, Department of Biology and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; McGovern Institute for Brain Research, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, United States; Cold Spring Harbor Laboratory, Cold Spring Harbor, United States; Watson School of Biological Sciences, Cold Spring Harbor, United States
Torben Ott
Cold Spring Harbor Laboratory, Cold Spring Harbor, United States; Departments of Neuroscience and Psychiatry, Washington University School of Medicine, St. Louis, United States
Cold Spring Harbor Laboratory, Cold Spring Harbor, United States; Department of Neurophysiology, University Medical Center, Hamburg-Eppendorf, Hamburg, Germany
UCL Institute of Ophthalmology, University College London, London, United Kingdom
Susumu Tonegawa
RIKEN-MIT Laboratory at the Picower Institute for Learning and Memory, Department of Biology and Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Howard Hughes Medical Institute at Massachusetts Institute of Technology, Cambridge, United States
Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, United States
Adam Kepecs
Cold Spring Harbor Laboratory, Cold Spring Harbor, United States; Departments of Neuroscience and Psychiatry, Washington University School of Medicine, St. Louis, United States
Learning from successes and failures often improves the quality of subsequent decisions. Past outcomes, however, should not influence purely perceptual decisions after task acquisition is complete, since such decisions are designed so that only sensory evidence determines the correct choice. Yet numerous studies report that outcomes can bias perceptual decisions, causing spurious changes in choice behavior without improving accuracy. Here we show that the effects of reward on perceptual decisions are principled: past rewards bias future choices specifically when the previous choice was difficult and hence decision confidence was low. We identified this phenomenon in six datasets from four laboratories, across mice, rats, and humans, and across sensory modalities from olfaction and audition to vision. We show that this choice-updating strategy can be explained by reinforcement learning models that incorporate statistical decision confidence into their teaching signals. Thus, reinforcement learning mechanisms are continually engaged to produce systematic adjustments of choices, even in well-learned perceptual decisions, in order to optimize behavior in an uncertain world.
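The core computational idea, a teaching signal scaled by statistical decision confidence, can be illustrated with a minimal simulation. The sketch below is not the fitted model from this study: the two-Gaussian stimulus model, the logistic confidence expression, and all parameter values (alpha, mu, sigma) are illustrative assumptions, chosen only to show how a reward obtained after a difficult, low-confidence choice produces a large prediction error and therefore biases subsequent choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not fitted values from the paper).
alpha = 0.2      # learning rate for choice-value updates
mu = 0.5         # mean stimulus strength of each category (+mu = "right", -mu = "left")
sigma = 1.0      # standard deviation of sensory noise
n_trials = 5000

q = np.array([1.0, 1.0])   # learned values of choosing left / right

for t in range(n_trials):
    category = int(rng.integers(2))              # 0 = left, 1 = right
    s = mu if category == 1 else -mu
    percept = s + rng.normal(0.0, sigma)         # noisy internal estimate of the stimulus

    # Statistical decision confidence: posterior probability of "right" given the
    # percept, for two Gaussian stimulus categories N(+/-mu, sigma).
    p_right = 1.0 / (1.0 + np.exp(-2.0 * mu * percept / sigma**2))

    # Combine the sensory posterior with the learned action values and choose the
    # option with the higher expected value; differences in q bias the choice.
    ev_left, ev_right = (1.0 - p_right) * q[0], p_right * q[1]
    choice = int(ev_right > ev_left)
    conf = p_right if choice == 1 else 1.0 - p_right   # confidence in the chosen option

    r = float(choice == category)                # reward: 1 if correct, 0 otherwise

    # Confidence-scaled teaching signal: the predicted outcome is conf * q[choice],
    # so an outcome following a difficult, low-confidence choice yields a large
    # prediction error and a correspondingly large bias on subsequent choices.
    delta = r - conf * q[choice]
    q[choice] += alpha * delta
```

In this scheme the predicted outcome is the product of confidence and the learned choice value, so a reward after an easy, high-confidence choice is fully expected and barely moves the values, whereas the same reward after a difficult, low-confidence choice is partly unexpected and shifts subsequent choices toward the rewarded side.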