PLoS ONE (Jan 2023)
Explicit learning based on reward prediction error facilitates agile motor adaptations.
Abstract
Error-based motor learning can be driven by both sensory prediction error and reward prediction error. Learning based on sensory prediction error is termed sensorimotor adaptation, while learning based on reward prediction error is termed reward learning. To investigate the characteristics of, and differences between, sensorimotor adaptation and reward learning, we adapted a visuomotor paradigm in which subjects performed arm movements while receiving either sensory prediction error, signed end-point error, or binary reward feedback. Before each trial, perturbation indicators in the form of visual cues informed the subjects of the presence and direction of the perturbation. To analyse the interplay between sensorimotor adaptation and reward learning, we designed a computational model that distinguishes between the two prediction errors. Our results indicate that subjects adapted to novel perturbations irrespective of the type of prediction error they received during learning, and that they converged towards the same movement patterns. Sensorimotor adaptation led to pronounced aftereffects, while adaptation based on reward consequences produced smaller aftereffects, suggesting that reward learning does not alter the internal model to the same degree as sensorimotor adaptation. Although all subjects had learned to counteract two different perturbations separately, only those who relied on explicit learning driven by reward prediction error could adapt in a timely manner to a randomly changing perturbation. The computational model suggests that sensorimotor and reward learning operate through distinct adaptation processes, and that only sensorimotor adaptation changes the internal model, whereas reward learning employs explicit strategies that do not produce aftereffects. Additionally, we demonstrate that when humans learn motor tasks, they utilize both learning processes to adapt successfully to new environments.
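The abstract does not reproduce the model equations. The snippet below is only a minimal, hypothetical sketch of a standard dual-process, trial-by-trial formulation (an implicit state-space update driven by sensory prediction error plus an explicit strategy updated from binary reward prediction error), intended to illustrate how the two error signals can be separated computationally; the parameter names and values are assumptions, not the authors' fitted model.

```python
import numpy as np

# Illustrative sketch only: generic dual-process model in the spirit of the
# abstract. Parameters (A, B_spe, B_rpe, reward_tol) are assumed values,
# not taken from the paper.

def simulate(n_trials=200, rotation=30.0,
             A=0.98,        # retention of the implicit internal-model state
             B_spe=0.15,    # learning rate on sensory prediction error (implicit)
             B_rpe=3.0,     # step size of the explicit strategy on reward error
             reward_tol=5.0):
    x_implicit = 0.0   # internal-model compensation (produces aftereffects)
    x_explicit = 0.0   # explicit aiming strategy (no aftereffect once cues are removed)
    hand = np.zeros(n_trials)

    for t in range(n_trials):
        hand[t] = x_implicit + x_explicit           # total compensation on this trial
        cursor_error = rotation - hand[t]           # signed end-point error

        # Implicit process: updated from the sensory prediction error
        x_implicit = A * x_implicit + B_spe * cursor_error

        # Explicit process: updated from the binary reward prediction error
        rewarded = abs(cursor_error) < reward_tol
        if not rewarded:
            x_explicit += B_rpe * np.sign(cursor_error)

    return hand

if __name__ == "__main__":
    # Total compensation should approach the 30-degree rotation
    print(simulate()[-5:])
```

In this kind of formulation, withdrawing the perturbation cue zeroes the explicit component immediately, so only the implicit state carries over as an aftereffect, which is consistent with the pattern of results the abstract describes.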