Scientific Reports (Jun 2023)
Representational momentum of biological motion in full-body, point-light and single-dot displays
Abstract
Observing the actions of others triggers, in our brain, an internal and automatic simulation of their unfolding in time. Here, we investigated whether the instantaneous internal representation of an observed action is modulated by the point of view from which the action is observed and by the stimulus type. To this end, we motion-captured the elliptical arm movement of a human actor and used these trajectories to animate a photorealistic avatar, a point-light stimulus, or a single dot, rendered from either an egocentric or an allocentric point of view. Crucially, the underlying physical characteristics of the movement were identical in all conditions. In a representational momentum paradigm, we then asked subjects to report the perceived last position of the observed movement at the moment the stimulus was randomly stopped. In all conditions, subjects tended to misremember the last configuration of the stimulus as being further forward than the veridical last shown position. This misrepresentation was, however, significantly smaller for full-body stimuli than for point-light and single-dot displays, and it was not modulated by the point of view. It was also smaller when first-person full-body stimuli were compared with a stimulus consisting of a solid shape moving with the same physical motion. We interpret these findings as evidence that full-body stimuli elicit a simulation process that stays closer to the instantaneous veridical configuration of the observed movement, while impoverished displays (both point-light and single-dot) elicit a prediction that is further forward in time. This simulation process seems to be independent of the point of view from which the actions are observed.