NeuroImage (Dec 2024)

Direction and velocity kinematic features of point-light displays grasping actions are differentially coded within the action observation network

  • Settimio Ziccarelli
  • Antonino Errante
  • Leonardo Fogassi

Journal volume & issue
Vol. 303, article 120939

Abstract

The processing of kinematic information embedded in observed actions is essential for understanding others' behavior. Previous research has shown that the action observation network (AON) may encode some kinematic features of actions; however, how direction and velocity are encoded within the AON remains poorly understood. In this study, we employed event-related fMRI to investigate the neural substrates specifically activated during observation of hand grasping actions presented as point-light displays and performed with different directions (right, left) and velocities (fast, slow). Twenty-three healthy adult participants took part in the study. To identify brain regions differentially recruited by grasping direction and velocity, univariate analysis and multivariate pattern analysis (MVPA) were performed. The univariate results demonstrate that direction is encoded in occipito-temporal and posterior visual areas, while velocity recruits lateral occipito-temporal, superior parietal, and intraparietal areas. The MVPA results further show: a) significant decoding accuracy for both velocity and direction at the network level; b) that both direction and velocity can be decoded within lateral occipito-temporal and parietal areas; and c) a contribution of bilateral premotor areas to the velocity decoding models. These results indicate that posterior parietal nodes of the AON are mainly involved in coding grasping direction, that premotor regions are crucial for coding grasping velocity, and that lateral occipito-temporal cortices play a key role in encoding both parameters. The current findings could have implications for observation-based rehabilitation treatments of patients with motor disorders and for artificial-intelligence-based hand action recognition models.
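To illustrate the kind of MVPA decoding the abstract describes, the following is a minimal Python sketch of an ROI-based classification analysis, assuming trial-wise beta estimates and leave-one-run-out cross-validation with a linear support vector machine (a common fMRI decoding setup). The data layout, classifier choice (scikit-learn), and all variable names here are illustrative assumptions, not the authors' actual pipeline.

    # Minimal ROI-based MVPA decoding sketch on synthetic data (hypothetical layout).
    # X: trial-wise beta estimates for voxels in one ROI, shape (n_trials, n_voxels)
    # y: condition label per trial (e.g., 0 = "slow", 1 = "fast")
    # runs: run index per trial, used for leave-one-run-out cross-validation
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

    rng = np.random.default_rng(0)
    n_runs, trials_per_run, n_voxels = 6, 20, 150        # toy dimensions
    X = rng.normal(size=(n_runs * trials_per_run, n_voxels))
    y = rng.integers(0, 2, size=n_runs * trials_per_run)  # fast vs. slow labels
    runs = np.repeat(np.arange(n_runs), trials_per_run)

    # Linear SVM with per-fold standardization; folds never mix runs,
    # so decoding accuracy is estimated on held-out runs only.
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
    print(f"mean decoding accuracy: {scores.mean():.3f}")  # chance level = 0.5

Per-ROI accuracies of this kind, tested against chance level (0.5 for two conditions), are the type of evidence behind decoding claims such as b) and c) above.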

Keywords