Экспериментальная психология (Jan 2022)
Part-Whole Perception of Audio-Video Images of a Person's Multimodal Emotional States
Abstract
The patterns of perceiving parts and wholes of multimodal dynamic emotional states of people unfamiliar to the observers are studied. Audio-video clips of fourteen key emotional states expressed by specially trained actors were presented in random order to two groups of observers. In one group (N = 96, mean age 34, SD 9.4 years), each audio-video image was shown in full; in the other (N = 78, mean age 25, SD 9.6 years), it was divided into two parts of equal duration: from the beginning to a notional midpoint (a short phonetic pause) and from the midpoint to the end of the exposure. The stimulus material contained facial expressions, gestures, head and eye movements, and changes in the body position of the sitters, who voiced pseudolinguistic utterances accompanied by affective intonations. The accuracy of identification and the structure of the categorical fields were evaluated as a function of the modality and form (whole/part) of exposure of the affective states. After the exposure of each audio-video image, observers were required to choose from the presented list of emotions the one that best corresponded to what they had seen. According to the data obtained, the accuracy of identifying the emotions in the initial and final fragments of the audio-video images practically coincides but is significantly lower than with full exposure. Functional differences in the perception of fragmented audio-video images of the same emotional states are revealed. The modes of transition from the initial stage to the final one and the conditions affecting the relative speed of the perceptual process are described. The uneven formation of the information basis of multimodal expressions and the heterochronous perceptogenesis of the actors' emotional states are demonstrated.