Data in Brief (Dec 2023)
Dataset for the assessment of presence and performance in an augmented reality environment for motor imitation learning: A case-study on violinists
Abstract
This dataset comprises motion capture, audio, and questionnaire data from violinists who completed four augmented reality training sessions over the course of one month. Motion capture data were recorded with a 42-marker Qualisys Animation marker set at a sampling rate of 120 Hz. Audio data were captured with two condenser microphones at a bit depth of 24 bits and a sampling rate of 48 kHz. The dataset includes recordings from 2 violin orchestra section leaders and 11 participants.

Initially, we collected motion capture (MoCap) and audio data from the section leaders, who performed 2 distinct musical pieces. These recordings were then used to create 2 avatars, each representing a section leader performing their respective musical piece. Each avatar was subsequently assigned to a group of violinists, forming groups of 5 and 6 participants. Throughout the experiment, participants rehearsed one piece four times using a 2D representation of the avatar, and the other piece four times using a 3D representation.

During the practice sessions, participants were instructed to replicate the avatar's bowing technique as closely as possible, including gestures related to bowing, articulation, and dynamics. For each trial, we collected motion capture data, audio data, and self-reported questionnaires from all participants. The questionnaires included the Witmer presence questionnaire, a subset of the Makransky presence questionnaire, the sense of musical agency questionnaire, and open-ended questions inviting participants to describe their thoughts and experiences. Before the first session, participants also completed the Immersive Tendencies questionnaire and the Music Sophistication Index questionnaire, and provided demographic information.