Frontiers in Robotics and AI (May 2024)

A comparison of visual and auditory EEG interfaces for robot multi-stage task control

  • Kai Arulkumaran,
  • Marina Di Vincenzo,
  • Rousslan Fernand Julien Dossa,
  • Shogo Akiyama,
  • Dan Ogawa Lillrank,
  • Motoshige Sato,
  • Kenichi Tomeoka,
  • Shuntaro Sasai

DOI: https://doi.org/10.3389/frobt.2024.1329270
Journal volume & issue: Vol. 11

Abstract

Shared autonomy holds promise for assistive robotics, whereby physically impaired people can direct robots to perform various tasks for them. However, a robot that is capable of many tasks also presents the user with many choices, such as which object or location should be the target of interaction. In the context of non-invasive brain-computer interfaces for shared autonomy, most commonly based on electroencephalography, the two usual approaches are to provide either auditory or visual stimuli to the user, each with its own pros and cons. Using the oddball paradigm, we designed comparable auditory and visual interfaces to speak or display the choices to the user, and had users complete a multi-stage robotic manipulation task involving location and object selection. Users displayed differing competencies and preferences across the two interfaces, highlighting the importance of considering modalities beyond vision when constructing human-robot interfaces.
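
The abstract describes selecting among candidate locations and objects via the oddball paradigm, in which each option is spoken or flashed in turn and the attended (rare, task-relevant) option evokes a stronger event-related response. The sketch below illustrates that selection idea only in outline: it averages pre-extracted EEG epochs per option and picks the option with the largest mean amplitude in an assumed P300 window. The epoch shapes, window indices, and the simple mean-amplitude scorer are hypothetical placeholders for illustration, not the authors' pipeline or classifier.

    """Minimal sketch of oddball-style target selection from EEG epochs.

    Assumptions (not from the paper): epochs are already extracted and
    filtered, with shape (n_trials, n_channels, n_samples); the P300
    window and the mean-amplitude scorer are illustrative placeholders.
    """
    import numpy as np


    def score_options(epochs: np.ndarray, labels: np.ndarray, n_options: int,
                      p300_window: slice = slice(75, 150)) -> np.ndarray:
        """Return one score per option: mean amplitude in the assumed
        P300 window of the epochs that followed that option's stimulus."""
        scores = np.zeros(n_options)
        for opt in range(n_options):
            opt_epochs = epochs[labels == opt]        # trials where this option was spoken/flashed
            if len(opt_epochs) == 0:
                continue
            avg = opt_epochs.mean(axis=0)             # average over trials -> (channels, samples)
            scores[opt] = avg[:, p300_window].mean()  # mean amplitude in the assumed window
        return scores


    def select_option(epochs: np.ndarray, labels: np.ndarray, n_options: int) -> int:
        """Pick the option whose averaged post-stimulus response is largest."""
        return int(np.argmax(score_options(epochs, labels, n_options)))


    if __name__ == "__main__":
        # Synthetic demo: inject a P300-like deflection for the attended option (option 2).
        rng = np.random.default_rng(0)
        n_trials, n_channels, n_samples, n_options = 40, 8, 200, 4
        labels = rng.integers(0, n_options, n_trials)
        epochs = rng.normal(size=(n_trials, n_channels, n_samples))
        epochs[labels == 2, :, 75:150] += 1.0
        print("selected option:", select_option(epochs, labels, n_options))

In a real interface the same scoring step would run once per decision stage (e.g. first location, then object), with the chosen option handed to the shared-autonomy controller; the paper compares how well users drive such a pipeline under auditory versus visual stimulus presentation.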

Keywords