PLoS ONE (Jan 2010)

Real-time decision fusion for multimodal neural prosthetic devices.

  • James Robert White
  • Todd Levy
  • William Bishop
  • James D Beaty

DOI
https://doi.org/10.1371/journal.pone.0009493
Journal volume & issue
Vol. 5, no. 3
p. e9493

Abstract


BACKGROUND: The field of neural prosthetics aims to develop prosthetic limbs with a brain-computer interface (BCI) through which neural activity is decoded into movements. A natural extension of current research is the incorporation of neural activity from multiple modalities to more accurately estimate the user's intent. The challenge remains how to appropriately combine this information in real time for a neural prosthetic device.

METHODOLOGY/PRINCIPAL FINDINGS: Here we propose a framework based on decision fusion, i.e., fusing predictions from several single-modality decoders to produce a more accurate device state estimate. We examine two algorithms for continuous variable decision fusion: the Kalman filter and artificial neural networks (ANNs). Using simulated cortical neural spike signals, we implemented several successful individual neural decoding algorithms, and tested the capabilities of each fusion method in the context of decoding 2-dimensional endpoint trajectories of a neural prosthetic arm. Extensively testing these methods on random trajectories, we find that on average both the Kalman filter and ANNs successfully fuse the individual decoder estimates to produce more accurate predictions.

CONCLUSIONS: Our results reveal that a fusion-based approach has the potential to improve prediction accuracy over individual decoders of varying quality, and we hope that this work will encourage multimodal neural prosthetics experiments in the future.
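The abstract describes Kalman-filter decision fusion without giving an implementation. As a minimal sketch of the idea, the code below treats the 2-D endpoint as a hidden state with assumed constant-velocity dynamics and treats each single-modality decoder's position estimate as a noisy observation to be fused at every timestep. The function name kalman_fuse, the noise scales q and r, and all matrices are illustrative assumptions, not values or methods from the paper.

```python
# Hypothetical sketch of Kalman-filter decision fusion for a 2-D endpoint.
# State: [x, y, vx, vy] under constant-velocity dynamics (an assumption).
# Observation: K decoders each report a noisy (x, y) estimate per timestep.
import numpy as np

def kalman_fuse(decoder_estimates, dt=0.05, q=1e-3, r=1e-2):
    """Fuse per-timestep position estimates from K decoders.

    decoder_estimates: array of shape (T, K, 2) -- T timesteps,
    K decoders, 2-D positions. Returns fused trajectory (T, 2).
    """
    T, K, _ = decoder_estimates.shape
    A = np.array([[1, 0, dt, 0],           # constant-velocity transition
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
    Q = q * np.eye(4)                      # process noise (assumed scale)
    # Each decoder observes position only; stack K copies of [I2 | 0].
    H = np.tile(np.hstack([np.eye(2), np.zeros((2, 2))]), (K, 1))
    R = r * np.eye(2 * K)                  # observation noise (assumed scale)

    x = np.zeros(4)
    P = np.eye(4)
    fused = np.zeros((T, 2))
    for t in range(T):
        # Predict step.
        x = A @ x
        P = A @ P @ A.T + Q
        # Update step: stack all decoder outputs into one observation vector.
        z = decoder_estimates[t].reshape(-1)
        S = H @ P @ H.T + R
        Kg = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + Kg @ (z - H @ x)
        P = (np.eye(4) - Kg @ H) @ P
        fused[t] = x[:2]
    return fused

# Toy usage: three noisy decoders tracking a circular trajectory.
t = np.linspace(0, 2 * np.pi, 200)
truth = np.stack([np.cos(t), np.sin(t)], axis=1)
rng = np.random.default_rng(0)
decoders = truth[:, None, :] + 0.1 * rng.standard_normal((200, 3, 2))
print(kalman_fuse(decoders).shape)  # (200, 2)
```

The ANN alternative mentioned in the abstract would, in the same spirit, replace the filter with a small network mapping the stacked decoder outputs to a fused position estimate; the details of that architecture and training are not specified here.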