IEEE Access (Jan 2020)

CNN With Large Data Achieves True Zero-Training in Online P300 Brain-Computer Interface

  • Jongmin Lee,
  • Kyungho Won,
  • Moonyoung Kwon,
  • Sung Chan Jun,
  • Minkyu Ahn

DOI
https://doi.org/10.1109/ACCESS.2020.2988057
Journal volume & issue
Vol. 8
pp. 74385–74400

Abstract

Each person has his or her own distinct event-related potential (ERP) signals. Thus, traditional brain-computer interface (BCI) systems require a calibration process in which the subject's data are collected to train machine-learning classifiers. Despite past efforts to eliminate this process, often referred to as “zero-training,” BCI systems achieve their best performance only with some level of calibration. This tedious process is one of the factors that have limited the use of BCI systems in the real world. Meanwhile, convolutional neural networks (CNNs) have proven useful in distinguishing neurophysiological features. In this study, we investigated whether an existing CNN combined with large ERP samples (n = 99,000) can achieve zero-training in a P300 BCI speller system. The zero-trained CNN achieved comparable performance (89%, p < 0.05) relative to traditional approaches (94%) in an offline study. We also demonstrate that the constructed CNN works successfully in an online experiment, in which twelve BCI subjects achieved 85% mean accuracy without calibration, compared to 82% with calibration (the difference was not significant, p > 0.05). Additionally, to further enhance performance, we illustrate a hybrid approach that adaptively updates a linear classifier using label information generated by a zero-trained CNN. With this technique, the hybrid approach achieved reasonable performance (92%), showing no statistical difference (p > 0.05) from the traditional approach on the same offline data.
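
The sketch below is one way the hybrid idea described in the abstract could look in code: a pre-trained CNN produces pseudo-labels for incoming ERP epochs, and those labels drive incremental updates of a linear classifier. The `pretrained_cnn` object, its `predict()` interface, the 0.5 decision threshold, and the use of scikit-learn's `SGDClassifier` are illustrative assumptions, not the classifier or API used in the paper.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def hybrid_update(linear_clf, pretrained_cnn, erp_features):
    """Adaptively update a linear classifier with CNN-generated labels.

    erp_features : (n_epochs, n_features) array of ERP feature vectors.
    pretrained_cnn : assumed stand-in for the zero-trained CNN; any object
        whose predict() returns a target score per epoch.
    """
    # CNN inference yields target / non-target decisions (assumed interface).
    scores = np.asarray(pretrained_cnn.predict(erp_features)).ravel()
    pseudo_labels = (scores > 0.5).astype(int)  # 1 = P300 target, 0 = non-target

    # Incrementally refit the linear classifier on the pseudo-labelled epochs.
    linear_clf.partial_fit(erp_features, pseudo_labels, classes=np.array([0, 1]))
    return linear_clf

# Illustrative usage (objects and shapes are placeholders):
# clf = SGDClassifier()  # any incrementally trainable linear model
# clf = hybrid_update(clf, pretrained_cnn, new_session_epochs)
```

In this sketch the linear model never sees true labels; it relies entirely on the CNN's decisions, which mirrors the abstract's description of calibration-free adaptation, though the actual update rule and classifier in the paper may differ.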

Keywords