IEEE Access (Jan 2024)

Adversarial Defense Based on Denoising Convolutional Autoencoder in EEG-Based Brain–Computer Interfaces

  • Yongting Ding,
  • Lin Li,
  • Qingyan Li

DOI
https://doi.org/10.1109/ACCESS.2024.3467154
Journal volume & issue
Vol. 12
pp. 146441–146452

Abstract


The exploration and implementation of brain-computer interfaces (BCIs) utilizing electroencephalography (EEG) are becoming increasingly widespread. However, their safety considerations have received scant attention. Recent studies have shown that EEG-based BCIs are vulnerable to adversarial attacks. Remarkably, only a limited amount of literature has addressed adversarial defense strategies for EEG-based BCIs. This study introduces a defense approach based on autoencoders, termed the Denoising Convolutional Autoencoder (DCAE), which serves as a preprocessing unit preceding the classification model. The DCAE aims to mitigate adversarial disturbances before samples are fed into the classifier, thereby preserving the classifier's original structure. Experiments were conducted using two different EEG datasets and three convolutional neural network (CNN) models to evaluate the effectiveness of the DCAE. The experimental results show that the proposed method achieves a better defense effect against various adversarial attack methods in most cases. Additionally, the sensitivity of the DCAE to different magnitudes of perturbation was evaluated. The findings indicate that the robustness of the DCAE is not affected by variations in attack intensity, a characteristic not observed in existing defense strategies for EEG-based BCIs. We hope that these results will advance research on defending EEG-based BCIs against adversarial threats.
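To illustrate the general idea described in the abstract, the sketch below shows a denoising convolutional autoencoder placed in front of an unchanged (frozen) EEG classifier. It is a minimal, hypothetical example in PyTorch: the layer sizes, kernel widths, channel count, and training setup are illustrative assumptions and are not the architecture or hyperparameters reported in the paper.

# Minimal sketch of a denoising convolutional autoencoder (DCAE) used as a
# preprocessing stage in front of a frozen EEG classifier.
# NOTE: layer sizes, kernel widths, and the (channels x time) input shape are
# illustrative assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn

class DCAE(nn.Module):
    def __init__(self, n_channels: int = 22):
        super().__init__()
        # Encoder: 1-D convolutions along the time axis compress the EEG trial.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(64, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
        )
        # Decoder: transposed convolutions reconstruct a denoised trial of the
        # same shape as the input.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(32, 64, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, n_channels, kernel_size=7, stride=2,
                               padding=3, output_padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

def train_step(dcae, optimizer, perturbed, clean):
    # The DCAE learns to map (possibly perturbed) inputs back to their clean
    # counterparts; the downstream classifier is left untouched.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(dcae(perturbed), clean)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dcae = DCAE(n_channels=22)
    opt = torch.optim.Adam(dcae.parameters(), lr=1e-3)
    clean = torch.randn(8, 22, 256)                      # (batch, channels, time)
    perturbed = clean + 0.05 * torch.randn_like(clean)   # stand-in for adversarial noise
    print(train_step(dcae, opt, perturbed, clean))

At inference time, an incoming (possibly adversarial) trial would first pass through the trained DCAE, and the reconstructed signal would then be fed to the original classifier, which is the sense in which the classifier's structure is preserved.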

Keywords