A hybrid network using transformer with modified locally linear embedding and sliding window convolution for EEG decoding

J Neural Eng. 2024 Dec 24. doi: 10.1088/1741-2552/ada30b. Online ahead of print.

Abstract

Objective: Brain-computer interfaces (BCIs) leverage artificial intelligence for EEG signal decoding, making them a promising new means of human-machine interaction. However, the performance of current EEG decoding methods remains insufficient for clinical applications because of inadequate EEG information extraction and the limited computational resources available in hospitals. This paper introduces a hybrid network that employs a Transformer with modified locally linear embedding and sliding window convolution for EEG decoding.
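The abstract gives no implementation details; purely as a rough, non-authoritative illustration, a sliding window convolution over the EEG time axis could be sketched in PyTorch as below. All names and sizes (22 channels, 1000 samples per trial, a 250-sample window with 50% overlap) are hypothetical assumptions, not values from the paper.

    import torch
    import torch.nn as nn

    class SlidingWindowConv(nn.Module):
        """Split each trial into overlapping time windows, then apply a
        shared temporal convolution to get one feature token per window."""
        def __init__(self, n_channels=22, window=250, stride=125, out_dim=64):
            super().__init__()
            self.window, self.stride = window, stride
            self.conv = nn.Sequential(
                nn.Conv1d(n_channels, out_dim, kernel_size=25, padding=12),
                nn.ELU(),
                nn.AdaptiveAvgPool1d(1),  # one feature vector per window
            )

        def forward(self, x):  # x: (batch, n_channels, n_samples)
            # unfold -> (batch, n_channels, n_windows, window)
            wins = x.unfold(2, self.window, self.stride)
            b, c, n, w = wins.shape
            wins = wins.permute(0, 2, 1, 3).reshape(b * n, c, w)
            feats = self.conv(wins).squeeze(-1)  # (b * n, out_dim)
            return feats.reshape(b, n, -1)       # (batch, n_windows, out_dim)

    x = torch.randn(8, 22, 1000)     # 8 trials, 22 channels, 1000 samples
    tokens = SlidingWindowConv()(x)  # (8, 7, 64): one token per window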

Approach: The network extracts channel and temporal features from EEG signals separately and then fuses these features with a cross-attention mechanism. In parallel, manifold learning lowers the computational burden of the model by mapping the high-dimensional EEG data to a low-dimensional space.
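Again as a hedged sketch rather than the authors' implementation, the two ideas named here can be approximated with standard tools: scikit-learn's locally linear embedding for dimension reduction (its method='modified' is MLLE, which is not necessarily the paper's modification) and PyTorch multi-head attention for cross-attention fusion. All shapes and hyperparameters below are illustrative assumptions.

    import torch
    import torch.nn as nn
    from sklearn.manifold import LocallyLinearEmbedding

    # (1) Manifold-based dimension reduction on flattened trials.
    # Hypothetical shapes: 288 trials, 22 channels, 1000 time samples.
    X = torch.randn(288, 22, 1000)
    lle = LocallyLinearEmbedding(n_neighbors=20, n_components=16,
                                 method='modified')
    X_low = lle.fit_transform(X.reshape(288, -1).numpy())  # (288, 16)

    # (2) Cross-attention fusion: channel tokens query temporal tokens.
    class CrossAttentionFusion(nn.Module):
        def __init__(self, dim=64, heads=4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, chan_feats, temp_feats):
            # chan_feats: (batch, n_channel_tokens, dim)
            # temp_feats: (batch, n_time_tokens, dim)
            fused, _ = self.attn(chan_feats, temp_feats, temp_feats)
            return self.norm(chan_feats + fused)  # residual + layer norm

    fusion = CrossAttentionFusion()
    chan = torch.randn(8, 22, 64)  # e.g. one token per EEG channel
    temp = torch.randn(8, 7, 64)   # e.g. one token per time window
    out = fusion(chan, temp)       # (8, 22, 64)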

Main results: The proposed model achieves accuracy rates of 84.44%, 94.96%, and 82.79% on the BCI Competition IV dataset 2a, the High Gamma dataset, and a self-constructed motor imagery dataset of left- and right-hand fist-clenching tasks, respectively. These results indicate that our model outperforms the baseline models, owing to the EEG-channel Transformer operating on dimension-reduced EEG data and the window attention combined with sliding window convolution. Additionally, to enhance the interpretability of the model, the features preceding the temporal feature-extraction network were visualized; this visualization shows how the model preferentially attends to task-related channels.

Significance: The proposed Transformer-based method makes motor imagery EEG (MI-EEG) decoding more practical for clinical applications.

Keywords: Convolutional neural network; Cross attention; EEG classification; Modified locally linear embedding; Transformer.