EEG-based emotion recognition uses high-level information from neural activity to predict subjects' emotional responses. However, this information is sparsely distributed across the frequency, time, and spatial domains and varies across subjects. To address these challenges, we propose a novel neural network model named the Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed across the time, spatial, and frequency domains, TSGCN considers both neural oscillation changes in different time windows and the topological structure between different brain regions. Furthermore, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN under cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL) to estimate the distance between the source domain and the target domain. Extensive experiments on public datasets demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural network architecture and our proposed methods significantly contribute to TSGCN's high performance and robustness. Detailed investigations further demonstrate the effectiveness of TSGCN in addressing the challenges of emotion recognition.
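For concreteness, below is a minimal sketch of a class-confusion-style loss of the kind the MCC term denotes: it penalizes the off-diagonal (between-class) mass of an entropy-weighted class-confusion matrix built from batch predictions. The temperature value, entropy weighting, and function names are illustrative assumptions, not the paper's exact formulation.

```
import torch
import torch.nn.functional as F

def mcc_loss(logits: torch.Tensor, temperature: float = 2.5) -> torch.Tensor:
    """Class-confusion loss sketch: sums the off-diagonal entries of a
    row-normalized, certainty-weighted class-confusion matrix."""
    # Temperature-scaled class probabilities, shape (batch, num_classes)
    probs = F.softmax(logits / temperature, dim=1)
    # Weight each sample by prediction certainty (low entropy -> high weight)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1)
    weights = 1.0 + torch.exp(-entropy)
    weights = weights / weights.sum() * logits.size(0)
    # Class-confusion matrix, shape (num_classes, num_classes)
    confusion = probs.t() @ (weights.unsqueeze(1) * probs)
    # Row-normalize, then average the between-class (off-diagonal) confusion
    confusion = confusion / confusion.sum(dim=1, keepdim=True)
    return (confusion.sum() - confusion.trace()) / logits.size(1)
```

In practice, a term of this form would be added to the classification loss so that confident but mutually contradictory predictions, such as those arising from label noise in subjective ratings, are discouraged.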
Keywords: Affective computing; Cross-subjects; EEG signal; Graph neural network.