Journal of ZheJiang University (Engineering Science)  2025, Vol. 59 Issue (9): 1872-1880    DOI: 10.3785/j.issn.1008-973X.2025.09.011
    
EEG emotion recognition based on electrode arrangement and Transformer
Xuan MENG, Xueying ZHANG*, Ying SUN, Yaru ZHOU
College of Electronic Information Engineering, Taiyuan University of Technology, Taiyuan 030024, China

Abstract  

The RM-STC (Riemannian manifold space Transformer-CNN) model was proposed to explore the true order of the information flow represented by electroencephalogram (EEG) channels and to improve emotion recognition performance. First, the spatial covariance matrix features of the EEG signals were calculated and mapped onto the Riemannian manifold, and the Riemannian distance matrix between EEG channels was computed. Non-metric multidimensional scaling (NMDS) was then applied to this distance matrix to obtain a one-dimensional ranking of the channels. The Pearson correlation coefficient (PCC) feature matrix was rearranged according to this relative-distance order, so that the CNN branch could better learn local features through convolution. The strength of the Transformer in modeling long-range dependencies was exploited to learn global features that complement the CNN perspective, and the electrode channel order computed in the Riemannian manifold space was mapped into a vector encoding and embedded into the position encoding of the Transformer-CNN branch network, supplying additional spatial position information to the network. On the DEAP database, the average recognition rates of the proposed method reached 90.51% on the valence dimension and 90.98% on the arousal dimension. The experimental results demonstrate that electrode arrangement based on the Riemannian manifold space and effective spatial position encoding can effectively improve the accuracy of emotion recognition.
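The channel-ordering step described above can be illustrated with a short Python sketch. This is a minimal illustration, not the authors' code: the per-channel SPD representation (a small covariance matrix per channel), the helper names, and the use of scikit-learn's non-metric MDS are assumptions; only the affine-invariant Riemannian distance, the non-metric multidimensional scaling to one dimension, and the reordering of the PCC matrix follow the description in the abstract.

```python
# Minimal sketch of the channel-ordering step (not the authors' code).
# Assumption: each EEG channel is summarized by a small SPD covariance
# matrix, so an affine-invariant Riemannian distance between channels
# can be computed.
import numpy as np
from scipy.linalg import eigvalsh
from sklearn.manifold import MDS


def riemannian_distance(A, B):
    """Affine-invariant distance between SPD matrices A and B:
    d(A, B) = sqrt(sum_i log(lambda_i)^2), where lambda_i are the
    generalized eigenvalues of (B, A)."""
    lam = eigvalsh(B, A)  # solves B v = lambda A v
    return np.sqrt(np.sum(np.log(lam) ** 2))


def channel_order(spd_per_channel):
    """spd_per_channel: list of C SPD matrices, one per EEG channel.
    Returns a permutation of 0..C-1 obtained from 1-D non-metric MDS
    of the pairwise Riemannian distance matrix."""
    C = len(spd_per_channel)
    D = np.zeros((C, C))
    for i in range(C):
        for j in range(i + 1, C):
            D[i, j] = D[j, i] = riemannian_distance(
                spd_per_channel[i], spd_per_channel[j])
    nmds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
               normalized_stress="auto", random_state=0)
    coord = nmds.fit_transform(D).ravel()  # 1-D coordinate per channel
    return np.argsort(coord)               # 1-D ranking of the channels


def reorder_pcc(pcc, order):
    """Rearrange a C x C Pearson correlation feature matrix so that
    channels close in the Riemannian ordering become adjacent rows and
    columns, which favors local convolution."""
    return pcc[np.ix_(order, order)]
```

Given a list of per-channel SPD matrices, `reorder_pcc(pcc, channel_order(spd_per_channel))` would yield the rearranged PCC feature map that, following the abstract, is fed to the CNN branch, while the ordering itself can also be mapped to the position encoding of the Transformer-CNN branch.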



Key words: EEG signal; Riemannian manifold; electrode arrangement; Transformer; emotion recognition
Received: 12 September 2024      Published: 25 August 2025
CLC:  TP 391  
Fund: National Natural Science Foundation of China (62271342).
Corresponding Authors: Xueying ZHANG     E-mail: mengxuan202109@163.com;tyzhangxy@163.com
Cite this article:

Xuan MENG, Xueying ZHANG, Ying SUN, Yaru ZHOU. EEG emotion recognition based on electrode arrangement and Transformer. Journal of ZheJiang University (Engineering Science), 2025, 59(9): 1872-1880.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2025.09.011     OR     https://www.zjujournals.com/eng/Y2025/V59/I9/1872


Fig.1 Overall framework of proposed algorithm
Fig.2 Riemannian manifold space and tangent space
Fig.3 PCC local feature map
Fig.4 Electrode arrangement method
Fig.5 Recognition network based on Riemannian manifold position encoding
Model               | A1    | A2
MESAE[23]           | 76.17 | 77.19
KNN[24]             | 82.76 | 82.77
Merged LSTM[25]     | 84.89 | 83.85
CNN[13]             | 80.28 | —
DWT-KNN[26]         | 89.50 | 87.20
STS-Transformer[17] | 89.86 | 86.83
RM-LSTM[22]         | 89.95 | 88.73
RM-STC              | 90.51 | 90.98
Tab.1 Performance comparison results of different emotion recognition models (%; A1: valence accuracy, A2: arousal accuracy)
Type        | Output shape | Kernel | Stride
Convolution | 32×32×32     | 3      | 1
Convolution | 32×32×64     | 3      | 1
Max-pooling | 16×16×64     | 2×2    | 2
Convolution | 16×16×128    | 3      | 1
Convolution | 16×16×256    | 3      | 1
Max-pooling | 8×8×256      | 2×2    | 2
Dense       | 256          | —      | —
Softmax     | 2            | —      | —
Tab.2 CNN network architecture
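Read row by row, Tab.2 maps onto a standard convolutional stack. Below is a minimal PyTorch sketch of such a branch, not the authors' implementation: the single-channel 32×32 input (e.g., the reordered PCC matrix), the "same" padding for the 3×3 convolutions, the ReLU activations, and the final 256→2 projection before the softmax are assumptions, since the table lists only layer types, output shapes, kernels, and strides.

```python
# Minimal PyTorch sketch of the CNN branch in Tab.2 (not the authors' code).
# Assumption: single-channel 32x32 input; padding=1 keeps the spatial size
# unchanged through the 3x3 convolutions, as the table's output shapes imply.
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=1, padding=1),  # 32x32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),           # 32x32x64
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),                           # 16x16x64
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),          # 16x16x128
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=3, stride=1, padding=1),         # 16x16x256
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),                           # 8x8x256
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(8 * 8 * 256, 256),   # Dense 256
            nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),   # projection to 2 classes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax over the two emotion classes; for training with
        # nn.CrossEntropyLoss the raw logits would be returned instead.
        return torch.softmax(self.classifier(self.features(x)), dim=-1)
```

Because the table keeps the 32×32 (and later 16×16) spatial size through consecutive convolutions, padding of 1 for the 3×3 kernels is the natural reading of the listed output shapes.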
Electrode arrangement | A2/% (K = 3) | A2/% (K = 5) | A2/% (K = 7)
dist[13]              | 84.54 | 86.90 | 86.82
dist-restr[13]        | 84.65 | 87.01 | 87.04
ES-PER                | 84.24 | 87.26 | 87.30
ES-SUM                | 84.98 | 87.13 | 87.37
RM-PER                | 85.73 | 87.38 | 87.49
RM-SUM                | 85.84 | 87.79 | 87.88
Tab.3 Results of ablation experiment with different electrode arrangements
Model  | A1 (std1)    | A2 (std2)
CNN    | 87.20 (4.22) | 87.44 (4.15)
TC     | 89.39 (3.98) | 89.79 (3.58)
RM-STC | 90.51 (3.82) | 90.98 (3.80)
Tab.4 Ablation experiment results of RM-STC model %
Fig.6 Results of each subject in different models (valence)
Fig.7 Results of each subject in different models (arousal)
[1]   LI Y, GUO W, WANG Y Emotion recognition with attention mechanism-guided dual-feature multi-path interaction network[J]. Signal, Image and Video Processing, 2024, 18 (1): 617- 626
[2]   RANA A, JHA S. Emotion based hate speech detection using multimodal learning [EB/OL]. (2022-02-13) [2024-09-17]. https://arxiv.org/abs/2202.06218v1.
[3]   ZHANG S, ZHAO X, TIAN Q Spontaneous speech emotion recognition using multiscale deep convolutional LSTM[J]. IEEE Transactions on Affective Computing, 2022, 13 (2): 680- 688
doi: 10.1109/TAFFC.2019.2947464
[4]   ZHANG S, YANG Y, CHEN C, et al Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: a systematic review of recent advancements and future prospects[J]. Expert Systems with Applications, 2024, 237: 121692
doi: 10.1016/j.eswa.2023.121692
[5]   TOISOUL A, KOSSAIFI J, BULAT A, et al Estimation of continuous valence and arousal levels from faces in naturalistic conditions[J]. Nature Machine Intelligence, 2021, 3 (1): 42- 50
doi: 10.1038/s42256-020-00280-0
[6]   HAN L, ZHANG X, YIN J EEG emotion recognition based on the TimesNet fusion model[J]. Applied Soft Computing, 2024, 159: 111635
doi: 10.1016/j.asoc.2024.111635
[7]   HERRMANN C S, STRÜBER D, HELFRICH R F, et al EEG oscillations: from correlation to causality[J]. International Journal of Psychophysiology, 2016, 103: 12- 21
doi: 10.1016/j.ijpsycho.2015.02.003
[8]   WU X, ZHENG W L, LI Z, et al Investigating EEG-based functional connectivity patterns for multimodal emotion recognition[J]. Journal of Neural Engineering, 2022, 19 (1): 016012
doi: 10.1088/1741-2552/ac49a7
[9]   WYCZESANY M, CAPOTOSTO P, ZAPPASODI F, et al Hemispheric asymmetries and emotions: evidence from effective connectivity[J]. Neuropsychologia, 2018, 121: 98- 105
doi: 10.1016/j.neuropsychologia.2018.10.007
[10]   WANG W Brain network features based on theta-gamma cross-frequency coupling connections in EEG for emotion recognition[J]. Neuroscience Letters, 2021, 761: 136106
doi: 10.1016/j.neulet.2021.136106
[11]   CHENG C, ZHANG Y, LIU L, et al Multi-domain encoding of spatiotemporal dynamics in EEG for emotion recognition[J]. IEEE Journal of Biomedical and Health Informatics, 2023, 27 (3): 1342- 1353
doi: 10.1109/JBHI.2022.3232497
[12]   MOON S, MOON S E, LEE J S. Resting-state fNIRS classification using connectivity and convolutional neural networks [C]// IEEE International Conference on Systems, Man, and Cybernetics. Prague: IEEE, 2022: 1724–1729.
[13]   MOON S E, CHEN C J, HSIEH C J, et al Emotional EEG classification using connectivity features and convolutional neural networks[J]. Neural Networks, 2020, 132: 96- 107
doi: 10.1016/j.neunet.2020.08.009
[14]   CHEN C J, WANG J L A new approach for functional connectivity via alignment of blood oxygen level-dependent signals[J]. Brain Connectivity, 2019, 9 (6): 464- 474
doi: 10.1089/brain.2018.0636
[15]   DOSE H, MØLLER J S, IVERSEN H K, et al An end-to-end deep learning approach to MI-EEG signal classification for BCIs[J]. Expert Systems with Applications, 2018, 114: 532- 542
doi: 10.1016/j.eswa.2018.08.031
[16]   GUO J Y, CAI Q, AN J P, et al A Transformer based neural network for emotion recognition and visualizations of crucial EEG channels[J]. Physica A: Statistical Mechanics and Its Applications, 2022, 603: 127700
doi: 10.1016/j.physa.2022.127700
[17]   ZHENG W, PAN B A spatiotemporal symmetrical transformer structure for EEG emotion recognition[J]. Biomedical Signal Processing and Control, 2024, 87: 105487
doi: 10.1016/j.bspc.2023.105487
[18]   HU X, CHEN Y, YAN J, et al Masked self-supervised pre-training model for EEG-based emotion recognition[J]. Computational Intelligence, 2024, 40 (3): e12659
doi: 10.1111/coin.12659
[19]   DEXTER E, ROLLWAGEN-BOLLENS G, BOLLENS S M The trouble with stress: a flexible method for the evaluation of nonmetric multidimensional scaling[J]. Limnology and Oceanography: Methods, 2018, 16 (7): 434- 443
doi: 10.1002/lom3.10257
[20]   KULKARNI S, PATIL P R. Analysis of DEAP dataset for emotion recognition [C]// International Conference on Intelligent and Smart Computing in Data Analytics: ISCDA 2020. Singapore: Springer Singapore, 2021: 67–76.
[21]   KOBLER R J, HIRAYAMA J I, ZHAO Q, et al. SPD domain-specific batch normalization to crack interpretable unsupervised domain adaptation in EEG [EB/OL]. (2022-10-12)[2024-09-12]. https://arxiv.org/abs/2206.01323v2.
[22]   ZHANG G, ETEMAD A Spatio-temporal EEG representation learning on Riemannian manifold and Euclidean space[J]. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, 8 (2): 1469- 1483
doi: 10.1109/TETCI.2023.3332549
[23]   YIN Z, ZHAO M, WANG Y, et al Recognition of emotions using multimodal physiological signals and an ensemble deep learning model[J]. Computer Methods and Programs in Biomedicine, 2017, 140: 93- 110
doi: 10.1016/j.cmpb.2016.12.005
[24]   PIHO L, TJAHJADI T A mutual information based adaptive windowing of informative EEG for emotion recognition[J]. IEEE Transactions on Affective Computing, 2020, 11 (4): 722- 735
doi: 10.1109/TAFFC.2018.2840973
[25]   GARG A, KAPOOR A, BEDI A K, et al. Merged LSTM Model for emotion classification using EEG signals [C]// International Conference on Data Science and Engineering. Patna: IEEE, 2019: 139-143.