Journal of ZheJiang University (Engineering Science)  2024, Vol. 58 Issue (11): 2247-2257    DOI: 10.3785/j.issn.1008-973X.2024.11.006
    
EEG-fNIRS emotion recognition based on multi-brain attention mechanism capsule fusion network
Yue LIU, Xueying ZHANG*, Guijun CHEN, Lixia HUANG, Ying SUN
College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan 030024, China

Abstract  

A capsule network model with a multi-brain region attention mechanism and a capsule fusion module (MBA-CF-cCapsNet) was proposed to improve the accuracy of emotion recognition. EEG-fNIRS signals evoked by emotional video clips were collected to construct the TYUT3.0 dataset, and EEG and fNIRS features were extracted and mapped to matrices. The EEG and fNIRS features were fused by the multi-brain region attention mechanism, which assigned different weights to the features of different brain regions in order to extract higher-quality primary capsules. The capsule fusion module reduced the number of capsules entering the dynamic routing mechanism, which shortened the running time of the model. Experiments were conducted on the TYUT3.0 dataset with the MBA-CF-cCapsNet model. The accuracy of emotion recognition with the two signals combined increased by 1.53% and 14.35% compared with the single-modal EEG and fNIRS results, respectively. The average recognition rate of the MBA-CF-cCapsNet model increased by 4.98% compared with the original CapsNet model, and by 1%-5% compared with currently common CapsNet emotion recognition models.
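To make the region-weighting idea above concrete, the following is a minimal, hypothetical sketch (not the authors' released code): a GraphSAGE-style layer over a small brain-region graph feeds an FC bottleneck that outputs one sigmoid weight per region, and these weights rescale the region features before primary capsules are extracted. The region count (6), feature size (5), and adjacency matrix are illustrative assumptions loosely matching the layer sizes later listed in Tab.1.

```python
# Hypothetical sketch of multi-brain region attention weighting.
# Assumptions: 6 brain regions, 5 features per region, a fully connected
# region graph; sizes loosely follow Tab.1 (GraphSAGE 5->5, FC 30->20->6).
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    """Minimal GraphSAGE layer with mean aggregation (Hamilton et al. [28])."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_ch, out_ch)

    def forward(self, x, adj):
        # x: [R, F] region features; adj: [R, R] row-normalized adjacency
        neigh = adj @ x                                  # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, neigh], dim=-1)))

class RegionAttention(nn.Module):
    """Emits one weight per brain region and rescales region features."""
    def __init__(self, regions: int = 6, feats: int = 5):
        super().__init__()
        self.sage = SAGELayer(feats, feats)              # 5 -> 5 channels
        self.fc = nn.Sequential(
            nn.Linear(regions * feats, 20), nn.ReLU(),   # FC1: 30 -> 20
            nn.Linear(20, regions), nn.Sigmoid(),        # FC2: 20 -> 6 weights
        )

    def forward(self, x, adj):
        h = self.sage(x, adj)                            # [R, F]
        w = self.fc(h.flatten())                         # [R] region weights
        return x * w.unsqueeze(-1)                       # re-weighted features

# Toy usage: 6 regions x 5 features, uniform fully connected region graph
x = torch.randn(6, 5)
adj = torch.full((6, 6), 1.0 / 6)
weighted = RegionAttention()(x, adj)
```

Gating region features with learned scalar weights is what lets the network emphasize emotion-relevant brain regions before capsule extraction.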



Key words: capsule network; EEG; fNIRS; multi-brain attention mechanism; capsule fusion; emotion recognition
Received: 10 July 2023      Published: 23 October 2024
CLC:  TP 393  
Fund: National Natural Science Foundation of China (62271342, 62201377); Research Project Supported by Shanxi Scholarship Council of China (2022-072); Fundamental Research Program of Shanxi Province (202203021211174).
Corresponding Authors: Xueying ZHANG     E-mail: liuyueofficial9935@163.com;zhangxy@tyut.edu.cn
Cite this article:

Yue LIU,Xueying ZHANG,Guijun CHEN,Lixia HUANG,Ying SUN. EEG-fNIRS emotion recognition based on multi-brain attention mechanism capsule fusion network. Journal of ZheJiang University (Engineering Science), 2024, 58(11): 2247-2257.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2024.11.006     OR     https://www.zjujournals.com/eng/Y2024/V58/I11/2247


Fig.1 EEG and fNIRS emotion recognition block diagram
Fig.2 EEG and fNIRS channel distribution map
Fig.3 Experimental paradigm for EEG-fNIRS emotion recognition
Fig.4 Matrix of EEG and fNIRS channel mapping
Fig.5 MBA-CF-cCapsNet model
Fig.6 Brain region distribution map
Fig.7 Fusion stage of multi-brain region attention mechanism
Fig.8 Transition stage of multi-brain region attention mechanism
Fig.9 Capsule fusion module
Fig.10 Dynamic routing mechanism
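As background for Fig.10, below is a minimal sketch of standard dynamic routing-by-agreement (the Sabour-style procedure that CapsNet variants inherit), assuming 8-D primary capsules mapped to 16-D class capsules to match the 8×16 W_ij in Tab.1. Batch size, capsule counts, and variable names are illustrative, not taken from the paper.

```python
# Minimal dynamic routing-by-agreement sketch. Assumed shapes: 8-D primary
# capsules, 16-D class capsules (W_ij is 8x16 per Tab.1); counts are toy values.
import torch
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity: keeps direction, maps norm into [0, 1)."""
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / torch.sqrt(n2 + eps)

def dynamic_routing(u, W, iters=3):
    """u: [B, N_in, 8] primary capsules; W: [N_in, N_out, 8, 16]."""
    B, N_in, _ = u.shape
    N_out = W.shape[1]
    # Prediction vectors u_hat[b, i, j] = W_ij^T u_i -> [B, N_in, N_out, 16]
    u_hat = torch.einsum('bni,noik->bnok', u, W)
    b_logits = torch.zeros(B, N_in, N_out, device=u.device)
    for _ in range(iters):
        c = F.softmax(b_logits, dim=2)                 # coupling coefficients
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)       # [B, N_out, 16]
        v = squash(s)                                  # output capsules
        # Agreement update: b_ij += u_hat_ij . v_j
        b_logits = b_logits + torch.einsum('bnok,bok->bno', u_hat, v)
    return v                                           # length ~ class evidence

# Toy usage: 64 fused primary capsules -> 4 emotion capsules (Sad/Happy/Calm/Fear)
u = torch.randn(2, 64, 8)
W = torch.randn(64, 4, 8, 16) * 0.1
v = dynamic_routing(u, W)
print(v.norm(dim=-1).shape)  # torch.Size([2, 4])
```

Because the coupling coefficients are recomputed from every input capsule on every iteration, shrinking the number of capsules that enter routing (the job of the capsule fusion module) directly reduces running time.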
Module | Layer | Parameter | Size
Multi-brain region attention mechanism | GraphSAGE | in_channels | 5
 | GraphSAGE | out_channels | 5
 | Maxpooling | Kernel | C
 | FC1 | in_features | 30
 | FC1 | out_features | 20
 | FC2 | in_features | 20
 | FC2 | out_features | 6
 | ReLU | | 
 | Sigmoid | | 
Convolutional layers | Conv1_1 | Kernel | 3×3×128
 | Conv1_2 | Kernel | 5×5×128
Primary capsule module | Conv2_1 | Kernel | 3×3×128
 | Conv2_2 | Kernel | 5×5×128
 | Conv3 | Kernel | 1×1×256
Capsule fusion module | Maxpooling | Kernel | 8
 | ReLU | | 
 | Tanh | | 
Classification capsule module | Dynamic routing | W_ij | 8×16
Tab.1 Experimental parameters of MBA-CF-cCapsNet model
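A speculative reading of the capsule fusion row of Tab.1 (Maxpooling with kernel 8, ReLU, Tanh): max-pooling along the capsule axis merges every eight primary capsules into one before dynamic routing. The pooling axis and the placement of the activations are assumptions for illustration, not the authors' published implementation.

```python
# Speculative capsule fusion sketch: kernel-8 max-pooling along the capsule
# axis (per Tab.1) to cut the number of capsules entering dynamic routing.
import torch
import torch.nn as nn

class CapsuleFusion(nn.Module):
    def __init__(self, kernel: int = 8):
        super().__init__()
        self.pool = nn.MaxPool1d(kernel_size=kernel, stride=kernel)

    def forward(self, u):
        # u: [B, N, D] primary capsules (D = 8 assumed)
        z = torch.relu(u)
        z = self.pool(z.transpose(1, 2)).transpose(1, 2)   # [B, N/8, D]
        return torch.tanh(z)                               # bounded pose vectors

# Toy usage: 512 hypothetical primary capsules fuse down to 64
u = torch.randn(2, 512, 8)
print(CapsuleFusion()(u).shape)  # torch.Size([2, 64, 8])
```

An 8-fold reduction in routing inputs is consistent with the lower te of MBA-CF-cCapsNet reported in Tab.3.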
t/s | Mean Acc/% | Sad | Happy | Calm | Fear
1 | 96.41 | 96.39 | 96.84 | 94.97 | 97.44
3 | 96.67 | 96.57 | 97.31 | 95.02 | 97.76
5 | 96.32 | 96.35 | 96.79 | 94.88 | 97.26
7 | 95.89 | 95.96 | 96.16 | 94.71 | 96.71
Tab.2 Emotion recognition results at different segment lengths
Model | Acc/% Mean (SD) | Sad | Happy | Calm | Fear | Np | te/s
CapsNet | 91.69 (5.45) | 91.27 | 92.71 | 90.34 | 92.42 | 2897155 | 1814
cCapsNet | 94.95 (5.30) | 95.96 | 94.28 | 94.27 | 95.27 | 3270403 | 2278
MBA-cCapsNet | 96.30 (3.04) | 96.48 | 96.26 | 94.95 | 97.52 | 3296227 | 3312
MBA-CF-cCapsNet | 96.67 (2.68) | 96.57 | 97.31 | 95.02 | 97.76 | 2181043 | 1574
Tab.3 Ablation experiments of MBA-CF-cCapsNet model
Model | Macro-F1 | Sad | Happy | Calm | Fear
CapsNet | 0.92 | 0.93 | 0.92 | 0.91 | 0.92
cCapsNet | 0.95 | 0.96 | 0.94 | 0.95 | 0.96
MBA-cCapsNet | 0.96 | 0.97 | 0.96 | 0.96 | 0.97
MBA-CF-cCapsNet | 0.97 | 0.97 | 0.97 | 0.96 | 0.98
Tab.4 F1 scores for different models
Fig.11 Brain region weight distribution map
Model | Acc/% Mean (SD) | Sad | Happy | Calm | Fear
MBA-CF-cCapsNet (EEG) | 95.14 (3.95) | 95.60 | 95.52 | 93.17 | 96.28
MBA-CF-cCapsNet (fNIRS) | 82.32 (7.53) | 81.57 | 83.07 | 80.05 | 84.58
MBA-CF-cCapsNet (EEG-fNIRS) | 96.67 (2.68) | 96.57 | 97.31 | 95.02 | 97.76
Tab.5 Emotion classification performance for different data modalities
Fig.12 EEG emotion classification confusion matrix
Fig.13 fNIRS emotion classification confusion matrix
Fig.14 EEG-fNIRS emotion classification confusion matrix
Model | Acc/% Mean (SD) | Sad | Happy | Calm | Fear
SVM | 89.60 (6.59) | 90.98 | 89.13 | 88.96 | 89.31
2DCNN | 90.56 (4.96) | 90.48 | 90.20 | 89.63 | 91.93
gcForest | 81.91 (8.63) | 82.76 | 80.27 | 82.43 | 82.17
Transformer | 86.84 (9.25) | 85.62 | 89.35 | 83.84 | 88.54
GCN | 90.16 (5.10) | 90.15 | 90.71 | 88.45 | 91.33
MFM-CapsNet [10] | 92.74 (3.14) | 92.10 | 93.52 | 91.61 | 93.73
MLF-CapsNet [11] | 94.65 (3.80) | 94.48 | 94.79 | 93.46 | 95.86
ST-CapsNet [12] | 94.01 (2.95) | 93.57 | 95.02 | 93.04 | 94.42
MBA-CF-cCapsNet | 96.67 (2.68) | 96.57 | 97.31 | 95.02 | 97.76
Tab.6 Comparison with other emotion recognition models
[1] WU Zhaohui. Cybrain: building superbrain for humans [J]. Journal of Zhejiang University: Engineering Science, 2020, 54(3): 425-426.
[2] QIU L, ZHONG Y, XIE Q, et al. Multi-modal integration of EEG-fNIRS for characterization of brain activity evoked by preferred music [J]. Frontiers in Neurorobotics, 2022, 16: 823435. doi: 10.3389/fnbot.2022.823435
[3] MAJID R M, JONG H L. EEG based emotion recognition from human brain using Hjorth parameters and SVM [J]. International Journal of Bio-Science and Bio-Technology, 2015, 7(3): 23-32. doi: 10.14257/ijbsbt.2015.7.3.03
[4] LI T Y, FU B L, WU Z X, et al. EEG-based emotion recognition using spatial-temporal-connective features via multi-scale CNN [J]. IEEE Access, 2023, 11: 41859-41867. doi: 10.1109/ACCESS.2023.3270317
[5] DU G L, SU J S, ZHANG L L, et al. A multi-dimensional graph convolution network for EEG emotion recognition [J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-11.
[6] LI C, HUANG X Y, SONG R C, et al. EEG-based seizure prediction via Transformer guided CNN [J]. Measurement, 2022, 203: 111948. doi: 10.1016/j.measurement.2022.111948
[7] CHENG J, CHEN M Y, LI C, et al. Emotion recognition from multi-channel EEG via deep forest [J]. IEEE Journal of Biomedical and Health Informatics, 2020, 25(2): 453-464.
[8] BANDARA D, VELIPASALAR S, BRATT S, et al. Building predictive models of emotion with functional near-infrared spectroscopy [J]. International Journal of Human-Computer Studies, 2018, 110: 75-85. doi: 10.1016/j.ijhcs.2017.10.001
[9] HU X, ZHUANG C, WANG F, et al. fNIRS evidence for recognizably different positive emotions [J]. Frontiers in Human Neuroscience, 2019, 13: 120. doi: 10.3389/fnhum.2019.00120
[10] SUN Y, AYAZ H, AKANSU A N. Multimodal affective state assessment using fNIRS+EEG and spontaneous facial expression [J]. Brain Sciences, 2020, 10(2): 85-104. doi: 10.3390/brainsci10020085
[11] BECKER H, FLEUREAU J, GUILLOTEL P, et al. Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources [J]. IEEE Transactions on Affective Computing, 2020, 11(2): 244-257. doi: 10.1109/TAFFC.2017.2768030
[12] ZHE S, ZIHAO H, FENG D, et al. A novel multimodal approach for hybrid brain–computer interface [J]. IEEE Access, 2020, 8: 89909-89918. doi: 10.1109/ACCESS.2020.2994226
[13] DELIGANI R J, BORGHEAI S B, MCLINDEN J, et al. Multimodal fusion of EEG-fNIRS: a mutual information-based hybrid classification framework [J]. Biomedical Optics Express, 2021, 12(3): 1635-1650.
[14] KWAK Y C, SONG W J, KIM S E. FGANet: fNIRS-guided attention network for hybrid EEG-fNIRS brain-computer interfaces [J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2022, 30: 329-339. doi: 10.1109/TNSRE.2022.3149899
[15] WANG Ying, GAO Sheng. A speech emotion recognition method based on lightweight capsule network [J]. Journal of University of Electronic Science and Technology of China, 2023, 52(3): 423-429. doi: 10.12178/1001-0548.2022086
[16] YANG Jucheng, HAN Shujie, MAO Lei, et al. Review of capsule network [J]. Journal of Shandong University: Engineering Science, 2019, 49(6): 1-10.
[17] HINTON G E, OSINDERO S, TEH Y W. A fast-learning algorithm for deep belief nets [J]. Neural Computation, 2006, 18(7): 1527-1554.
[18] ZHANG Y, CHENG C, ZHANG Y. Multimodal emotion recognition using a hierarchical fusion convolutional neural network [J]. IEEE Access, 2021, 9: 7943-7951. doi: 10.1109/ACCESS.2021.3049516
[19] CHEN Qin, CHEN Lanlan, JIANG Runqiang. Emotion recognition of EEG based on ensemble CapsNet [J]. Computer Engineering and Applications, 2022, 58(8): 175-184. doi: 10.3778/j.issn.1002-8331.2010-0263
[20] YU L, DING Y F, CHANG L, et al. Multi-channel EEG-based emotion recognition via a multi-level features guided capsule network [J]. Computers in Biology and Medicine, 2020, 123: 103927. doi: 10.1016/j.compbiomed.2020.103927
[21] WANG Z H, CHEN C, LI J, et al. ST-CapsNet: linking spatial and temporal attention with capsule network for P300 detection improvement [J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023, 31: 991-1000. doi: 10.1109/TNSRE.2023.3237319
[22] LI C, WANG B, ZHANG S L, et al. Emotion recognition from EEG based on multi-task learning with capsule network and attention mechanism [J]. Computers in Biology and Medicine, 2022, 143: 105303. doi: 10.1016/j.compbiomed.2022.105303
[23] ZHANG Jing, ZHANG Xueying, CHEN Guijun, et al. EEG emotion recognition based on the 3D-CNN and spatial-frequency attention mechanism [J]. Journal of Xidian University, 2022, 49(3): 191-198.
[24] GUIDO N, EDGAR L G, ZHENG L, et al. Mathematical relations between measures of brain connectivity estimated from electrophysiological recordings for Gaussian distributed data [J]. Frontiers in Neuroscience, 2020, 14: 577574. doi: 10.3389/fnins.2020.577574
[25] HAO M G, XING T X, JIANG J L, et al. Attention mechanisms in computer vision: a survey [J]. Computational Visual Media, 2022, 8(3): 331-368. doi: 10.1007/s41095-022-0271-y
[26] CUI Haoyang, DING Xie, ZHANG Jingyi. Research on classification of histopathological image based on cell graph convolutional network [J]. Computer Engineering and Applications, 2020, 56(24): 223-228. doi: 10.3778/j.issn.1002-8331.2009-0364
[27] GUANG B, KAI Y, LI T, et al. Linking multi-layer dynamical GCN with style-based recalibration CNN for EEG-based emotion recognition [J]. Frontiers in Neurorobotics, 2022, 16: 834952. doi: 10.3389/fnbot.2022.834952
[28] HAMILTON W, YING Z, LESKOVEC J. Inductive representation learning on large graphs [J]. Advances in Neural Information Processing Systems, 2017(12): 1024-1034.
[29] LI Y, ZHENG W, WANG L, et al. From regional to global brain: a novel hierarchical spatial-temporal neural network model for EEG emotion recognition [J]. IEEE Transactions on Affective Computing, 2019, 13(2): 568-578.
[30] MOON S E, JANG S B, LEE J S. Convolutional neural network approach for EEG-based emotion recognition using brain connectivity and its spatial information [C]// IEEE International Conference on Acoustics, Speech and Signal Processing. Calgary: IEEE, 2018: 2556-2560.