Journal of ZheJiang University (Engineering Science)  2022, Vol. 56 Issue (4): 745-753, 782    DOI: 10.3785/j.issn.1008-973X.2022.04.014
    
IncepA-EEGNet: P300 signal detection method based on fusion of Inception network and attention mechanism
Meng XU1, Dan WANG1,*, Zhi-yuan LI1, Yuan-fang CHEN2
1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2. Beijing Institute of Machinery and Equipment, Beijing 100039, China

Abstract  

A novel EEGNet variant fusing Inception and attention-mechanism modules, called IncepA-EEGNet, was proposed to achieve more efficient P300 feature extraction. Convolutional layers with different receptive fields were connected in parallel, enhancing the network's ability to extract and represent EEG features. An attention mechanism was then introduced to assign weights to the features of different filters and extract the important information in the P300 signal. Validation experiments were conducted on the two subjects of BCI Competition III dataset II. Compared with other deep learning models, IncepA-EEGNet reached an average character recognition accuracy of 75.5% after only 5 repetition epochs, and its information transfer rate on subject B reached 33.44 bits/min after 3 epochs. These results demonstrate that IncepA-EEGNet effectively improves the recognition accuracy of the P300 signal, reduces the time spent on repeated trials, and enhances the practicality of the P300 speller.
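The parallel multi-receptive-field idea described above can be sketched independently of any deep learning framework. The snippet below is a hedged NumPy illustration, not the paper's implementation: it borrows the (128/64/32, 1) temporal kernel sizes reported in Tab.2 but substitutes placeholder averaging kernels for the learned convolution weights.

```python
import numpy as np

def conv1d_same(x, k):
    """'Same'-padded 1-D convolution of signal x with kernel k."""
    pad = len(k) // 2
    extra = 1 if len(k) % 2 == 0 else 0
    xp = np.pad(x, (pad, pad + extra), mode="constant")
    return np.convolve(xp, k, mode="valid")[:len(x)]

def inception_branch_features(x, kernel_sizes=(128, 64, 32)):
    """Run parallel temporal convolutions with different receptive
    fields and stack the branch outputs, Inception-style."""
    branches = []
    for ks in kernel_sizes:
        k = np.ones(ks) / ks           # placeholder averaging kernel, not learned weights
        branches.append(conv1d_same(x, k))
    return np.stack(branches)          # shape: (n_branches, n_samples)

x = np.random.randn(240)               # one EEG channel, 240 time samples (illustrative)
feats = inception_branch_features(x)
print(feats.shape)                     # (3, 240)
```

Each branch sees the same input at a different temporal scale; in the actual network the branch outputs are concatenated along the channel axis and passed on to the attention module.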



Key words: attention mechanism; Inception network; EEGNet; P300 detection; character spelling
Received: 30 July 2021      Published: 24 April 2022
CLC:  TP 391  
Fund: National Natural Science Foundation of China (61672505)
Corresponding Authors: Dan WANG     E-mail: xumeng@emails.bjut.edu.cn;wangdan@bjut.edu.cn
Cite this article:

Meng XU,Dan WANG,Zhi-yuan LI,Yuan-fang CHEN. IncepA-EEGNet: P300 signal detection method based on fusion of Inception network and attention mechanism. Journal of ZheJiang University (Engineering Science), 2022, 56(4): 745-753, 782.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2022.04.014     OR     https://www.zjujournals.com/eng/Y2022/V56/I4/745


Fig.1 P300 speller character matrix
Fig.2 P300 speller experiment paradigm flow
Fig.3 Inception-v1 module
Fig.4 Framework of IncepA-EEGNet
Fig.5 SE attention mechanism combined with Inception-v1 module
Subject     Training set samples      Test set samples
            Target    Non-target      Target    Non-target
Subject A   2550      12750           3000      15000
Subject B   2550      12750           3000      15000
Tab.1 Dataset information of P300 speller
Fig.6 Overall framework for P300 speller character recognition
Subject   K   Kernel size   Acc   R   P   F1
A 8 (8/4/2,1) 0.7323 0.6630 0.3432 0.4523
A 16 (16/8/4,1) 0.7384 0.6547 0.3485 0.4548
A 32 (32/16/8,1) 0.7425 0.6486 0.3520 0.4564
A 64 (64/32/16,1) 0.7314 0.6677 0.3430 0.4532
A 128 (128/64/32,1) 0.7553 0.6457 0.3670 0.4679
A 160 (160/80/40,1) 0.7514 0.6210 0.3582 0.4543
B 8 (8/4/2,1) 0.7817 0.7034 0.4122 0.5149
B 16 (16/8/4,1) 0.7855 0.7186 0.4168 0.5276
B 32 (32/16/8,1) 0.7848 0.7143 0.4154 0.5253
B 64 (64/32/16,1) 0.7899 0.6993 0.4215 0.5260
B 128 (128/64/32,1) 0.7914 0.7250 0.4261 0.5367
B 160 (160/80/40,1) 0.7889 0.6987 0.4199 0.5245
Tab.2 Classification results of IncepA-EEGNet with different convolution kernel parameters K
Subject   r   Acc   R   P   F1
A 1 0.7458 0.6473 0.3557 0.4592
A 3 0.7553 0.6457 0.3670 0.4679
A 9 0.7425 0.6486 0.3520 0.4610
A 12 0.7314 0.6683 0.3430 0.4533
B 1 0.7869 0.7227 0.4193 0.5307
B 3 0.7914 0.7250 0.4261 0.5367
B 9 0.7954 0.6880 0.4291 0.5285
B 12 0.7829 0.7133 0.4125 0.5227
Tab.3 Classification results with different reduction coefficients r in the attention mechanism
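The reduction coefficient r studied in Tab.3 comes from the squeeze-and-excitation design [22]: channel features are globally pooled, compressed by a factor r, and re-expanded to produce per-channel weights. The following is a minimal NumPy sketch of that reweighting step, with random matrices standing in for the learned fully connected layers; it is an illustration of the mechanism, not the paper's code.

```python
import numpy as np

def squeeze_excite(features, W1, W2):
    """Squeeze-and-excitation channel reweighting.
    features: (C, T) feature maps; W1: (C//r, C); W2: (C, C//r)."""
    z = features.mean(axis=1)               # squeeze: global average pool -> (C,)
    s = np.maximum(W1 @ z, 0.0)             # excitation: FC + ReLU -> (C//r,)
    w = 1.0 / (1.0 + np.exp(-(W2 @ s)))     # FC + sigmoid -> per-channel weights in (0, 1)
    return features * w[:, None]            # rescale each channel by its weight

rng = np.random.default_rng(0)
C, T, r = 12, 240, 3                        # r = 3, the best reduction coefficient in Tab.3
W1 = rng.standard_normal((C // r, C)) * 0.1 # stand-ins for learned FC weights
W2 = rng.standard_normal((C, C // r)) * 0.1
out = squeeze_excite(rng.standard_normal((C, T)), W1, W2)
print(out.shape)                            # (12, 240)
```

A small r gives the excitation bottleneck more capacity at the cost of extra parameters; Tab.3 suggests r = 3 balances the two for this task.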
Fig.7 Training loss and test accuracy on IncepA-EEGNet
Added module                  Subject   Acc
                                        CNN-1     MCNN-1    MCNN-3    EEGNet
Base network (Net) A 0.7037 0.6899 0.7038 0.7065
Base network (Net) B 0.7065 0.6912 0.7037 0.7266
Net+Attention A 0.7092 0.6906 0.7091 0.7141
Net+Attention B 0.7185 0.7154 0.7192 0.7399
Net+Inception-v1 A 0.7100 0.6965 0.7103 0.7174
Net+Inception-v1 B 0.7222 0.7276 0.7203 0.7476
Net+Attention +Inception-v1 A 0.7186 0.7084 0.7258 0.7553
Net+Attention +Inception-v1 B 0.7454 0.7384 0.7478 0.7914
Tab.4 Impact of adding sub-modules to different CNN networks on classification accuracy
Method   Subject   Acc   R   P   F1
CNN-1[10] A 0.7037 0.6737 0.3170 0.4311
CNN-1[10] B 0.7065 0.6783 0.4073 0.5090
MCNN-1[10] A 0.6899 0.6903 0.3085 0.4260
MCNN-1[10] B 0.6912 0.7340 0.3833 0.5034
MCNN-3[10] A 0.7038 0.6743 0.3172 0.4314
MCNN-3[10] B 0.7037 0.6923 0.4089 0.5141
EEGNet[20] A 0.7065 0.6460 0.3147 0.4232
EEGNet[20] B 0.7266 0.6950 0.4214 0.4587
BN3[17] A 0.7513 0.6133 0.3607 0.4605
BN3[17] B 0.7902 0.6947 0.4214 0.5246
IncepA-EEGNet A 0.7553 0.6456 0.3676 0.4679
IncepA-EEGNet B 0.7914 0.7250 0.4261 0.5367
Tab.5 Comparison of IncepA-EEGNet’s performance with other deep learning methods on P300 signal classification
Fig.8 Comparison of information transfer rate of IncepA-EEGNet model with other methods on subject A and subject B
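The information transfer rate compared in Fig.8 is conventionally computed with the Wolpaw formula for an N-class selection task. The sketch below uses hypothetical timing parameters (the paper's exact per-selection time is not reproduced here) to show how accuracy and selection time trade off into bits/min.

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_time_s):
    """Wolpaw information transfer rate in bits/min for an N-class
    speller with per-selection accuracy P and selection time T (seconds)."""
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                                  # at or below chance: no information
    bits = math.log2(n) + p * math.log2(p)          # bits per selection
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_time_s               # convert to bits/min

# hypothetical example: 6x6 matrix (36 characters), 73% accuracy,
# 8 s per character selection -- all numbers illustrative
print(wolpaw_itr(36, 0.73, 8.0))
```

Fewer repetition epochs shrink the per-character time T, so a model that keeps accuracy high at low epoch counts, as IncepA-EEGNet does on subject B, directly raises the achievable ITR.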
Method   Subject   Pc/%
n = 1 n = 2 n = 3 n = 4 n = 5 n = 6 n = 7 n = 8 n = 9 n = 10 n = 11 n = 12 n = 13 n = 14 n = 15
CNN-1[10] A 16 33 47 52 61 65 77 78 85 86 90 91 91 93 97
CNN-1[10] B 35 52 59 68 79 81 82 89 92 91 91 90 91 92 92
MCNN-1[10] A 18 31 50 54 61 68 76 76 79 82 89 92 91 93 97
MCNN-1[10] B 39 55 62 64 77 79 86 92 91 92 95 95 95 94 94
MCNN-3[10] A 17 35 50 55 63 67 78 79 84 85 91 90 92 94 97
MCNN-3[10] B 34 56 60 68 74 80 82 89 90 90 91 88 90 91 92
BN3[17] A 22 39 58 67 73 75 79 81 82 86 89 92 94 96 98
BN3[17] B 47 59 70 73 76 82 84 91 94 95 95 95 94 94 95
EEGNet[20] A 18 33 46 60 68 70 82 82 83 85 88 90 91 96 99
EEGNet[20] B 39 49 56 65 76 80 85 87 89 89 90 90 90 92 93
1D-CapsNet-64[18] A 21 32 45 53 60 68 76 83 85 84 82 88 94 96 98
1D-CapsNet-64[18] B 48 54 60 66 75 81 81 86 87 93 93 93 92 93 94
CM-CW-CNN-ESVM[19] A 22 32 55 59 64 70 74 78 81 86 86 90 91 94 99
CM-CW-CNN-ESVM[19] B 37 58 70 72 80 86 86 89 93 95 95 97 97 98 99
IncepA-EEGNet A 19 34 47 62 70 71 84 83 85 89 92 93 94 96 100
IncepA-EEGNet B 41 59 73 77 81 85 88 90 92 95 95 95 95 95 95
Tab.6 Character recognition rate of IncepA-EEGNet model and other methods
Subject   Character No.   Expected character   Output character
A 16 P Q
B 24 Q P
B 39 V W
B 10 Z H
Tab.7 Examples of character confusion in P300 speller recognition
Method   Number of parameters
EEGNet 5428
EEGNet+Attention 8969
EEGNet+Inception-v1 12742
IncepA-EEGNet 22970
Tab.8 Number of trainable parameters of EEGNet after adding different sub-modules
[1]   IKEGAMI S, TAKANO K, SAEKI N, et al Operation of a P300-based brain–computer interface by individuals with cervical spinal cord injury[J]. Clinical Neurophysiology, 2011, 122 (5): 991- 996
doi: 10.1016/j.clinph.2010.08.021
[2]   RICCIO A, SCHETTINI F, SIMIONE L, et al On the relationship between attention processing and P300-based brain computer interface control in amyotrophic lateral sclerosis[J]. Frontiers in Human Neuroscience, 2018, 12 (165): 1- 10
[3]   China Academy of Information and Communications Technology. White paper on the application of brain-computer interface technology in healthcare [EB/OL]. [2021-07-01]. http://www.caict.ac.cn/kxyj/qwfb/ztbg/202107/t20210715_380509.htm
[4]   FARWELL L A, DONCHIN E Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials[J]. Electroencephalography and Clinical Neurophysiology, 1988, 70 (6): 510- 523
doi: 10.1016/0013-4694(88)90149-6
[5]   HUANG Yu-jiao, GU Zheng-hui. P300-based interactive character input brain-computer interface system[J]. Computer Engineering and Design, 2014, 35 (4): 1385-1389
doi: 10.3969/j.issn.1000-7024.2014.04.051
[6]   ZHANG Nan-nan. Research on the design and optimization method of visual stimuli in brain-computer interface [D]. Changsha: National University of Defense Technology, 2019: 84.
[7]   XU M, CHEN L, ZHANG L, et al A visual parallel-BCI speller based on the time–frequency coding strategy[J]. Journal of Neural Engineering, 2014, 11 (2): 026014.1- 026014.11
[8]   RAKOTOMAMONJY A, GUIGUE V BCI competition III: dataset II-ensemble of SVMs for BCI P300 speller[J]. IEEE Transactions on Biomedical Engineering, 2008, 55 (3): 1147- 1154
doi: 10.1109/TBME.2008.915728
[9]   KRUSIENSKI D J, SELLERS E W, CABESTAING F, et al A comparison of classification techniques for the P300 speller[J]. Journal of Neural Engineering, 2006, 3 (4): 299.1- 299.13
[10]   CECOTTI H, GRÄSER A. Convolutional neural networks for P300 detection with application to brain-computer interfaces[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33 (3): 433- 445
[11]   ZHANG C, KIM Y K, ESKANDARIAN A EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification[J]. Journal of Neural Engineering, 2021, 18 (4): 046014.1- 046014.16
[12]   AMIN S U, ALTAHERI H, MUHAMMAD G, et al. Attention based Inception model for robust EEG motor imagery classification [C]// 2021 IEEE International Instrumentation and Measurement Technology Conference. Glasgow: IEEE, 2021: 1-6.
[13]   LI Y, LIU Y, CUI W G, et al Epileptic seizure detection in EEG signals using a unified temporal-spectral squeeze-and-excitation network[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2020, 28 (4): 782- 794
doi: 10.1109/TNSRE.2020.2973434
[14]   JIA Zi-yu, LIN You-fang, LIU Tian-hang, et al. Motor imagery classification based on multiscale feature extraction and squeeze-excitation model[J]. Journal of Computer Research and Development, 2020, 57 (12): 2481- 2489
doi: 10.7544/issn1000-1239.2020.20200723
[15]   RIVET B, SOULOUMIAC A, ATTINA V, et al xDAWN algorithm to enhance evoked potentials: application to brain–computer interface[J]. IEEE Transactions on Biomedical Engineering, 2009, 56 (8): 2035- 2043
doi: 10.1109/TBME.2009.2012869
[16]   XIAO X, XU M, JIN J, et al Discriminative canonical pattern matching for single-trial classification of ERP components[J]. IEEE Transactions on Biomedical Engineering, 2019, 67 (8): 2266- 2275
[17]   LIU M, WU W, GU Z, et al Deep learning based on batch normalization for P300 signal detection[J]. Neurocomputing, 2018, 275: 288- 297
doi: 10.1016/j.neucom.2017.08.039
[18]   LIU X, XIE Q, LV J, et al P300 event-related potential detection using one-dimensional convolutional capsule networks[J]. Expert Systems with Applications, 2021, 174 (15): 114701.1- 114701.12
[19]   KUNDU S, ARI S P300 based character recognition using convolutional neural network and support vector machine[J]. Biomedical Signal Processing and Control, 2020, 55: 101645.1- 101645.7
[20]   LAWHERN V J, SOLON A J, WAYTOWICH N R, et al EEGNet: a compact convolutional network for EEG-based brain-computer interfaces[J]. Journal of Neural Engineering, 2018, 15 (5): 056013.1- 056013.17
[21]   SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1-9.
[22]   HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
[23]   WANG Q, WU B, ZHU P, et al. ECA-Net: efficient channel attention for deep convolutional neural networks [C]// CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 11531-11539.
[24]   XIAO J, LIN Q, YU T, et al. A BCI system for assisting visual fixation assessment in behavioral evaluation of patients with disorders of consciousness [C]// 2017 8th International IEEE/EMBS Conference on Neural Engineering. Shanghai: IEEE, 2017: 399-402.
[25]   LIU L, WU F X, WANG Y P, et al Multi-receptive-field CNN for semantic segmentation of medical images[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24 (11): 3215- 3225
doi: 10.1109/JBHI.2020.3016306
[26]   WANG H, XU J, YAN R, et al A new intelligent bearing fault diagnosis method using SDP representation and SE-CNN[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 69 (5): 2377- 2389
[27]   LUO Y, LU B L. EEG data augmentation for emotion recognition using a conditional Wasserstein GAN [C]// 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Honolulu: IEEE, 2018: 2535-2538.
[28]   LEE T, KIM M, KIM S P. Data augmentation effects using borderline-SMOTE on classification of a P300-based BCI [C]// 2020 8th International Winter Conference on Brain-Computer Interface. Gangwon: IEEE, 2020