Journal of Zhejiang University (Engineering Science)  2022, Vol. 56 Issue (4): 745-753, 782    DOI: 10.3785/j.issn.1008-973X.2022.04.014
Computer Technology and Information Engineering
IncepA-EEGNet: P300 signal detection method based on fusion of Inception network and attention mechanism
Meng XU1(),Dan WANG1,*(),Zhi-yuan LI1,Yuan-fang CHEN2
1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2. Beijing Institute of Machinery and Equipment, Beijing 100039, China
Abstract:

A novel EEGNet variant based on the fusion of Inception and attention mechanism modules, called IncepA-EEGNet, was proposed to achieve more efficient P300 signal feature extraction. Convolutional layers with different receptive fields were connected in parallel, enhancing the network's ability to extract and express EEG signals. An attention mechanism was then introduced to assign weights to the features of different filters and extract the important information in the P300 signal. Validation experiments were conducted on the two subjects of BCI Competition III dataset II. Results showed that the average character recognition rate of IncepA-EEGNet reached 75.5% after 5 epochs, outperforming other deep learning models, and the information transfer rate on subject B reached 33.44 bit/min after 3 epochs. These results demonstrate that IncepA-EEGNet effectively improves the recognition accuracy of the P300 signal, reduces the time spent on repeated trials, and enhances the practicality of the P300 speller.

Key words: attention mechanism    Inception network    EEGNet    P300 detection    character spelling
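The attention module described above follows the squeeze-and-excitation (SE) scheme [22]: global average pooling over time, a bottleneck of two dense layers with reduction ratio r, and a sigmoid gate that rescales each filter. A minimal NumPy sketch of this channel weighting, with hypothetical filter counts and randomly initialized (untrained) dense layers, not the paper's learned weights:

```python
import numpy as np

def se_attention(features, r=3, rng=np.random.default_rng(0)):
    """SE-style channel weighting: squeeze (global average pool over time),
    excite (two dense layers with reduction ratio r), then rescale channels.
    Weight matrices are random stand-ins for trained parameters."""
    C, T = features.shape                          # filters x time samples
    w1 = rng.standard_normal((C // r, C)) * 0.1    # reduction layer (hypothetical)
    w2 = rng.standard_normal((C, C // r)) * 0.1    # expansion layer (hypothetical)
    z = features.mean(axis=1)                      # squeeze: one scalar per filter
    s = 1 / (1 + np.exp(-(w2 @ np.maximum(w1 @ z, 0))))  # excite: ReLU then sigmoid
    return features * s[:, None]                   # rescale each filter's time series

x = np.random.default_rng(1).standard_normal((12, 120))  # 12 filters, 120 samples
y = se_attention(x, r=3)
print(y.shape)  # (12, 120)
```

Each filter is multiplied by a single gate in (0, 1), so informative filters can be emphasized and the rest suppressed without changing the feature map's shape.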
Received: 2021-07-30    Published: 2022-04-24
CLC:  TP 391  
Supported by the National Natural Science Foundation of China (61672505)
Corresponding author: Dan WANG    E-mail: xumeng@emails.bjut.edu.cn; wangdan@bjut.edu.cn
About the author: Meng XU (b. 1991), male, Ph.D. candidate, engaged in research on brain-computer interfaces and rapid serial visual presentation. orcid.org/0000-0002-3634-0547. E-mail: xumeng@emails.bjut.edu.cn

Cite this article:

Meng XU, Dan WANG, Zhi-yuan LI, Yuan-fang CHEN. IncepA-EEGNet: P300 signal detection method based on fusion of Inception network and attention mechanism. Journal of Zhejiang University (Engineering Science), 2022, 56(4): 745-753, 782.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2022.04.014        https://www.zjujournals.com/eng/CN/Y2022/V56/I4/745

Fig. 1  Character matrix interface of the P300 speller
Fig. 2  Experimental paradigm flow of the P300 speller
Fig. 3  Inception-v1 module
Fig. 4  IncepA-EEGNet framework
Fig. 5  SE attention mechanism combined with the Inception-v1 module
Subject      Training set (target / non-target)    Test set (target / non-target)
Subject A    2550 / 12750                          3000 / 15000
Subject B    2550 / 12750                          3000 / 15000
Table 1  Data of the P300 spelling experiment
Fig. 6  Overall framework of character recognition with the P300 speller
Subject  K  Kernel size  Acc  R  P  F1
A 8 (8/4/2,1) 0.7323 0.6630 0.3432 0.4523
A 16 (16/8/4,1) 0.7384 0.6547 0.3485 0.4548
A 32 (32/16/8,1) 0.7425 0.6486 0.3520 0.4564
A 64 (64/32/16,1) 0.7314 0.6677 0.3430 0.4532
A 128 (128/64/32,1) 0.7553 0.6457 0.3670 0.4679
A 160 (160/80/40,1) 0.7514 0.6210 0.3582 0.4543
B 8 (8/4/2,1) 0.7817 0.7034 0.4122 0.5149
B 16 (16/8/4,1) 0.7855 0.7186 0.4168 0.5276
B 32 (32/16/8,1) 0.7848 0.7143 0.4154 0.5253
B 64 (64/32/16,1) 0.7899 0.6993 0.4215 0.5260
B 128 (128/64/32,1) 0.7914 0.7250 0.4261 0.5367
B 160 (160/80/40,1) 0.7889 0.6987 0.4199 0.5245
Table 2  Classification results of IncepA-EEGNet with different kernel parameters K
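The kernel pattern in Table 2 (K, K/2, and K/4 taps in three parallel temporal branches) can be illustrated with a minimal NumPy sketch; the moving-average filters and tensor shapes here are hypothetical stand-ins for the network's learned convolutions:

```python
import numpy as np

def inception_branch(x, k):
    """One hypothetical branch: a length-k moving-average filter per channel.
    'same' padding keeps the time dimension, so branches can be concatenated."""
    kernel = np.ones(k) / k
    return np.stack([np.convolve(ch, kernel, mode="same") for ch in x])

def parallel_branches(x, ks=(16, 8, 4)):
    """Concatenate branches with different receptive fields along the filter
    axis, mirroring the K / K/2 / K/4 kernel pattern of Table 2 (K = 16 here)."""
    return np.concatenate([inception_branch(x, k) for k in ks], axis=0)

x = np.random.default_rng(0).standard_normal((8, 240))  # 8 EEG channels, 240 samples
out = parallel_branches(x)
print(out.shape)  # (24, 240)
```

Because every branch preserves the time axis, stacking them triples the filter dimension while leaving the temporal resolution untouched, which is what lets the downstream layers see short- and long-latency structure simultaneously.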
Subject  r  Acc  R  P  F1
A 1 0.7458 0.6473 0.3557 0.4592
A 3 0.7553 0.6457 0.3670 0.4679
A 9 0.7425 0.6486 0.3520 0.4610
A 12 0.7314 0.6683 0.3430 0.4533
B 1 0.7869 0.7227 0.4193 0.5307
B 3 0.7914 0.7250 0.4261 0.5367
B 9 0.7954 0.6880 0.4291 0.5285
B 12 0.7829 0.7133 0.4125 0.5227
Table 3  Classification results of the attention mechanism with different reduction ratios r
Fig. 7  Training loss and test accuracy of IncepA-EEGNet
Added modules                 Subject  CNN-1   MCNN-1  MCNN-3  EEGNet
Base network (Net)            A        0.7037  0.6899  0.7038  0.7065
Base network (Net)            B        0.7065  0.6912  0.7037  0.7266
Net+Attention                 A        0.7092  0.6906  0.7091  0.7141
Net+Attention                 B        0.7185  0.7154  0.7192  0.7399
Net+Inception-v1              A        0.7100  0.6965  0.7103  0.7174
Net+Inception-v1              B        0.7222  0.7276  0.7203  0.7476
Net+Attention+Inception-v1    A        0.7186  0.7084  0.7258  0.7553
Net+Attention+Inception-v1    B        0.7454  0.7384  0.7478  0.7914
Table 4  Effect of adding sub-modules on the classification accuracy (Acc) of different CNN networks
Method  Subject  Acc  R  P  F1
CNN-1[10] A 0.7037 0.6737 0.3170 0.4311
CNN-1[10] B 0.7065 0.6783 0.4073 0.5090
MCNN-1[10] A 0.6899 0.6903 0.3085 0.4260
MCNN-1[10] B 0.6912 0.7340 0.3833 0.5034
MCNN-3[10] A 0.7038 0.6743 0.3172 0.4314
MCNN-3[10] B 0.7037 0.6923 0.4089 0.5141
EEGNet[20] A 0.7065 0.6460 0.3147 0.4232
EEGNet[20] B 0.7266 0.6950 0.4214 0.4587
BN3[17] A 0.7513 0.6133 0.3607 0.4605
BN3[17] B 0.7902 0.6947 0.4214 0.5246
IncepA-EEGNet A 0.7553 0.6456 0.3676 0.4679
IncepA-EEGNet B 0.7914 0.7250 0.4261 0.5367
Table 5  Comparison of IncepA-EEGNet with other deep learning methods on P300 signal classification
Fig. 8  Comparison of information transfer rates of IncepA-EEGNet and other methods on subjects A and B
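The information transfer rate compared in Fig. 8 is conventionally computed with the Wolpaw formula for an N-class speller. A sketch with hypothetical accuracy and selection-time values (not the paper's exact timing parameters):

```python
import math

def wolpaw_itr(n_classes, p, t_sel):
    """Wolpaw ITR: bits per selection, scaled to bit/min.
    n_classes: number of selectable characters (36 for a 6x6 matrix);
    p: probability of a correct selection (0 < p < 1);
    t_sel: seconds needed per selection."""
    bits = (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / t_sel

# Hypothetical example: 90 % accuracy at 10.5 s per character on a 6x6 matrix
itr = wolpaw_itr(36, 0.9, 10.5)
print(round(itr, 2))  # 23.93
```

The formula makes the trade-off in Fig. 8 explicit: fewer stimulus repetitions shrink t_sel but also lower p, so the ITR peaks at an intermediate number of repetitions.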
Method  Subject  Pc/%
n = 1 n = 2 n = 3 n = 4 n = 5 n = 6 n = 7 n = 8 n = 9 n = 10 n = 11 n = 12 n = 13 n = 14 n = 15
CNN-1[10] A 16 33 47 52 61 65 77 78 85 86 90 91 91 93 97
CNN-1[10] B 35 52 59 68 79 81 82 89 92 91 91 90 91 92 92
MCNN-1[10] A 18 31 50 54 61 68 76 76 79 82 89 92 91 93 97
MCNN-1[10] B 39 55 62 64 77 79 86 92 91 92 95 95 95 94 94
MCNN-3[10] A 17 35 50 55 63 67 78 79 84 85 91 90 92 94 97
MCNN-3[10] B 34 56 60 68 74 80 82 89 90 90 91 88 90 91 92
BN3[17] A 22 39 58 67 73 75 79 81 82 86 89 92 94 96 98
BN3[17] B 47 59 70 73 76 82 84 91 94 95 95 95 94 94 95
EEGNet[20] A 18 33 46 60 68 70 82 82 83 85 88 90 91 96 99
EEGNet[20] B 39 49 56 65 76 80 85 87 89 89 90 90 90 92 93
1D-CapsNet-64[18] A 21 32 45 53 60 68 76 83 85 84 82 88 94 96 98
1D-CapsNet-64[18] B 48 54 60 66 75 81 81 86 87 93 93 93 92 93 94
CM-CW-CNN-ESVM[19] A 22 32 55 59 64 70 74 78 81 86 86 90 91 94 99
CM-CW-CNN-ESVM[19] B 37 58 70 72 80 86 86 89 93 95 95 97 97 98 99
IncepA-EEGNet A 19 34 47 62 70 71 84 83 85 89 92 93 94 96 100
IncepA-EEGNet B 41 59 73 77 81 85 88 90 92 95 95 95 95 95 95
Table 6  Character recognition rates (Pc) of IncepA-EEGNet and other methods
Subject  Character index  Expected character  Output character
A 16 P Q
B 24 Q P
B 39 V W
B 10 Z H
Table 7  Character recognition confusions of the P300 speller
Method  Number of parameters
EEGNet 5428
EEGNet+Attention 8969
EEGNet+Inception-v1 12742
IncepA-EEGNet 22970
Table 8  Number of trainable parameters of EEGNet with different sub-modules added
1 IKEGAMI S, TAKANO K, SAEKI N, et al Operation of a P300-based brain–computer interface by individuals with cervical spinal cord injury[J]. Clinical Neurophysiology, 2011, 122 (5): 991- 996
doi: 10.1016/j.clinph.2010.08.021
2 RICCIO A, SCHETTINI F, SIMIONE L, et al On the relationship between attention processing and P300-based brain computer interface control in amyotrophic lateral sclerosis[J]. Frontiers in Human Neuroscience, 2018, 12 (165): 1- 10
3 China Academy of Information and Communications Technology. White paper on applications of brain-computer interface technology in healthcare [EB/OL]. [2021-07-01]. http://www.caict.ac.cn/kxyj/qwfb/ztbg/202107/t20210715_380509.htm
4 FARWELL L A, DONCHIN E Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials[J]. Electroencephalography and Clinical Neurophysiology, 1988, 70 (6): 510- 523
doi: 10.1016/0013-4694(88)90149-6
5 HUANG Yu-jiao, GU Zheng-hui P300-based interactive character input brain-computer interface system[J]. Computer Engineering and Design, 2014, 35 (4): 1385- 1389
doi: 10.3969/j.issn.1000-7024.2014.04.051
6 ZHANG Nan-nan. Research on the design and optimization method of visual stimuli in brain-computer interface [D]. Changsha: National University of Defense Technology, 2019: 84.
7 XU M, CHEN L, ZHANG L, et al A visual parallel-BCI speller based on the time–frequency coding strategy[J]. Journal of Neural Engineering, 2014, 11 (2): 026014.1- 026014.11
8 RAKOTOMAMONJY A, GUIGUE V BCI competition III: dataset II-ensemble of SVMs for BCI P300 speller[J]. IEEE Transactions on Biomedical Engineering, 2008, 55 (3): 1147- 1154
doi: 10.1109/TBME.2008.915728
9 KRUSIENSKI D J, SELLERS E W, CABESTAING F, et al A comparison of classification techniques for the P300 speller[J]. Journal of Neural Engineering, 2006, 3 (4): 299.1- 299.13
10 CECOTTI H, GRÄSER A Convolutional neural networks for P300 detection with application to brain computer interfaces[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011, 33 (3): 433- 445
11 ZHANG C, KIM Y K, ESKANDARIAN A EEG-inception: an accurate and robust end-to-end neural network for EEG-based motor imagery classification[J]. Journal of Neural Engineering, 2021, 18 (4): 046014.1- 046014.16
12 AMIN S U, ALTAHERI H, MUHAMMAD G, et al. Attention based Inception model for robust EEG motor imagery classification [C]// 2021 IEEE International Instrumentation and Measurement Technology Conference. Glasgow: IEEE, 2021: 1-6.
13 LI Y, LIU Y, CUI W G, et al Epileptic seizure detection in EEG signals using a unified temporal-spectral squeeze-and-excitation network[J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2020, 28 (4): 782- 794
doi: 10.1109/TNSRE.2020.2973434
14 JIA Zi-yu, LIN You-fang, LIU Tian-hang, et al Motor imagery classification based on multiscale feature extraction and squeeze-excitation model[J]. Journal of Computer Research and Development, 2020, 57 (12): 2481- 2489
doi: 10.7544/issn1000-1239.2020.20200723
15 RIVET B, SOULOUMIAC A, ATTINA V, et al xDAWN algorithm to enhance evoked potentials: application to brain–computer interface[J]. IEEE Transactions on Biomedical Engineering, 2009, 56 (8): 2035- 2043
doi: 10.1109/TBME.2009.2012869
16 XIAO X, XU M, JIN J, et al Discriminative canonical pattern matching for single-trial classification of ERP components[J]. IEEE Transactions on Biomedical Engineering, 2019, 67 (8): 2266- 2275
17 LIU M, WU W, GU Z, et al Deep learning based on batch normalization for P300 signal detection[J]. Neurocomputing, 2018, 275: 288- 297
doi: 10.1016/j.neucom.2017.08.039
18 LIU X, XIE Q, LV J, et al P300 event-related potential detection using one-dimensional convolutional capsule networks[J]. Expert Systems with Applications, 2021, 174 (15): 114701.1- 114701.12
19 KUNDU S, ARI S P300 based character recognition using convolutional neural network and support vector machine[J]. Biomedical Signal Processing and Control, 2020, 55: 101645.1- 101645.7
20 LAWHERN V J, SOLON A J, WAYTOWICH N R, et al EEGNet: a compact convolutional network for EEG-based brain-computer interfaces[J]. Journal of Neural Engineering, 2018, 15 (5): 056013.1- 056013.17
21 SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston: IEEE, 2015: 1-9.
22 HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
23 WANG Q, WU B, ZHU P, et al. ECA-Net: efficient channel attention for deep convolutional neural networks [C]// CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 11531-11539.
24 XIAO J, LIN Q, YU T, et al. A BCI system for assisting visual fixation assessment in behavioral evaluation of patients with disorders of consciousness [C]// 2017 8th International IEEE/EMBS Conference on Neural Engineering. Shanghai: IEEE, 2017: 399-402.
25 LIU L, WU F X, WANG Y P, et al Multi-receptive-field CNN for semantic segmentation of medical images[J]. IEEE Journal of Biomedical and Health Informatics, 2020, 24 (11): 3215- 3225
doi: 10.1109/JBHI.2020.3016306
26 WANG H, XU J, YAN R, et al A new intelligent bearing fault diagnosis method using SDP representation and SE-CNN[J]. IEEE Transactions on Instrumentation and Measurement, 2019, 69 (5): 2377- 2389
27 LUO Y, LU B L. EEG data augmentation for emotion recognition using a conditional Wasserstein GAN [C]// 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Honolulu: IEEE, 2018: 2535-2538.
28 LEE T, KIM M, KIM S P. Data augmentation effects using borderline-SMOTE on classification of a P300-based BCI [C]// 2020 8th International Winter Conference on Brain-Computer Interface. Gangwon: IEEE, 2020