浙江大学学报(工学版)  2026, Vol. 60 Issue (3): 633-642    DOI: 10.3785/j.issn.1008-973X.2026.03.019
计算机技术、控制工程     
基于时空注意力机制的轻量级脑纹识别算法
方芳,严军,郭红想,王勇*
中国地质大学(武汉) 机械与电子信息学院,湖北 武汉 430074
Lightweight brainprint recognition algorithm based on spatio-temporal attention mechanism
Fang FANG,Jun YAN,Hongxiang GUO,Yong WANG*
College of Mechanical Engineering and Electronic Information, China University of Geosciences (Wuhan), Wuhan 430074, China
摘要:

针对现有脑纹识别方法模型复杂、所需通道数多以及依赖长时间段信号等问题,提出基于时空注意力机制的轻量化卷积神经网络. 引入坐标注意力机制以增强空间特征提取能力,突出关键通道信息. 基于EEGNet网络,使用VOV-GSCSP模块替换EEGNet的第1层卷积,在不明显增加参数量的同时,提升模型对脑电信号的特征表达能力. 融合轻量级时间自注意力模块,在保持模型轻量化的同时,有效捕捉跨时间步的依赖关系,提升时序建模能力,使网络更具判别力. 利用该方法,在109人的PhysioNet数据集和32人的DEAP数据集上进行验证. 与基线EEGNet网络相比,在PhysioNet数据集的8通道条件下,基于运动想象、睁眼和闭眼3种状态的分类准确率分别提高了18.55%、23.61%、25.79%,在DEAP数据集5通道条件下的分类准确率提高了2.45%. 提出模型的参数量仅为0.29×10⁶,低于大多数现有的深度模型,且在通道数更低、时间段更短的情况下识别效果更佳,证明了该方法在脑纹识别任务中的有效性和鲁棒性.
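As an illustration of the coordinate attention step described in the abstract, the following is a minimal PyTorch sketch of coordinate attention (Hou et al. [15]) applied to an EEG feature map of shape (batch, feature maps, electrodes, time samples). The reduction ratio, the activation, and the point at which the module is inserted are illustrative assumptions rather than the configuration reported in the paper.

```python
# Minimal sketch of coordinate attention on an EEG feature map.
# Illustrative assumption: input layout is (batch, feature_maps, electrodes, time_samples).
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # pool over time, keep the electrode axis
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # pool over electrodes, keep the time axis
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        x_h = self.pool_h(x)                            # (b, c, electrodes, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)        # (b, c, time, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # electrode (spatial) attention
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # temporal attention
        return x * a_h * a_w

# e.g. a raw 8-channel, 1 s (160 Hz) EEG segment treated as a single feature map
x = torch.randn(4, 1, 8, 160)
print(CoordinateAttention(channels=1, reduction=1)(x).shape)  # torch.Size([4, 1, 8, 160])
```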

关键词: 脑电信号; 生物识别; 注意力机制; 轻量化卷积神经网络
Abstract:

A lightweight convolutional neural network based on a spatiotemporal attention mechanism was proposed to address the high model complexity, the large number of required channels and the reliance on long-duration signals of existing brainprint recognition methods. A coordinate attention mechanism was introduced to strengthen spatial feature extraction and highlight key channel information. Building on EEGNet, the first convolutional layer was replaced with a VOV-GSCSP module, which improved the feature representation of EEG signals without a significant increase in the number of parameters. A lightweight temporal self-attention module was integrated to capture dependencies across time steps while keeping the model lightweight, improving temporal modeling and making the network more discriminative. The method was validated on the 109-subject PhysioNet dataset and the 32-subject DEAP dataset. Compared with the baseline EEGNet, the classification accuracies for the motor imagery, eyes-open and eyes-closed states were improved by 18.55%, 23.61% and 25.79% under the 8-channel condition of the PhysioNet dataset, and the accuracy was improved by 2.45% under the 5-channel condition of the DEAP dataset. The proposed model has only 0.29×10⁶ parameters, fewer than most existing deep models, and achieves better recognition with fewer channels and shorter time segments, demonstrating the effectiveness and robustness of the method for brainprint recognition.

Key words: electroencephalographic signal    biometrics    attention mechanism    lightweight convolutional neural network
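The temporal self-attention component described above can be sketched as a small PyTorch block in the spirit of the lightweight temporal attention encoder of Garnot et al. [18]: features are first projected to a small embedding so that attention across time steps stays cheap. The dimensions d_model and n_heads and the use of nn.MultiheadAttention are illustrative assumptions, not the module configuration reported in the paper.

```python
# Minimal sketch of a lightweight temporal self-attention block.
# Illustrative assumption: the input is a sequence of pooled EEGNet feature vectors.
import torch
import torch.nn as nn

class LightTemporalAttention(nn.Module):
    def __init__(self, in_features: int, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(in_features, d_model)     # cheap projection keeps the block small
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, in_features)
        h = self.proj(x)
        a, _ = self.attn(h, h, h)                       # dependencies across time steps
        return self.norm(h + a)                         # residual connection stabilizes training

seq = torch.randn(4, 20, 16)                            # 20 time steps of 16-dimensional features
print(LightTemporalAttention(in_features=16)(seq).shape)  # torch.Size([4, 20, 32])
```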
收稿日期: 2025-04-09 出版日期: 2026-02-04
中图分类号: TP 391
基金资助: 国家自然科学基金资助项目(61973283).
通讯作者: 王勇     E-mail: 2170869118@qq.com;yongwang_cug@163.com
作者简介: 方芳(2001—),女,硕士生,从事脑机接口算法的研究. orcid.org/0009-0008-6210-5541. E-mail:2170869118@qq.com
引用本文:

方芳,严军,郭红想,王勇. 基于时空注意力机制的轻量级脑纹识别算法[J]. 浙江大学学报(工学版), 2026, 60(3): 633-642.

Fang FANG,Jun YAN,Hongxiang GUO,Yong WANG. Lightweight brainprint recognition algorithm based on spatio-temporal attention mechanism. Journal of Zhejiang University (Engineering Science), 2026, 60(3): 633-642.

链接本文:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2026.03.019        https://www.zjujournals.com/eng/CN/Y2026/V60/I3/633

图 1  网络整体架构
图 2  坐标注意力结构
图 3  VOV-GSCSP网络的结构
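To make the structure in 图3 concrete, the following is a minimal PyTorch sketch of a GSConv unit and a VoV-GSCSP-style block in the spirit of Li et al. [17], assumed here to stand in for EEGNet's first temporal convolution. Kernel sizes, channel widths and the number of stacked GSConv units are illustrative assumptions, not the published configuration.

```python
# Minimal sketch of GSConv and a VoV-GSCSP-style block (illustrative dimensions).
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """Half standard convolution + half depthwise convolution, followed by channel shuffle."""
    def __init__(self, c_in: int, c_out: int, k: int = 3):
        super().__init__()
        c_half = c_out // 2
        self.dense = nn.Sequential(
            nn.Conv2d(c_in, c_half, k, padding=k // 2, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())
        self.cheap = nn.Sequential(   # depthwise conv re-mixes the dense half at low cost
            nn.Conv2d(c_half, c_half, 5, padding=2, groups=c_half, bias=False),
            nn.BatchNorm2d(c_half), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y1 = self.dense(x)
        y = torch.cat([y1, self.cheap(y1)], dim=1)
        b, c, h, w = y.shape                            # interleave the two halves (channel shuffle)
        return y.view(b, 2, c // 2, h, w).transpose(1, 2).reshape(b, c, h, w)

class VoVGSCSP(nn.Module):
    """Cross-stage partial block: one branch stacks GSConv units, the other is a 1x1 shortcut."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        c_mid = c_out // 2
        self.branch = nn.Sequential(
            nn.Conv2d(c_in, c_mid, 1, bias=False), nn.BatchNorm2d(c_mid), nn.SiLU(),
            GSConv(c_mid, c_mid), GSConv(c_mid, c_mid))
        self.shortcut = nn.Conv2d(c_in, c_mid, 1, bias=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * c_mid, c_out, 1, bias=False), nn.BatchNorm2d(c_out), nn.SiLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([self.branch(x), self.shortcut(x)], dim=1))

# e.g. applied to an 8-electrode, 160-sample segment in place of a first temporal convolution
x = torch.randn(4, 1, 8, 160)
print(VoVGSCSP(1, 16)(x).shape)   # torch.Size([4, 16, 8, 160])
```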
实验参数 | 参数选择 | Physionet-MI | Physionet-EO | Physionet-EC | DEAP
(表中数值为(ACC±std)/%)
卷积核数 | (16,64) | 99.43±0.15 | 98.13±0.17 | 97.32±0.41 | 100.00±0.02
卷积核数 | (64,128) | 99.97±0.09 | 99.45±0.15 | 99.01±0.30 | 100.00±0.00
激活函数 | ELU | 99.95±0.11 | 98.64±0.23 | 98.26±0.39 | 99.92±0.08
激活函数 | SiLU | 99.96±0.10 | 98.70±0.21 | 98.34±0.36 | 100.00±0.01
激活函数 | GeLU | 99.97±0.10 | 99.15±0.17 | 97.35±0.45 | 100.00±0.00
激活函数 | ReLU | 99.97±0.09 | 99.45±0.15 | 99.01±0.30 | 100.00±0.00
注意力 | GCT | 99.95±0.13 | 99.12±0.16 | 98.34±0.33 | 100.00±0.01
注意力 | CA | 99.97±0.09 | 99.45±0.15 | 99.01±0.30 | 100.00±0.00
池化层 | AvgPool2d | 99.97±0.10 | 98.94±0.18 | 98.59±0.33 | 100.00±0.00
池化层 | MaxPool2d | 99.97±0.09 | 99.45±0.15 | 99.01±0.30 | 100.00±0.00
Dropout | 0.25 | 99.91±0.12 | 99.06±0.16 | 98.77±0.32 | 100.00±0.01
Dropout | 0.5 | 99.97±0.09 | 99.45±0.15 | 99.01±0.30 | 100.00±0.00
表 1  不同参数对2个数据集的不同状态数据的分类性能影响
slot/s | N | (ACC±std)/%
4 | 3 | 89.07±1.40
4 | 4 | 93.54±1.28
4 | 8 | 98.34±0.41
4 | 15 | 99.30±0.25
4 | 16 | 99.75±0.22
4 | 19 | 99.81±0.21
4 | 32 | 99.90±0.20
4 | 64 | 99.97±0.02
1 | 3 | 64.57±2.25
1 | 4 | 71.99±1.77
1 | 8 | 91.67±1.31
1 | 15 | 98.52±0.61
1 | 16 | 98.63±0.53
1 | 19 | 98.86±0.32
1 | 32 | 99.48±0.24
1 | 64 | 99.97±0.09
表 2  Physionet-MI不同参数下的测试结果
图 4  不同参数下Physionet-MI数据的t-SNE可视化图
数据集 | slot/s | N | (ACC±std)/% | (EER±std)/% | Np/10⁶
Physionet-EO | 2 | 3 | 90.60±2.50 | 0.179±0.091 | 0.29
Physionet-EO | 2 | 8 | 96.47±1.45 | 0.014±0.010 | 0.29
Physionet-EO | 2 | 16 | 98.70±0.73 | 0.002±0.008 | 0.30
Physionet-EO | 2 | 64 | 99.45±0.15 | 0.000±0.001 | 0.33
Physionet-EC | 2 | 3 | 90.16±2.49 | 0.137±0.070 | 0.29
Physionet-EC | 2 | 8 | 96.74±1.63 | 0.011±0.010 | 0.29
Physionet-EC | 2 | 16 | 97.53±1.06 | 0.003±0.009 | 0.30
Physionet-EC | 2 | 64 | 99.01±0.30 | 0.000±0.001 | 0.33
DEAP | 10 | 5 | 99.98±0.06 | 0.000±0.000 | 0.28
DEAP | 10 | 32 | 100.00±0.00 | 0.000±0.000 | 0.29
表 3  不同数据集在不同参数下的测试准确率
模型 | Physionet-MI | Physionet-EO | Physionet-EC | DEAP
(表中数值为(ACC±std)/%)
EEGNet | 79.79±2.49 | 72.86±3.21 | 70.95±3.45 | 97.53±0.25
VEEGNet | 80.83±2.21 | 74.18±3.03 | 72.66±3.22 | 99.89±0.20
CA+EEGNet | 81.00±2.06 | 75.53±2.89 | 73.28±2.95 | 99.93±0.17
EEGNet+LTAE | 96.01±0.89 | 89.28±1.96 | 90.21±2.01 | 99.54±0.18
CA+VEEGNet | 90.54±1.52 | 80.76±2.45 | 75.37±2.78 | 99.93±0.15
VEEGNet+LTAE | 98.25±0.48 | 94.85±1.55 | 94.85±1.72 | 99.97±0.08
CA+EEGNet+LTAE | 98.18±0.52 | 91.63±1.72 | 92.11±1.89 | 99.96±0.09
Proposed | 98.34±0.41 | 96.47±1.45 | 96.74±1.63 | 99.98±0.06
表 4  消融实验结果的比较
方法 | slot/s | N | ACC/% | EER/% | Np/10⁶
COH_CNN[21] | 1 | 19 | 98.22 | — | —
COH_CNN[21] | 1 | 15 | 97.74 | — | —
CNN-LSTM[8] | 1 | 16 | 99.58 | 0.410 | 1927.50
CNN-LSTM[8] | 1 | 64 | 99.58 | 0.410 | 1927.50
多任务对抗学习[22] | 1 | 64 | 99.20 | — | —
BGWO-SVM[23] | 1 | 23 | 94.13 | — | —
EPI-CGAN[24] | 1 | 64 | 99.02 | — | —
PCA+SVM[25] | 2 | 64 | 99.91 | — | —
DNN[26] | 4 | 64 | 97.81 | — | —
GCT-EEGNet[27] | 1 | 32 | 98.90 | 0.004 | 0.17
ResNet18[28] | 1 | 64 | 99.59 | — | 11.25
提出方法 | 1 | 15 | 98.52 | 0.001 | 0.29
提出方法 | 1 | 16 | 98.63 | 0.001 | 0.29
提出方法 | 1 | 19 | 98.86 | 0.001 | 0.29
提出方法 | 1 | 32 | 99.48 | 0.001 | 0.30
提出方法 | 1 | 64 | 99.97 | 0.000 | 0.32
表 5  Physionet数据集的运动想象数据比较
方法 | slot/s | N | EO ACC/% | EO EER/% | EC ACC/% | EC EER/% | Np/10⁶
连通性网络[11] | 12 | 64 | 96.90 | 4.400 | 92.60 | 6.500 | —
CNN-LSTM[29] | 4 | 64 | 95.00 | — | 95.33 | — | —
CNN-LSTM[29] | 8 | 64 | 96.20 | — | 97.00 | — | —
CNN-LSTM[29] | 12 | 64 | 98.00 | — | 99.95 | — | —
CNN-LSTM[29] | 16 | 64 | 92.50 | — | 93.20 | — | —
COR+GCN[13] | 1 | 64 | 98.56 | — | — | — | —
COR+GCN[13] | 1 | 40 | 97.13 | — | — | — | —
COR+GCN[13] | 1 | 16 | 53.41 | — | — | — | —
FDF+SVM_RBF[30] | 10 | 64 | 97.22 | — | — | — | —
RF[31] | 2 | 64 | 98.16 | — | 97.30 | — | —
SVM[31] | 2 | 64 | 97.64 | — | 96.02 | — | —
MCL+马氏距离分类器[32] | 10 | 64 | 99.40 | 6.330 | 98.80 | 10.500 | —
CNN+数据增强[9] | 12 | 64 | — | 0.190 | — | 0.300 | 0.82
PLV+Gamma[12] | 4 | 56 | 99.40 | — | — | — | —
PLV+Gamma[12] | 4 | 21 | 96.00 | — | — | — | —
ESTformer[33] | 10 | 64 | 94.61 | — | — | — | 33.70
Autoencoder-CNN[34] | 8 | 64 | 99.45 | — | 99.89 | — | —
提出方法 | 2 | 3 | 90.60 | 0.179 | 90.16 | 0.137 | 0.29
提出方法 | 2 | 8 | 96.47 | 0.014 | 96.74 | 0.011 | 0.29
提出方法 | 2 | 16 | 98.70 | 0.002 | 97.53 | 0.005 | 0.30
提出方法 | 2 | 64 | 99.45 | 0.000 | 99.01 | 0.000 | 0.33
表 6  Physionet数据集静息状态数据的比较
方法 | slot/s | N | ACC/% | Np/10⁶
CNN-GRU[35] | 10 | 5 | 99.17 | 1.89
CNN-GRU[35] | 10 | 32 | 99.90 | 1.89
MLP[36] | 1 | 32 | 95.00 | —
MLP[36] | 3 | 32 | 99.00 | —
EPI-CGAN[24] | 12 | 32 | 99.88 | —
提出方法 | 10 | 5 | 99.98 | 0.28
提出方法 | 10 | 32 | 100.00 | 0.29
表 7  DEAP数据集的比较
1 ZHANG S, SUN L, MAO X, et al Review on EEG-based authentication technology[J]. Computational Intelligence and Neuroscience, 2021, 2021 (1): 5229576
doi: 10.1155/2021/5229576
2 陈彬滨, 吴涛, 陈黎飞 基于缩放卷积注意力网络的跨多个体脑电情绪识别[J]. 中国生物医学工程学报, 2024, 43 (5): 550- 560
CHEN Binbin, WU Tao, CHEN Lifei Cross-subjects EEG emotion recognition based on scaled convolutional attention network[J]. Chinese Journal of Biomedical Engineering, 2024, 43 (5): 550- 560
3 刘国文, 唐佳佳, 金宣妤, 等 基于多尺度卷积和身份子空间的脑纹识别[J]. 杭州电子科技大学学报: 自然科学版, 2025, 45 (1): 82- 88
LIU Guowen, TANG Jiajia, JIN Xuanyu, et al Brainprint recognition based on multi-scale convolution and subspace[J]. Journal of Hangzhou Dianzi University: Natural Sciences, 2025, 45 (1): 82- 88
4 PALANIAPPAN R, MANDIC D P Biometrics from brain electrical activity: a machine learning approach[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29 (4): 738- 742
doi: 10.1109/TPAMI.2007.1013
5 FALLANI F D V, VECCHIATO G, TOPPI J, et al. Subject identification through standard EEG signals during resting states [C]//Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society. Boston: IEEE, 2011: 2331–2333.
6 ALBASRI A, ABDALI-MOHAMMADI F, FATHI A EEG electrode selection for person identification thru a genetic-algorithm method[J]. Journal of Medical Systems, 2019, 43 (9): 297
doi: 10.1007/s10916-019-1364-8
7 YANG S, DERAVI F On the usability of electroencephalographic signals for biometric recognition: a survey[J]. IEEE Transactions on Human-Machine Systems, 2017, 47 (6): 958- 969
doi: 10.1109/THMS.2017.2682115
8 SUN Y, LO F P W, LO B EEG-based user identification system using 1D-convolutional long short-term memory neural networks[J]. Expert Systems with Applications, 2019, 125: 259- 267
doi: 10.1016/j.eswa.2019.01.080
9 SCHONS T, MOREIRA G J P, SILVA P H L, et al. Convolutional network for EEG-based biometric [C]//Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Cham: Springer, 2018: 601–608.
10 WANG M, EL-FIQI H, HU J, et al Convolutional neural networks using dynamic functional connectivity for EEG-based person identification in diverse human states[J]. IEEE Transactions on Information Forensics and Security, 2019, 14 (12): 3259- 3272
doi: 10.1109/TIFS.2019.2916403
11 FRASCHINI M, HILLEBRAND A, DEMURU M, et al An EEG-based biometric system using eigenvector centrality in resting state brain networks[J]. IEEE Signal Processing Letters, 2015, 22 (6): 666- 670
doi: 10.1109/LSP.2014.2367091
12 KUMAR G P, DUTTA U, SHARMA K, et al. EEG-based biometrics: phase-locking value from gamma band performs well across heterogeneous datasets [C]//Proceedings of the International Conference of the Biometrics Special Interest Group. Darmstadt: IEEE, 2022: 1–6.
13 TIAN W, LI M, JU X, et al Applying multiple functional connectivity features in GCN for EEG-based human identification[J]. Brain Sciences, 2022, 12 (8): 1072
doi: 10.3390/brainsci12081072
14 MYUNG K, GU K, BUM P. Spectrogram and CNN based personal identification using EEG signal [C]//Proceedings of the IEEE International Conference on Consumer Electronics-Asia. Gangwon: IEEE, 2021: 1–4.
15 HOU Q, ZHOU D, FENG J. Coordinate attention for efficient mobile network design [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 13708–13717.
16 LAWHERN V J, SOLON A J, WAYTOWICH N R, et al EEGNet: a compact convolutional neural network for EEG-based brain–computer interfaces[J]. Journal of Neural Engineering, 2018, 15 (5): 056013
doi: 10.1088/1741-2552/aace8c
17 LI H, LI J, WEI H, et al Slim-neck by GSConv: a lightweight-design for real-time detector architectures[J]. Journal of Real-Time Image Processing, 2024, 21 (3): 62
doi: 10.1007/s11554-024-01436-6
18 GARNOT V S F, LANDRIEU L, GIORDANO S, et al. Satellite image time series classification with pixel-set encoders and temporal self-attention [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 12322–12331.
19 GOLDBERGER A L, AMARAL L A, GLASS L, et al PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals[J]. Circulation, 2000, 101 (23): E215- E220
20 KOELSTRA S, MUHL C, SOLEYMANI M, et al DEAP: a database for emotion analysis using physiological signals[J]. IEEE Transactions on Affective Computing, 2012, 3 (1): 18- 31
doi: 10.1109/T-AFFC.2011.15
21 ASHENAEI R, ASGHAR B A, YOUSEFI R T Stable EEG-based biometric system using functional connectivity based on time-frequency features with optimal channels[J]. Biomedical Signal Processing and Control, 2022, 77: 103790
doi: 10.1016/j.bspc.2022.103790
22 ZHOU M, FANG Y, XIAO Z. Personal identification with exploiting competitive tasks in EEG signals [C]//Biometric Recognition. Cham: Springer, 2021: 11–19.
23 ALYASSERI Z A A, AHMAD ALOMARI O, MAKHADMEH S N, et al EEG channel selection for person identification using binary grey wolf optimizer[J]. IEEE Access, 2022, 10: 10500- 10513
doi: 10.1109/ACCESS.2021.3135805
24 JIN R, WANG Y, BAI R, et al. EPI-CGAN: robust EEG-based person identification using conditional generative adversarial network [C]//Proceedings of the IEEE International Joint Conference on Biometrics. Abu Dhabi: IEEE, 2023: 1–9.
25 ORTEGA-RODRÍGUEZ J, GÓMEZ-GONZÁLEZ J F, PEREDA E Selection of the minimum number of EEG sensors to guarantee biometric identification of individuals[J]. Sensors, 2023, 23 (9): 4239
doi: 10.3390/s23094239
26 AKBARNIA Y, DALIRI M R EEG-based identification system using deep neural networks with frequency features[J]. Heliyon, 2024, 10 (4): e25999
doi: 10.1016/j.heliyon.2024.e25999
27 ALSHEHRI L, HUSSAIN M A lightweight GCT-EEGNet for EEG-based individual recognition under diverse brain conditions[J]. Mathematics, 2024, 12 (20): 3286
doi: 10.3390/math12203286
28 LI D, ZENG Z, HUANG N, et al Brain topographic map: a visual feature for multi-view fusion design in EEG-based biometrics[J]. Digital Signal Processing, 2025, 164: 105251
doi: 10.1016/j.dsp.2025.105251
29 DAS B B, KUMAR P, KAR D, et al A spatio-temporal model for EEG-based person identification[J]. Multimedia Tools and Applications, 2019, 78 (19): 28157- 28177
doi: 10.1007/s11042-019-07905-6
30 ALYASSERI Z A A, AL-BETAR M A, AWADALLAH M A, et al. EEG feature fusion for person identification using efficient machine learning approach [C]//Proceedings of the Palestinian International Conference on Information and Communication Technology. Gaza: IEEE, 2021: 97–102.
31 KAUR B, SINGH D. Neuro signals: a future biomertic approach towards user identification [C]//Proceedings of the 7th International Conference on Cloud Computing, Data Science and Engineering - Confluence. Noida: IEEE, 2017: 112–117.
32 YAHYAEI R, ESAT ÖZKURT T Mean curve length: an efficient feature for brainwave biometrics[J]. Biomedical Signal Processing and Control, 2022, 76: 103664
doi: 10.1016/j.bspc.2022.103664
33 LI D, ZENG Z, WANG Z, et al ESTformer: transformer utilising spatiotemporal dependencies for electroencephalogram super-resolution[J]. Knowledge-Based Systems, 2025, 317: 113345
doi: 10.1016/j.knosys.2025.113345
34 BANDANA D B, KUMAR R S, SATHYA B K, et al Person identification using autoencoder-CNN approach with multitask-based EEG biometric[J]. Multimedia Tools and Applications, 2024, 83 (35): 83205- 83225
doi: 10.1007/s11042-024-18693-z
35 WILAIPRASITPORN T, DITTHAPRON A, MATCHAPARN K, et al Affective EEG-based person identification using the deep learning approach[J]. IEEE Transactions on Cognitive and Developmental Systems, 2020, 12 (3): 486- 496
doi: 10.1109/TCDS.2019.2924648