Journal of ZheJiang University (Engineering Science)  2022, Vol. 56 Issue (2): 263-270    DOI: 10.3785/j.issn.1008-973X.2022.02.006
    
Silent liveness detection algorithm based on multi classification and feature fusion network
Xin-yu HUANG1, Fan YOU1, Pei ZHANG2,3, Zhao ZHANG2,3, Bai-li ZHANG1,*, Jian-hua LV1, Li-zhen XU1
1. School of Computer Science and Engineering, Southeast University, Nanjing 211189, China
2. State Key Laboratory of Smart Grid Protection and Control, Nanjing 211189, China
3. Nanri Group Corporation, Nanjing 211189, China

Abstract  

Existing studies of silent liveness detection neglect the differences between non-liveness attack types and do not consider the adverse impact that the class imbalance between liveness and non-liveness samples has on model training. In this paper, non-liveness attacks were subdivided into two categories, print attacks and display attacks, which transformed silent liveness detection from the traditional binary classification problem into a multi-classification problem, and cross-entropy was used as the loss function to train the network model. In this way, the drawbacks of binary classification and class imbalance can be eliminated, the common spoofing features of non-liveness face samples can be identified more accurately during training, and the accuracy of the network model in recognizing non-liveness faces is improved. Moreover, a two-stream feature-fusion network model was constructed to further improve the feature representation capacity of the model; it adopts an attention mechanism to adaptively fuse the feature vectors extracted from the RGB and YCrCb color spaces. Extensive comparative experiments were performed on four public datasets: CASIA-FASD, Replay-Attack, MSU-MFSD and OULU-NPU. Experimental results indicate that a silent liveness detection model adopting the multi-classification strategy and feature fusion can effectively reduce the classification error rate and improve generalization ability.
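The multi-classification strategy described above can be illustrated with a short sketch. The code below is only an illustrative outline, not the authors' implementation: it assumes a generic ResNet-18 backbone in place of the paper's BaseNet, an assumed class ordering of live / print attack / display attack, and a standard PyTorch cross-entropy training step.

```python
# Illustrative sketch (not the paper's code): silent liveness detection as a
# three-class problem -- live, print attack, display attack -- trained with
# multi-class cross-entropy. Backbone choice and class ordering are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # assumed ordering: 0 = live, 1 = print attack, 2 = display attack

model = models.resnet18(weights=None)                     # placeholder backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # 3-way classification head

criterion = nn.CrossEntropyLoss()                         # multi-class cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of face crops with 3-way labels."""
    logits = model(images)            # shape: (batch, 3)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def is_live(images: torch.Tensor) -> torch.Tensor:
    """At test time the two attack classes are merged back into 'non-live'."""
    probs = torch.softmax(model(images), dim=1)
    return probs.argmax(dim=1) == 0   # True where the live class wins
```

Splitting the attack class in two also brings the per-class sample counts closer together than the single live-versus-attack split, which is the class-imbalance effect contrasted in Fig.2 and Fig.3.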



Key words: face liveness detection; multi classification; class imbalance; cross-entropy loss; feature fusion
Received: 19 July 2021      Published: 03 March 2022
CLC:  TP 399  
Corresponding Authors: Bai-li ZHANG     E-mail: 2639239697@qq.com;220191827@seu.edu.cn
Cite this article:

Xin-yu HUANG,Fan YOU,Pei ZHANG,Zhao ZHANG,Bai-li ZHANG,Jian-hua LV,Li-zhen XU. Silent liveness detection algorithm based on multi classification and feature fusion network. Journal of ZheJiang University (Engineering Science), 2022, 56(2): 263-270.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2022.02.006     OR     https://www.zjujournals.com/eng/Y2022/V56/I2/263


Silent liveness detection algorithm based on multi classification and feature fusion network

Existing research on silent liveness detection ignores the differences between non-liveness attack types and does not consider the adverse effect of the class imbalance between liveness and non-liveness samples on model learning. This study subdivides non-liveness attacks into print attacks and display attacks, turning silent liveness detection from a traditional binary classification problem into a multi-classification problem, and proposes training the network model with cross-entropy as the loss function to overcome the binary-classification and class-imbalance problems, so that the common spoofing features of non-liveness face samples can be discovered and abstracted more accurately during training and the model's accuracy in recognizing non-liveness faces is improved. A two-stream feature-fusion network model is constructed, in which an attention mechanism adaptively weights and fuses the feature vectors extracted from the two color spaces RGB and YCrCb, to further improve the feature representation capacity of the network model. Extensive comparative experiments on four public datasets, CASIA-FASD, Replay-Attack, MSU-MFSD and OULU-NPU, show that a silent liveness detection model adopting the multi-classification strategy and feature fusion effectively reduces the classification error rate and improves generalization ability.
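The two-stream fusion described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration: the gating network, its layer sizes, and the 512-dimensional feature length are assumptions, not the paper's exact fusion module.

```python
# Illustrative sketch (not the paper's module): attention-weighted fusion of the
# feature vectors from the RGB branch and the YCrCb branch. The gating design
# and feature dimensions are assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Adaptively weights and sums two equal-length feature vectors."""

    def __init__(self, feat_dim: int = 512):
        super().__init__()
        # A small gating network predicts one weight per stream from the
        # concatenated features; softmax keeps the two weights non-negative
        # and summing to 1, i.e. adaptive weighted fusion.
        self.gate = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 2),
        )

    def forward(self, feat_rgb: torch.Tensor, feat_ycrcb: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(torch.cat([feat_rgb, feat_ycrcb], dim=1)), dim=1)
        return weights[:, 0:1] * feat_rgb + weights[:, 1:2] * feat_ycrcb

# Usage: fuse 512-d features from the two color-space branches, then classify
# into the three classes (live / print attack / display attack).
fusion = AttentionFusion(feat_dim=512)
classifier = nn.Linear(512, 3)
feat_rgb = torch.randn(8, 512)      # stand-in for RGB-branch features
feat_ycrcb = torch.randn(8, 512)    # stand-in for YCrCb-branch features
logits = classifier(fusion(feat_rgb, feat_ycrcb))
```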


Key words: face liveness detection; multi classification; class imbalance; cross-entropy loss; feature fusion
Fig.1 Three types of samples from OULU-NPU dataset
Fig.2 Sample ratio based on binary classification
Fig.3 Sample ratio based on multi classification
Fig.4 BaseNet network model
Fig.5 Two-stream feature fusion network
Network structure   Method                  EER/% (CASIA-FASD)   EER/% (MSU-MFSD)
ResNet18            Binary classification   2.7778               10.0000
ResNet18            +FL (focal loss)        4.0741               6.6667
ResNet18            Multi-classification    1.4815               7.5000
BaseNet             Binary classification   6.6667               5.8333
BaseNet             +FL (focal loss)        3.7037               2.5000
BaseNet             Multi-classification    5.5556               2.5000
Tab.1 Results of intra-testing on CASIA-FASD and MSU-MFSD datasets
Network structure   Method                  APCER/%   BPCER/%   ACER/%
ResNet18            Binary classification   12.7778   6.6667    9.7222
ResNet18            +FL (focal loss)        10.5556   3.0556    6.8056
ResNet18            Multi-classification    7.5000    4.1667    5.8333
BaseNet             Binary classification   11.6667   2.5000    7.0833
BaseNet             +FL (focal loss)        5.8333    5.0000    5.4167
BaseNet             Multi-classification    5.6944    5.0000    5.3472
Tab.2 Results of intra-testing on OULU-NPU dataset
Network structure   Method                  EER/%    HTER/%
ResNet18            Binary classification   0.0000   2.0000
ResNet18            +FL (focal loss)        0.0000   2.7875
ResNet18            Multi-classification    0.0000   0.7500
BaseNet             Binary classification   0.0000   1.0000
BaseNet             +FL (focal loss)        6.0000   2.8750
BaseNet             Multi-classification    0.0000   0.3750
Tab.3 Results of intra-testing on Replay-Attack dataset
Method              Replay-Attack EER/%   Replay-Attack HTER/%   CASIA-FASD EER/%
LBP-TOP[29]         7.900                 7.600                  10.000
CNN[14]             6.100                 2.100                  7.400
IDA[6]              ?                     7.400                  ?
Motion+LBP[30]      4.500                 5.110                  ?
Color-LBP[10]       0.400                 2.900                  6.200
MSR-Attention[18]   0.210                 0.389                  3.145
BaseNet-Fusion      1.000                 0.500                  2.961
Tab.4 Performance comparison of different methods on CASIA-FASD and Replay-Attack datasets
Method              APCER/%   BPCER/%   ACER/%
MixedFASNet[1]      9.7000    2.5000    6.1000
DeepPixBiS[31]      11.4000   0.6000    6.0000
MSR-Attention[18]   7.6000    2.2000    4.9000
BaseNet-Fusion      6.6667    2.5000    4.5833
Tab.5 Performance comparison of different methods on OULU-NPU dataset
Network structure   Method                  EER/% (train: CASIA-FASD, test: Replay-Attack)   EER/% (train: Replay-Attack, test: CASIA-FASD)
ResNet-18           Binary classification   40.8750                                          48.3333
ResNet-18           Multi-classification    36.7500                                          47.5926
BaseNet             Binary classification   47.8750                                          56.6670
BaseNet             Multi-classification    33.2500                                          45.3704
Tab.6 Cross-testing between CASIA-FASD and Replay-Attack datasets with binary classification and multi-classification
Method              EER/% (train: CASIA-FASD, test: Replay-Attack)   EER/% (train: Replay-Attack, test: CASIA-FASD)
LBP-TOP[29]         49.700                                           60.6000
CNN[14]             48.500                                           39.6000
Color-LBP[10]       47.000                                           39.6000
Color-Texture[32]   30.300                                           37.7000
FaceDs[33]          28.500                                           41.1000
MSR-Attention[18]   36.200                                           34.7000
BaseNet-Fusion      27.875                                           38.5185
Tab.7 Cross-testing of different methods between Replay-Attack and CASIA-FASD datasets
Fig.6 Feature distribution on test set of CASIA-FASD based on binary classification
Fig.7 Feature distribution on test set of CASIA-FASD based on multi classification
[1]   RAMACHANDRA R, BUSCH C Presentation attack detection methods for face recognition systems: a comprehensive survey[J]. ACM Computing Surveys (CSUR), 2017, 50 (1): 1- 37
doi: 10.1145/3009967
[2]   LU Zi-qian, LU Zhe-ming, SHEN Feng-li, et al A survey of face anti-spoofing[J]. Journal of Cyber Security, 2020, 5 (2): 18- 27
[3]   ZHANG Z W, YAN J J, LIU S F, et al. A face antispoofing database with diverse attacks[C]// 2012 5th IAPR International Conference on Biometrics (ICB). Phuket: IEEE, 2012: 26-31.
[4]   BOULKENAFET Z, AKHTAR Z, FENG X Y, et al Face anti-spoofing in biometric systems[J]. Biometric Security and Privacy, 2017, 299- 321
[5]   GALBALLY J, MARCEL S. Face anti-spoofing based on general image quality assessment[C]// 2014 22nd International Conference on Pattern Recognition (ICPR). Columbia: IEEE, 2014: 1173-1178.
[6]   DI W, HU H, JAIN A K Face spoof detection with image distortion analysis[J]. IEEE Transactions on Information Forensics and Security, 2015, 10 (4): 746- 761
doi: 10.1109/TIFS.2015.2400395
[7]   MAATTA J Face spoofing detection from single images using texture and local shape analysis[J]. IET Biometrics, 2012, 1 (1): 3- 10
doi: 10.1049/iet-bmt.2011.0009
[8]   RAGHAVENDRA R, RAJA K B, BUSCH C Presentation attack detection for face recognition using light field camera[J]. IEEE Transactions on Image Processing, 2015, 24 (3): 1060- 1075
doi: 10.1109/TIP.2015.2395951
[9]   CHINGOVSKA I, ANJOS A, MARCEL S. On the effectiveness of local binary patterns in face anti-spoofing[C]// Proceedings of International Conference of Biometrics Special Interest Group (BIOSIG). Darmstadt: IEEE, 2012: 1-7.
[10]   BOULKENAFET Z, KOMULAINEN J, HADID A. Face anti-spoofing based on color texture analysis[C]// 2015 IEEE International Conference on Image Processing (ICIP). Quebec City: IEEE, 2015: 2636-2640.
[11]   GRAGNANIELLO D, POGGI G, SANSONE C, et al An investigation of local descriptors for biometric spoofing detection[J]. IEEE Transactions on Information Forensics and Security, 2015, 10 (4): 849- 863
doi: 10.1109/TIFS.2015.2404294
[12]   BOULKENAFET Z, KOMULAINEN J, HADID A Face antispoofing using speeded-up robust features and fisher vector encoding[J]. IEEE Signal Processing Letters, 2016, 24 (2): 141- 145
[13]   PATEL K, HAN H, JAIN A K Secure face unlock: spoof detection on smartphones[J]. IEEE Transactions on Information Forensics and Security, 2016, 11 (10): 2268- 2283
doi: 10.1109/TIFS.2016.2578288
[14]   YANG J, LEI Z, LI S Z Learn convolutional neural network for face anti-spoofing[J]. Computer Science, 2014, 9281: 373- 384
[15]   LI L, FENG X Y, BOULKENAFET Z, et al. An original face anti-spoofing approach using partial convolutional neural network[C]// 2016 6th International Conference on Image Processing Theory, Tools and Applications (IPTA). Oulu: IEEE, 2016.
[16]   ATOUM Y, LIU Y J, JOURABLOO A, et al. Face anti-spoofing using patch and depth-based CNNs[C]// 2017 IEEE International Joint Conference on Biometrics (IJCB). Denver: IEEE, 2017: 319-328.
[17]   LIU Y J, JOURABLOO A, LIU X M. Learning deep models for face anti-spoofing: binary or auxiliary supervision[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City: IEEE, 2018: 389-398.
[18]   CHEN H, HU G, LEI Z, et al Attention-based two-stream convolutional networks for face spoofing detection[J]. IEEE Transactions on Information Forensics and Security, 2019, 15: 578- 593
[19]   LONG Ming, TONG Yue-yang Research on face liveness detection algorithm using convolutional neural network[J]. Journal of Frontiers of Computer Science and Technology, 2018, 12 (10): 1658- 1670
doi: 10.3778/j.issn.1673-9418.1801009
[20]   SHAO R, LAN X, LI J, et al. Multi-adversarial discriminative deep domain generalization for face presentation attack detection[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 10023-10031.
[21]   WANG Z, ZHAO C, QIN Y, et al. Exploiting temporal and depth information for multi-frame face anti-spoofing[EB/OL]. [2021-07-01]. https://arxiv.org/abs/1811.05118v3.
[22]   WANG Z Z, YU Z T, ZHAO C X, et al. Deep spatial gradient and temporal depth learning for face anti-spoofing[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 5042-5051.
[23]   YU Z T, ZHAO C X, WANG Z Z, et al. Searching central difference convolutional networks for face anti-spoofing[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 5295-5305.
[24]   SHEN T, HUANG Y Y, TONG Z J. Facebagnet: bag-of-local-features model for multi-modal face anti-spoofing[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). Long Beach: IEEE, 2019: 1611-1616.
[25]   ZHANG S, LIU A, WAN J, et al Casia-surf: a large-scale multi-modal benchmark for face anti-spoofing[J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2020, 2 (2): 182- 193
doi: 10.1109/TBIOM.2020.2973001
[26]   PI Jia-tian, YANG Jie-zhi, YANG Lin-xi, et al Lightweight face liveness detection method based on multi-modal feature fusion[J]. Journal of Computer Applications, 2020, 40 (12): 3658- 3665
[27]   BOULKENAFET Z, KOMULAINEN J, LI L, et al. Oulu-npu: a mobile face presentation attack database with real-world variations[C]// 2017 12th IEEE International Conference on Automatic Face and Gesture Recognition. Washington: IEEE, 2017: 612-618.
[28]   LIN T Y, GOYAL P, GIRSHICK R, et al. Focal loss for dense object detection[C]// Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice: IEEE, 2017: 2980-2988.
[29]   PEREIRA T D, ANJOS A, DE MARTINO J M, et al. Can face anti-spoofing countermeasures work in a real world scenario?[C]// 2013 International Conference on Biometrics (ICB). Madrid: IEEE, 2013: 1-8.
[30]   KOMULAINEN J, HADID A, PIETIKAINEN M, et al. Complementary countermeasures for detecting scenic face spoofing attacks[C]// 2013 International Conference on Biometrics (ICB). Madrid: IEEE, 2013: 1-7.
[31]   GEORGE A, MARCEL S. Deep pixel-wise binary supervision for face presentation attack detection[C]// 2019 International Conference on Biometrics (ICB). Crete: IEEE, 2019: 1-8.
[32]   BOULKENAFET Z, KOMULAINEN J, HADID A Face spoofing detection using colour texture analysis[J]. IEEE Transactions on Information Forensics and Security, 2016, 11 (8): 1818- 1830
doi: 10.1109/TIFS.2016.2555286