Front. Inform. Technol. Electron. Eng.  2014, Vol. 15 Issue (7): 525-536    DOI: 10.1631/jzus.C1300190
    
Contact-free and pose-invariant hand-biometric-based personal identification system using RGB and depth data
Can Wang, Hong Liu, Xing Liu
Engineering Laboratory on Intelligent Perception for Internet of Things (ELIP) and MOE Key Laboratory of Machine Perception, Shenzhen Graduate School, Peking University, Shenzhen 518055, China

Abstract  Hand-biometric-based personal identification is considered an effective method for automatic recognition. However, existing systems impose strict constraints during data acquisition, such as costly devices, specified postures, simple backgrounds, and stable illumination. In this paper, a contactless personal identification system is proposed based on matching hand geometry features and color features. An inexpensive Kinect sensor is used to acquire depth and color images of the hand. During image acquisition, no pegs or surfaces are used to constrain hand position or posture. The hand is segmented from the background using depth images, a process that is insensitive to illumination and background clutter. Finger orientations and landmark points, such as fingertips and finger valleys, are then obtained by geodesic hand contour analysis. Geometric features are extracted from depth images and palmprint features from intensity images. In previous systems, hand features such as finger length and width are normalized, which results in the loss of the original geometric information. In our system, 2D image points are transformed into real-world coordinates, so that the geometric features remain invariant to distance and perspective effects. Extensive experiments demonstrate that the proposed hand-biometric-based personal identification system is effective and robust in various practical situations.
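As a rough illustration of the 2D-to-real-world transformation mentioned in the abstract (a minimal sketch, not the paper's implementation), the snippet below back-projects a pixel and its depth value to 3D camera coordinates under an assumed pinhole camera model; the intrinsics FX, FY, CX, CY are placeholder values that would normally come from sensor calibration.

```python
import numpy as np

# Placeholder Kinect depth-camera intrinsics; real values come from calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
CX, CY = 319.5, 239.5   # principal point in pixels (assumed)

def pixel_to_world(u, v, depth_mm):
    """Back-project pixel (u, v) with depth (mm) to 3D camera coordinates (mm).

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy. Distances
    between 3D points (e.g. finger length or width) therefore do not shrink
    as the hand moves away from the sensor, unlike distances in pixels.
    """
    z = float(depth_mm)
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

# Example: a fingertip and its neighboring finger valley, about 0.8 m away.
tip = pixel_to_world(340, 200, 800)
valley = pixel_to_world(330, 260, 805)
finger_length_mm = np.linalg.norm(tip - valley)
```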

Key words: Hand biometric; Contact free; Pose invariant; Identification system; Multiple features
Received: 13 July 2013      Published: 10 July 2014
CLC:  TP391.4  
Cite this article:

Can Wang, Hong Liu, Xing Liu. Contact-free and pose-invariant hand-biometric-based personal identification system using RGB and depth data. Front. Inform. Technol. Electron. Eng., 2014, 15(7): 525-536.

URL:

http://www.zjujournals.com/xueshu/fitee/10.1631/jzus.C1300190     OR     http://www.zjujournals.com/xueshu/fitee/Y2014/V15/I7/525


A hand-biometric-based personal identification system using color and depth data

Research objective: Hand biometrics are regarded as an effective basis for automatic personal identification. We extract multiple cues from depth and color information to improve identification accuracy.
Innovations: Building on existing methods based on color and texture features, the system uses depth information from a depth sensor, fully exploits curvature features of the hand contour to extract hand geometric features, and normalizes hands captured in different postures using hand contour features and palm-plane fitting (a sketch of this plane fitting is given after the key conclusions below).
Method highlights: First, the hand region is extracted by exploiting the robustness of depth information for image segmentation; the hand posture is then corrected using geometric information from the hand contour. From the corrected hand region, depth-based geometric features and color- and texture-based features are extracted and combined with classical hand-biometric features derived from color information to describe the hand. This makes full use of the advantages of depth information for hand pose correction and of geometric cues such as the hand contour. The color and texture cues can be described with features from many classical methods and can be captured with a high-resolution camera.
Key conclusions: Extensive experiments confirm that fusing multiple cues to describe hand biometrics improves the accuracy of hand feature extraction and personal identification over traditional features, and that the system is effective and robust in practical applications.
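The palm-plane fitting mentioned in the innovations above can be sketched as follows (an illustrative example under assumed conventions, not the authors' code): a least-squares plane is fit to back-projected 3D palm points with SVD, and the hand is rotated so that the palm normal faces the camera, giving a common frontal pose before features are extracted. All function names here are hypothetical.

```python
import numpy as np

def fit_palm_plane(points_3d):
    """Least-squares plane fit to (N, 3) real-world palm points via SVD.

    Returns the centroid and the unit normal of the best-fit plane; the
    normal is the direction of least variance of the centered points.
    """
    centroid = points_3d.mean(axis=0)
    _, _, vt = np.linalg.svd(points_3d - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def rotation_to_frontal(normal, target=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix aligning the palm normal with the camera z-axis,
    so hands captured at different orientations share a frontal pose."""
    v = np.cross(normal, target)          # rotation axis (unnormalized)
    c = float(np.dot(normal, target))     # cosine of the rotation angle
    s = np.linalg.norm(v)                 # sine of the rotation angle
    if s < 1e-8:
        # Already aligned, or anti-parallel (rotate 180 deg about the x-axis).
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    # Rodrigues' formula for the rotation taking `normal` onto `target`.
    return np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)

# Usage: rotate all hand points into the normalized (frontal) pose.
# hand_points: (N, 3) array of back-projected hand-surface points.
# centroid, n = fit_palm_plane(hand_points)
# frontal = (hand_points - centroid) @ rotation_to_frontal(n).T
```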

Key words: Color and depth data, RGB-D, Biometrics, Personal identification