Journal of Zhejiang University (Engineering Science), 2013, Vol. 47, Issue (12): 2243-2252    DOI: 10.3785/j.issn.1008-973X.2013.12.026
Computer Technology, Light Industry
Augmented reality registration from natural features based on planar color distribution
XIE Tian, XIE Li-jun, SONG Guang-hua, ZHENG Yao
Center for Engineering and Scientific Computation, Institute of Aerospace Information Technology, Zhejiang University, Hangzhou 310027, China
Abstract:

In augmented reality, natural-feature (markerless) registration algorithms have difficulty achieving both high accuracy and real-time speed because natural features are complex and irregular. To address this problem, a natural-feature registration algorithm based on planar color distribution was proposed. The algorithm extracts colored connected regions as local invariant features, describes each region with simple hue and geometry information, and matches the regions through a global optimization that exploits the geometric constraints imposed on the color distribution by an unknown view transformation. The algorithm does not depend on tracking motion across consecutive frames; registration is completed independently on every frame. Evaluated on Mikolajczyk's standard dataset, the algorithm achieves real-time registration at 15 frames per second on 800×600 images. A comparison with speeded-up robust features (SURF) shows that the algorithm satisfies considerably harder registration conditions while maintaining good registration accuracy, and registration results on live video demonstrate that it is also robust to motion blur.
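To make the pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' implementation, of how colored connected regions could be extracted and used for homography-based planar registration with OpenCV (which the paper cites as reference [17]). All function names, thresholds, and the greedy matching step are assumptions made for illustration; the paper's actual global matching optimization over the whole color distribution is not reproduced here.

```python
import cv2
import numpy as np

def extract_color_regions(bgr, sat_thresh=60, val_thresh=60, min_area=200):
    """Segment saturated pixels and describe each connected region by
    mean hue, area, centroid and Hu moments (a simplified stand-in for
    the paper's colored-connected-region features)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = ((s > sat_thresh) & (v > val_thresh)).astype(np.uint8) * 255
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    regions = []
    for i in range(1, num):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        region = (labels == i).astype(np.uint8)
        hu = cv2.HuMoments(cv2.moments(region, binaryImage=True)).flatten()
        regions.append({
            "hue": cv2.mean(h, mask=region)[0],  # OpenCV hue lies in [0, 180)
            "area": float(stats[i, cv2.CC_STAT_AREA]),
            "centroid": centroids[i],
            "hu": np.sign(hu) * np.log10(np.abs(hu) + 1e-12),  # log-scaled Hu moments
        })
    return regions

def match_regions(ref, cur, hue_tol=15.0):
    """Greedy nearest-neighbour pairing by hue and Hu-moment distance.
    This stands in for, and does not reproduce, the paper's global
    matching optimization over the whole color distribution."""
    pairs = []
    for i, r in enumerate(ref):
        best, best_d = -1, np.inf
        for j, c in enumerate(cur):
            dh = abs(r["hue"] - c["hue"])
            if min(dh, 180.0 - dh) > hue_tol:    # hue is periodic
                continue
            d = np.linalg.norm(r["hu"] - c["hu"])
            if d < best_d:
                best, best_d = j, d
        if best >= 0:
            pairs.append((i, best))
    return pairs

def register(ref_img, cur_img):
    """Estimate a planar homography from matched region centroids."""
    ref, cur = extract_color_regions(ref_img), extract_color_regions(cur_img)
    pairs = match_regions(ref, cur)
    if len(pairs) < 4:                           # a homography needs >= 4 correspondences
        return None
    src = np.float32([ref[i]["centroid"] for i, _ in pairs])
    dst = np.float32([cur[j]["centroid"] for _, j in pairs])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

In this sketch the descriptor is only the mean hue plus log-scaled Hu moments of the region mask, and the per-region greedy pairing replaces the paper's global optimization under the geometric constraints of the color distribution; the RANSAC homography at the end plays the role of the final per-frame registration step.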

Publication date: 2013-12-01
CLC number: TP 391.4
Funding:

National Natural Science Foundation of China (11272285, 10876036, 11172267); Natural Science Foundation of Zhejiang Province (LY12F02026); Science and Technology Department of Zhejiang Province (2009C31112).

Corresponding author: SONG Guang-hua, male, associate professor.     E-mail: ghsong@zju.edu.cn
About the first author: XIE Tian (1986—), male, Ph.D. candidate, working on augmented reality and computer vision. E-mail: rickyskyxie@zju.edu.cn

Cite this article:

XIE Tian, XIE Li-jun, SONG Guang-hua, ZHENG Yao. Augmented reality registration from natural features based on planar color distribution [J]. Journal of Zhejiang University (Engineering Science), 2013, 47(12): 2243-2252.

Link to this article:

http://www.zjujournals.com/eng/CN/Y2013/V47/I12/2243

[1] AZUMA R, BAILLOT Y, BEHRINGER R, et al. Recent advances in augmented reality [J]. IEEE Computer Graphics and Applications, 2001, 21(6): 34-47.

[2] KATO H, BILLINGHURST M. Marker tracking and HMD calibration for a video-based augmented reality conferencing system [C]∥ Proceedings of 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR′99). USA: IEEE Computer Society, 1999: 85-94.

[3] FIALA M. ARTag, a fiducial marker system using digital techniques[C]∥ IEEE Conference on Computer Vision and Pattern Recognition, 2005. San Diego, USA: IEEE, 2005, 2: 590-596.

[4] LEPETIT V, FUA P. Monocular model-based 3D tracking of rigid objects: a survey [J]. Foundations and Trends in Computer Graphics and Vision, 2005, 1(1): 191.

[5] ZHOU F, DUH H, BILLINGHURST M. Trends in augmented reality tracking, interaction and display: a review of ten years of ISMAR[C]∥IEEE International Symposium on Mixed and Augmented Reality(ISMAR’08). Cambridge, UK: IEEE, 2008, 7: 193-202.

[6] QUAN Hong-yan, WANG Chang-bo, LIN Jun-juan. Survey of vision-based augmented reality technologies [J]. Robot, 2008, 30(4): 379-384.

[7] BARANDIARAN I, PALOC C, GRAÑA M. Real-time optical markerless tracking for augmented reality applications [J]. Journal of Real-Time Image Processing, 2010, 5(2): 129-138.

[8] TAYLOR S, ROSTEN E, DRUMMOND T. Robust feature matching in 2.3 μs [C]∥ IEEE Conference on Computer Vision and Pattern Recognition (CVPR’09). Miami, USA: IEEE Computer Society, 2009: 15-22.

[9] LOWE D G. Distinctive image features from scale-invariant keypoints [J]. International Journal of Computer Vision, 2004, 60(2): 91-110.

[10] CALONDER M, LEPETIT V, STRECHA C, et al. BRIEF: Binary robust independent elementary features [C]∥ Proceedings of European Conference on Computer Vision (ECCV’10). Heraklion, Greece: Springer, 2010, 6314: 778-792.

[11] ALAHI A, ORTIZ R, VANDERGHEYNST P. FREAK: Fast retina keypoint [C]∥ IEEE Conference on Computer Vision and Pattern Recognition (CVPR’12). Providence, USA: IEEE Computer Society, 2012: 510-517.

[12] LEPETIT V, LAGGER P, FUA P. Randomized trees for real-time keypoint recognition[C]∥ IEEE Conference on Computer Vision and Pattern Recognition, 2005. San Diego, USA: IEEE, 2005, 2: 775-781.

[13] OZUYSAL M, FUA P, LEPETIT V. Fast keypoint recognition in ten lines of code [C]∥ IEEE Conference on Computer Vision and Pattern Recognition (CVPR’07). Minneapolis, USA: IEEE Computer Society, 2007: 1-8.

[14] LIN Kai-yan, WU Jun-hui, XU Li-hong. A survey on color image segmentation techniques [J]. Journal of Image and Graphics, 2005, 10(1): 1-10.

[15] CHEN Jia-xin, JIA Ying-min. Real-time color object recognition method based on flood fill algorithm [J]. Computer Simulation, 2012, 29(3): 49.

[16] HU M K. Visual pattern recognition by moment invariants [J]. IRE Transactions on Information Theory, 1962, 8(2): 179-187.

[17] BRADSKI G, KAEHLER A. Learning OpenCV [M]. Beijing: Tsinghua University Press, 2009.

[18] XIA Yong-quan, LIU Zheng-dong, YANG Jing-yu. Application of moment invariant approach in region matching [J]. Journal of Computer Aided Design & Computer Graphics, 2005, 17(10): 2152-2156.

[19] BAY H, ESS A, TUYTELAARS T, et al. Speeded-up robust features (SURF) [J]. Computer Vision and Image Understanding, 2008, 110(3): 346-359.

[20] MIKOLAJCZYK K, TUYTELAARS T, SCHMID C, et al. A comparison of affine region detectors [J]. International Journal of Computer Vision, 2005, 65(1/2): 43-72.

[21] UCHIYAMA H, MARCHAND E. Object detection and pose tracking for augmented reality: Recent approaches [C]∥ Proceedings of 18th Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV’12). Kawasaki, Japan: The Institute of Electrical Engineers of Japan, 2012.

[22] LIEBERKNECHT S, BENHIMANE S, MEIER P, et al. A dataset and evaluation methodology for template-based tracking algorithms [C]∥ IEEE International Symposium on Mixed and Augmented Reality (ISMAR’09). Orlando, USA: IEEE, 2009: 145-151.
