J4, 2014, Vol. 48, Issue (2): 279-284. DOI: 10.3785/j.issn.1008-973X.2014.02.014
Telecommunication Technology
Monocular vision odometry based on the fusion of optical flow and feature points matching
ZHENG Chi1,2, XIANG Zhi-yu1,2, LIU Ji-lin1,2
1. Department of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China;
2. Zhejiang Provincial Key Laboratory of Information Network Technology, Hangzhou 310027, China
Full text: PDF (2019 KB)
Abstract:

To address the problem of accurate real-time localization on flat urban road surfaces, a monocular visual odometry method was proposed that fuses optical flow tracking and feature point matching through a Kalman filter. Under the planar ground assumption, optical flow tracking was applied to estimate small displacements between consecutive frames, while traditional speeded-up robust features (SURF) matching over larger inter-frame displacements was used to correct the optical flow result. The position and attitude of the robot were updated through the Kalman filter. The results demonstrate that the fusion algorithm overcomes the poor positioning accuracy of optical flow and the low processing speed of feature point matching, while retaining the real-time performance of the former and the localization accuracy of the latter. The method provides accurate localization output in real time and shows a degree of robustness to illumination changes and weakly textured road surfaces.
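The fusion step described above can be pictured as a standard Kalman update in which the per-frame displacement from optical flow serves as the prediction and the SURF-matching displacement, when available, serves as the correcting measurement. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' implementation: the planar displacement state [dx, dy, dθ], the identity observation model, the noise covariances, and the 5-frame SURF interval are all chosen for the example, and synthetic noisy displacements stand in for the flow and SURF front ends (which in the paper would come from image measurements under the planar road assumption).

```python
# Minimal Kalman-fusion sketch (illustrative assumptions, not the paper's exact model):
# every frame an optical-flow displacement arrives (fast, noisier); every few frames a
# SURF-matching displacement arrives (slower, more accurate) and corrects it.
import numpy as np

def kalman_fuse_step(d_flow, P_flow, d_surf=None, P_surf=None):
    """Fuse one inter-frame displacement d = [dx, dy, dtheta] (robot frame).

    d_flow / P_flow : optical-flow estimate and its (assumed) covariance.
    d_surf / P_surf : SURF-matching estimate and covariance, or None if absent.
    """
    d, P = np.asarray(d_flow, dtype=float), P_flow.copy()
    if d_surf is not None:
        # Standard Kalman update with identity observation model H = I.
        K = P @ np.linalg.inv(P + P_surf)        # Kalman gain
        d = d + K @ (d_surf - d)                 # corrected displacement
        P = (np.eye(3) - K) @ P                  # updated covariance
    return d, P

def compose(pose, d):
    """Accumulate a robot-frame displacement onto the planar pose [x, y, theta]."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return np.array([x + c * d[0] - s * d[1],
                     y + s * d[0] + c * d[1],
                     th + d[2]])

# Synthetic demo: constant true motion, noisy flow every frame, SURF every 5th frame.
rng = np.random.default_rng(0)
true_d = np.array([0.30, 0.00, 0.01])            # metres, metres, radians per frame
P_flow = np.diag([0.05, 0.05, 0.02]) ** 2        # assumed optical-flow noise
P_surf = np.diag([0.01, 0.01, 0.005]) ** 2       # assumed SURF-matching noise
pose = np.zeros(3)
for k in range(50):
    d_flow = true_d + rng.normal(0.0, [0.05, 0.05, 0.02])
    d_surf = true_d + rng.normal(0.0, [0.01, 0.01, 0.005]) if k % 5 == 0 else None
    d, _ = kalman_fuse_step(d_flow, P_flow, d_surf, P_surf)
    pose = compose(pose, d)
print(pose)   # fused planar pose after 50 frames (a gentle arc, since dtheta != 0)
```

Setting P_flow larger than P_surf reproduces the behaviour the abstract describes: between SURF keyframes the filter simply integrates the optical-flow displacements, and each SURF measurement pulls the estimate back toward the more accurate value.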

Key words: monocular vision odometry    optical flow    feature points matching    Kalman filter
Published: 2014-03-03
CLC number: TN 919
Funding: National Natural Science Foundation of China (61071219).
Corresponding author: XIANG Zhi-yu, male, associate professor. E-mail: xiangzy@zju.edu.cn
About the first author: ZHENG Chi (1986—), male, master's degree candidate, working on machine vision. E-mail: 21031143@zju.edu.cn

Cite this article:

ZHENG Chi, XIANG Zhi-yu, LIU Ji-lin. Monocular vision odometry based on the fusion of optical flow and feature points matching. J4, 2014, 48(2): 279-284.

Link to this article:

http://www.zjujournals.com/xueshu/eng/CN/10.3785/j.issn.1008-973X.2014.02.014
http://www.zjujournals.com/xueshu/eng/CN/Y2014/V48/I2/279

[1] ROYER E, LHUILLIER M, DHOME M, et al. Monocular vision for mobile robot localization and autonomous navigation [J]. International Journal of Computer Vision, 2007, 74(3): 237-260.
[2] SONG Xiao-jing, SONG Zi-bin, SENEVIRATNE L D, et al. Optical flow-based slip and velocity estimation technique for unmanned skid-steered vehicles [C]∥ Intelligent Robots and Systems (IROS), 2008 IEEE/RSJ International Conference on. Nice: IEEE, 2008: 101-106.
[3] NISTER D, NARODITSKY O, BERGEN J. Visual odometry [C]∥ Computer Vision and Pattern Recognition (CVPR), 2004 IEEE Computer Society Conference on. Washington, DC: IEEE, 2004, 1(1): 652-659.
[4] HOWARD A. Real-time stereo visual odometry for autonomous ground vehicles [C]∥ Intelligent Robots and Systems (IROS), 2008 IEEE/RSJ International Conference on. Nice: IEEE, 2008: 3946-3952.
[5] KITT B, REHDER J, CHAMBERS A, et al. Monocular visual odometry using a planar road model to solve scale ambiguity [C]∥ Proceedings of the European Conference on Mobile Robots. Orebro: [s. n.], 2011: 43-48.
[6] SUN D, ROTH S, BLACK M J. Secrets of optical flow estimation and their principles [C]∥ Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. San Francisco: IEEE, 2010: 2432-2439.
[7] LUCAS B D, KANADE T. An iterative image registration technique with an application to stereo vision [C]∥ International Joint Conference on Artificial Intelligence (IJCAI). Vancouver: AAAI, 1981: 674-679.
[8] BOUGUET J. Pyramidal implementation of the affine Lucas Kanade feature tracker: description of the algorithm [J]. Intel Corporation, 2001, 5.
[9] BAY H, ESS A, TUYTELAARS T, et al. SURF: speeded up robust features [J]. Computer Vision and Image Understanding (CVIU), 2008, 110(3): 346-359.
