Journal of Zhejiang University (Engineering Science)
Obstacle detection based on V-intercept in disparity space
CAO Teng, XIANG Zhi-yu, LIU Ji-lin
Department of Information Science and Electronic Engineering, Zhejiang Provincial Key Laboratory of Information Network Technology, Zhejiang University, Hangzhou 310027, China
Abstract:

A new "V-intercept" method for analyzing obstacles in disparity space was proposed in order to analyze the environment around stereo-vision-based driving systems more efficiently. Unlike traditional methods, which detect obstacles in 3D space, this method works directly in disparity space, converting obstacle slope information into an intercept on the V axis of the disparity space. The slope-to-intercept conversion relationship in disparity space was derived, and a reasonable threshold interval was determined. The whole detection algorithm is fast and efficient, is not constrained in principle by the flat-road assumption, and has strong practical value. Experiments in multiple environments showed that the method detects obstacles reliably and stably, and that it runs 3.9 times faster than the 3D-space-based method.
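The abstract only sketches the method, but the slope-to-intercept conversion it mentions follows from the standard pinhole stereo model. For focal length f, baseline b, and principal row v0, a 3D point at depth Z and height Y projects to image row v = v0 + f·Y/Z with disparity d = f·b/Z; a planar surface Y = p·Z + q (slope p, offset q) then satisfies v = v0 + f·p + (q/b)·d, a straight line in the (d, v) plane whose V-axis intercept v0 + f·p is linear in the surface slope p, so thresholding the intercept thresholds the slope. The Python sketch below illustrates the resulting classification rule; it is an illustration of the general idea under these model assumptions, not the paper's implementation, and the intercept interval bounds are placeholders, not values from the paper.

```python
import numpy as np

# Toy sketch of the V-intercept idea: points of one planar surface lie
# on a line v = alpha * d + beta in the V-disparity plane.  The V-axis
# intercept beta encodes the surface slope; ground-like slopes fall in
# a bounded intercept interval, steep obstacle surfaces fall outside.

def v_axis_intercept(v, d):
    """Least-squares fit of v = alpha * d + beta; return beta."""
    alpha, beta = np.polyfit(d, v, 1)  # coefficients, highest degree first
    return beta

def is_obstacle(v, d, beta_min=180.0, beta_max=220.0):
    """Classify a patch by thresholding its V-axis intercept.

    beta_min/beta_max are illustrative placeholder bounds for the
    traversable-ground interval, not the paper's values."""
    beta = v_axis_intercept(np.asarray(v, float), np.asarray(d, float))
    return not (beta_min <= beta <= beta_max)

if __name__ == "__main__":
    # Synthetic ground plane: v = 0.5 * d + 200, so beta = 200 (traversable).
    d_ground = np.linspace(5.0, 30.0, 50)
    v_ground = 0.5 * d_ground + 200.0
    # Near-vertical wall: disparity almost constant, beta far outside the interval.
    d_wall = np.linspace(20.0, 20.5, 50)
    v_wall = 100.0 * (d_wall - 20.0) + 100.0
    print(is_obstacle(v_ground, d_ground))  # False
    print(is_obstacle(v_wall, d_wall))      # True
```

In practice the fit would be run per image column (or per patch) on a dense disparity map, which is what makes the approach cheap compared with reconstructing and thresholding 3D points.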

Published: 2015-08-28
CLC: TP 242.6
Funding:

National Natural Science Foundation of China (61071219)

Corresponding author: XIANG Zhi-yu, male, associate professor. E-mail: xiangzy@zju.edu.cn
About the author: CAO Teng (1988-), male, PhD candidate, researching stereo vision and path planning. E-mail: teng.cao@foxmail.com

Cite this article:

CAO Teng, XIANG Zhi-yu, LIU Ji-lin. Obstacle detection based on V-intercept in disparity space [J]. Journal of Zhejiang University (Engineering Science), 2015. doi: 10.3785/j.issn.1008-973X.2015.03.003.

Link to this article:

http://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2015.03.003
http://www.zjujournals.com/eng/CN/Y2015/V49/I3/409

