J4  2014, Vol. 48 Issue (2): 354-359    DOI: 10.3785/j.issn.1008-973X.2014.02.025
    
Salient region detection based on stereo vision
LIU Zhong, CHEN Wei-hai, WU Xing-ming, ZOU Yu-hua, WANG Jian-hua
School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China

Abstract  

Traditional pixel-based saliency models suffer from poorly defined borders and weak performance in low-contrast scenes. A salient region detection approach based on stereo vision was proposed. The simple linear iterative clustering (SLIC) method was adopted to perform superpixel segmentation, and the resulting superpixels were merged to construct a segmentation image. The depth cue was computed by measuring the shift of each region between the two views of a given stereo pair. For each region, color contrast and depth contrast were computed separately and then fused to obtain the region's saliency value. Experimental results show that the resulting saliency map has clear contours and sharp edges, and that regions at close range or with high depth contrast receive higher saliency. The proposed method is consistent with human visual perception and is suitable for obstacle detection and scene recognition in mobile robots.
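
The following Python sketch illustrates the region-level color/depth contrast fusion described above, using the off-the-shelf SLIC implementation from scikit-image. The function name region_saliency, the equal cue weights, and the omission of the superpixel-merging step are simplifications for illustration only, not the authors' implementation.

import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def region_saliency(image_rgb, depth, n_segments=200, w_color=0.5, w_depth=0.5):
    """Fuse region-level color contrast and depth contrast into one saliency map."""
    lab = rgb2lab(image_rgb)
    labels = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
    n_regions = labels.max() + 1

    # Mean Lab color and mean depth of every superpixel region
    # (the paper's region-merging step is omitted here for brevity).
    mean_color = np.array([lab[labels == i].mean(axis=0) for i in range(n_regions)])
    mean_depth = np.array([depth[labels == i].mean() for i in range(n_regions)])

    # Region contrast: average distance of each region to all other regions.
    color_contrast = np.linalg.norm(
        mean_color[:, None, :] - mean_color[None, :, :], axis=2).mean(axis=1)
    depth_contrast = np.abs(mean_depth[:, None] - mean_depth[None, :]).mean(axis=1)

    def normalize(x):
        return (x - x.min()) / (x.max() - x.min() + 1e-8)

    # Weighted fusion of the two cues, then paint region values back onto pixels.
    region_values = w_color * normalize(color_contrast) + w_depth * normalize(depth_contrast)
    return region_values[labels]

Because the contrasts are computed per region rather than per pixel, the saliency values are constant inside each superpixel, which is what gives the map its sharp region boundaries.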



Published: 01 February 2014
CLC:  TP 391  
Cite this article:

LIU Zhong, CHEN Wei-hai, WU Xing-ming, ZOU Yu-hua, WANG Jian-hua. Salient region detection based on stereo vision. J4, 2014, 48(2): 354-359.

URL:

http://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2014.02.025     OR     http://www.zjujournals.com/eng/Y2014/V48/I2/354


Salient region detection based on binocular (stereo) vision

To address the problems of traditional pixel-based saliency models, such as blurred edges and unsuitability for low-contrast environments, a salient region detection method based on binocular vision was proposed. The simple linear iterative clustering (SLIC) method was used to segment the image into superpixels, and the generated superpixel regions were merged. Object depth information was obtained by computing the relative shift of each region between the left and right views; color contrast and depth contrast were then computed on a per-region basis and combined to obtain each region's saliency value. The results show that the generated saliency map has clear contours and sharp edges, and that, under otherwise identical conditions, regions close to the camera or with large depth variation obtain higher saliency. The method conforms to human visual perception and is suitable for obstacle detection and scene recognition in mobile robots.
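
As a hedged illustration of how the depth cue can be obtained from the left and right views, the sketch below uses OpenCV's block matcher to compute disparity on a rectified stereo pair; this stands in for the paper's per-region shift measurement, and all parameter values are placeholders.

import cv2
import numpy as np

def depth_cue(left_gray, right_gray, num_disparities=64, block_size=15):
    """Coarse depth proxy from a rectified stereo pair: larger disparity means closer."""
    matcher = cv2.StereoBM_create(numDisparities=num_disparities, blockSize=block_size)
    # OpenCV returns fixed-point disparity scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity < 0] = 0.0  # suppress invalid matches
    return disparity / (disparity.max() + 1e-8)

The normalized disparity map can then be averaged inside each superpixel region and used as the depth term in the contrast fusion sketched after the English abstract.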

