Front. Inform. Technol. Electron. Eng.  2014, Vol. 15 Issue (8): 593-606    DOI: 10.1631/jzus.C1300379
    
Development of a monocular vision system for robotic drilling
Wei-dong Zhu, Biao Mei, Guo-rui Yan, Ying-lin Ke
Department of Mechanical Engineering, Zhejiang University, Hangzhou 310027, China; The State Key Lab of Fluid Power Transmission and Control, Zhejiang University, Hangzhou 310027, China

Abstract  Robotic drilling for aerospace structures demands a high positioning accuracy of the robot, which is usually achieved through error measurement and compensation. In this paper, we report the development of a practical monocular vision system for measurement of the relative error between the drill tool center point (TCP) and the reference hole. First, the principle of relative error measurement with the vision system is explained, followed by a detailed discussion on the hardware components, software components, and system integration. The elliptical contour extraction algorithm is presented for accurate and robust reference hole detection. System calibration is of key importance to the measurement accuracy of a vision system. A new method is proposed for the simultaneous calibration of camera internal parameters and hand-eye relationship with a dedicated calibration board. Extensive measurement experiments have been performed on a robotic drilling system. Experimental results show that the measurement accuracy of the developed vision system is higher than 0.15 mm, which meets the requirement of robotic drilling for aircraft structures.
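As a rough illustration of the relative-error measurement principle described in the abstract, the sketch below converts a pixel offset between the detected reference-hole center and the drill axis into a metric error using a pinhole camera model. The focal length and working distance are illustrative assumptions, not the calibrated parameters from the paper.

```python
# Pinhole-model conversion of a pixel offset to a metric error on the
# workpiece: dx = du * Z / f. All numeric values below are assumed for
# illustration only.
f_px = 2000.0        # focal length in pixels (assumed)
Z_mm = 300.0         # camera-to-workpiece distance in mm (assumed)
du, dv = 1.0, 0.0    # measured offset of the hole center in pixels

ex = du * Z_mm / f_px            # metric error along x, in mm
ey = dv * Z_mm / f_px            # metric error along y, in mm
err = (ex**2 + ey**2) ** 0.5     # overall positioning error magnitude
```

Under these assumed values a single-pixel detection offset corresponds to a 0.15 mm error, which is why sub-pixel hole detection and accurate calibration both matter for the reported accuracy level.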

Key words: Vision system; Robotic drilling; Error measurement; Elliptical contour extraction; Hand-eye calibration
Received: 29 December 2013      Published: 06 August 2014
CLC:  TP391.4  
Cite this article:

Wei-dong Zhu, Biao Mei, Guo-rui Yan, Ying-lin Ke. Development of a monocular vision system for robotic drilling. Front. Inform. Technol. Electron. Eng., 2014, 15(8): 593-606.

URL:

http://www.zjujournals.com/xueshu/fitee/10.1631/jzus.C1300379     OR     http://www.zjujournals.com/xueshu/fitee/Y2014/V15/I8/593


Development of a monocular vision system for robotic drilling

Objective: In robotic drilling of aerospace structures, drilling position accuracy is usually ensured through error measurement and compensation. This paper aims to develop a low-cost, high-accuracy vision system that accurately measures the relative error between the tool and the workpiece by integrating a robust elliptical feature extraction algorithm and a practical vision system calibration method.
Innovations: Existing research on vision-based measurement lacks an in-depth explanation of the working principle of the vision system; the precise exposition of the error measurement principle given here provides a theoretical basis for developing the vision system and improving its measurement accuracy. The robotic drilling environment contains substantial noise and interference, so the feature extraction algorithm integrated in the vision system must be highly robust; the saliency-based elliptical contour extraction algorithm achieves robust and accurate reference hole detection. Industrial applications usually require a calibration method that is both practical and accurate; the proposed method calibrates the camera internal parameters and the hand-eye relationship simultaneously in a practical and accurate manner.
Methods: The working principle of the vision system is precisely explained, providing a theoretical basis for the subsequent determination of the camera tool center point and the vision system calibration method. A robust and accurate elliptical contour extraction algorithm is developed by integrating saliency computation, a voting method, and the Snake model (Fig. 8). A vision system calibration method based on a dedicated calibration board is proposed, which calibrates the camera internal parameters and the hand-eye relationship simultaneously and avoids introducing Abbe errors into the measurement process (Fig. 12).
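The saliency, voting, and Snake-model steps are beyond a short sketch, but the final sub-step of reference hole detection, recovering the hole center from an elliptical edge contour, can be illustrated with a direct least-squares conic fit. This is an assumed stand-in for the authors' pipeline, shown on synthetic contour points:

```python
import numpy as np

def ellipse_center(xs, ys):
    # Fit the conic a x^2 + b xy + c y^2 + d x + e y + f = 0 to edge
    # points via SVD (smallest right singular vector of the design
    # matrix), then solve for the center where the conic gradient
    # vanishes: [2a b; b 2c] [xc, yc]^T = [-d, -e]^T.
    D = np.column_stack([xs**2, xs*ys, ys**2, xs, ys, np.ones_like(xs)])
    _, _, Vt = np.linalg.svd(D)
    a, b, c, d, e, f = Vt[-1]
    return np.linalg.solve(np.array([[2*a, b], [b, 2*c]]),
                           np.array([-d, -e]))

# Synthetic "reference hole" contour: an ellipse centered at (3.0, 2.0)
# with semi-axes 5 and 2, rotated by 30 degrees.
t = np.linspace(0, 2*np.pi, 200, endpoint=False)
th = np.deg2rad(30.0)
x = 3.0 + 5*np.cos(t)*np.cos(th) - 2*np.sin(t)*np.sin(th)
y = 2.0 + 5*np.cos(t)*np.sin(th) + 2*np.sin(t)*np.cos(th)
cx, cy = ellipse_center(x, y)
```

On clean synthetic points the fit recovers the center to numerical precision; on real drilling images, robustness to noise and partial occlusion is exactly what the saliency and Snake-model stages of the paper's algorithm are meant to provide.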
Conclusions: This paper studies the working principle of vision-based measurement and, by combining the proposed elliptical contour extraction algorithm and vision system calibration method, develops a low-cost monocular vision measurement system that meets the accuracy requirements of robotic drilling. Experiments on a robotic drilling system platform confirm that the proposed vision system satisfies the drilling accuracy requirements of the aerospace industry, and that the feature extraction algorithm and the calibration method are robust and effective.
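The hand-eye calibration central to the above solves the classical AX = XB relation between robot motions and camera motions. A minimal sketch of the rotation part only (not the paper's simultaneous intrinsic-plus-hand-eye method): the rotation axes of corresponding motions satisfy a_i = R_X b_i, so R_X follows from an orthogonal Procrustes (Kabsch) fit over the axis pairs.

```python
import numpy as np

def rodrigues(axis, angle):
    # Rotation matrix from a unit axis and an angle (Rodrigues' formula).
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def rotation_axis(R):
    # Unit rotation axis of R, from its skew-symmetric part
    # (valid for rotation angles strictly between 0 and pi).
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def handeye_rotation(RA_list, RB_list):
    # Rotation part of AX = XB: since a_i = R_X b_i, find R_X as the
    # Kabsch solution minimizing sum ||a_i - R b_i||^2 over axis pairs.
    A = np.array([rotation_axis(R) for R in RA_list])  # rows a_i
    B = np.array([rotation_axis(R) for R in RB_list])  # rows b_i
    H = B.T @ A                                        # sum of b_i a_i^T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

# Demo: recover a known hand-eye rotation from three motion pairs.
RX = rodrigues([0.0, 0.0, 1.0], 0.5)
RB_list = [rodrigues([1.0, 0.0, 0.0], 0.7),
           rodrigues([0.0, 1.0, 0.0], 1.1),
           rodrigues([0.0, 0.0, 1.0], 0.9)]
RA_list = [RX @ RB @ RX.T for RB in RB_list]   # conjugation by R_X
RX_est = handeye_rotation(RA_list, RB_list)
```

A full calibration would also recover the translation and the camera intrinsics; the paper's dedicated calibration board lets both be estimated in one procedure while avoiding Abbe errors.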

Key words: Vision system; Robotic drilling; Error measurement; Elliptical contour extraction; Hand-eye calibration