Pervasive Computing and Computer-Human Interaction
Perception-enhanced intelligent robotic arm system
YANG Sha, YE Zhenyu, WANG Shugang, TAO Hai, LI Shijian
1. College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China;
2. College of Information Science and Technology, Zhejiang Shuren University, Hangzhou 310015, China
A perception-enhanced model for devices such as robotic arms was proposed in order to improve the service capacity and intelligence of service robots. The model enhanced the perception and cognition of the robotic arm by fusing perceptive and cognitive abilities from different sensing channels. Taking vision and touch as examples, the scale-invariant feature transform (SIFT) algorithm was applied in the visual system to recognize and locate the target object, principal component analysis (PCA) was applied in the tactile system to reduce the dimensionality of the data collected by the tactile sensors, and a support vector machine (SVM) was trained to obtain a classification model. In the cognition system, the robotic arm planned its trajectory adaptively and chose a grasp mode according to the object's information when grasping objects. Grasping and sorting experiments were performed in a mixed environment containing many kinds of objects, and the results verified the effectiveness of the perception-enhanced robotic arm system.
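As an illustration of the visual step described in the abstract, the sketch below shows one common way to recognize and locate a target object with SIFT features: keypoint matching with a ratio test followed by a RANSAC-fitted homography. It is a minimal sketch assuming OpenCV (version 4.4 or later, where SIFT is in the main module); the locate_object helper, the 0.75 ratio threshold, and the RANSAC parameters are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def locate_object(template_path, scene_path, min_matches=10):
    """Match SIFT keypoints of a known object template against a scene image
    and estimate the object's outline via a RANSAC-fitted homography.
    (Hypothetical helper for illustration only.)"""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template, None)
    kp_s, des_s = sift.detectAndCompute(scene, None)

    # Brute-force matching with Lowe's ratio test to keep distinctive matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_t, des_s, k=2):
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    if len(good) < min_matches:
        return None  # object not found in the scene

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None

    # Project the template corners into the scene to get the object's location
    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(corners, H)
```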
YANG Sha, YE Zhenyu, WANG Shugang, TAO Hai, LI Shijian. Perception-enhanced intelligent robotic arm system. JOURNAL OF ZHEJIANG UNIVERSITY (ENGINEERING SCIENCE), 2016, 50(6): 1155-1159.
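For the tactile step, the abstract describes PCA for dimensionality reduction followed by an SVM classifier. The sketch below shows that pattern with scikit-learn; it is a minimal example under assumed settings (synthetic placeholder data, 64 tactile channels, 3 object classes, 10 principal components), not the paper's actual sensor layout or parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per grasp, columns are flattened tactile readings.
# Real data would come from the arm's tactile sensors; sizes here are assumed.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # hypothetical 200 grasps x 64 tactile channels
y = rng.integers(0, 3, size=200)  # hypothetical 3 object classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# PCA reduces the tactile feature dimension, then an SVM learns the classifier
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),                    # number of components is assumed
    SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The predicted class (i.e., which object is being held) is what a cognition layer could use to choose a grasp mode and plan the trajectory, as the abstract describes.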