Robot target following based on adaptive follower mechanism
Hong-xin CHEN, Bei ZHANG, Chun-xiang WANG*, Ming YANG
Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, China
Abstract A pedestrian-following method based on an adaptive follower mechanism was proposed to address the problem that robots with fixed sensors easily lose the target they are following. Field-of-view evaluation metrics for the follower perception mechanism were designed according to the task requirements. On the basis of traditional planning strategies, an improved strategy that incorporates the chassis heading and an adaptive angle planning strategy based on depth-weighted field of view were proposed to improve the follower mechanism's performance in following moving targets. To improve pedestrian position tracking with the follower RGB-D sensor, the YOLOv3 algorithm was used for target detection, combined with 3D coordinate solving and position-metric matching, to track multiple targets in real time. The robot's pedestrian-following function was implemented on the Gazebo simulation platform and on a RoboMaster robot. The proposed planning strategy achieves the best overall evaluation scores and enables the robot to stably follow the trajectory of a moving pedestrian, which demonstrates the effectiveness of the proposed target-following method.
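The following minimal Python sketch (not the paper's released code) illustrates the pipeline the abstract describes: back-projecting a YOLOv3 detection into 3D camera coordinates with a pinhole RGB-D model, matching new detections to tracked pedestrians by position, and one plausible reading of the depth-weighted adaptive angle strategy for aiming the follower sensor. The intrinsics, the matching gate, the greedy nearest-neighbour association, and the 1/depth weighting are illustrative assumptions, not values or design choices taken from the paper.

```python
# Illustrative sketch of the abstract's tracking-and-aiming steps.
# All constants below are assumed placeholders, not the paper's values.
import math

FX, FY, CX, CY = 615.0, 615.0, 320.0, 240.0  # assumed RGB-D intrinsics (pixels)
MATCH_GATE_M = 0.5                           # assumed position-matching gate (metres)

def pixel_to_camera(u, v, depth_m):
    """Back-project a pixel (e.g. a YOLOv3 box centre) plus its depth to 3D."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def match_detections(tracks, detections):
    """Greedy nearest-neighbour matching between existing track positions
    (dict: id -> 3D point) and new 3D detections (list of 3D points).
    Returns the track->detection assignment and the unmatched detections."""
    assignments = {}
    unused = list(range(len(detections)))
    for tid, pos in tracks.items():
        best, best_d = None, MATCH_GATE_M
        for i in unused:
            d = math.dist(pos, detections[i])
            if d < best_d:
                best, best_d = i, d
        if best is not None:
            assignments[tid] = best
            unused.remove(best)
    return assignments, unused

def depth_weighted_pan_angle(targets):
    """One reading of the depth-weighted adaptive angle strategy: average
    the horizontal bearing of each tracked target, weighting nearer
    (smaller-depth) targets more heavily so they dominate the view."""
    num = den = 0.0
    for x, _, z in targets:
        bearing = math.atan2(x, z)   # horizontal angle, z forward, x right
        w = 1.0 / max(z, 1e-3)       # assumed inverse-depth weighting
        num += w * bearing
        den += w
    return num / den if den else 0.0
```

At each frame, a pan controller on the follower mechanism would then be driven toward the returned angle, keeping the weighted set of pedestrians inside the sensor's field of view.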
Received: 14 March 2022
Published: 30 June 2022
Fund: National Natural Science Foundation of China (61873165, 62173228, 62103261)
Corresponding Author:
Chun-xiang WANG
E-mail: angelochen@sjtu.edu.cn; wangcx@sjtu.edu.cn
Keywords:
pedestrian following,
mobile robot,
RGB-D sensor,
target tracking,
adaptive angle planning
References
[1] XIE Jia, SANG Cheng-song, WANG Shi-ming, et al. Overview of research and application prospect of intelligent following mobile robots [J]. Manufacturing Automation, 2020, 42(10): 49-55. doi: 10.3969/j.issn.1009-0134.2020.10.012
[2] DAM A, VERMA A, PANGI C T, et al. Person following mobile robot using pedestrian dead-reckoning with inertial data of smartphones [C]// 2020 11th International Conference on Computing, Communication and Networking Technologies. Kharagpur: IEEE, 2020: 1-4.
[3] TEE K T M, LAU B T, SISWOYO JO H. An improved indoor robot human-following navigation model using depth camera, active IR marker and proximity sensors fusion [J]. Robotics, 2018, 7(1): 4. doi: 10.3390/robotics7010004
[4] AQUINO R J C, BELTRAN C K C, FAJARDO J W A, et al. Image processing based human following cart using 360° camera [C]// 2020 International Conference on Electronics and Sustainable Communication Systems. Coimbatore: IEEE, 2020: 375-380.
[5] ZHANG Ya-bin. Study on visual perception and following system for pedestrian target of wheeled mobile robot based on ROS [D]. Xuzhou: China University of Mining and Technology, 2019: 19-25.
[6] CHENG X, JIA Y, SU J, et al. Person-following for telepresence robots using web cameras [C]// 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems. Macau: IEEE, 2019: 2096-2101.
[7] TSUN M T K, THENG L B, JO H S. Pathfinding decision-making using proximity sensors, depth camera and active IR marker tracking data fusion for human following companion robot [C]// Proceedings of the 2017 International Conference on Information Technology. Singapore: ACM Press, 2017: 12-16.
[8] LU Y, YANG M, WANG C, et al. Pedestrian tracking based on laser and image data fusion [C]// 2019 IEEE International Conference on Real-Time Computing and Robotics. Irkutsk: IEEE, 2019: 231-236.
[9] ZHANG W, WANG J, GUO X, et al. Two-stream RGB-D human detection algorithm based on RFB network [J]. IEEE Access, 2020, 8: 123175-123181. doi: 10.1109/ACCESS.2020.3007611
[10] JAFARI O H, MITZEL D, LEIBE B. Real-time RGB-D based people detection and tracking for mobile robots and head-worn cameras [C]// 2014 IEEE International Conference on Robotics and Automation. Hong Kong: IEEE, 2014: 5636-5643.
[11] YEKKEHFALLAH M, YANG M, CAI Z, et al. Accurate 3D localization using RGB-TOF camera and IMU for industrial mobile robots [J]. Robotica, 2021, 39(10): 1816-1833. doi: 10.1017/S0263574720001526
[12] ZHANG B, YANG M, YUAN W, et al. A novel system for guiding unmanned vehicles based on human gesture recognition [C]// 2020 IEEE International Conference on Real-Time Computing and Robotics. Asahikawa: IEEE, 2020: 345-350.
[13] REDMON J, FARHADI A. YOLOv3: an incremental improvement [EB/OL]. [2021-05-31]. http://arxiv.org/abs/1804.02767.
[14] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[15] BEWLEY A, GE Z, OTT L, et al. Simple online and realtime tracking [C]// 2016 IEEE International Conference on Image Processing. Phoenix: IEEE, 2016: 3464-3468.
[16] WOJKE N, BEWLEY A, PAULUS D. Simple online and realtime tracking with a deep association metric [C]// 2017 IEEE International Conference on Image Processing. Beijing: IEEE, 2017: 3645-3649.
[17] ZHANG Y, WANG C, WANG X, et al. FairMOT: on the fairness of detection and re-identification in multiple object tracking [J]. International Journal of Computer Vision, 2021, 129(11): 3069-3087. doi: 10.1007/s11263-021-01513-4
[18] PERILLE D, TRUONG A, XIAO X, et al. Benchmarking metric ground navigation [C]// 2020 IEEE International Symposium on Safety, Security, and Rescue Robotics. Abu Dhabi: IEEE, 2020: 116-121.
[19] WEN J, ZHANG X, BI Q, et al. MRPB 1.0: a unified benchmark for the evaluation of mobile robot local planning approaches [C]// 2021 IEEE International Conference on Robotics and Automation. Xi’an: IEEE, 2021: 8238-8244.