工程设计学报  2026, Vol. 33 Issue (1): 65-75    DOI: 10.3785/j.issn.1006-754X.2026.05.247
机器人与机构设计     
面向泥浆遮挡的涂抹机器人双目视觉定位方法研究
左鹏1,2,郑正鼎1,2,高全杰1,2,吴林飞1,2,王红霞3
1.武汉科技大学 冶金装备及其控制教育部重点实验室, 湖北 武汉 430081
2.武汉科技大学 机械传动与制造工程湖北省重点实验室, 湖北 武汉 430081
3.湖北汽车工业学院, 湖北 十堰 442002
Stereo visual localization method for plastering robots under mud occlusion
Peng ZUO1,2, Zhengding ZHENG1,2, Quanjie GAO1,2, Linfei WU1,2, Hongxia WANG3
1.Key Laboratory of Metallurgical Equipment and Control Technology of Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China
2.Hubei Key Laboratory of Mechanical Transmission and Manufacturing Engineering, Wuhan University of Science and Technology, Wuhan 430081, China
3.Hubei University of Automotive Technology, Shiyan 442002, China
摘要:

为了解决砌筑施工现场泥浆飞溅导致的砖块涂抹面局部遮挡以致涂抹机器人难以快速、精准定位的问题,提出了一种适用于复杂工况的基于机器人双目视觉的砖块涂抹面定位方法。利用YOLOv11实例分割模型提取涂抹面轮廓信息;将轮廓特征与双目深度图像结合,构建融合置信度分层与梯度预测的深度修复模型,用以解决泥浆遮挡引起的深度信息缺失、双目成像导致的轮廓边缘模糊等问题;将修复后的深度图像生成高质量点云,并通过平面拟合与最小外接矩形提取涂抹面角点,实现涂抹面三维空间的高精度定位。实验结果表明:在无遮挡工况下,深度修复后涂抹面XYZ向的平均定位误差分别减少17.8%、16.1%和12.6%;在轻度和中度遮挡工况下,三轴平均定位误差分别减少23.8%、21.2%和25.1%;当遮挡率不超过25%时,最大误差均控制在5 mm以内,满足机器人末端操作的精度要求。所提出的方法在相机多姿态变化与中度遮挡工况下具有快速、精准定位的优势,具备良好的工程应用前景,可为涂抹自动化作业提供可靠的视觉感知技术支撑。

关键词: 双目视觉; 深度修复; 局部遮挡; 涂抹机器人
Abstract:

This study addresses the challenge of rapidly and accurately localizing brick plastered surfaces under the partial occlusion caused by mud splashes at construction sites. A stereo-vision-based localization method suited to complex working conditions was proposed. The contour of the plastered surface was first extracted with the YOLOv11 instance segmentation model. The contour features were then fused with the stereo depth images to build a depth restoration model that combines confidence layering with gradient prediction, mitigating both the loss of depth information caused by mud occlusion and the contour-edge blurring introduced by stereo imaging. The restored depth map was converted into a high-quality point cloud, from which the corner points of the plastered surface were extracted by plane fitting and minimum-bounding-rectangle computation, yielding high-precision localization of the surface in three-dimensional space. The experimental results showed that under the occlusion-free condition, the average localization errors of the plastered surface in the X, Y, and Z directions after depth restoration decreased by 17.8%, 16.1%, and 12.6%, respectively; under mild and moderate occlusion, the three-axis average localization errors decreased by 23.8%, 21.2%, and 25.1%, respectively. When the occlusion rate did not exceed 25%, the maximum error remained within 5 mm, meeting the precision requirements of robotic end-effector operation. The proposed method localizes rapidly and accurately under varying camera poses and moderate occlusion, shows strong potential for engineering application, and provides reliable visual-perception support for automated plastering operations.

Key words: stereo vision    depth restoration    partial occlusion    plastering robot
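The depth restoration idea summarized in the abstract — layering pixels by confidence and propagating depth into mud-occluded regions — can be sketched as follows. This is an illustrative toy, not the paper's model: the function name, the 4-neighbour averaging, and the iteration count are assumptions, and the paper's model additionally uses gradient prediction at contour edges rather than plain neighbour means.

```python
import numpy as np

def restore_depth(depth, mask, iters=50):
    """Toy depth restoration via confidence layering + neighbour filling.

    depth: HxW array; 0 marks missing values (e.g. occluded by mud).
    mask:  HxW bool array; True inside the segmented plastering surface.
    Pixels are split into a high-confidence layer (measured depth inside
    the mask) and a missing layer (holes); holes are filled iteratively
    by averaging already-valid 4-neighbours, so depth propagates inwards
    from the high-confidence layer.
    """
    d = depth.astype(float).copy()
    valid = (d > 0) & mask          # high-confidence layer: measured pixels
    hole = (d <= 0) & mask          # missing layer: pixels to restore
    for _ in range(iters):
        if not hole.any():
            break
        num = np.zeros_like(d)
        cnt = np.zeros_like(d)
        # accumulate valid neighbour values in the 4 axis directions
        # (np.roll wraps at borders; acceptable for interior holes here)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            num += np.where(np.roll(valid, (dy, dx), axis=(0, 1)),
                            np.roll(d, (dy, dx), axis=(0, 1)), 0.0)
            cnt += np.roll(valid, (dy, dx), axis=(0, 1))
        fillable = hole & (cnt > 0)
        d[fillable] = num[fillable] / cnt[fillable]
        valid |= fillable           # promote filled pixels to the valid layer
        hole &= ~fillable
    return d
```

On a flat surface the filled values converge to the surrounding depth, which is the behaviour the paper's Fig. 7 illustrates for its (more sophisticated) model.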
收稿日期: 2025-09-20 出版日期: 2026-03-01
CLC:  TP 242.2  
基金资助: 湖北省自然科学基金科技创新群体项目(2024AFA026)
通讯作者: 郑正鼎     E-mail: 13707478343@163.com;zdzheng@wust.edu.cn
作者简介: 左 鹏(2001—),男,硕士生,从事自动化技术应用、机器视觉研究,E-mail: 13707478343@163.com
引用本文:

左鹏,郑正鼎,高全杰,吴林飞,王红霞. 面向泥浆遮挡的涂抹机器人双目视觉定位方法研究[J]. 工程设计学报, 2026, 33(1): 65-75.

Peng ZUO, Zhengding ZHENG, Quanjie GAO, Linfei WU, Hongxia WANG. Stereo visual localization method for plastering robots under mud occlusion[J]. Chinese Journal of Engineering Design, 2026, 33(1): 65-75.

链接本文:

https://www.zjujournals.com/gcsjxb/CN/10.3785/j.issn.1006-754X.2026.05.247        https://www.zjujournals.com/gcsjxb/CN/Y2026/V33/I1/65

图1  砖块涂抹面定位实验平台
参数                      数值
焦距 f/mm                 4.8
主点坐标 (cx, cy)/像素    (319.5, 339.5)
基线长度 T/mm             50.1
表1  双目相机主要参数
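Table 1's parameters enter the standard stereo triangulation relation Z = f·T/d. A minimal sketch follows; note that the pixel focal length `f_px = 800` is a hypothetical value, since Table 1 gives f in millimetres and conversion to pixels needs the sensor's pixel pitch, which is not listed.

```python
def disparity_to_depth(d_px, f_px, baseline_mm):
    """Pinhole stereo relation Z = f * T / d.

    d_px:        disparity in pixels between the left/right projections
    f_px:        focal length in pixels (from calibration)
    baseline_mm: baseline T between the two cameras (50.1 mm in Table 1)
    Returns the depth Z in millimetres.
    """
    if d_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_mm / d_px

# With the hypothetical f_px = 800 px and the 50.1 mm baseline, a
# 20-pixel disparity corresponds to Z = 800 * 50.1 / 20 = 2004 mm.
```

The inverse proportionality to d also explains why small disparity errors at contour edges (the blurring the paper's restoration model targets) translate into large depth errors at long range.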
图2  双目相机成像原理示意
图3  手眼标定(眼在手上)
图4  深度修复与定位算法结构
图5  泥浆遮挡工况
图6  像素点置信度分层
图7  深度信息优化效果
图8  涂抹面三维空间定位步骤
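The localization step of Fig. 8 — fit a plane to the restored point cloud, then extract the rectangle corners of the plastered face — can be sketched as below. This is a simplification, with assumed names: the paper uses plane fitting plus a true minimum bounding rectangle, whereas this sketch uses a plain least-squares plane (via SVD) and an axis-aligned box in the PCA frame, which coincides with the minimum rectangle only when the face edges align with the principal axes.

```python
import numpy as np

def plane_rect_corners(points):
    """Fit a plane to a 3D point cloud and return four enclosing corners.

    points: Nx3 array of 3D points on (approximately) one planar face.
    The SVD of the centred points gives the principal axes: the last
    right-singular vector is the plane normal, the first two span the
    plane. Points are projected onto the in-plane axes, boxed, and the
    box corners are lifted back to 3D.
    """
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)     # rows of vt = principal axes
    uv = (pts - c) @ vt[:2].T             # 2D coords in the plane
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    corners2d = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                          [hi[0], hi[1]], [lo[0], hi[1]]])
    return c + corners2d @ vt[:2]         # back-project corners to 3D
```

For a tilted or rotated face, a rotating-calipers minimum-area rectangle (e.g. OpenCV's `cv2.minAreaRect` on the projected 2D points) would replace the axis-aligned box.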
参数                  数值
训练数量/轮           200
批处理尺寸            16
学习率                0.01
权重衰减度            0.000 5
输入图像尺寸/像素     1 280×720
预训练权重            yolo11n-seg.pt
表2  YOLOv11实例分割模型训练参数
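The hyperparameters in Table 2 map onto the Ultralytics training interface roughly as below. This is a configuration sketch, not the authors' script: the dataset YAML name `bricks-seg.yaml` is a placeholder for their unpublished dataset config, and Ultralytics' `imgsz` takes a single longest-side value rather than the full 1 280×720 pair.

```python
from ultralytics import YOLO

# Hypothetical training call mirroring Table 2.
model = YOLO("yolo11n-seg.pt")      # pretrained segmentation weights (Table 2)
model.train(
    data="bricks-seg.yaml",         # dataset config (placeholder name)
    epochs=200,                     # 训练数量/轮
    batch=16,                       # 批处理尺寸
    lr0=0.01,                       # 初始学习率
    weight_decay=0.0005,            # 权重衰减度
    imgsz=1280,                     # 输入图像尺寸 (longest side)
)
```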
图9  模型识别精度均值
遮挡工况      F1
无遮挡        0.964
轻度遮挡      0.932
中度遮挡      0.915
重度遮挡      0.882
表3  不同遮挡工况下识别性能参数
图10  不同遮挡工况下涂抹面轮廓分割效果
图11  涂抹面定位场景及实验
图12  不同深度修复方法定位误差对比
方法        平均运行时间/s
AD          4.47
JBF         6.61
MRF         8.61
HBF         8.53
NPF         5.68
本文方法    2.82
表4  不同深度修复方法的平均运行时间
图13  角度-距离二维定位误差热力图
图14  泥浆遮挡工况下深度图像与特征点提取
图15  不同泥浆遮挡率下三轴定位误差
  
[1] 章严. 传统抹灰与抹灰机器人的应用分析[J]. 绿色建造与智能建筑, 2025(1): 96-98.
ZHANG Y. Application analysis of traditional plastering and plastering robots[J]. Green Construction and Intelligent Building, 2025(1): 96-98.
[2] 张广川. 可实现自动砌墙的建筑机器人设计[J]. 机械, 2011, 38(8): 79-80.
ZHANG G C. Design of building robot that can realize automatic wall-built[J]. Machinery, 2011, 38(8): 79-80.
[3] LI H, ZENG Q F, ZHUANG T D, et al. Accurate pose estimation of the texture-less objects with known CAD models via point cloud matching[J]. IEEE Sensors Journal, 2023, 23(21): 26259-26268.
[4] 张旭辉, 杨骏豪, 杨文娟, 等. 基于双目视觉的掘进装备定位系统移站自主标定方法[J]. 工程设计学报, 2025, 32(1): 1-10.
ZHANG X H, YANG J H, YANG W J, et al. Automatic calibration method for station transfer in tunneling equipment positioning system based on binocular vision[J]. Chinese Journal of Engineering Design, 2025, 32(1): 1-10.
[5] 张宇廷, 王宗彦, 王曦, 等. 改进K-means聚类算法的自适应Canny算子工件边缘检测[J]. 组合机床与自动化加工技术, 2022(5): 1-5.
ZHANG Y T, WANG Z Y, WANG X, et al. Edge detection of workpiece based on improved K-means clustering algorithm and adaptive canny algorithm[J]. Modular Machine Tool & Automatic Manufacturing Technique, 2022(5): 1-5.
[6] CHEN J L, WEI X L, LIANG X Q, et al. High precision 3D reconstruction and target location based on the fusion of visual features and point cloud registration[J]. Measurement, 2025, 243: 116455.
[7] BIGLIA A, ZAMAN S, GAY P, et al. 3D point cloud density-based segmentation for vine rows detection and localisation[J]. Computers and Electronics in Agriculture, 2022, 199: 107166.
[8] ZHANG X L, LI H L, MENG F M, et al. Segmenting beyond the bounding box for instance segmentation[J]. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(2): 704-714.
[9] JIN T T, HAN X Z, WANG P G, et al. Enhanced deep learning model for apple detection, localization, and counting in complex orchards for robotic arm-based harvesting[J]. Smart Agricultural Technology, 2025, 10: 100784.
[10] AARTHI R, RISHMA G. A vision based approach to localize waste objects and geometric features exaction for robotic manipulation[J]. Procedia Computer Science, 2023, 218: 1342-1352.
[11] LIU X, WANG H, XUE S B, et al. SEMPose: a single end-to-end network for multi-object pose estimation[J]. Neurocomputing, 2025, 654: 131194.
[12] NI P Y, ANG M H. Graduated non-convex feature-metric-based 6D object pose refinement via deep reinforcement learning[J]. Robotics and Autonomous Systems, 2025, 194: 105177.
[13] ZHAO H W, MA W H, JI P R, et al. Modeling the measurement accuracy for binocular stereo vision system[J]. Optics Express, 2025, 33(9): 19051-19066.
[14] BI X, YANG B, ZENG J, et al. A novel holes filling method based on layered depth map and patch sparsity for complex-scene images[J]. Microelectronics Journal, 2021, 114: 105140.
[15] LIU W, CHEN X G, YANG J, et al. Robust color guided depth map restoration[J]. IEEE Transactions on Image Processing, 2017, 26(1): 315-327.
[16] WANG Z Y, HU J H, WANG S Z, et al. Trilateral constrained sparse representation for Kinect depth hole filling[J]. Pattern Recognition Letters, 2015, 65: 95-102.
[17] 叶锦华, 林旭敏, 吴海彬. 基于DBSCAN的改进RANSAC点云平面拟合算法[J]. 湖南大学学报(自然科学版), 2025, 52(2): 76-87.
YE J H, LIN X M, WU H B. Improved RANSAC point cloud plane fitting algorithm based on DBSCAN[J]. Journal of Hunan University (Natural Sciences), 2025, 52(2): 76-87.
[18] 徐岩松, 张爱军, 杨萌. 基于局部梯度统计的自适应各向异性扩散[J]. 激光杂志, 2023, 44(8): 60-64.
XU Y S, ZHANG A J, YANG M. Adaptive anisotropic diffusion based on local gradient statistics[J]. Laser Journal, 2023, 44(8): 60-64.
[19] RUHELA R, GUPTA B, SINGH LAMBA S. An efficient approach for texture smoothing by adaptive joint bilateral filtering[J]. The Visual Computer, 2023, 39(5): 2035-2049.
[20] WU Z R, LIN D H, TANG X O. Deep Markov random field for image modeling[C]//Computer Vision-ECCV 2016. Cham: Springer, 2016: 295-312.
[21] 万琴, 朱晓林, 陈国泉, 等. 分层联合双边滤波的深度图修复算法研究[J]. 计算机工程与应用, 2021, 57(6): 184-190.
WAN Q, ZHU X L, CHEN G Q, et al. Research on depth map restoration algorithm based on hierarchical joint bilateral filter[J]. Computer Engineering and Applications, 2021, 57(6): 184-190.