Journal of ZheJiang University (Engineering Science)  2026, Vol. 60 Issue (2): 341-350    DOI: 10.3785/j.issn.1008-973X.2026.02.012
    
Lightweight surface reconstruction method for building point clouds based on deep Hough voting
Jiazhou CHEN1, Xiaohang ZHU1, Yanghui XU1, Yin GAO2,3, Yihui LU4, Zhen MAO4, Shenglong LI4, Chaoquan ZHANG2
1. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
2. Moganshan Geospatial Information Laboratory, Deqing 313200, China
3. National Geomatics Center of China, Beijing 100830, China
4. Shandong Provincial Institute of Land Surveying and Mapping, Jinan 250102, China

Abstract  

To address missing structures, data redundancy, and noise in real-world 3D scenes, a lightweight surface reconstruction method for building point clouds was proposed to reconstruct polygonal mesh models of buildings. An efficient framework for building-dataset generation was constructed, automatically producing 5 500 labeled building models. To address the difficulty of plane extraction from building point clouds, building planes were predicted by deep Hough voting, and a face-based non-maximal suppression algorithm (F-NMS) was employed to efficiently remove duplicate and erroneous predicted planes. A building-plane adjacency prediction module was designed to predict the adjacency relations among the planes retained after F-NMS. Quantitative experimental results demonstrate that, compared with traditional methods such as PolyFit, the proposed approach has significant advantages in both fitting accuracy and scene adaptability. The polygonal mesh models reconstructed by the proposed method retain the main structural features of the input building point clouds, and their storage requirement is less than 1% of that of the original point cloud data.
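
To make the F-NMS step above concrete, the following is a minimal sketch of a greedy face-based non-maximal suppression over predicted planes. It assumes planes are parameterized by a unit normal plus offset, that a confidence threshold and a pairwise similarity threshold play the roles of $T_{\mathrm{con}}$ and $T_{\mathrm{sim}}$ in Tab.3, and that the similarity measure shown here is an illustrative choice rather than the definition used in the paper.

```python
import numpy as np

def plane_similarity(p, q, offset_scale=1.0):
    """Similarity of two planes given as (nx, ny, nz, d) with unit normals.

    Combines normal alignment and offset closeness; this particular formula
    is an assumption for illustration, not the paper's definition.
    """
    n_sim = abs(np.dot(p[:3], q[:3]))                 # 1.0 when normals are parallel
    d_sim = np.exp(-abs(p[3] - q[3]) / offset_scale)  # 1.0 when offsets coincide
    return n_sim * d_sim

def face_nms(planes, scores, t_con=0.90, t_sim=0.90):
    """Greedy face-based non-maximal suppression (illustrative sketch).

    planes : (N, 4) array of plane parameters (unit normal + offset)
    scores : (N,)  confidence of each predicted plane
    t_con  : confidence threshold (assumed counterpart of T_con in Tab.3)
    t_sim  : similarity threshold (assumed counterpart of T_sim in Tab.3)
    Returns the indices of the planes that are kept.
    """
    order = np.argsort(-scores)     # visit planes from highest to lowest confidence
    kept = []
    for i in order:
        if scores[i] < t_con:
            break                   # all remaining planes fall below the threshold
        if all(plane_similarity(planes[i], planes[j]) < t_sim for j in kept):
            kept.append(i)          # not a near-duplicate of any kept plane
    return kept
```

Calling `face_nms(planes, scores, 0.90, 0.90)`, for instance, mirrors the middle threshold setting evaluated in Tab.3.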



Key words: 3D point cloud; building simplification; 3D reconstruction; Hough voting; mesh model
Received: 17 July 2025      Published: 03 February 2026
CLC:  TP 391.41  
Fund: National Natural Science Foundation of China (62172367); "Pioneer" and "Leading Goose" R&D Program of Zhejiang Province (2025C01073).
Cite this article:

Jiazhou CHEN, Xiaohang ZHU, Yanghui XU, Yin GAO, Yihui LU, Zhen MAO, Shenglong LI, Chaoquan ZHANG. Lightweight surface reconstruction method for building point clouds based on deep Hough voting. Journal of ZheJiang University (Engineering Science), 2026, 60(2): 341-350.

URL:

https://www.zjujournals.com/eng/10.3785/j.issn.1008-973X.2026.02.012     OR     https://www.zjujournals.com/eng/Y2026/V60/I2/341


Fig.1 Workflow for polygon mesh reconstruction
Fig.2 Structure of planar prediction network based on deep Hough voting
Fig.3 Input and output of face-based non-maximal suppression algorithm
Fig.4 Paired face attention module
Fig.5 Transformation process of building wireframe model based on planes and adjacency relationships
Fig.6 Polygonal mesh model of building
Fig.7 Results and procedural examples of building point cloud reconstruction
Method | $P$ | $R$ | $\mathrm{Acc_{nr}}$ | HD↓/m
MSE-based voting-point weights | 0.94 | 0.95 | 0.987 | 0.33
Area-adaptive weights | 0.95 | 0.96 | 0.992 | 0.29
Tab.1 Ablation experiment with different weights for voting-point loss function (σ=0.01)
Module | $P$ | $R$ | $\mathrm{Acc_{nr}}$ | HD↓/m
Max pooling + MLP | 0.95 | 0.95 | 0.985 | 0.31
PFA | 0.95 | 0.96 | 0.992 | 0.29
Tab.2 Ablation experiment on paired face attention module (σ=0.01)
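
The PFA row of Tab.2 refers to the paired face attention module; its exact design is not reproduced on this page. The block below is a generic sketch of channel attention over concatenated per-face features, in the spirit of squeeze-and-excitation [25]; the layer sizes and overall layout are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class PairedFaceAttention(nn.Module):
    """Generic attention block over paired face features (illustrative sketch).

    For each candidate pair of planes (i, j), the per-face features are
    concatenated and re-weighted channel-wise by a learned attention vector
    before adjacency classification. All layer sizes are assumptions.
    """

    def __init__(self, feat_dim=256):
        super().__init__()
        self.attn = nn.Sequential(            # channel attention on the paired feature
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 2 * feat_dim),
            nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(       # adjacency score for the pair
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, 1),
        )

    def forward(self, feat_i, feat_j):
        pair = torch.cat([feat_i, feat_j], dim=-1)    # (M, 2*feat_dim) paired features
        weighted = pair * self.attn(pair)             # re-weight channels per pair
        return self.classifier(weighted).squeeze(-1)  # (M,) adjacency logits
```

Compared with the max pooling + MLP baseline in Tab.2, such a block lets each face pair re-weight its own feature channels before the adjacency decision.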
$T_{\mathrm{con}}$ | $T_{\mathrm{sim}}$ | $P$ | $R$ | F1↑
0.85 | 0.85 | 0.939 | 0.947 | 0.943
0.85 | 0.90 | 0.923 | 0.950 | 0.936
0.85 | 0.95 | 0.861 | 0.954 | 0.905
0.90 | 0.85 | 0.940 | 0.942 | 0.940
0.90 | 0.90 | 0.935 | 0.951 | 0.943
0.90 | 0.95 | 0.875 | 0.953 | 0.912
0.95 | 0.85 | 0.940 | 0.939 | 0.939
0.95 | 0.90 | 0.935 | 0.942 | 0.938
0.95 | 0.95 | 0.893 | 0.949 | 0.920
Tab.3 Ablation experiments on similarity threshold of face-based non-maximal suppression algorithm (σ=0.02)
Method | $N_{\mathrm{p}}$ | $N_{\mathrm{f}}$ | S/MB | HD/m | CD/m | $r_{\mathrm{e}}$/%
Original point cloud | 2 048 000 | — | 112 | — | — | —
Ours | 4 998 | 3 434 | 0.349 | 0.30 | 0.15 | 4.6
PolyFit[9] | 14 708 | 3 782 | 0.939 | 0.30 | 0.14 | 3.8
City3D[11] | 22 522 | 6 520 | 1.560 | 0.93 | 0.59 | 36.4
PolyGNN[14] | 18 224 | 4 440 | 0.643 | 0.33 | 0.15 | 2.6
Tab.4 Comparison and evaluation of building simplification performance (σ=0.02)
Fig.8 Visual comparison of different methods for 3D building lightweight reconstruction (σ=0.02)
Fig.9 Visual comparison of noise resistance in building simplification
Method | Learning-based | CD↓/m (σ=0.02) | CD↓/m (σ=0.03) | HD↓/m (σ=0.02) | HD↓/m (σ=0.03) | $r_{\mathrm{e}}$↓/% (σ=0.02) | $r_{\mathrm{e}}$↓/% (σ=0.03)
PolyFit[9] | × | 0.14 | 0.19 | 0.30 | 0.42 | 3.8 | 17.0
City3D[11] | × | 0.59 | 0.75 | 0.93 | 1.10 | 36.4 | 35.2
PolyGNN[14] | √ | 0.15 | 0.24 | 0.33 | 0.50 | 2.6 | 15.2
Ours | √ | 0.15 | 0.17 | 0.30 | 0.36 | 4.6 | 4.2
Tab.5 Quantitative evaluation of noise resistance in building simplification
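
Tab.1–Tab.5 evaluate reconstructions with the Hausdorff distance (HD) and Chamfer distance (CD) against the input point clouds. As a reading aid, the snippet below is a minimal sampling-based sketch of both metrics; whether the paper uses exactly these conventions (for example, symmetric averaging for CD or unsquared distances) is an assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff_and_chamfer(points_a, points_b):
    """Symmetric Hausdorff (HD) and Chamfer (CD) distances between point sets.

    points_a, points_b : (N, 3) and (M, 3) arrays, e.g. the input cloud and
    points sampled from the reconstructed mesh. This is a common sampling-based
    formulation, not necessarily the paper's exact convention.
    """
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest-neighbour distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest-neighbour distances B -> A
    hd = max(d_ab.max(), d_ba.max())             # worst-case deviation
    cd = 0.5 * (d_ab.mean() + d_ba.mean())       # average deviation
    return hd, cd
```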
[1] LUO H, ZHANG J, LIU X, et al. Large-scale 3D reconstruction from multi-view imagery: a comprehensive review [J]. Remote Sensing, 2024, 16(5): 773. doi: 10.3390/rs16050773
[2] YU Haiyang, FENG Shuntian, CUI Lipeng. Research on multi-scale 3D modeling method for urban digital twin [J]. Application of Electronic Technique, 2022, 48(7): 78-80.
[3] SCHNABEL R, WAHL R, KLEIN R. Efficient RANSAC for point-cloud shape detection [J]. Computer Graphics Forum, 2007, 26(2): 214-226. doi: 10.1111/j.1467-8659.2007.01016.x
[4] RABBANI T, VAN DEN HEUVEL F A, VOSSELMANN G. Segmentation of point clouds using smoothness constraint [J]. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2006, 36(5): 248-253.
[5] QI C R, YI L, SU H, et al. PointNet++: deep hierarchical feature learning on point sets in a metric space [EB/OL]. (2017−06−07)[2024−03−14]. https://arxiv.org/pdf/1706.02413.
[6] WANG R, HUANG S, YANG H. Building3D: an urban-scale dataset and benchmarks for learning roof structures from point clouds [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Paris: IEEE, 2024: 20019-20029.
[7] YANG G, XUE F, ZHANG Q, et al. UrbanBIS: a large-scale benchmark for fine-grained urban building instance segmentation [C]// ACM SIGGRAPH 2023 Conference Proceedings. Los Angeles: ACM, 2023: 1-11.
[8] PETERS R, DUKAI B, VITALIS S, et al. Automated 3D reconstruction of LoD2 and LoD1 models for all 10 million buildings of the Netherlands [J]. Photogrammetric Engineering and Remote Sensing, 2022, 88(3): 165-170. doi: 10.14358/PERS.21-00032R2
[9] NAN L, WONKA P. PolyFit: polygonal surface reconstruction from point clouds [C]// Proceedings of the IEEE International Conference on Computer Vision. Venice: IEEE, 2017: 2372-2380.
[10] BOUZAS V, LEDOUX H, NAN L. Structure-aware building mesh polygonization [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2020, 167: 432-442. doi: 10.1016/j.isprsjprs.2020.07.010
[11] HUANG J, STOTER J, PETERS R, et al. City3D: large-scale building reconstruction from airborne LiDAR point clouds [J]. Remote Sensing, 2022, 14(9): 2254. doi: 10.3390/rs14092254
[12] BAUCHET J P, LAFARGE F. City reconstruction from airborne lidar: a computational geometry approach [J]. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2019, 4: 19-26.
[13] CHEN Z, SHI Y, XIONG Z, et al. Polyhedron-based graph neural network for compact building model reconstruction [C]// Proceedings of 2023 IEEE International Geoscience and Remote Sensing Symposium. Pasadena: IEEE, 2023: 923-926.
[14] CHEN Z, SHI Y, NAN L, et al. PolyGNN: polyhedron-based graph neural network for 3D building reconstruction from point clouds [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2024, 218: 693-706. doi: 10.1016/j.isprsjprs.2024.09.031
[15] HE X, LV C, HUANG P, et al. WindPoly: polygonal mesh reconstruction via winding numbers [C]// Computer Vision – ECCV 2024. [S.l.]: Springer, 2024: 294-311.
[16] HUANG S, WANG R, GUO B, et al. PBWR: parametric-building-wireframe reconstruction from aerial LiDAR point clouds [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2024: 27778-27787.
[17] LI L, SONG N, SUN F, et al. Point2Roof: end-to-end 3D building roof modeling from airborne LiDAR point clouds [J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2022, 193: 17-28. doi: 10.1016/j.isprsjprs.2022.08.027
[18] JAVEED M A, GHAFFAR M A, ASHRAF M A, et al. Lane line detection and object scene segmentation using Otsu thresholding and the fast Hough transform for intelligent vehicles in complex road conditions [J]. Electronics, 2023, 12(5): 1079. doi: 10.3390/electronics12051079
[19] SYED M H, KUMAR S. Road lane line detection based on ROI using Hough transform algorithm [C]// Proceedings of Third International Conference on Computing, Communications, and Cyber-Security. Singapore: Springer, 2023: 567-580.
[20] MATARNEH S, ELGHAISH F, AL-GHRAIBAH A, et al. An automatic image processing based on Hough transform algorithm for pavement crack detection and classification [J]. Smart and Sustainable Built Environment, 2025, 14(1): 1-22. doi: 10.1108/SASBE-01-2023-0004
[21] QI C R, LITANY O, HE K, et al. Deep Hough voting for 3D object detection in point clouds [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Seoul: IEEE, 2019: 9276-9285.
[22] ZHANG J Q, DUAN H B, CHEN J L, et al. HoughLaneNet: lane detection with deep Hough transform and dynamic convolution [J]. Computers and Graphics, 2023, 116: 82-92.
[23] ZHAO H, JIANG L, JIA J, et al. Point transformer [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2022: 16239-16248.
[24] LI J, ZHOU J, XIONG Y, et al. An adjustable farthest point sampling method for approximately-sorted point cloud data [C]// Proceedings of the IEEE Workshop on Signal Processing Systems. Rennes: IEEE, 2022: 1-6.
[25] HU J, SHEN L, SUN G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 7132-7141.
[26] Republic of Estonia Land and Spatial Development Board. Building 3D model data [EB/OL]. [2024−04−15]. https://geoportaal.maaamet.ee/eng/Download-3D-data-p837.html.
[27] DONG L, XIAO Y, LI Y, et al. A collision detection algorithm based on sphere and EBB mixed hierarchical bounding boxes [J]. IEEE Access, 2024, 12: 62719-62729.
[28] WU M, SUN M, ZHANG F, et al. A fault detection method of electric vehicle battery through Hausdorff distance and modified Z-score for real-world data [J]. Journal of Energy Storage, 2023, 60: 106561.