浙江大学学报(工学版)  2025, Vol. 59 Issue (6): 1219-1232    DOI: 10.3785/j.issn.1008-973X.2025.06.013
Civil Engineering and Transportation Engineering
基于车辆图像特征的前车距离与速度感知
徐慧智(),王秀青
东北林业大学 土木与交通学院,黑龙江 哈尔滨 150000
Perception of distance and speed of front vehicle based on vehicle image features
Huizhi XU(),Xiuqing WANG
College of Civil Engineering and Transportation, Northeast Forestry University, Harbin 150000, China
摘要:

针对驾驶场景中的前车检测与运行状态感知任务,提出融合车辆图像特征的前车距离与速度的多模态感知方法. 通过改进的SW-YOLOv8n模型检测图像中车辆的位置特征,结合几何算法计算相对前车的横纵距离特征. 设计特征提取网络提取车辆特征,通过串联拼接融合车辆图像特征向量,并建立车辆测距神经网络. 通过集成多特征融合模块与车辆测距神经网络,构建前车距离感知模型与车辆跟踪测速模型,同步输出精确的距离估计和速度跟踪结果. 实验结果表明,在实验数据集上,SW-YOLOv8n相比于YOLOv8n模型,mAP50、mAP50-95分别提高1.6、2.3个百分点,SW-YOLOv8n 模型的检测速度为260.11 帧/s;在横向9.5 m与纵向50 m的范围内,在前车未被遮挡的条件下,前车距离感知模型的预测距离与实际距离的平均相对误差为1.87%,遮挡条件下的平均相对误差为2.02%;车辆跟踪测速模型的速度测定结果具有稳定性,适用于前车距离与速度感知任务.

关键词: 深度学习    目标检测    车辆测速    车辆测距    状态感知
Abstract:

A multimodal perception method for the distance and speed of the front vehicle integrating vehicle image features was proposed for front vehicle detection and operational state perception in driving scenarios. The position features of vehicles in images were detected by an improved SW-YOLOv8n model, and the relative lateral and longitudinal distances to the front vehicle were calculated using geometric algorithms. A feature extraction network was designed to extract vehicle features, the vehicle image feature vectors were fused through serial concatenation, and a neural network for vehicle distance measurement was established. The multi-feature fusion module was integrated with the distance measurement neural network to construct an end-to-end front vehicle distance perception model and a vehicle tracking-based speed estimation model, which synchronously output precise distance estimates and stable speed tracking results. Experimental results demonstrated that on the test dataset, the SW-YOLOv8n model achieved improvements of 1.6 percentage points in mAP50 and 2.3 percentage points in mAP50-95 compared with the baseline YOLOv8n, while maintaining a detection speed of 260.11 frames per second. Within a lateral range of 9.5 m and a longitudinal range of 50 m, the front vehicle distance perception model exhibited an average relative error of 1.87% between predicted and actual distances when the front vehicle was unoccluded, and of 2.02% under occlusion. The speed measurement results of the tracking-based model remained stable, confirming the method's effectiveness for front vehicle distance and speed perception tasks.
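As a rough sketch of the fusion step described in the abstract (a vehicle appearance feature vector serially concatenated with bounding-box position features and the geometry-based lateral/longitudinal distances, then regressed to a distance by a small neural network), the PyTorch snippet below shows the general idea. The class name FusedDistanceRegressor, the feature dimensions, and the layer sizes are illustrative assumptions, not the authors' architecture (the actual network structure is given in Fig. 6 of the full text).

```python
# Minimal PyTorch sketch of serial feature concatenation followed by a distance
# regression network. Dimensions and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FusedDistanceRegressor(nn.Module):
    def __init__(self, appearance_dim: int = 128, box_dim: int = 5, geom_dim: int = 2):
        super().__init__()
        # box_dim: e.g. (u, v, w, h, s) pixel features of the detected vehicle box
        # geom_dim: e.g. (lateral, longitudinal) distances from the geometric model
        self.regressor = nn.Sequential(
            nn.Linear(appearance_dim + box_dim + geom_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),  # predicted distance to the front vehicle, in metres
        )

    def forward(self, appearance, box, geom):
        fused = torch.cat([appearance, box, geom], dim=-1)  # serial concatenation
        return self.regressor(fused).squeeze(-1)

# Random stand-ins for a batch of four detected vehicles; in practice these would be
# normalized features from the extraction network, the detector and the geometric model.
model = FusedDistanceRegressor()
pred = model(torch.randn(4, 128), torch.randn(4, 5), torch.randn(4, 2))  # shape (4,)
```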

Key words: deep learning    object detection    vehicle speed measurement    vehicle distance measurement    state perception
Received: 2024-04-30    Published: 2025-05-30
CLC: U495
Fund program: National Natural Science Foundation of China (62371170).
About the author: XU Huizhi (1977—), male, associate professor, senior engineer, Ph.D., engaged in research on traffic environment perception theory and methods. orcid.org/0000-0002-9911-6024. E-mail: stedu@126.com

Cite this article:

徐慧智,王秀青. 基于车辆图像特征的前车距离与速度感知[J]. 浙江大学学报(工学版), 2025, 59(6): 1219-1232.

Huizhi XU,Xiuqing WANG. Perception of distance and speed of front vehicle based on vehicle image features. Journal of ZheJiang University (Engineering Science), 2025, 59(6): 1219-1232.

Link to this article:

https://www.zjujournals.com/eng/CN/10.3785/j.issn.1008-973X.2025.06.013        https://www.zjujournals.com/eng/CN/Y2025/V59/I6/1219

Fig. 1  SW-YOLOv8n vehicle object detection model
Fig. 2  Tracking principle of SW-YOLOv8n-Bytetrack
Fig. 3  Camera imaging principle
Fig. 4  Geometric model of lateral and longitudinal vehicle distance (see the geometry sketch after this figure list)
Fig. 5  Workflow of the front vehicle distance measurement model based on vehicle image features
Fig. 6  Structure of the vehicle distance measurement neural network
Fig. 7  Workflow of the vehicle tracking speed measurement model
Fig. 8  Workflow of front vehicle distance and speed perception based on vehicle image features
Fig. 9  Object detection dataset
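As a rough illustration of the camera imaging principle and the lateral/longitudinal geometric model referenced in Fig. 3 and Fig. 4, the sketch below back-projects the bottom-centre of a detected bounding box onto the ground plane under a flat-road, zero-pitch pinhole assumption. The intrinsics fx, fy, cx, cy, the mounting height H, and the function name ground_plane_distance are hypothetical; the paper's actual geometric algorithm is given in the full text and may differ (for example, by accounting for camera pitch).

```python
# A rough ground-plane pinhole sketch for estimating the lateral and longitudinal
# distance of a detected vehicle from its bounding box. All parameter values are
# hypothetical placeholders, not the calibration used in the paper.

def ground_plane_distance(u, v_bottom, fx=1500.0, fy=1500.0, cx=960.0, cy=540.0, H=1.4):
    """u, v_bottom: pixel coordinates of the bottom-centre of the vehicle box.
    Returns (lateral, longitudinal) distance in metres, assuming a level road and
    a forward-looking camera mounted H metres above the ground with zero pitch."""
    if v_bottom <= cy:
        raise ValueError("contact point must lie below the principal point for this model")
    longitudinal = fy * H / (v_bottom - cy)   # similar triangles on the ground plane
    lateral = (u - cx) * longitudinal / fx    # back-project the column offset
    return lateral, longitudinal

# e.g. a box whose bottom-centre sits 90 px below the image centre:
print(ground_plane_distance(u=1100.0, v_bottom=630.0))
```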
Model structure | Recall/% | mAP50/% | mAP50-95/% | FPS
YOLOv8n | 83.10 | 91.20 | 84.30 | 226.48
YOLOv8n+WIoU | 84.40 | 92.30 | 85.40 | 229.60
YOLOv8n+Smallobject | 85.30 | 92.50 | 86.10 | 258.10
YOLOv8n+Smallobject+WIoU | 87.50 | 92.80 | 86.60 | 260.11
Tab. 1  Ablation experiments of the SW-YOLOv8n model
Fig. 10  Performance metrics of models in the ablation experiments
Model | Recall/% | mAP50/% | mAP50-95/% | FPS | M/MB
YOLOv5 | 83.80 | 91.00 | 81.00 | 346.78 | 3.9
YOLOv7 | 83.90 | 91.40 | 84.60 | 116.27 | 74.8
Faster-RCNN | 79.78 | 90.20 | 72.60 | 26.67 | 521.0
SW-YOLOv8n | 87.50 | 92.80 | 86.60 | 260.11 | 6.3
Tab. 2  Comparison experiment results of the SW-YOLOv8n model
Fig. 11  Performance metrics of the comparison experiments
Fig. 12  Camera calibration
Fig. 13  Example of tracking results on video frames
Fig. 14  Errors between actual lateral and longitudinal camera-to-vehicle distances and results calculated by the geometric algorithm
Fig. 15  Training loss of the vehicle distance measurement neural network
No. | u | v | h | w | s | CD(TD)/m | CD(LD)/m | AD/m | ED/m | AE/m | RE/% | RE[24]/%
1 | 962.00 | 629.00 | 170.00 | 206.03 | 5020.00 | 0.0130 | 12.6591 | 12.1200 | 11.3055 | 0.8145 | 6.72 | 4.45
2 | 981.00 | 574.50 | 85.00 | 102.00 | 8670.00 | 0.3066 | 24.9489 | 25.1300 | 25.2094 | 0.0794 | 0.32 | 0.71
3 | 962.00 | 551.00 | 58.00 | 76.00 | 4408.00 | 0.0361 | 35.3211 | 35.1100 | 35.6806 | 0.5706 | 1.63 | 20.60
4 | 1323.00 | 660.50 | 162.00 | 269.00 | 43578.00 | 3.0317 | 12.0089 | 12.5000 | 12.4865 | 0.0135 | 0.11 | 0.91
5 | 1132.75 | 547.50 | 88.00 | 116.50 | 10252.00 | 2.9594 | 24.9489 | 25.3600 | 25.5288 | 0.1688 | 0.67 | 0.93
6 | 1061.25 | 538.50 | 48.00 | 65.50 | 3144.00 | 3.1038 | 45.3382 | 45.1700 | 44.2035 | 0.9665 | 2.14 | 0.61
7 | 1698.50 | 655.50 | 167.50 | 364.00 | 60970.00 | 6.3925 | 12.3831 | 13.7800 | 12.8473 | 0.9327 | 6.74 | 1.13
8 | 1281.50 | 561.00 | 74.00 | 114.00 | 8436.00 | 6.6861 | 30.0132 | 30.8600 | 31.5624 | 0.7024 | 2.27 | 0.36
9 | 1149.50 | 534.50 | 42.00 | 64.00 | 2688.00 | 6.4934 | 49.8614 | 50.5300 | 50.2477 | 0.2823 | 0.56 | 0.49
10 | 1451.25 | 566.00 | 71.50 | 157.50 | 12836.25 | 9.5390 | 27.9150 | 29.9200 | 29.6630 | 0.2570 | 0.86 | 1.40
11 | 1323.50 | 547.00 | 58.00 | 102.00 | 5916.00 | 9.5822 | 38.0089 | 39.4400 | 39.9697 | 0.5297 | 1.34 | 0.61
12 | 1233.50 | 534.00 | 47.00 | 72.00 | 3384.00 | 9.5441 | 50.4910 | 51.0600 | 50.2355 | 0.8245 | 1.61 | 0.64
13 | 1321.50 | 736.00 | 242.00 | 364.00 | 88088.00 | 2.0758 | 8.2414 | 8.3300 | 8.2931 | 0.0369 | 0.44 | 2.03
14 | 1128.00 | 605.00 | 109.00 | 161.00 | 17549.00 | 2.0834 | 18.0576 | 18.2000 | 18.6537 | 0.4537 | 2.49 | 0.12
15 | 1471.00 | 627.00 | 137.00 | 257.00 | 35209.00 | 5.3606 | 15.0544 | 15.9000 | 15.9164 | 0.0164 | 0.10 | 0.51
Mean |  |  |  |  |  |  |  |  |  | 0.4433 | 1.87 | 1.03
Tab. 3  Measured distances and errors of the front vehicle distance perception model without occlusion
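The AE and RE columns of Tab. 3 are consistent with the usual error definitions below; for example, row 1 gives |11.3055 − 12.1200| = 0.8145 m and 0.8145/12.1200 ≈ 6.72%. These relations are inferred from the tabulated values rather than quoted from the full text.

```latex
\mathrm{AE} = \lvert \mathrm{ED} - \mathrm{AD} \rvert , \qquad
\mathrm{RE} = \frac{\mathrm{AE}}{\mathrm{AD}} \times 100\%
```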
No. | u | v | h | w | s | CD(TD)/m | CD(LD)/m | AD/m | ED/m | AE/m | RE/% | RE[24]/%
1 | 982.48 | 849.75 | 332.68 | 395.27 | 131495.87 | 0.0750 | 5.5879 | 5.1000 | 5.4299 | 0.3299 | 6.08 | 8.74
2 | 951.33 | 597.19 | 118.05 | 143.46 | 16935.64 | 0.1653 | 19.4333 | 20.0900 | 20.3101 | 0.2201 | 1.08 | 3.38
3 | 952.24 | 539.05 | 56.38 | 70.22 | 3958.95 | 0.3518 | 44.7846 | 45.0600 | 45.5251 | 0.4651 | 1.02 | 0.61
4 | 1477.91 | 712.81 | 239.07 | 395.06 | 94447.79 | 3.2991 | 9.1215 | 10.6500 | 10.4446 | 0.2054 | 1.97 | 9.80
5 | 1222.03 | 598.12 | 129.93 | 187.94 | 24419.36 | 3.4953 | 19.2251 | 25.3600 | 21.1628 | 0.7428 | 3.51 | 3.15
6 | 1062.70 | 539.32 | 59.12 | 75.40 | 4457.70 | 3.0928 | 44.5149 | 45.2000 | 45.7836 | 0.5836 | 1.27 | 1.29
7 | 1619.22 | 631.37 | 175.29 | 330.61 | 57951.78 | 6.7049 | 14.5721 | 16.5500 | 16.6669 | 0.1169 | 0.70 | 3.18
8 | 1280.96 | 562.54 | 85.78 | 134.89 | 11571.08 | 6.5240 | 29.3341 | 30.7400 | 30.6801 | 0.0599 | 0.20 | 2.29
9 | 1775.10 | 612.01 | 137.74 | 283.90 | 39105.05 | 9.6650 | 16.9792 | 20.7400 | 20.0467 | 0.6933 | 3.46 | 6.16
10 | 1299.34 | 545.34 | 46.98 | 71.44 | 3356.83 | 9.7730 | 48.8349 | 51.2200 | 50.7488 | 0.4712 | 0.93 | 2.84
Mean |  |  |  |  |  |  |  |  |  | 0.3882 | 2.02 | 4.28
Tab. 4  Measured distances and errors of the front vehicle distance perception model under occlusion
vc = 10 km/h:
LDD/m | n | Δs/m | MS/(km·h⁻¹) | AS/(km·h⁻¹)
0 | 10 | 1.1630 | 12.5604 | 12.2786
 |  | 1.1478 | 12.3962 |
 |  | 1.0997 | 11.8767 |
0 | 1 | 0.1149 | 12.4092 |
2.0 | 10 | 1.0654 | 11.5036 | 11.8196
 |  | 1.0859 | 11.7277 |
 |  | 1.1322 | 12.2277 |
2.0 | 1 | 0.1070 | 11.5560 |
6.5 | 10 | 0.9427 | 10.1812 | 10.7031
 |  | 0.9342 | 10.9642 |
 |  | 1.1052 | 10.9641 |
6.5 | 1 | 0.1027 | 11.0916 |

vc = 20 km/h:
LDD/m | n | Δs/m | MS/(km·h⁻¹) | AS/(km·h⁻¹)
0 | 10 | 1.9280 | 20.8224 | 20.4331
 |  | 1.9162 | 20.6950 |
 |  | 1.8261 | 19.7219 |
0 | 1 | 0.1803 | 19.4724 |
3.5 | 10 | 1.9067 | 20.5924 | 20.6723
 |  | 1.9572 | 21.1378 |
 |  | 1.8784 | 20.2867 |
3.5 | 1 | 0.2000 | 21.6000 |
7.0 | 10 | 1.9445 | 21.0006 | 21.3826
 |  | 1.9492 | 21.0514 |
 |  | 2.0459 | 22.0957 |
7.0 | 1 | 0.1986 | 21.4488 |

vc = 30 km/h:
LDD/m | n | Δs/m | MS/(km·h⁻¹) | AS/(km·h⁻¹)
0 | 10 | 2.7642 | 29.8534 | 30.0877
 |  | 2.7597 | 29.8056 |
 |  | 2.8337 | 30.6040 |
0 | 1 | 0.2802 | 30.2612 |
3.5 | 10 | 2.8765 | 31.0662 | 31.4204
 |  | 2.8728 | 31.0262 |
 |  | 2.9786 | 32.1689 |
3.5 | 1 | 0.2899 | 31.3092 |
7.0 | 10 | 2.8162 | 30.4196 | 29.7883
 |  | 2.7454 | 29.6503 |
 |  | 2.7125 | 29.2950 |
7.0 | 1 | 0.2733 | 29.5164 |
Tab. 5  Speed measurement results of the vehicle tracking speed measurement model
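The MS values in Tab. 5 are consistent with converting the displacement Δs covered over n video frames into a speed at a frame rate of roughly 30 frames/s; for example, 1.1630 m over 10 frames gives 1.1630/(10/30) × 3.6 ≈ 12.56 km/h, matching the first row. The frame rate is inferred from the tabulated numbers, not quoted from the full text.

```latex
\mathrm{MS} = \frac{\Delta s}{n/f} \times 3.6 \;\; \mathrm{km\cdot h^{-1}}, \qquad f \approx 30\ \mathrm{frame\cdot s^{-1}}
```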
1 WANG Z, ZHAN J, DUAN C, et al A review of vehicle detection techniques for intelligent vehicles[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34 (8): 3811- 3831
doi: 10.1109/TNNLS.2021.3128968
2 JIN M, SUN C, HU Y An intelligent traffic detection approach for vehicles on highway using pattern recognition and deep learning[J]. Soft Computing, 2023, 27 (8): 5041- 5052
doi: 10.1007/s00500-022-07375-3
3 WANG F, WANG H, QIN Z, et al UAV target detection algorithm based on improved YOLOv8[J]. IEEE Access, 2023, 11: 116534- 116544
doi: 10.1109/ACCESS.2023.3325677
4 ZHANG L J, FANG J J, LIU Y X, et al CR-YOLOv8: multiscale object detection in traffic sign images[J]. IEEE Access, 2023, 12: 219- 228
5 MA S, LU H, LIU J, et al LAYN: lightweight multi-scale attention YOLOv8 network for small object detection[J]. IEEE Access, 2024, 12: 29294- 29307
doi: 10.1109/ACCESS.2024.3368848
6 张长弓, 杨海涛, 王晋宇, 等 基于深度学习的视觉单目标跟踪综述[J]. 计算机应用研究, 2021, 38 (10): 2888- 2895
ZHANG Changgong, YANG Haitao, WANG Jinyu, et al Survey on visual single object tracking based on deep learning[J]. Application Research of Computers, 2021, 38 (10): 2888- 2895
7 BERTINETTO L, VALMADRE J, HENRIQUES J F, et al. Fully-convolutional Siamese networks for object tracking [C]// Computer Vision-ECCV 2016 Workshops. Amsterdam: Springer International Publishing, 2016: 850–865.
8 聂源, 赖惠成, 高古学 改进YOLOv7+Bytetrack的小目标检测与追踪[J]. 计算机工程与应用, 2024, 60 (12): 189- 202
NIE Yuan, LAI Huicheng, GAO Guxue Improved small target detection and tracking with YOLOv7+Bytetrack[J]. Computer Engineering and Applications, 2024, 60 (12): 189- 202
doi: 10.3778/j.issn.1002-8331.2311-0372
9 ZHENG Z, LI J, QIN L YOLO-BYTE: an efficient multi-object tracking algorithm for automatic monitoring of dairy cows[J]. Computers and Electronics in Agriculture, 2023, 209: 107857
doi: 10.1016/j.compag.2023.107857
10 PANDHARIPANDE A, CHENG C H, DAUWELS J, et al Sensing and machine learning for automotive perception: a review[J]. IEEE Sensors Journal, 2023, 23 (11): 11097- 11115
doi: 10.1109/JSEN.2023.3262134
11 《中国公路学报》编辑部 中国汽车工程学术研究综述·2023[J]. 中国公路学报, 2023, 36 (11): 1- 192
Editorial Department of China Journal of Highway and Transport Review on China’s automotive engineering research progress: 2023[J]. China Journal of Highway and Transport, 2023, 36 (11): 1- 192
12 DIRGANTARA F M, ROHMAN A S, YULIANTI L. Object distance measurement system using monocular camera on vehicle [C]// 6th International Conference on Electrical Engineering, Computer Science and Informatics. Bandung: IEEE, 2019: 122–127.
13 SONG Z, LU J, ZHANG T, et al. End-to-end learning for inter-vehicle distance and relative velocity estimation in ADAS with a monocular camera [C]// IEEE International Conference on Robotics and Automation. Paris: IEEE, 2020: 11081–11087.
14 LIU J, ZHANG R Vehicle detection and ranging using two different focal length cameras[J]. Journal of Sensors, 2020, 2020 (1): 4372847
15 LIU Q, CHEN B, WANG F, et al. Vehicle distance estimation based on monocular vision and CNN [C]// International Conference on Computer Information Science and Artificial Intelligence. Kunming: IEEE, 2021: 638–641.
16 GAO W, CHEN Y, LIU Y, et al Distance measurement method for obstacles in front of vehicles based on monocular vision[J]. Journal of Physics: Conference Series, 2021, 1815 (1): 012019
doi: 10.1088/1742-6596/1815/1/012019
17 CZAJEWSKI W, IWANOWSKI M. Vision-based vehicle speed measurement method [C]// Computer Vision and Graphics. Berlin, Heidelberg: Springer, 2010: 308–315.
18 ARENADO M I, ORIA J M P, TORRE-FERRERO C, et al Monovision-based vehicle detection, distance and relative speed measurement in urban traffic[J]. IET Intelligent Transport Systems, 2014, 8 (8): 655- 664
doi: 10.1049/iet-its.2013.0098
19 YANG L, LI M, SONG X, et al Vehicle speed measurement based on binocular stereovision system[J]. IEEE Access, 2019, 7: 106628- 106641
doi: 10.1109/ACCESS.2019.2932120
20 YANG L, LUO J, SONG X, et al Robust vehicle speed measurement based on feature information fusion for vehicle multi-characteristic detection[J]. Entropy, 2021, 23 (7): 910
doi: 10.3390/e23070910
21 TONG Z, CHEN Y, XU Z, et al. Wise-IoU: bounding box regression loss with dynamic focusing mechanism [EB/OL]. [2023-12-16]. https://doi.org/10.48550/arXiv.2301.10051.
22 ZHANG Y, SUN P, JIANG Y, et al. ByteTrack: multi-object tracking by associating every detection box [C]// European Conference on Computer Vision. Cham: Springer, 2022: 1–21.
23 ZHANG Z A flexible new technique for camera calibration[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22 (11): 1330- 1334
doi: 10.1109/34.888718
24 徐慧智, 蒋时森, 王秀青, 等 基于深度学习的车载图像车辆目标检测和测距[J]. 吉林大学学报: 工学版, 2025, 55 (1): 185- 197
XU Huizhi, JIANG Shisen, WANG Xiuqing, et al Vehicle target detection and ranging in vehicle image based on deep learning[J]. Journal of Jilin University: Engineering and Technology Edition, 2025, 55 (1): 185- 197
25 WANG C Y, BOCHKOVSKIY A, LIAO H M. YOLOv7: trainable bag-of-freebies sets new state-of-the-art for real-time object detectors [C]// IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023: 7464–7475.