Content-guided attention-based lane detection network

Dengfeng LIU1,2, Wenjing GUO1,3, Shihai CHEN1

1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
2. Engineering Research Center of Intelligent Technology for Healthcare, Ministry of Education, Wuxi 214122, China
3. Intelligent Science and Technology Institute, Tianfu College of SWUFE, Mianyang 621000, China
Abstract A content-guided attention network (CGANet) was proposed to exploit attention mechanisms effectively and improve the accuracy of lane detection. To strengthen the model's ability to capture contextual information, a content-guided attention (CGA) mechanism was introduced that emphasizes the useful information encoded in the features while suppressing the influence of irrelevant information. To reduce the impact of scale differences on model performance, a balanced feature pyramid network (BFPN) was proposed to achieve balanced fusion of multi-scale features. An ROI (region of interest) extractor was introduced to address the problem of missing visual cues. In addition, a cross-entropy term was added to the loss function as an auxiliary classification loss to encourage the model to produce sharper probability distributions. Experimental results on multiple lane detection datasets demonstrated that, compared with the cross-layer refinement network (CLRNet) algorithm, the proposed method improves the F1 score by 0.65, 0.18 and 0.29 percentage points on the CULane, Tusimple and CurveLanes datasets, respectively.
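The abstract does not spell out how the CGA mechanism re-weights features. A minimal sketch of content-guided channel gating, assuming the gate is a sigmoid of a learned linear map over global per-channel statistics (all function and variable names here are illustrative, not the paper's implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def content_guided_gate(feat, w, b):
    """Gate a C x H x W feature map by weights derived from its own content.

    Global average pooling summarizes each channel; a learned linear layer
    (w: C x C, b: C) maps that summary to per-channel gates in (0, 1), which
    re-weight the map so informative channels are emphasized and irrelevant
    ones are suppressed.
    """
    summary = feat.mean(axis=(1, 2))      # per-channel content statistic, shape (C,)
    gates = sigmoid(w @ summary + b)      # content-dependent gates, shape (C,)
    return feat * gates[:, None, None]    # broadcast gates over spatial dims

# Toy usage with random parameters.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))
w, b = rng.standard_normal((8, 8)) * 0.1, np.zeros(8)
out = content_guided_gate(feat, w, b)
print(out.shape)  # (8, 16, 16)
```

Because the gates lie strictly in (0, 1), the mechanism can only attenuate channels, never amplify them; the relative emphasis between channels is what changes.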
Received: 10 January 2024
Published: 10 March 2025
Fund: National Key Research and Development Program of China (2022YFE0112400); Young Scientists Fund of the National Natural Science Foundation of China (21706096); 62nd batch of the China Postdoctoral Science Foundation General Program (2017M621627); Jiangsu Postdoctoral Research Funding Program (1601009A); Young Scientists Fund of the Natural Science Foundation of Jiangsu Province (BK20160162).
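The auxiliary classification loss mentioned in the abstract is standard cross-entropy over class logits. A minimal sketch, assuming per-sample logits and integer class labels (names illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch of class logits.

    Confident, correct predictions drive this loss toward 0, which is why
    adding it as an auxiliary term pushes the model toward sharper
    probability distributions.
    """
    probs = softmax(logits)
    n = len(labels)
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

logits = np.array([[4.0, 0.0, 0.0],    # confident, correct prediction
                   [0.5, 0.4, 0.1]])   # uncertain prediction
labels = np.array([0, 0])
print(round(cross_entropy(logits, labels), 3))
```

The confident row contributes a much smaller loss than the uncertain one, so minimizing the term rewards sharper distributions.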
Keywords:
autonomous driving,
lane detection,
attention mechanism,
multi-scale feature fusion,
cross-entropy loss
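The BFPN's exact fusion rule is not given on this page. A minimal sketch of one balanced multi-scale fusion scheme, assuming every pyramid level is resized to a common resolution and then averaged so that each scale contributes equally (all names illustrative):

```python
import numpy as np

def resize_nearest(feat, out_h, out_w):
    """Nearest-neighbour resize of a C x H x W feature map (illustrative)."""
    _, h, w = feat.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return feat[:, rows][:, :, cols]

def balanced_fuse(pyramid, level=1):
    """Rescale every pyramid level to the chosen level's size and average.

    Averaging gives each scale equal weight in the fused map, so no single
    resolution dominates the result.
    """
    _, h, w = pyramid[level].shape
    resized = [resize_nearest(f, h, w) for f in pyramid]
    return np.mean(resized, axis=0)

rng = np.random.default_rng(0)
pyramid = [rng.standard_normal((8, s, s)) for s in (64, 32, 16)]
fused = balanced_fuse(pyramid, level=1)
print(fused.shape)  # (8, 32, 32)
```

In practice the fused map would be redistributed back to the individual levels; this sketch only shows the balancing step.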
[1] ZHENG T, FANG H, ZHANG Y, et al. RESA: recurrent feature-shift aggregator for lane detection [C]// Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2021: 3547-3554.

[2] QIN Z, WANG H, LI X. Ultra fast structure-aware deep lane detection [C]// Computer Vision - ECCV 2020: 16th European Conference. Glasgow: Springer, 2020: 276-291.

[3] HONDA H, UCHIDA Y. CLRerNet: improving confidence of lane detection with LaneIoU [C]// Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2024: 1165-1174.

[4] HAN J, DENG X, CAI X. Laneformer: object-aware row-column transformers for lane detection [C]// Proceedings of the AAAI Conference on Artificial Intelligence. Palo Alto: AAAI Press, 2022: 799-807.

[5] LEE D H, LIU J L. End-to-end deep learning of lane detection and path prediction for real-time autonomous driving [J]. Signal, Image and Video Processing, 2023, 17(1): 199-205.
doi: 10.1007/s11760-022-02222-2

[6] PAN H, CHANG X, SUN W. Multitask knowledge distillation guides end-to-end lane detection [J]. IEEE Transactions on Industrial Informatics, 2023, 19(9): 9703-9712.
doi: 10.1109/TII.2023.3233975

[7] ZHENG T, HUANG Y, LIU Y, et al. CLRNet: cross layer refinement network for lane detection [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. New Orleans: IEEE, 2022: 888-897.

[8] PAN X, SHI J, LUO P, et al. Spatial as deep: spatial CNN for traffic scene understanding [C]// Proceedings of the AAAI Conference on Artificial Intelligence. New Orleans: AAAI Press, 2018: 589-592.

[9] LEE M, LEE J, LEE D, et al. Robust lane detection via expanded self attention [C]// Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2022: 1949-1958.

[10] XU H, WANG S, CAI X, et al. CurveLane-NAS: unifying lane-sensitive architecture search and adaptive point blending [C]// Computer Vision - ECCV 2020: 16th European Conference. Glasgow: Springer, 2020: 689-704.

[11] TABELINI L, BERRIEL R, PAIXAO T M, et al. PolyLaneNet: lane estimation via deep polynomial regression [C]// Proceedings of the 25th International Conference on Pattern Recognition. Milan: IEEE, 2021: 6150-6156.

[12] LIU R, YUAN Z, LIU T, et al. End-to-end lane shape prediction with transformers [C]// Proceedings of the IEEE Winter Conference on Applications of Computer Vision. Waikoloa: IEEE, 2021: 3694-3702.

[13] LIU L, CHEN X, ZHU S, et al. CondLaneNet: a top-to-down lane detection framework based on conditional convolution [C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE, 2021: 3753-3762.

[14] LI X, LI J, HU X, et al. Line-CNN: end-to-end traffic line detection with line proposal unit [J]. IEEE Transactions on Intelligent Transportation Systems, 2020, 21(1): 248-258.
doi: 10.1109/TITS.2019.2890870

[15] TABELINI L, BERRIEL R, PAIXAO T M, et al. Keep your eyes on the lane: real-time attention-guided lane detection [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Nashville: IEEE, 2021: 294-302.

[16] WOO S, PARK J, LEE J Y, et al. CBAM: convolutional block attention module [C]// Proceedings of the European Conference on Computer Vision. Munich: Springer, 2018: 3-19.

[17] LIN T Y, DOLLÁR P, GIRSHICK R, et al. Feature pyramid networks for object detection [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 936-944.

[18] LIU S, QI L, QIN H, et al. Path aggregation network for instance segmentation [C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Salt Lake City: IEEE, 2018: 8759-8768.

[19] LIM B, SON S, KIM H, et al. Enhanced deep residual networks for single image super-resolution [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. Honolulu: IEEE, 2017: 1132-1140.