IET Cyber-Systems and Robotics  2022, Vol. 4 Issue (2): 77-85    DOI: 10.1049/csy2.12045
    
Air2Land: A deep learning dataset for unmanned aerial vehicle autolanding from air to land
Full text (PDF): https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/csy2.12045
Abstract: In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing state-of-the-art object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems by providing multi-modal data obtained from diverse sensors, and it pushes forward the development of computer vision and autopilot algorithms targeted at visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and the Pan-Tilt Unit (PTU) angles, simulated under various climate conditions and landing scenarios. Since real-world automated landing data are very limited, the proposed dataset provides the necessary foundation for vision-based tasks such as flying vehicle detection, keypoint localisation and pose estimation. Moreover, in addition to providing plentiful and scene-rich data, the developed dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is openly available at https://github.com/micros-uav/micros_air2land. The cover image is based on the Research Article Air2Land: A deep learning dataset for unmanned aerial vehicle autolanding from air to land by Tianjiang Hu et al., https://doi.org/10.1049/csy2.12045.
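
As a rough illustration of how a dataset of this kind might be consumed, the Python sketch below loads one sample consisting of a stereo image pair together with its synchronised vehicle pose and PTU angles. The directory layout ("left/", "right/", "poses.csv"), the CSV column names and the path used in the usage example are assumptions made for illustration only and are not taken from the released repository; consult https://github.com/micros-uav/micros_air2land for the actual structure.

# Minimal sketch of a loader for an Air2Land-style sample: a stereo image
# pair plus synchronised vehicle pose and Pan-Tilt Unit (PTU) angles.
# NOTE: the directory layout ("left/", "right/", "poses.csv") and the CSV
# column names used here are hypothetical, not the released dataset format.
import csv
from dataclasses import dataclass
from pathlib import Path

from PIL import Image  # pip install pillow


@dataclass
class Air2LandSample:
    left: Image.Image   # left camera frame
    right: Image.Image  # right camera frame
    pose: tuple         # (x, y, z, roll, pitch, yaw) of the fixed-wing vehicle
    ptu_angles: tuple   # (pan, tilt) of the ground-based camera unit


class Air2LandDataset:
    """Iterates over stereo frames and their synchronised sensor records."""

    def __init__(self, root: str):
        self.root = Path(root)
        # Hypothetical annotation file: one row per synchronised frame.
        with open(self.root / "poses.csv", newline="") as f:
            self.records = list(csv.DictReader(f))

    def __len__(self) -> int:
        return len(self.records)

    def __getitem__(self, idx: int) -> Air2LandSample:
        rec = self.records[idx]
        frame = rec["frame"]  # e.g. "000123"
        left = Image.open(self.root / "left" / f"{frame}.png")
        right = Image.open(self.root / "right" / f"{frame}.png")
        pose = tuple(float(rec[k]) for k in ("x", "y", "z", "roll", "pitch", "yaw"))
        ptu = (float(rec["pan"]), float(rec["tilt"]))
        return Air2LandSample(left, right, pose, ptu)


if __name__ == "__main__":
    ds = Air2LandDataset("Air2Land/scene_01")  # hypothetical path
    sample = ds[0]
    print(sample.pose, sample.ptu_angles, sample.left.size)

A loader along these lines would feed directly into detection, keypoint localisation or pose estimation pipelines, since each sample already pairs the stereo imagery with the ground-truth pose and PTU angles recorded at the same timestamp.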
Published: 2022-04-09
Authors: Xunchen Zheng, Tianjiang Hu

Cite this article:

Xunchen Zheng, Tianjiang Hu. Air2Land: A deep learning dataset for unmanned aerial vehicle autolanding from air to land. IET Cyber-Systems and Robotics, 2022, 4(2): 77-85.

Link to this article:

https://www.zjujournals.com/iet-csr/CN/10.1049/csy2.12045
https://www.zjujournals.com/iet-csr/CN/Y2022/V4/I2/77
