2022, Issue 2 · Published: 2022-06-30
Air2Land: A deep learning dataset for unmanned aerial vehicle autolanding from air to land
Xunchen Zheng, Tianjiang Hu
IET Cyber-Systems and Robotics. 2022 (2): 77-85.   DOI: https://doi.org/10.1049/csy2.12045
In this paper, a novel deep learning dataset, called Air2Land, is presented for advancing state-of-the-art object detection and pose estimation in the context of fixed-wing unmanned aerial vehicle autolanding scenarios. It bridges vision and control for ground-based vision guidance systems using multi-modal data obtained from diverse sensors, and pushes forward the development of computer vision and autopilot algorithms targeted at the visually assisted landing of a fixed-wing vehicle. The dataset is composed of sequential stereo images and synchronised sensor data, namely the flying vehicle pose and Pan-Tilt Unit angles, simulated in various climate conditions and landing scenarios. Since real-world automated landing data are very limited, the proposed dataset provides the necessary foundation for vision-based tasks such as flying vehicle detection, key point localisation and pose estimation. In addition to providing plentiful and scene-rich data, the developed dataset covers high-risk scenarios that are hardly accessible in reality. The dataset is openly available at https://github.com/micros-uav/micros_air2land. The cover image is based on the Research Article Air2Land: A deep learning dataset for unmanned aerial vehicle autolanding from air to land by Tianjiang Hu et al., https://doi.org/10.1049/csy2.12045.
Applying project-based learning in artificial intelligence and marine discipline: An evaluation study on a robotic sailboat platform
Xiongwei Lin, Hengli Liu, Qinbo Sun, Xiuhan Li, Huihuan Qian, Zhenglong Sun, Tin Lun Lam
IET Cyber-Systems and Robotics. 2022 (2): 86-96.   DOI: https://doi.org/10.1049/csy2.12050
Artificial intelligence (AI) is penetrating various fields and the demand for interdisciplinary talent is growing, yet interdisciplinary educational activities for high school students are lagging behind. Project-based learning (PBL) in AI and robotic education activities, supported by a robotic sailboat platform, the sailboat test arena (STAr), has been shown to popularise AI and robotic knowledge among young students. In the implementation of the programme, PBL was provided for students, and gamification pedagogy was applied to increase participants' learning motivation and engagement. The results show that the proposed STAr-based programme is capable of delivering the desired knowledge and skills to students at the high school level. The assessment results suggest that most students achieve the learning outcomes on average. Students showed more interest in AI and marine disciplines and were willing to participate in more such educational programmes. The findings address a research gap: few existing education platforms facilitate the teaching and learning of AI and marine disciplines for high school students.
Multi-branch angle aware spatial temporal graph convolutional neural network for model-based gait recognition
Liyang Zheng, Yuheng Zha, Da Kong, Hanqing Yang, Yu Zhang
IET Cyber-Systems and Robotics. 2022 (2): 97-106.   DOI: https://doi.org/10.1049/csy2.12052
Model-based gait recognition with skeleton data input has attracted increasing attention in recent years. Model-based methods take skeletons constructed from body joints as input, which are invariant to changing carrying and clothing conditions. However, previous methods model the skeleton information in either the spatial or the temporal domain alone and ignore the pose variation under different view angles, which results in poor gait recognition performance. To solve these problems, we propose the Multi-Branch Angle Aware Spatial Temporal Graph Convolutional Neural Network to better depict the spatial-temporal relationship while minimising the interference from view angles. The model adopts the legacy Spatial Temporal Graph Convolutional Network (ST-GCN) as its backbone and replicates it to create independent ST-GCN branches. A novel Angle Estimator module is designed to predict the skeletons' view angles, which makes the network robust to changing views. To balance the weights of different body parts and sequence frames, we build a Part-Frame-Importance module to redistribute them. Our experiments on the challenging CASIA-B dataset demonstrate the efficacy of the proposed method, which achieves state-of-the-art performance under different carrying and clothing conditions.
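As an illustration of the Part-Frame-Importance idea described in the abstract, the sketch below reweights a skeleton feature tensor by per-joint ("part") and per-frame importance scores. This is a minimal NumPy interpretation under stated assumptions: the function names, the (frames, joints, channels) tensor layout and the softmax-based normalisation are illustrative choices, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_frame_reweight(features, part_scores, frame_scores):
    """Reweight a skeleton feature tensor of shape (T, V, C) by per-joint
    and per-frame importance scores (each normalised to mean weight 1)."""
    T, V, C = features.shape
    part_w = softmax(part_scores) * V    # shape (V,), sums to V
    frame_w = softmax(frame_scores) * T  # shape (T,), sums to T
    return features * part_w[None, :, None] * frame_w[:, None, None]
```

With all-zero scores the softmax weights are uniform (all ones) and the tensor passes through unchanged; learnt scores would instead emphasise the more informative joints and frames while keeping the overall feature magnitude stable.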
LessNet: Lightweight and efficient semantic segmentation for large-scale point clouds
Guoqiang Feng, Weilong Li, Xiaolin Zhao, Xuemeng Yang, Xin Kong, TianXin Huang, Jinhao Cui
IET Cyber-Systems and Robotics. 2022 (2): 107-115.   DOI: https://doi.org/10.1049/csy2.12047
With a wide range of applications in autonomous driving and robotics, semantic segmentation of large-scale outdoor point clouds is a critical and challenging issue. Due to the large number and irregular arrangement of points, it is difficult to balance efficiency and effectiveness. In this paper, we propose LessNet, a lightweight and efficient voxel-based method for LiDAR-only semantic segmentation, taking advantage of cylindrical partition and intra-voxel feature fusion. Specifically, we use a cylindrical partition method to distribute the outdoor points more evenly across voxels. To better encode the voxel features, we adopt an intra-voxel aggregation method without querying neighbours. The voxel features are further fed into a lightweight and effective 3D U-net to aggregate local features and enlarge the receptive field. Extensive experiments demonstrate the satisfactory semantic segmentation performance and the contribution of each component in our proposed framework. Our method is capable of processing more than one million points at a time while retaining low latency and few parameters. Moreover, our method achieves performance comparable with state-of-the-art approaches and outperforms all projection-based methods on the SemanticKITTI benchmark.
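The cylindrical partition the abstract refers to can be sketched as follows: each LiDAR point (x, y, z) is mapped to cylindrical coordinates (ρ, φ, z) and binned, so the angular bins naturally grow with range and far, sparse regions are covered by larger voxels than a uniform Cartesian grid would give. The bin counts and ranges below are illustrative assumptions, not LessNet's actual configuration.

```python
import numpy as np

def cylindrical_voxel_ids(points, rho_bins=480, phi_bins=360, z_bins=32,
                          rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Map an (N, 3) array of (x, y, z) LiDAR points to integer
    cylindrical voxel indices (rho bin, phi bin, z bin)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)          # radial distance
    phi = np.arctan2(y, x)                  # azimuth in [-pi, pi]
    i = np.clip((rho / rho_max * rho_bins).astype(int), 0, rho_bins - 1)
    j = np.clip(((phi + np.pi) / (2 * np.pi) * phi_bins).astype(int),
                0, phi_bins - 1)
    k = np.clip(((z - z_min) / (z_max - z_min) * z_bins).astype(int),
                0, z_bins - 1)
    return np.stack([i, j, k], axis=1)
```

Points sharing a voxel index would then be aggregated by the intra-voxel fusion step before entering the 3D U-net.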
Scene images and text information-based object location of robot grasping
Zhichao Liu, Kaixuan Ding, Qingyang Xu, Yong Song, Xianfeng Yuan, Yibin Li
IET Cyber-Systems and Robotics. 2022 (2): 116-130.   DOI: https://doi.org/10.1049/csy2.12049
Self-supervised monocular depth estimation via asymmetric convolution block
Lingling Hu, Hao Zhang, Zhuping Wang, Chao Huang, Changzhu Zhang
IET Cyber-Systems and Robotics. 2022 (2): 131-138.   DOI: https://doi.org/10.1049/csy2.12051
Without depending on depth ground truth, self-supervised learning is a promising alternative for training monocular depth estimation. It builds its own supervision signal with the help of other tools, such as view synthesis and pose networks, although this may introduce additional training parameters and time consumption. This paper proposes a monocular depth prediction framework that jointly learns depth values and the pose transformation between images in an end-to-end manner. The depth network employs an asymmetric convolution block in place of every square-kernel layer to strengthen its ability to extract image features during training. At inference time, the asymmetric kernels are fused and converted back to the original network to predict more accurate image depth, thus incurring no extra computation. The network is trained and tested on the KITTI monocular dataset. The evaluation results demonstrate that the depth model outperforms several state-of-the-art (SOTA) approaches and reduces the inference time of depth prediction. Additionally, the proposed model shows strong adaptability on the Make3D dataset.
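The inference-time fusion of an asymmetric convolution block rests on the linearity of convolution: the 1×3 and 3×1 branch kernels can be folded into the centre row and column of the 3×3 kernel, leaving a single equivalent kernel and hence no extra inference cost. A minimal NumPy sketch of that equivalence (ignoring batch-norm folding, which a full implementation would also absorb into the kernel):

```python
import numpy as np

def conv2d(img, k):
    """Valid-mode 2D cross-correlation (what a conv layer computes)."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

def fuse_acb(k3, k1x3, k3x1):
    """Fold 1x3 and 3x1 branch kernels into the 3x3 kernel."""
    fused = k3.copy()
    fused[1, :] += k1x3[0]      # 1x3 branch adds to the centre row
    fused[:, 1] += k3x1[:, 0]   # 3x1 branch adds to the centre column
    return fused
```

Summing the three branch outputs (with the asymmetric kernels zero-padded to 3×3 and centred) gives exactly the output of one convolution with the fused kernel, which is why the converted network runs at the original cost.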
Fault tolerant control in an unmanned bicycle robot via sliding mode theory
Mousa Alizadeh, Amin Ramezani, Hadis Saadatinezhad
IET Cyber-Systems and Robotics. 2022 (2): 139-152.   DOI: https://doi.org/10.1049/csy2.12032
In this work, a new active fault tolerant control (FTC) scheme is developed for an unmanned bicycle robot, based on the integration of sliding mode control (SMC), fault detection (FD) and fault estimation (FE) via a residual signal. A sliding surface in accordance with the fault tolerant sliding mode control (FTSMC) is designed for the bicycle robot to obtain several desirable features, such as fast transient response with finite-time convergence, small overshoot and quick stabilisation in the presence of an actuator fault. To achieve effective FTSMC performance, a fault estimation system is employed: an extended Kalman filter (EKF) serves as the estimator, and a change detection algorithm called cumulative sum (CUSUM) is developed as the residual evaluation function. The innovative features of the proposed FTSMC are verified through comparison with other up-to-date control techniques, namely fault tolerant model-based predictive control with feedback linearisation (FTMPC + FBL) and fault tolerant linear quadratic regulator with feedback linearisation (FTLQR + FBL), on an unmanned bicycle robot.
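The CUSUM residual evaluation paired with the EKF can be sketched as a one-sided cumulative-sum test: the residual, minus a drift term, is accumulated and clipped at zero, and an alarm is raised once the sum crosses a threshold. The `drift` and `threshold` values below are illustrative tuning parameters, not those used in the paper.

```python
def cusum(residuals, drift=0.5, threshold=5.0):
    """One-sided CUSUM residual evaluation.

    Returns the index at which a fault is first declared,
    or -1 if the cumulative sum never crosses the threshold.
    """
    g = 0.0
    for t, r in enumerate(residuals):
        g = max(0.0, g + r - drift)  # accumulate excess over the drift
        if g > threshold:
            return t                 # fault detected at sample t
    return -1
```

The drift term makes the detector insensitive to small zero-mean residual noise, while a sustained residual offset (as an actuator fault would produce) accumulates until the threshold trips and the FE/FTSMC path is activated.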