Deep Learning-Enhanced Autonomous Aerial and Ground Robotics Using UWB and Lidar in GNSS-Denied Environments

Date

2024-11-07

Authors

Arjmandi, Zahra

Abstract

Over the last decade, advancements in Unmanned Aerial Vehicle (UAV) technology and Artificial Intelligence (AI) have led to significant improvements in navigation and positioning, yet widespread adoption remains limited due to challenges in integrating various technologies and ensuring reliable real-time data processing. This thesis addresses these issues by developing a comprehensive framework that merges advanced data collection platforms, deep learning algorithms, and novel fusion methods to enhance UAV positioning accuracy and reliability.

A central contribution of this research is the creation of the Q-Drone Ultra-Wideband (UWB) benchmark dataset. The dataset was collected with a UAV carrying five UWB sensors in five diverse environments (indoor, outdoor, and semi-outdoor), covering a total trajectory of 4 km, and provides a standardized benchmark for evaluating UAV positioning systems. It enables researchers to develop and validate algorithms under varied conditions, supporting further work in UAV navigation and positioning.
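
As a purely illustrative sketch of what one sample of such a benchmark might look like in code, the record below bundles a timestamp, the five per-sensor ranges, an environment label, and a reference pose; the field names and types are assumptions, not the published Q-Drone schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UWBRecord:
    """One illustrative sample: ranges from the five onboard UWB sensors,
    plus a reference pose for evaluation. Field names are assumptions,
    not the Q-Drone dataset schema."""
    timestamp: float          # seconds since the start of the trajectory
    ranges_m: List[float]     # one range (metres) per UWB sensor
    environment: str          # e.g. "indoor", "outdoor", "semi-outdoor"
    gt_position: List[float]  # reference x, y, z in metres

record = UWBRecord(
    timestamp=12.5,
    ranges_m=[4.21, 3.87, 5.02, 6.33, 4.75],
    environment="indoor",
    gt_position=[1.2, 0.8, 1.5],
)
print(record)
```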

The thesis also introduces an incremental smoothing approach that integrates high-rate and low-rate UWB measurements with inertial data in a unified pose graph framework. Using an "add-after-eliminating" strategy, the method reduces Mean Absolute Error (MAE) by 0.2 meters relative to baseline multilateration and by 0.3 meters relative to two-factor pose graph methods.
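
The abstract does not spell out the implementation, but the core idea of combining high-rate inertial/UWB factors with buffered low-rate factors in one pose graph can be sketched as below. The toy 1-D state, the class and method names, and the simplistic stand-in for elimination are illustrative assumptions, not the thesis's actual add-after-eliminating algorithm.

```python
import numpy as np

class ToyPoseGraph1D:
    """Toy 1-D pose graph: states are scalar positions along a line.
    Factors: odometry (x[k+1] - x[k] = d) and UWB range (|x[k] - anchor| = r).
    Illustrative sketch only, not the thesis implementation."""

    def __init__(self):
        self.odom = []          # (k, d, weight): high-rate inertial steps
        self.ranges = []        # (k, anchor, r, weight): active range factors
        self.low_rate_buf = []  # low-rate factors held back until elimination
        self.n_states = 1

    def add_odometry(self, d, weight=100.0):
        self.odom.append((self.n_states - 1, d, weight))
        self.n_states += 1

    def add_range(self, k, anchor, r, weight=1.0, low_rate=False):
        factor = (k, anchor, r, weight)
        (self.low_rate_buf if low_rate else self.ranges).append(factor)

    def eliminate_then_add(self):
        # Stand-in for marginalizing older states; the buffered low-rate
        # factors are only inserted into the graph after that step.
        self.ranges.extend(self.low_rate_buf)
        self.low_rate_buf.clear()

    def solve(self, iters=10):
        x = np.zeros(self.n_states)
        for _ in range(iters):
            rows, rhs = [], []
            prior = np.zeros((1, self.n_states))
            prior[0, 0] = 10.0
            rows.append(prior); rhs.append(0.0)            # anchor x0 near 0
            for k, d, w in self.odom:
                row = np.zeros((1, self.n_states))
                row[0, k + 1], row[0, k] = w, -w
                rows.append(row); rhs.append(w * d)
            for k, a, r, w in self.ranges:
                s = 1.0 if x[k] >= a else -1.0             # linearize |x[k] - a| = r
                row = np.zeros((1, self.n_states))
                row[0, k] = w * s
                rows.append(row); rhs.append(w * (r + s * a))
            x, *_ = np.linalg.lstsq(np.vstack(rows), np.array(rhs), rcond=None)
        return x

# High-rate factors go in directly; the low-rate range is buffered and
# released only after the elimination step.
g = ToyPoseGraph1D()
g.add_odometry(1.0)
g.add_range(1, anchor=0.0, r=1.1)
g.add_odometry(1.0)
g.add_range(2, anchor=0.0, r=2.05, low_rate=True)
g.eliminate_then_add()
print(g.solve())    # estimated positions of the three poses along the line
```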

Further, the thesis develops the DeepCovPG framework, which combines a Variational Autoencoder (VAE) with a Long Short-Term Memory (LSTM) network to predict dynamic covariances and incorporate them into the pose graph. This approach yields a 48% reduction in Root Mean Square Error (RMSE) and a 51% reduction in Range Covariance RMSE, with notable improvements of 0.41 meters in tunnels and 0.23 meters in fields. The framework also achieves a 26% reduction in multilateration RMSE and a 32% reduction in multilateration Covariance RMSE.
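
A minimal PyTorch sketch of the kind of architecture described is given below: a VAE encoder compresses per-timestep UWB/inertial features into a latent code, and an LSTM over those codes predicts a positive per-measurement variance. Layer sizes, feature dimensions, and the loss terms are assumptions for illustration; the actual DeepCovPG design is defined in the thesis.

```python
import torch
import torch.nn as nn

class CovVAE_LSTM(nn.Module):
    """Illustrative VAE + LSTM covariance predictor (not the thesis code).
    Input:  (batch, seq_len, feat_dim) windows of UWB/inertial features.
    Output: predicted per-timestep variances, plus the VAE reconstruction
            and KL terms needed for training."""

    def __init__(self, feat_dim=8, latent_dim=16, hidden_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # predicts a log-variance

    def forward(self, x):
        h = self.enc(x)                         # per-timestep encoding
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        recon = self.dec(z)                     # VAE reconstruction of the features
        out, _ = self.lstm(z)                   # temporal model over latent codes
        pred_var = torch.exp(self.head(out)).squeeze(-1)  # positive variances
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return pred_var, recon, kl

model = CovVAE_LSTM()
x = torch.randn(4, 20, 8)                       # 4 windows of 20 timesteps
var, recon, kl = model(x)
print(var.shape)                                # torch.Size([4, 20])
```

The predicted variances would then replace fixed measurement covariances on the corresponding range factors in the pose graph.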

Additionally, the thesis explores Light Detection and Ranging (LiDAR)-based positioning and proposes the INAF fusion method, which dynamically selects relevant information from geometric and AI-based odometry, improving accuracy by 3.90% over direct fusion and by 0.25% over attention-based fusion. INAF also shows better adaptability to varied driving conditions, improving accuracy in both straight and dynamic environments.
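
One plausible way to express the "dynamically select relevant information" idea is a small gating module that weighs the geometric and learned odometry features per sample, as sketched below; the module name, feature dimensions, and softmax gating are assumptions, not the INAF architecture itself.

```python
import torch
import torch.nn as nn

class GatedOdometryFusion(nn.Module):
    """Illustrative gated fusion of a geometric (LiDAR) odometry feature
    and an AI-based odometry feature. A sketch of per-sample selection,
    not the INAF implementation."""

    def __init__(self, feat_dim=64):
        super().__init__()
        # The gate looks at both streams and outputs one weight per stream.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2))
        self.head = nn.Linear(feat_dim, 6)      # fused pose increment (t, r)

    def forward(self, f_geo, f_ai):
        w = torch.softmax(self.gate(torch.cat([f_geo, f_ai], dim=-1)), dim=-1)
        fused = w[..., :1] * f_geo + w[..., 1:] * f_ai   # convex combination
        return self.head(fused), w

fusion = GatedOdometryFusion()
f_geo = torch.randn(8, 64)      # features from a geometric odometry front end
f_ai = torch.randn(8, 64)       # features from a learned odometry network
pose, weights = fusion(f_geo, f_ai)
print(pose.shape, weights.shape)    # torch.Size([8, 6]) torch.Size([8, 2])
```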

Keywords

Artificial intelligence, Remote sensing, Aerospace engineering
