Deep Reinforcement Learning based Energy-Efficient Multi-UAV Data Collection for IoT Networks

dc.contributor.advisor: Wang, Ping
dc.contributor.advisor: Nguyen, Uyen Trang
dc.contributor.author: Khodaparast, Seyed Saeed
dc.date.accessioned: 2022-03-03T13:57:59Z
dc.date.available: 2022-03-03T13:57:59Z
dc.date.copyright: 2021-09
dc.date.issued: 2022-03-03
dc.date.updated: 2022-03-03T13:57:59Z
dc.degree.discipline: Electrical and Computer Engineering
dc.degree.level: Master's
dc.degree.name: MASc - Master of Applied Science
dc.description.abstract: Unmanned aerial vehicles (UAVs) are regarded as an emerging technology that can be effectively utilized to perform data collection tasks in Internet of Things (IoT) networks. However, both the UAVs and the sensors in these networks are energy-limited devices, which necessitates an energy-efficient data collection procedure to prolong the network lifetime. In this thesis, we propose a multi-UAV-assisted network in which the UAVs fly to the ground sensors and control the sensors' transmit power during data collection. Our goal is to minimize the total energy consumed by the UAVs and the sensors in accomplishing the data collection mission. We decompose this problem into three sub-problems, single-UAV navigation, sensor power control, and multi-UAV scheduling, and model each as a finite-horizon Markov Decision Process (MDP). We deploy deep reinforcement learning (DRL)-based frameworks to solve each part. Specifically, we use the deep deterministic policy gradient (DDPG) method to generate the best trajectory for each UAV in an obstacle-constrained environment, given the UAV's starting position and target sensor. We also deploy DDPG to control the sensors' transmit power during data collection. To schedule activity plans for each UAV to visit the sensors, we propose a multi-agent deep Q-learning (DQL) approach that takes the energy consumption of the UAVs on each path into account. Our simulations show that the UAVs can find a safe and optimal path for each of their trips. Continuous power control of the sensors achieves better performance than fixed-power and fixed-rate approaches in terms of the sensors' energy consumption and the data collection completion time. In addition, compared to two commonly used baselines, our scheduling framework achieves better, near-optimal results in the simulated scenario.
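The MDP framing summarized in the abstract can be illustrated with a toy example. The sketch below is a minimal tabular Q-learning agent on a small grid, a hypothetical stand-in for the thesis's methods (which use DDPG and deep Q-learning over continuous, obstacle-constrained states): a single UAV learns a path from its start cell to a sensor cell while avoiding obstacle cells, with a per-move penalty standing in for energy cost. All names, grid dimensions, and reward values here are illustrative assumptions, not taken from the thesis.

```python
import random

# Toy tabular Q-learning on a 5x5 grid: a "UAV" flies from START to a
# sensor at GOAL while avoiding OBSTACLE cells. Per-move reward of -1
# stands in for energy cost. This is a hypothetical illustration of the
# MDP framing only; the thesis itself uses deep RL (DDPG / deep QL).

SIZE = 5
START, GOAL = (0, 0), (4, 4)
OBSTACLE = {(2, 2), (2, 3)}                 # assumed obstacle cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Deterministic transition: (next_state, reward, done)."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) in OBSTACLE:
        return state, -5.0, False           # blocked: stay put, penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True           # reached the sensor
    return (r, c), -1.0, False              # energy cost per move

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
    Q = {(s, a): 0.0 for s in states for a in range(len(ACTIONS))}
    for _ in range(episodes):
        s = START
        for _ in range(100):
            # epsilon-greedy action selection
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda i: Q[(s, i)]))
            s2, rwd, done = step(s, ACTIONS[a])
            best_next = max(Q[(s2, b)] for b in range(4))
            Q[(s, a)] += alpha * (rwd + gamma * best_next - Q[(s, a)])
            s = s2
            if done:
                break
    return Q

def greedy_path(Q):
    """Roll out the learned greedy policy from START."""
    s, path = START, [START]
    for _ in range(50):
        a = max(range(4), key=lambda i: Q[(s, i)])
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path

Q = train()
path = greedy_path(Q)
print(path)
```

After training, the greedy rollout reaches the sensor cell without entering an obstacle; the per-move penalty pushes the agent toward short (energy-efficient) paths, the same incentive structure the thesis builds into its reward design at far larger scale.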
dc.identifier.uri: http://hdl.handle.net/10315/39061
dc.language: en
dc.rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
dc.subject: Engineering
dc.subject.keywords: Data collection
dc.subject.keywords: Unmanned aerial vehicle (UAV)
dc.subject.keywords: Internet of Things (IoT)
dc.subject.keywords: Deep reinforcement learning (DRL)
dc.subject.keywords: Energy consumption
dc.title: Deep Reinforcement Learning based Energy-Efficient Multi-UAV Data Collection for IoT Networks
dc.type: Electronic Thesis or Dissertation

Files

Original bundle (1 of 1)
- Name: Khodaparast_SeyedSaeed_2021_Masters.pdf
  Size: 1.68 MB
  Format: Adobe Portable Document Format

License bundle (2 of 2)
- Name: license.txt
  Size: 1.87 KB
  Format: Plain Text
- Name: YorkU_ETDlicense.txt
  Size: 3.39 KB
  Format: Plain Text