Detail View
- Lee, Seungmin
- Ban, Tae-Won
- Lee, Howon
Abstract
In unmanned aerial vehicle (UAV)-aided Internet of Things (IoT) networks, providing seamless and reliable wireless connectivity to ground devices (GDs) is difficult owing to the short battery lifetimes of UAVs. Hence, we consider a deep reinforcement learning (DRL)-based UAV base station (UAV-BS) control method to maximize the network-wide energy efficiency of UAV-aided IoT networks featuring continuously moving GDs. First, we introduce two centralized DRL approaches: round-robin deep Q-learning (RR-DQL) and selective-k deep Q-learning (SK-DQL), in which all UAV-BSs are controlled by a ground control station that collects the status information of the UAV-BSs and determines their actions. However, these centralized approaches can incur significant signaling overhead and undesired processing latency. Hence, we propose a quasi-distributed DQL-based UAV-BS control (QD-DQL) method in which each agent determines its actions based on its local information. Through intensive simulations, we verify the algorithmic robustness and superior performance of the proposed QD-DQL method by comparison with several benchmark methods (i.e., RR-DQL, SK-DQL, multi-agent Q-learning, and an exhaustive search method), while considering the mobility of GDs and increasing numbers of UAV-BSs. © 2014 IEEE.
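To make the quasi-distributed idea in the abstract concrete, the sketch below shows one agent per UAV-BS selecting actions from its own local state only, with no central controller. This is a minimal illustrative sketch, not the paper's method: the paper uses deep Q-networks, whereas this example substitutes tabular Q-learning, and the state, action, and reward definitions here are hypothetical assumptions.

```python
import random
from collections import defaultdict

# Hypothetical discrete movement actions for a UAV-BS agent.
ACTIONS = ["stay", "north", "south", "east", "west"]

class LocalQAgent:
    """One per-UAV-BS agent that learns from local observations only
    (a toy stand-in for the DQN agents described in the abstract)."""

    def __init__(self, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)  # Q[(local_state, action)] -> value
        self.epsilon = epsilon       # exploration rate
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def act(self, state):
        # Epsilon-greedy over the agent's own table: decisions use
        # local information only, with no ground-control signaling.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup; here `reward` would be
        # some local proxy for energy efficiency (an assumption).
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a simulation loop, each UAV-BS would hold its own `LocalQAgent`, call `act` on its observed local state each step, and call `update` with its locally measured reward, in contrast to the centralized RR-DQL/SK-DQL schemes where a ground control station gathers all agents' status and decides for them.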
- Title: Network-Wide Energy Efficiency Maximization in UAV-Aided IoT Networks: Quasi-Distributed Deep Reinforcement Learning Approach
- Authors: Lee, Seungmin; Ban, Tae-Won; Lee, Howon
- Publication date: 2025-06
- Type: Article
- Volume: 12
- Issue: 11
- Pages: 15404-15414