Cited 2 times in Web of Science
Network-Wide Energy Efficiency Maximization in UAV-Aided IoT Networks: Quasi-Distributed Deep Reinforcement Learning Approach
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Lee, Seungmin | - |
| dc.contributor.author | Ban, Tae-Won | - |
| dc.contributor.author | Lee, Howon | - |
| dc.date.accessioned | 2025-02-12T06:01:08Z | - |
| dc.date.available | 2025-02-12T06:01:08Z | - |
| dc.date.issued | 2025-06 | - |
| dc.identifier.issn | 2372-2541 | - |
| dc.identifier.issn | 2327-4662 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/75892 | - |
| dc.description.abstract | In unmanned aerial vehicle (UAV)-aided Internet of Things (IoT) networks, providing seamless and reliable wireless connectivity to ground devices (GDs) is difficult owing to the short battery lifetimes of UAVs. Hence, we consider a deep reinforcement learning (DRL)-based UAV base station (UAV-BS) control method to maximize the network-wide energy efficiency of UAV-aided IoT networks featuring continuously moving GDs. First, we introduce two centralized DRL approaches: round-robin deep Q-learning (RR-DQL) and selective-k deep Q-learning (SK-DQL), in which all UAV-BSs are controlled by a ground control station that collects the status information of the UAV-BSs and determines their actions. However, these centralized approaches can incur significant signaling overhead and undesired processing latency. Therefore, we propose a quasi-distributed DQL-based UAV-BS control (QD-DQL) method that determines the actions of each agent based on its local information. Through intensive simulations, we verify the algorithmic robustness and superior performance of the proposed QD-DQL method by comparing it with several benchmark methods (i.e., RR-DQL, SK-DQL, multi-agent Q-learning, and an exhaustive search method) while considering the mobility of GDs and an increasing number of UAV-BSs. © 2014 IEEE. | - |
| dc.format.extent | 11 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | Network-Wide Energy Efficiency Maximization in UAV-Aided IoT Networks: Quasi-Distributed Deep Reinforcement Learning Approach | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/JIOT.2025.3532477 | - |
| dc.identifier.scopusid | 2-s2.0-85216326600 | - |
| dc.identifier.wosid | 001492153900010 | - |
| dc.identifier.bibliographicCitation | IEEE Internet of Things Journal, v.12, no.11, pp 15404 - 15414 | - |
| dc.citation.title | IEEE Internet of Things Journal | - |
| dc.citation.volume | 12 | - |
| dc.citation.number | 11 | - |
| dc.citation.startPage | 15404 | - |
| dc.citation.endPage | 15414 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | multi-agent deep reinforcement learning | - |
| dc.subject.keywordAuthor | network-wide energy efficiency maximization | - |
| dc.subject.keywordAuthor | UAV Control | - |
| dc.subject.keywordAuthor | UAV-aided IoT network | - |
| dc.subject.keywordAuthor | Unmanned aerial vehicle-base station | - |
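The abstract contrasts centralized DQL control with the quasi-distributed variant, and lists multi-agent Q-learning among the benchmarks. As a minimal, illustrative sketch (not the authors' implementation, and with all names and parameters chosen for illustration), the tabular Q-learning update that deep Q-networks approximate is:

```python
def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    Q is a dict mapping (state, action) pairs to values; unseen pairs
    default to 0.0. Returns the updated value of Q[(state, action)].
    """
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

In a DQL setting the dict would be replaced by a neural network trained toward the same target, and in the quasi-distributed case each UAV-BS agent would run such an update using only its local observations.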
