A Self-Regulating Power-Control Scheme Using Reinforcement Learning for D2D Communication Networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ban, Tae-Won | - |
| dc.date.accessioned | 2022-12-26T06:40:46Z | - |
| dc.date.available | 2022-12-26T06:40:46Z | - |
| dc.date.issued | 2022-07 | - |
| dc.identifier.issn | 1424-8220 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/1141 | - |
| dc.description.abstract | We investigate a power control problem for overlay device-to-device (D2D) communication networks relying on a deep deterministic policy gradient (DDPG), which is a model-free off-policy algorithm for learning continuous actions such as transmit power levels. We propose a DDPG-based self-regulating power control scheme whereby each D2D transmitter can autonomously determine its transmit power level using only the local channel gains that can be measured from the sounding symbols transmitted by D2D receivers. The performance of the proposed scheme is analyzed in terms of average sum-rate and energy efficiency and compared to several conventional schemes. Our numerical results show that the proposed scheme increases the average sum-rate compared to the conventional schemes, even under the severe interference caused by an increasing number of D2D pairs or by high transmission power, and achieves the highest energy efficiency. | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Multidisciplinary Digital Publishing Institute (MDPI) | - |
| dc.title | A Self-Regulating Power-Control Scheme Using Reinforcement Learning for D2D Communication Networks | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/s22134894 | - |
| dc.identifier.scopusid | 2-s2.0-85133013846 | - |
| dc.identifier.wosid | 000824494900001 | - |
| dc.identifier.bibliographicCitation | Sensors, v.22, no.13 | - |
| dc.citation.title | Sensors | - |
| dc.citation.volume | 22 | - |
| dc.citation.number | 13 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Chemistry | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Instruments & Instrumentation | - |
| dc.relation.journalWebOfScienceCategory | Chemistry, Analytical | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Instruments & Instrumentation | - |
| dc.subject.keywordAuthor | device to device (D2D) | - |
| dc.subject.keywordAuthor | deep deterministic policy gradient (DDPG) | - |
| dc.subject.keywordAuthor | deep reinforcement learning (DRL) | - |
| dc.subject.keywordAuthor | power control | - |
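
The abstract above describes a DDPG actor that maps locally measured channel gains to a continuous transmit power level. The sketch below is a rough illustration of that idea only, not the paper's implementation: the state dimension `STATE_DIM`, the layer sizes, and the power cap `P_MAX` are invented for the example.

```python
# Hypothetical sketch of the actor side of a DDPG power controller for one
# D2D transmitter. All dimensions and constants below are illustrative
# assumptions, not values from the paper.
import torch
import torch.nn as nn

P_MAX = 1.0       # assumed maximum transmit power (watts)
STATE_DIM = 4     # assumed number of local channel-gain measurements

class Actor(nn.Module):
    """Deterministic policy: local channel gains -> transmit power level."""
    def __init__(self, state_dim: int = STATE_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # squash output to (0, 1)
        )

    def forward(self, gains: torch.Tensor) -> torch.Tensor:
        # Scale the (0, 1) output to a continuous power in (0, P_MAX].
        # DDPG acts directly on this continuous action space, so the
        # power level never has to be discretized.
        return P_MAX * self.net(gains)

actor = Actor()
local_gains = torch.rand(1, STATE_DIM)  # stand-in for measured channel gains
tx_power = actor(local_gains)
print(f"chosen transmit power: {tx_power.item():.4f} W")
```

Because the actor sees only gains measurable from the receivers' sounding symbols, each transmitter can run such a policy autonomously, which is the "self-regulating" property the abstract highlights.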