A Deep Learning-Based Transmission Scheme Using Reduced Feedback for D2D Networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Ban, Tae-Won | - |
| dc.date.accessioned | 2023-01-05T07:29:01Z | - |
| dc.date.available | 2023-01-05T07:29:01Z | - |
| dc.date.issued | 2022-11 | - |
| dc.identifier.issn | 2169-3536 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/30078 | - |
| dc.description.abstract | In this study, we investigate frequency division duplex (FDD)-based overlay device-to-device (D2D) communication networks. In overlay D2D networks, D2D communication uses a dedicated radio resource to eliminate cross-interference with cellular communication, and multiple D2D devices share the dedicated radio resource to resolve the scarcity of radio spectrum, thereby causing co-channel interference, one of the challenging problems in D2D communication networks. Various radio resource management problems for D2D communication networks cannot be solved by conventional optimization methods because they are modeled as non-convex optimization problems. Recently, various studies have relied on deep reinforcement learning (DRL) as an alternative method to maximize the performance of D2D communication networks in the presence of co-channel interference. These studies showed that DRL-based radio resource management schemes can achieve almost optimal performance and even outperform state-of-the-art schemes based on non-convex optimization. Most DRL-based transmission schemes inevitably require feedback information from D2D receivers to build input states, especially in FDD networks, where channel reciprocity between uplink and downlink does not hold. However, the effect of feedback overhead has not been well investigated in previous studies using DRL, and none of these studies reported on reducing the feedback overhead of DRL-based transmission schemes for FDD-based D2D networks. In this study, we propose a DRL-based transmission scheme for FDD-based D2D networks in which input states are built using reduced feedback information, thereby lowering the feedback overhead. The proposed scheme achieves the same average sum-rate as the scheme using full feedback while significantly reducing the feedback overhead. | - |
| dc.format.extent | 9 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
| dc.title | A Deep Learning-Based Transmission Scheme Using Reduced Feedback for D2D Networks | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1109/ACCESS.2022.3208572 | - |
| dc.identifier.scopusid | 2-s2.0-85139410646 | - |
| dc.identifier.wosid | 000864357400001 | - |
| dc.identifier.bibliographicCitation | IEEE Access, v.10, pp. 102316-102324 | - |
| dc.citation.title | IEEE Access | - |
| dc.citation.volume | 10 | - |
| dc.citation.startPage | 102316 | - |
| dc.citation.endPage | 102324 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Telecommunications | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Electrical & Electronic | - |
| dc.relation.journalWebOfScienceCategory | Telecommunications | - |
| dc.subject.keywordAuthor | Device-to-device communication | - |
| dc.subject.keywordAuthor | Receivers | - |
| dc.subject.keywordAuthor | Communication networks | - |
| dc.subject.keywordAuthor | Downlink | - |
| dc.subject.keywordAuthor | Uplink | - |
| dc.subject.keywordAuthor | Cellular networks | - |
| dc.subject.keywordAuthor | Resource management | - |
| dc.subject.keywordAuthor | Deep learning | - |
| dc.subject.keywordAuthor | Reinforcement learning | - |
| dc.subject.keywordAuthor | Autonomous transmission | - |
| dc.subject.keywordAuthor | device-to-device (D2D) | - |
| dc.subject.keywordAuthor | deep reinforcement learning (DRL) | - |
| dc.subject.keywordAuthor | transmission scheme | - |
| dc.subject.keywordAuthor | feedback | - |
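The core idea in the abstract, building a DRL input state from reduced (quantized) feedback rather than full channel state information, can be sketched roughly as follows. Everything here is an illustrative assumption rather than the paper's actual design: the 1-bit channel-gain quantizer, the number of D2D links, the binary transmit/idle action space, and the tiny linear Q-function standing in for the deep network are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N_LINKS = 4    # D2D pairs sharing the dedicated resource (assumed)
N_ACTIONS = 2  # 0 = stay idle, 1 = transmit (assumed action space)

def reduced_feedback(gains, threshold=1.0):
    """Quantize each receiver's reported channel gain to 1 bit.

    Full feedback would report the real-valued gains; here each D2D
    receiver reports only whether its gain exceeds a threshold, so the
    per-decision feedback overhead drops from N floats to N bits.
    """
    return (gains > threshold).astype(np.float32)

def q_values(state, weights):
    """Toy linear Q-function standing in for the paper's deep network."""
    return state @ weights  # shape: (N_ACTIONS,)

def select_action(state, weights, eps=0.1):
    """Epsilon-greedy action selection, the usual DRL exploration rule."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state, weights)))

# One decision step for a single D2D transmitter:
gains = rng.exponential(scale=1.0, size=N_LINKS)  # Rayleigh-fading power gains
state = reduced_feedback(gains)                   # N bits instead of N floats
weights = rng.normal(size=(N_LINKS, N_ACTIONS))   # untrained, for illustration
action = select_action(state, weights)
```

In a full implementation, the weights would be trained by a DRL algorithm against a sum-rate reward; this sketch only shows how the reduced-feedback state enters the decision loop.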
Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.
