액터-크리틱 모형기반 포트폴리오 연구 (A Study on the Portfolio Performance Evaluation using Actor-Critic Reinforcement Learning Algorithms)
DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이우식 | - |
dc.date.accessioned | 2022-12-26T08:01:03Z | - |
dc.date.available | 2022-12-26T08:01:03Z | - |
dc.date.issued | 2022-06 | - |
dc.identifier.issn | 1226-833X | - |
dc.identifier.issn | 2765-5415 | - |
dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/2179 | - |
dc.description.abstract | The Bank of Korea raised its benchmark interest rate by a quarter percentage point to 1.75 percent per year, and analysts predict that South Korea's policy rate will reach 2.00 percent by the end of calendar year 2022. Because market volatility has increased significantly under factors such as rising rates and inflation, many investors have struggled to meet their financial objectives or deliver returns. In this situation, banks and financial institutions are attempting to provide Robo-Advisors that manage client portfolios without human intervention, so determining the best hyper-parameter combination is becoming increasingly important. This study compares several activation functions of the Deep Deterministic Policy Gradient (DDPG) and Twin-Delayed Deep Deterministic Policy Gradient (TD3) algorithms for choosing a sequence of actions that maximizes long-term reward. According to the results, DDPG and TD3 outperformed their benchmark index. One reason for this is that the agent must understand the action probabilities in order to choose an action and receive a reward, which it then compares to the state value to determine an advantage (see the illustrative sketch below the table). As interest in machine learning has grown and research into deep reinforcement learning has become more active, finding an optimal hyper-parameter combination for DDPG and TD3 has become increasingly important. | - |
dc.format.extent | 10 | - |
dc.language | Korean | - |
dc.language.iso | KOR | - |
dc.publisher | 한국산업융합학회 | - |
dc.title | 액터-크리틱 모형기반 포트폴리오 연구 | - |
dc.title.alternative | A Study on the Portfolio Performance Evaluation using Actor-Critic Reinforcement Learning Algorithms | - |
dc.type | Article | - |
dc.publisher.location | Republic of Korea | - |
dc.identifier.bibliographicCitation | 한국산업융합학회논문집, v.25, no.3, pp. 467-476 | - |
dc.citation.title | 한국산업융합학회논문집 | - |
dc.citation.volume | 25 | - |
dc.citation.number | 3 | - |
dc.citation.startPage | 467 | - |
dc.citation.endPage | 476 | - |
dc.identifier.kciid | ART002851976 | - |
dc.description.isOpenAccess | N | - |
dc.description.journalRegisteredClass | kci | - |
dc.subject.keywordAuthor | Quantitative Finance | - |
dc.subject.keywordAuthor | Business Analytics | - |
dc.subject.keywordAuthor | FinTech | - |
dc.subject.keywordAuthor | Autonomous Portfolio | - |
dc.subject.keywordAuthor | Optimization | - |
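
The abstract's point about comparing the received reward to the state value to form an advantage can be made concrete with a minimal actor-critic sketch. This is an illustration only, not the paper's implementation: the network sizes, learning rates, discount factor, and the 3-asset/8-feature portfolio state are assumptions, and the update shown is a generic advantage-based policy-gradient step rather than the deterministic-policy DDPG/TD3 updates the study actually evaluates.

```python
# Minimal actor-critic sketch (hypothetical illustration; the paper's actual
# DDPG/TD3 code is not part of this record). Shows the advantage signal
# described in the abstract: A(s, a) = r + gamma * V(s') - V(s).
import torch
import torch.nn as nn

state_dim, n_assets, gamma = 8, 3, 0.99  # assumed dimensions and discount

# Actor maps a market state to portfolio weights; softmax keeps the
# weights non-negative and summing to one.
actor = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, n_assets), nn.Softmax(dim=-1),
)
# Critic estimates the state value V(s).
critic = nn.Sequential(
    nn.Linear(state_dim, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)

def update(state, reward, next_state):
    """One actor-critic step: compare the received reward (plus the
    discounted next-state value) against the current state value."""
    v = critic(state)
    with torch.no_grad():
        target = reward + gamma * critic(next_state)
    advantage = target - v  # A(s, a) = r + gamma * V(s') - V(s)

    # Critic: move V(s) toward the bootstrapped target.
    critic_loss = advantage.pow(2).mean()
    opt_critic.zero_grad()
    critic_loss.backward()
    opt_critic.step()

    # Actor: a simple policy-gradient-style surrogate that reinforces
    # weight vectors whose realized advantage is positive.
    weights = actor(state)
    log_mass = torch.log(weights + 1e-8).sum(-1, keepdim=True)
    actor_loss = -(advantage.detach() * log_mass).mean()
    opt_actor.zero_grad()
    actor_loss.backward()
    opt_actor.step()

# Usage on dummy data: one transition (state, portfolio return, next state).
s, s2 = torch.randn(1, state_dim), torch.randn(1, state_dim)
update(s, torch.tensor([[0.01]]), s2)
```

Under this sketch, a positive advantage means the action did better than the critic's baseline expectation for that state, so the actor is nudged toward it; DDPG and TD3 replace this stochastic surrogate with deterministic policies trained through learned Q-functions (TD3 adding twin critics and delayed actor updates).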