Cited 12 times.
Learning Optimal Q-Function Using Deep Boltzmann Machine for Reliable Trading of Cryptocurrency
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Bu, Seok-Jun | - |
| dc.contributor.author | Cho, Sung-Bae | - |
| dc.date.accessioned | 2024-12-03T02:30:34Z | - |
| dc.date.available | 2024-12-03T02:30:34Z | - |
| dc.date.issued | 2018 | - |
| dc.identifier.issn | 0302-9743 | - |
| dc.identifier.issn | 1611-3349 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/73698 | - |
| dc.description.abstract | The explosive price volatility from the end of 2017 to January 2018 shows that Bitcoin is a high-risk asset. Deep reinforcement learning is a straightforward idea in that it directly outputs market management actions to achieve higher profit, rather than higher price-prediction accuracy. However, existing deep reinforcement learning algorithms, including Q-learning, are limited by the enormous search space. We propose a combination of a double Q-network and unsupervised pre-training using a Deep Boltzmann Machine (DBM) to generate and enhance the optimal Q-function in cryptocurrency trading. We obtained a profit of 2,686% in simulation, whereas the best conventional model achieved 2,087% over the same test period. In addition, our model records a 24% profit while the market price falls sharply by 64%. | - |
| dc.format.extent | 13 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Springer Verlag | - |
| dc.title | Learning Optimal Q-Function Using Deep Boltzmann Machine for Reliable Trading of Cryptocurrency | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1007/978-3-030-03493-1_49 | - |
| dc.identifier.scopusid | 2-s2.0-85057076158 | - |
| dc.identifier.wosid | 000582456500049 | - |
| dc.identifier.bibliographicCitation | Lecture Notes in Computer Science, v.11314, pp 468 - 480 | - |
| dc.citation.title | Lecture Notes in Computer Science | - |
| dc.citation.volume | 11314 | - |
| dc.citation.startPage | 468 | - |
| dc.citation.endPage | 480 | - |
| dc.type.docType | Proceedings Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Information Systems | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Theory & Methods | - |
| dc.subject.keywordPlus | NETWORKS | - |
| dc.subject.keywordPlus | STOCK | - |
| dc.subject.keywordAuthor | Deep reinforcement learning | - |
| dc.subject.keywordAuthor | Q-network | - |
| dc.subject.keywordAuthor | Deep Boltzmann Machine | - |
| dc.subject.keywordAuthor | Portfolio management | - |
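
The abstract describes two components that can be sketched concretely: unsupervised Boltzmann-machine pre-training on market-state features, and a double Q-network update that decouples action selection from action evaluation. Below is a minimal PyTorch sketch, not the authors' implementation: the state dimension, the three-action space (buy/hold/sell), the single RBM standing in for the full DBM stack (greedy layer-wise RBM training is the standard DBM pre-training scheme), and all hyperparameters and placeholder data are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RBM(nn.Module):
    """One restricted Boltzmann machine layer; a single layer stands in
    here for the greedy layer-wise pre-trained DBM stack."""
    def __init__(self, n_vis, n_hid):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_hid, n_vis) * 0.01)
        self.b_v = nn.Parameter(torch.zeros(n_vis))
        self.b_h = nn.Parameter(torch.zeros(n_hid))

    def sample_h(self, v):
        p = torch.sigmoid(F.linear(v, self.W, self.b_h))
        return p, torch.bernoulli(p)

    def sample_v(self, h):
        p = torch.sigmoid(F.linear(h, self.W.t(), self.b_v))
        return p, torch.bernoulli(p)

    @torch.no_grad()
    def cd1(self, v0, lr=1e-3):
        # One step of contrastive divergence (CD-1).
        ph0, h0 = self.sample_h(v0)
        pv1, _ = self.sample_v(h0)
        ph1, _ = self.sample_h(pv1)
        batch = v0.size(0)
        self.W += lr * (ph0.t() @ v0 - ph1.t() @ pv1) / batch
        self.b_v += lr * (v0 - pv1).mean(0)
        self.b_h += lr * (ph0 - ph1).mean(0)

class QNetwork(nn.Module):
    def __init__(self, n_state, n_hid, n_actions):
        super().__init__()
        self.fc1 = nn.Linear(n_state, n_hid)   # initialised from the RBM
        self.fc2 = nn.Linear(n_hid, n_actions)

    def forward(self, s):
        return self.fc2(torch.sigmoid(self.fc1(s)))

n_state, n_hid, n_actions = 32, 64, 3  # 3 actions: buy / hold / sell (assumed)

# --- Unsupervised pre-training on market states (features scaled to [0, 1]) ---
states = torch.rand(512, n_state)      # placeholder for real market features
rbm = RBM(n_state, n_hid)
for _ in range(20):
    rbm.cd1(states)

# --- Transfer pre-trained weights into the online Q-network ---
q_online = QNetwork(n_state, n_hid, n_actions)
q_target = QNetwork(n_state, n_hid, n_actions)
with torch.no_grad():
    q_online.fc1.weight.copy_(rbm.W)
    q_online.fc1.bias.copy_(rbm.b_h)
q_target.load_state_dict(q_online.state_dict())

# --- One double-Q learning step on a dummy transition batch ---
opt = torch.optim.Adam(q_online.parameters(), lr=1e-4)
s, s_next = torch.rand(64, n_state), torch.rand(64, n_state)
a = torch.randint(0, n_actions, (64,))
r = torch.randn(64)                    # e.g. per-step portfolio return
done = torch.zeros(64)
gamma = 0.99

with torch.no_grad():
    # Double Q-learning: the online net selects the next action, the
    # target net evaluates it, reducing Q-value overestimation.
    a_star = q_online(s_next).argmax(dim=1, keepdim=True)
    y = r + gamma * (1 - done) * q_target(s_next).gather(1, a_star).squeeze(1)

q_sa = q_online(s).gather(1, a.unsqueeze(1)).squeeze(1)
loss = F.smooth_l1_loss(q_sa, y)
opt.zero_grad(); loss.backward(); opt.step()
q_target.load_state_dict(q_online.state_dict())  # periodic target sync
```

The sigmoid activation in the Q-network's first layer matches the RBM's hidden-unit nonlinearity, so the transferred weights retain their meaning; in practice the target network would be synchronized every few hundred steps rather than after every update.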
