Detailed Information

Cited 10 times in Web of Science; cited 12 times in Scopus

Learning Optimal Q-Function Using Deep Boltzmann Machine for Reliable Trading of Cryptocurrency

Authors
Bu, Seok-Jun; Cho, Sung-Bae
Issue Date
2018
Publisher
Springer Verlag
Keywords
Deep reinforcement learning; Q-network; Deep Boltzmann Machine; Portfolio management
Citation
Lecture Notes in Computer Science, v.11314, pp. 468-480
Pages
13
Indexed
SCOPUS
Journal Title
Lecture Notes in Computer Science
Volume
11314
Start Page
468
End Page
480
URI
https://scholarworks.gnu.ac.kr/handle/sw.gnu/73698
DOI
10.1007/978-3-030-03493-1_49
ISSN
0302-9743
1611-3349
Abstract
The explosive price volatility from the end of 2017 to January 2018 shows that bitcoin is a high-risk asset. Deep reinforcement learning is a straightforward idea for directly outputting market management actions to achieve higher profit, rather than pursuing higher price-prediction accuracy. However, existing deep reinforcement learning algorithms, including Q-learning, are limited by the problems caused by an enormous search space. We propose a combination of a double Q-network and unsupervised pre-training with a Deep Boltzmann Machine (DBM) to generate and enhance the optimal Q-function in cryptocurrency trading. We obtained a profit of 2,686% in simulation, whereas the best conventional model achieved 2,087% over the same test period. In addition, our model records a 24% profit while the market price drops significantly by -64%.
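
The abstract combines two ingredients: unsupervised pre-training of the Q-network with a Deep Boltzmann Machine, and a double Q-network update rule. The sketch below is a minimal illustration of those two ideas under stated assumptions, not the authors' implementation: it assumes PyTorch, uses a single RBM layer trained with CD-1 as a stand-in for the full DBM stack, and all dimensions (STATE_DIM, HIDDEN, N_ACTIONS) and helper names are hypothetical.

    # Hypothetical sketch: RBM-style pre-training followed by a double Q-learning target.
    import torch
    import torch.nn as nn

    STATE_DIM, HIDDEN, N_ACTIONS = 32, 64, 3   # assumed sizes (e.g. buy / sell / hold)

    class RBM(nn.Module):
        """One restricted Boltzmann machine layer (building block of a DBM)."""
        def __init__(self, n_vis, n_hid):
            super().__init__()
            self.W = nn.Parameter(torch.randn(n_hid, n_vis) * 0.01)
            self.v_bias = nn.Parameter(torch.zeros(n_vis))
            self.h_bias = nn.Parameter(torch.zeros(n_hid))

        def sample_h(self, v):
            p = torch.sigmoid(v @ self.W.t() + self.h_bias)
            return p, torch.bernoulli(p)

        def sample_v(self, h):
            p = torch.sigmoid(h @ self.W + self.v_bias)
            return p, torch.bernoulli(p)

        def cd1_step(self, v0, lr=1e-3):
            # One step of contrastive divergence (CD-1) on a batch of states v0.
            ph0, h0 = self.sample_h(v0)
            pv1, v1 = self.sample_v(h0)
            ph1, _ = self.sample_h(v1)
            self.W.data += lr * (ph0.t() @ v0 - ph1.t() @ v1) / v0.size(0)
            self.v_bias.data += lr * (v0 - v1).mean(0)
            self.h_bias.data += lr * (ph0 - ph1).mean(0)

    def build_q_net(rbm):
        # Reuse the pre-trained weights to initialize the first Q-network layer.
        net = nn.Sequential(nn.Linear(STATE_DIM, HIDDEN), nn.Sigmoid(),
                            nn.Linear(HIDDEN, N_ACTIONS))
        net[0].weight.data.copy_(rbm.W.data)
        net[0].bias.data.copy_(rbm.h_bias.data)
        return net

    def double_q_target(online, target, reward, next_state, gamma=0.99):
        # Double Q-learning: the online net selects the action, the target net evaluates it.
        with torch.no_grad():
            a_star = online(next_state).argmax(dim=1, keepdim=True)
            return reward + gamma * target(next_state).gather(1, a_star).squeeze(1)

Decoupling action selection from action evaluation in the target is the standard double Q-learning remedy for value overestimation, which is one way to read the abstract's concern about the enormous search space; the pre-trained weights give the Q-network an initialization learned from unlabeled market states.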
Files in This Item
There are no files associated with this item.
Appears in
Collections
ETC > Journal Articles



Related Researcher


Seok-Jun, Buu
College of IT Engineering (Department of Computer Engineering)
