Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Farooq, Sehar Shahzad | - |
| dc.contributor.author | Rahman, Hameedur | - |
| dc.contributor.author | Abdul Wahid, Samiya | - |
| dc.contributor.author | Alyan Ansari, Muhammad | - |
| dc.contributor.author | Abdul Wahid, Saira | - |
| dc.contributor.author | Lee, Hosu | - |
| dc.date.accessioned | 2025-11-13T08:30:18Z | - |
| dc.date.available | 2025-11-13T08:30:18Z | - |
| dc.date.issued | 2025-10 | - |
| dc.identifier.issn | 2073-431X | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/80794 | - |
| dc.description.abstract | Games are considered a suitable, standard benchmark for training, evaluating, and comparing artificial intelligence (AI) agents. In this research, the application of the Intrinsic Curiosity Module (ICM) combined with the Asynchronous Advantage Actor-Critic (A3C) algorithm is explored in action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games is rarely explored. This research aims to assess whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent's generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games: the agent learns exploration behaviors without relying solely on external rewards, and does so with improved efficiency and learning speed compared to the baseline approaches. This research contributes to curiosity-driven exploration in reinforcement-learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C to action games opens opportunities for adaptive learning and efficient exploration in challenging real-world environments. (An illustrative sketch of the ICM curiosity signal follows this table.) | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | MDPI AG | - |
| dc.title | Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games | - |
| dc.type | Article | - |
| dc.publisher.location | Switzerland | - |
| dc.identifier.doi | 10.3390/computers14100434 | - |
| dc.identifier.scopusid | 2-s2.0-105020176955 | - |
| dc.identifier.wosid | 001602343700001 | - |
| dc.identifier.bibliographicCitation | Computers, v.14, no.10 | - |
| dc.citation.title | Computers | - |
| dc.citation.volume | 14 | - |
| dc.citation.number | 10 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.description.journalRegisteredClass | esci | - |
| dc.relation.journalResearchArea | Computer Science | - |
| dc.relation.journalWebOfScienceCategory | Computer Science, Interdisciplinary Applications | - |
| dc.subject.keywordAuthor | integration technology | - |
| dc.subject.keywordAuthor | reinforcement learning | - |
| dc.subject.keywordAuthor | interactive learning environments | - |
| dc.subject.keywordAuthor | action games | - |
| dc.subject.keywordAuthor | gamification | - |
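The abstract describes coupling A3C with the Intrinsic Curiosity Module, where the prediction error of a forward dynamics model in a learned feature space serves as an intrinsic reward. The sketch below illustrates that mechanism in PyTorch. It is a minimal illustration under assumed settings, not the configuration used in the paper: the flattened observation input, layer widths, `feat_dim`, and the scaling constant `eta` are placeholders, whereas the paper works from preprocessed game screens (which would typically use a convolutional encoder).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICM(nn.Module):
    """Minimal ICM-style sketch (after Pathak et al., 2017).

    All sizes and the scaling constant `eta` are illustrative
    placeholders, not values reported in the paper.
    """

    def __init__(self, obs_dim: int, n_actions: int,
                 feat_dim: int = 128, eta: float = 0.01):
        super().__init__()
        self.eta = eta
        self.n_actions = n_actions
        # Shared encoder phi(s): maps raw observations to a feature space.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        # Inverse model: predicts the action from phi(s_t) and phi(s_{t+1}),
        # which keeps the features focused on agent-controllable factors.
        self.inverse = nn.Sequential(
            nn.Linear(2 * feat_dim, 256), nn.ReLU(), nn.Linear(256, n_actions))
        # Forward model: predicts phi(s_{t+1}) from phi(s_t) and the action.
        self.forward_model = nn.Sequential(
            nn.Linear(feat_dim + n_actions, 256), nn.ReLU(),
            nn.Linear(256, feat_dim))

    def forward(self, obs, next_obs, action):
        phi, phi_next = self.encoder(obs), self.encoder(next_obs)
        a_onehot = F.one_hot(action, self.n_actions).float()
        # Forward-model prediction error is the curiosity signal.
        # phi_next is detached so the forward loss cannot collapse the features.
        phi_next_pred = self.forward_model(torch.cat([phi, a_onehot], dim=-1))
        fwd_loss = 0.5 * (phi_next_pred - phi_next.detach()).pow(2).sum(dim=-1)
        # Inverse-model loss trains the encoder itself.
        logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        inv_loss = F.cross_entropy(logits, action, reduction="none")
        intrinsic_reward = self.eta * fwd_loss.detach()
        return intrinsic_reward, fwd_loss.mean() + inv_loss.mean()

if __name__ == "__main__":
    # Toy batch with hypothetical shapes: 4 transitions, 84x84 flattened
    # observations, 6 discrete actions.
    icm = ICM(obs_dim=84 * 84, n_actions=6)
    obs, next_obs = torch.randn(4, 84 * 84), torch.randn(4, 84 * 84)
    action = torch.randint(0, 6, (4,))
    r_int, icm_loss = icm(obs, next_obs, action)
    print(r_int.shape, icm_loss.item())
```

In a training loop of the kind the abstract outlines, the intrinsic reward would be added to the environment's extrinsic reward before the A3C/PPO policy update, so the agent is rewarded for visiting transitions its forward model predicts poorly, i.e., for curiosity-driven exploration.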
