Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games

Full metadata record
dc.contributor.author: Farooq, Sehar Shahzad
dc.contributor.author: Rahman, Hameedur
dc.contributor.author: Abdul Wahid, Samiya
dc.contributor.author: Alyan Ansari, Muhammad
dc.contributor.author: Abdul Wahid, Saira
dc.contributor.author: Lee, Hosu
dc.date.accessioned: 2025-11-13T08:30:18Z
dc.date.available: 2025-11-13T08:30:18Z
dc.date.issued: 2025-10
dc.identifier.issn: 2073-431X
dc.identifier.uri: https://scholarworks.gnu.ac.kr/handle/sw.gnu/80794
dc.description.abstract: Games are a standard benchmark for training, evaluating, and comparing artificial intelligence (AI) agents. In this research, the combination of the Intrinsic Curiosity Module (ICM) and the Asynchronous Advantage Actor-Critic (A3C) algorithm is explored in action games. Although this combination has proven successful in several gaming environments, its effectiveness in action games remains largely unexplored. This research assesses whether integrating ICM with A3C promotes curiosity-driven exploration and adaptive learning in action games. Using the MAME Toolkit library, we interface with the game environments, preprocess game screens to focus on relevant visual elements, and create diverse game episodes for training. The A3C policy is optimized using the Proximal Policy Optimization (PPO) algorithm with tuned hyperparameters. Comparisons are made with baseline methods, including vanilla A3C, ICM with pixel-based predictions, and state-of-the-art exploration techniques. Additionally, we evaluate the agent's generalization capability in separate environments. The results demonstrate that ICM and A3C effectively promote curiosity-driven exploration in action games, with the agent learning exploration behaviors without relying solely on external rewards. Notably, we also observed improved efficiency and learning speed compared to baseline approaches. This research contributes to curiosity-driven exploration in reinforcement-learning-based virtual environments and provides insights into the exploration of complex action games. Successfully applying ICM and A3C in action games presents exciting opportunities for adaptive learning and efficient exploration in challenging real-world environments.
dc.language: English
dc.language.iso: ENG
dc.publisher: MDPI AG
dc.title: Curiosity-Driven Exploration in Reinforcement Learning: An Adaptive Self-Supervised Learning Approach for Playing Action Games
dc.type: Article
dc.publisher.location: Switzerland
dc.identifier.doi: 10.3390/computers14100434
dc.identifier.scopusid: 2-s2.0-105020176955
dc.identifier.wosid: 001602343700001
dc.identifier.bibliographicCitation: Computers, v.14, no.10
dc.citation.title: Computers
dc.citation.volume: 14
dc.citation.number: 10
dc.type.docType: Article
dc.description.isOpenAccess: Y
dc.description.journalRegisteredClass: scopus
dc.description.journalRegisteredClass: esci
dc.relation.journalResearchArea: Computer Science
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.subject.keywordAuthor: integration technology
dc.subject.keywordAuthor: reinforcement learning
dc.subject.keywordAuthor: interactive learning environments
dc.subject.keywordAuthor: action games
dc.subject.keywordAuthor: gamification
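The abstract describes driving exploration with the ICM's prediction-error curiosity signal alongside the A3C/PPO policy update. As a rough illustration only, the sketch below shows how an ICM-style intrinsic reward can be computed; the fixed random encoder, the linear forward model, the dimensions, and the `eta` scale are all illustrative assumptions, not the authors' implementation (in the paper's setting, these components are learned jointly during training).

```python
import numpy as np

# Illustrative ICM-style intrinsic reward (not the paper's implementation):
# curiosity = prediction error of a forward model in feature space.
rng = np.random.default_rng(0)

STATE_DIM, FEAT_DIM, N_ACTIONS = 8, 4, 3  # toy sizes, chosen for the sketch

# Fixed random projection standing in for the learned feature encoder phi(s).
W_enc = rng.normal(size=(STATE_DIM, FEAT_DIM))

def encode(state):
    return np.tanh(state @ W_enc)

# Linear forward model: predicts phi(s') from [phi(s), one_hot(a)].
W_fwd = rng.normal(scale=0.1, size=(FEAT_DIM + N_ACTIONS, FEAT_DIM))

def intrinsic_reward(state, action, next_state, eta=0.5):
    phi_s, phi_next = encode(state), encode(next_state)
    a_onehot = np.eye(N_ACTIONS)[action]
    pred = np.concatenate([phi_s, a_onehot]) @ W_fwd
    # Curiosity reward = scaled squared prediction error of the forward model.
    return eta * 0.5 * np.sum((pred - phi_next) ** 2)

s = rng.normal(size=STATE_DIM)
s_next = rng.normal(size=STATE_DIM)
r_int = intrinsic_reward(s, 0, s_next)
print(r_int)
```

In the full method, this intrinsic reward is added to the (possibly sparse) extrinsic game reward before the policy-gradient update, so that states the forward model predicts poorly remain attractive to the agent.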
Files in This Item: There are no files associated with this item.
Appears in Collections: ETC > Journal Articles


Related Researcher

Lee, Hosu
College of IT Engineering (Department of Control and Robot Engineering)
