DQN and TensorFlow Agent-Based Visual Object Tracking Algorithm using Deep Reinforcement Learning
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | 박진혁 | - |
| dc.contributor.author | 임선자 | - |
| dc.contributor.author | 이석환 | - |
| dc.contributor.author | 권기룡 | - |
| dc.date.accessioned | 2025-05-23T02:00:06Z | - |
| dc.date.available | 2025-05-23T02:00:06Z | - |
| dc.date.issued | 2024-08 | - |
| dc.identifier.issn | 1229-7771 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/78496 | - |
| dc.description.abstract | Object tracking models are multi-purpose algorithms applied in many fields that involve tracking under uncertain environmental conditions. Obtaining accurate results is considerably more difficult when tracking objects in physical environments; however, the process can be tested by evaluating and verifying the model's performance under various simulation conditions in a virtual simulator. In this paper, we propose a visual object tracking algorithm based on DQN and TensorFlow agents using deep reinforcement learning. TensorFlow agents are trained in the Blocks environment to adapt to the simulation environment and the objects in it, and further tests and evaluations are performed on tracking accuracy and speed. The DQN algorithm uses a deep reinforcement learning model to explore the virtual simulation environment using sequential images from the virtual environment simulation model. The proposed DQN and TensorFlow agent-based process and the deep reinforcement learning-based object tracker are tested and compared with existing methods, and the results show that the proposed process is superior in terms of stability, speed, and numerical performance. | - |
| dc.format.extent | 13 | - |
| dc.language | Korean | - |
| dc.language.iso | KOR | - |
| dc.publisher | 한국멀티미디어학회 | - |
| dc.title | 심층 강화학습을 사용한 DQN 및 텐서 플로우 에이전트 기반 시각적 객체 추적 알고리즘 | - |
| dc.title.alternative | DQN and TensorFlow Agent-Based Visual Object Tracking Algorithm using Deep Reinforcement Learning | - |
| dc.type | Article | - |
| dc.publisher.location | Republic of Korea | - |
| dc.identifier.bibliographicCitation | 멀티미디어학회논문지, v.27, no.8, pp 969 - 981 | - |
| dc.citation.title | 멀티미디어학회논문지 | - |
| dc.citation.volume | 27 | - |
| dc.citation.number | 8 | - |
| dc.citation.startPage | 969 | - |
| dc.citation.endPage | 981 | - |
| dc.identifier.kciid | ART003114789 | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | kci | - |
| dc.subject.keywordAuthor | Object Tracking | - |
| dc.subject.keywordAuthor | Object Detection | - |
| dc.subject.keywordAuthor | Deep Reinforcement Learning | - |
| dc.subject.keywordAuthor | AirSim | - |
| dc.subject.keywordAuthor | Virtual Simulation | - |
| dc.subject.keywordAuthor | TensorFlow Agent | - |
| dc.subject.keywordAuthor | Deep Q-Network | - |
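The DQN approach described in the abstract rests on the standard Q-learning update, y = r + γ·max Q(s′, a′), with a neural network approximating Q in the full system. As a minimal sketch of that core update only (tabular, pure Python, with a toy 1-D pursuit task standing in for the AirSim Blocks environment — all names, rewards, and parameters here are illustrative assumptions, not the authors' implementation):

```python
import random

# Toy stand-in for the simulator: an agent on a 1-D line must step toward
# a fixed target position; reward is +1 on reaching it, 0 otherwise.
N_STATES, TARGET = 10, 9
ACTIONS = (-1, +1)  # step left / step right

def env_step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == TARGET else 0.0), nxt == TARGET

def train(episodes=1000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # Optimistic initialization encourages systematic exploration.
    q = [[1.0, 1.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES - 1)  # start anywhere except the target
        for _ in range(200):             # cap episode length
            # Epsilon-greedy action selection.
            a = rng.randrange(2) if rng.random() < eps else (0 if q[s][0] >= q[s][1] else 1)
            s2, r, done = env_step(s, ACTIONS[a])
            # Q-learning target: r + gamma * max_a' Q(s', a'); no bootstrap at terminal.
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
# Greedy policy recovered from the learned Q-table; every non-terminal
# state should prefer stepping right, toward the target.
greedy = [0 if q[s][0] >= q[s][1] else 1 for s in range(N_STATES - 1)]
```

In the paper's setting the Q-table is replaced by a deep network over sequential simulator images, and TF-Agents supplies the replay buffer and training loop; the Bellman target computed above is the part both share.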
