A Multimodal Deep Learning Model for Cross-Project Issue Classification (open access)
- Authors
- Kwak, Changwon; Heo, Jueun; Jung, Pilsu; Lee, Seonah
- Issue Date
- Sep-2025
- Publisher
- Institute of Electrical and Electronics Engineers Inc.
- Keywords
- code; deep learning; image; issue classification; issue reports; multi-class classification; multimodal; text
- Citation
- IEEE Access, v.13, pp. 168839-168854
- Pages
- 16
- Indexed
- SCIE; SCOPUS
- Journal Title
- IEEE Access
- Volume
- 13
- Start Page
- 168839
- End Page
- 168854
- URI
- https://scholarworks.gnu.ac.kr/handle/sw.gnu/80294
- DOI
- 10.1109/ACCESS.2025.3613404
- ISSN
- 2169-3536
- Abstract
- Software continuously evolves through changes, and issue reports encapsulate these change requests. GitHub provides a labeling mechanism for systematic issue management, but labeling and managing issues still requires significant effort from developers. To address this, numerous previous studies have attempted to automate issue report classification; however, these attempts have shown limited classification accuracy. We experiment to determine whether integrating heterogeneous information through a multimodal model that combines text, images, and code from issue reports can improve classification accuracy. We also investigate whether training the model on extensive issue data further enhances accuracy. Experimental results show that the multimodal approach outperforms single-modal models by 5.50-7.01% in terms of F1-Score. These findings indicate that leveraging heterogeneous data sources in issue reports is effective in improving classification performance.
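
The abstract describes classifying issue reports by fusing their text, image, and code content. The following is a minimal late-fusion sketch of that idea: each modality is encoded into an embedding, the embeddings are projected, concatenated, and passed to a multi-class head. All encoder dimensions, class count, and names here are illustrative assumptions, not the authors' actual model.

```python
# Minimal late-fusion sketch (assumed design, not the paper's implementation):
# precomputed per-modality embeddings -> shared projections -> concat -> classify.
import torch
import torch.nn as nn

class MultimodalIssueClassifier(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, code_dim=768, num_classes=3):
        super().__init__()
        # Project each modality's embedding into a shared 256-dim space.
        self.text_proj = nn.Linear(text_dim, 256)
        self.image_proj = nn.Linear(image_dim, 256)
        self.code_proj = nn.Linear(code_dim, 256)
        # Fused representation -> multi-class label (e.g., bug / feature / question).
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(256 * 3, num_classes),
        )

    def forward(self, text_emb, image_emb, code_emb):
        fused = torch.cat(
            [self.text_proj(text_emb),
             self.image_proj(image_emb),
             self.code_proj(code_emb)],
            dim=-1,
        )
        return self.classifier(fused)

# Usage with dummy embeddings for a batch of 4 issue reports.
model = MultimodalIssueClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 3])
```

A single-modal baseline, as compared in the paper, would simply drop two of the three inputs; the reported 5.50-7.01% F1-Score gain comes from combining all three.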
- Appears in Collections
- Engineering > Department of AI Convergence Engineering > Journal Articles