Issue report classification using a multimodal deep learning technique
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kwak, Changwon | - |
| dc.contributor.author | Lee, Seonah | - |
| dc.date.accessioned | 2023-04-25T04:40:57Z | - |
| dc.date.available | 2023-04-25T04:40:57Z | - |
| dc.date.issued | 2022-12 | - |
| dc.identifier.issn | 1613-0073 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/59285 | - |
| dc.description.abstract | Issue reports are useful resources for developing open-source software and continuously maintaining software products. However, it is not easy to systematically classify issue reports that accumulate at a rate of hundreds per day. To address this, researchers have studied how to classify issue reports automatically, but these approaches are limited to text-oriented classification methods. In this paper, we apply a multimodal model-based classification method, which has shown great performance improvements in many fields. We use the images attached to an issue report to improve the performance of issue report classification. To evaluate our approach, we conduct an experiment in which we compare the performance of a text-based single-modal model with that of a text- and image-based multimodal model. The experimental results show that the multimodal method yields a 2.1% higher classification F1-score than the single-modal method. Based on these results, we will continue to explore the multimodal model, considering the characteristics of issue reports and various heterogeneous outputs. © 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | CEUR-WS | - |
| dc.title | Issue report classification using a multimodal deep learning technique | - |
| dc.type | Article | - |
| dc.identifier.scopusid | 2-s2.0-85151709349 | - |
| dc.identifier.bibliographicCitation | CEUR Workshop Proceedings, v.3362 | - |
| dc.citation.title | CEUR Workshop Proceedings | - |
| dc.citation.volume | 3362 | - |
| dc.type.docType | Conference Paper | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.subject.keywordAuthor | Classification | - |
| dc.subject.keywordAuthor | Issue reports | - |
| dc.subject.keywordAuthor | Multimodal Deep learning | - |
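
The abstract above describes fusing an issue report's text with its attached images for classification. The following is a minimal sketch of one common way to do this, late fusion, where separate text and image encoders produce feature vectors that are concatenated and fed to a shared classification head. This is not the authors' implementation; the layer sizes, vocabulary size, and the three example classes (bug / feature / question) are illustrative assumptions.

```python
# A minimal late-fusion sketch, not the paper's actual model.
import torch
import torch.nn as nn

class MultimodalIssueClassifier(nn.Module):
    def __init__(self, vocab_size=30000, text_dim=128, img_dim=128, num_classes=3):
        super().__init__()
        # Text branch: token embeddings mean-pooled into a single vector.
        self.embed = nn.EmbeddingBag(vocab_size, text_dim, mode="mean")
        # Image branch: a small CNN over the attached screenshot.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, img_dim), nn.ReLU(),
        )
        # Fusion head: concatenated features -> class logits.
        self.head = nn.Linear(text_dim + img_dim, num_classes)

    def forward(self, token_ids, image):
        text_feat = self.embed(token_ids)          # (batch, text_dim)
        img_feat = self.cnn(image)                 # (batch, img_dim)
        fused = torch.cat([text_feat, img_feat], dim=1)
        return self.head(fused)

model = MultimodalIssueClassifier()
tokens = torch.randint(0, 30000, (4, 64))   # 4 reports, 64 token ids each
images = torch.randn(4, 3, 224, 224)        # 4 attached screenshots
logits = model(tokens, images)              # (4, 3) class scores
```

In practice, the toy encoders here would be replaced by stronger pretrained ones (e.g., a transformer for text and a pretrained CNN for images), but the fusion step, concatenating per-modality features before a shared classifier, stays the same.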