Detailed Information

Artificial Intelligence-Driven Drafting of Chest X-Ray Reports: 2025 Position Statement From the Korean Society of Thoracic Radiology Based on an Expert Survey

Authors
Jeong, Won Gi; Hwang, Eui Jin; Jin, Gong Yong; Lee, Ju-Hyung; Kang, Se Ri; Ko, Hongseok; Gil, Bomi; Kim, Jin Hwan; Kim, Tae Jung; Park, Chan Ho; Beck, Kyongmin Sarah; Son, Min Ji; Woo, Jeong Joo; Yoo, Seung-Jin; Yoo, Jin Young; Yoon, Soon Ho; Lee, Ji Won; Jeon, Kyung Nyeo; Jeong, Yeon Joo; Ham, Soo-Youn; Hong, Su Jin; Hong, Wonju; Goo, Jin Mo
Issue Date
Nov-2025
Publisher
The Korean Society of Radiology
Keywords
Artificial intelligence; Consensus; Diagnostic imaging/methods; Natural language processing; Radiography, thoracic
Citation
Korean Journal of Radiology, v.26, no.11, pp. 1100-1108
Pages
9
Indexed
SCIE
SCOPUS
KCI
Journal Title
Korean Journal of Radiology
Volume
26
Number
11
Start Page
1100
End Page
1108
URI
https://scholarworks.gnu.ac.kr/handle/sw.gnu/80639
DOI
10.3348/kjr.2025.0457
ISSN
1229-6929 (Print)
2005-8330 (Online)
Abstract
Objective: Generative artificial intelligence (AI) systems can draft chest X-ray (CXR) reports automatically. Although promising for improving efficiency and easing workforce shortages, their accuracy, reliability, and clinical utility remain uncertain. This article presents the Korean Society of Thoracic Radiology (KSTR) position statement on AI-assisted CXR report drafting, derived from a Delphi survey of experts who used the software on a modest case set.

Materials and Methods: Twenty thoracic radiologists completed a Delphi survey after reviewing 60 CXR cases with an AI-based tool for automated report drafting (KARA-CXR, version 1.0.0.3; KakaoBrain, Seoul, Republic of Korea). In preparation for the survey, each participant individually selected and reviewed 60 CXR cases from their own practice at their respective workplaces, distributed evenly across six clinical settings (health screening, inpatient, emergency department, intensive care unit, respiratory outpatient, and non-respiratory outpatient), with 10 cases per setting. The entire selection and review process was completed within 1 month. Two Delphi rounds were then conducted, in which participants rated 12 key questions (72 items) on the clinical applicability of the AI-based tool on a 9-point Likert scale. Consensus required ≥70% agreement.

Results: Consensus was reached for 41 of the 72 items (56.9%). Respondents took a neutral stance on most questions concerning accuracy and clinical integration; they were neither impressed nor disappointed with the tool. A favorable view emerged only for health-screening examinations, whereas stand-alone use of the AI-based tool in routine practice was opposed. Participants stressed the need for further performance optimization before deployment and advocated society-endorsed education and guidelines before adoption.

Conclusion: The KSTR supports the use of an AI-based automated CXR report-drafting tool only in health-screening settings with radiologist validation, opposes its stand-alone use in routine practice, and recommends performance optimization and society-endorsed education and guidelines before adoption.
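The consensus rule described in the abstract (12 questions expanded into 72 items, each rated on a 9-point Likert scale, with consensus declared at ≥70% agreement) can be illustrated with a short sketch. The Python code below is not the authors' analysis code; the three-band mapping of the 9-point scale (1-3 disagree, 4-6 neutral, 7-9 agree) is a common Delphi convention assumed here for illustration, and the example ratings are hypothetical.

from collections import Counter

# Map a 9-point Likert rating to a Delphi band
# (assumed convention: 1-3 disagree, 4-6 neutral, 7-9 agree).
def band(rating: int) -> str:
    if rating <= 3:
        return "disagree"
    if rating <= 6:
        return "neutral"
    return "agree"

# Return (consensus reached?, modal band) for one survey item,
# using the >=70% agreement threshold stated in the abstract.
def has_consensus(ratings, threshold=0.70):
    counts = Counter(band(r) for r in ratings)
    modal_band, modal_count = counts.most_common(1)[0]
    return modal_count / len(ratings) >= threshold, modal_band

# Hypothetical ratings from a 20-member panel for one item:
# 16/20 (80%) fall in the "agree" band, so consensus is reached.
ratings = [7, 8, 6, 7, 9, 7, 8, 7, 6, 8, 7, 7, 9, 8, 7, 5, 7, 8, 7, 6]
print(has_consensus(ratings))  # (True, 'agree')

Applied item by item, a rule of this form yields the tally reported in the Results: consensus on 41 of 72 items, i.e., 41/72 ≈ 56.9%.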
Files in This Item
There are no files associated with this item.
Appears in Collections
College of Medicine > Department of Medicine > Journal Articles

Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Jeon, Kyung Nyeo
College of Medicine (Department of Medicine)