Cited 88 times
Using a Dual-Input Convolutional Neural Network for Automated Detection of Pediatric Supracondylar Fracture on Conventional Radiography
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Choi, Jae Won | - |
| dc.contributor.author | Cho, Yeon Jin | - |
| dc.contributor.author | Lee, Seowoo | - |
| dc.contributor.author | Lee, Jihyuk | - |
| dc.contributor.author | Lee, Seunghyun | - |
| dc.contributor.author | Choi, Young Hun | - |
| dc.contributor.author | Cheon, Jung-Eun | - |
| dc.contributor.author | Ha, Ji Young | - |
| dc.date.accessioned | 2024-12-02T22:00:41Z | - |
| dc.date.available | 2024-12-02T22:00:41Z | - |
| dc.date.issued | 2020-02 | - |
| dc.identifier.issn | 0020-9996 | - |
| dc.identifier.issn | 1536-0210 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/72095 | - |
| dc.description.abstract | Objectives This study aimed to develop a dual-input convolutional neural network (CNN)-based deep-learning algorithm that utilizes both anteroposterior (AP) and lateral elbow radiographs for the automated detection of pediatric supracondylar fracture in conventional radiography, and assess its feasibility and diagnostic performance. Materials and Methods To develop the deep-learning model, 1266 pairs of AP and lateral elbow radiographs examined between January 2013 and December 2017 at a single institution were split into a training set (1012 pairs, 79.9%) and a validation set (254 pairs, 20.1%). We performed external tests using 2 types of distinct datasets: one temporally and the other geographically separated from the model development. We used 258 pairs of radiographs examined in 2018 at the same institution as a temporal test set and 95 examined between January 2016 and December 2018 at another hospital as a geographic test set. Images underwent preprocessing, including cropping and histogram equalization, and were input into a dual-input neural network constructed by merging 2 ResNet models. An observer study was performed by radiologists on the geographic test set. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of the model and human readers were calculated and compared. Results Our trained model showed an AUC of 0.976 in the validation set, 0.985 in the temporal test set, and 0.992 in the geographic test set. In AUC comparison, the model showed comparable results to the human readers in the geographic test set; the AUCs of human readers were in the range of 0.977 to 0.997 (P's > 0.05). The model had a sensitivity of 93.9%, a specificity of 92.2%, a PPV of 80.5%, and an NPV of 97.8% in the temporal test set, and a sensitivity of 100%, a specificity of 86.1%, a PPV of 69.7%, and an NPV of 100% in the geographic test set. Compared with the developed deep-learning model, all 3 human readers showed a significant difference (P's < 0.05) using the McNemar test, with lower specificity and PPV in the model. On the other hand, there was no significant difference (P's > 0.05) in sensitivity and NPV between all 3 human readers and the proposed model. Conclusions The proposed dual-input deep-learning model that interprets both AP and lateral elbow radiographs provided an accurate diagnosis of pediatric supracondylar fracture comparable to radiologists. | - |
| dc.format.extent | 10 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Lippincott Williams & Wilkins Ltd. | - |
| dc.title | Using a Dual-Input Convolutional Neural Network for Automated Detection of Pediatric Supracondylar Fracture on Conventional Radiography | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.1097/RLI.0000000000000615 | - |
| dc.identifier.scopusid | 2-s2.0-85077478139 | - |
| dc.identifier.wosid | 000507612400006 | - |
| dc.identifier.bibliographicCitation | Investigative Radiology, v.55, no.2, pp 101 - 110 | - |
| dc.citation.title | Investigative Radiology | - |
| dc.citation.volume | 55 | - |
| dc.citation.number | 2 | - |
| dc.citation.startPage | 101 | - |
| dc.citation.endPage | 110 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Radiology, Nuclear Medicine & Medical Imaging | - |
| dc.relation.journalWebOfScienceCategory | Radiology, Nuclear Medicine & Medical Imaging | - |
| dc.subject.keywordPlus | ARTIFICIAL-INTELLIGENCE | - |
| dc.subject.keywordPlus | CLASSIFICATION | - |
| dc.subject.keywordAuthor | deep learning | - |
| dc.subject.keywordAuthor | artificial intelligence | - |
| dc.subject.keywordAuthor | supracondylar fracture | - |
| dc.subject.keywordAuthor | pediatric | - |
| dc.subject.keywordAuthor | children | - |
| dc.subject.keywordAuthor | conventional radiography | - |
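
The abstract describes a dual-input network built by merging two ResNet models, one per elbow view (AP and lateral). The following is a minimal illustrative sketch of that general idea, not the authors' released code: the backbone depth (ResNet-18), fusion by feature concatenation, and the linear classifier head are all assumptions made for clarity.

```python
# Hypothetical sketch (not the authors' implementation): a dual-input network
# that merges two ResNet backbones, one per radiographic view (AP and lateral).
# Backbone depth, concatenation fusion, and the classifier head are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class DualInputResNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One ResNet-18 backbone per view; the paper merges two ResNet models,
        # but the exact depth and fusion strategy are not specified here.
        self.ap_backbone = models.resnet18(weights=None)
        self.lat_backbone = models.resnet18(weights=None)
        feat_dim = self.ap_backbone.fc.in_features
        # Strip the original classification heads to obtain feature vectors.
        self.ap_backbone.fc = nn.Identity()
        self.lat_backbone.fc = nn.Identity()
        # Fuse the two feature vectors by concatenation, then classify.
        self.classifier = nn.Linear(feat_dim * 2, num_classes)

    def forward(self, ap_img: torch.Tensor, lat_img: torch.Tensor) -> torch.Tensor:
        ap_feat = self.ap_backbone(ap_img)     # features from the AP view
        lat_feat = self.lat_backbone(lat_img)  # features from the lateral view
        fused = torch.cat([ap_feat, lat_feat], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = DualInputResNet()
    ap = torch.randn(4, 3, 224, 224)   # batch of preprocessed AP radiographs
    lat = torch.randn(4, 3, 224, 224)  # batch of preprocessed lateral radiographs
    logits = model(ap, lat)
    print(logits.shape)  # torch.Size([4, 2]): fracture vs. no fracture
```

In this sketch, each preprocessed view (cropped and histogram-equalized, per the abstract) passes through its own backbone before fusion, so the model can weigh AP and lateral evidence jointly when predicting supracondylar fracture.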
