Asymmetry between right and left fundus images identified using convolutional neural networks
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Kang, Tae Seen | - |
| dc.contributor.author | Kim, Bum Jun | - |
| dc.contributor.author | Nam, Ki Yup | - |
| dc.contributor.author | Lee, Seongjin | - |
| dc.contributor.author | Kim, Kyonghoon | - |
| dc.contributor.author | Lee, Woong-sub | - |
| dc.contributor.author | Kim, Jinhyun | - |
| dc.contributor.author | Han, Yong Seop | - |
| dc.date.accessioned | 2022-12-26T07:40:38Z | - |
| dc.date.available | 2022-12-26T07:40:38Z | - |
| dc.date.issued | 2022-01 | - |
| dc.identifier.issn | 2045-2322 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/1730 | - |
| dc.description.abstract | We analyzed fundus images to determine whether convolutional neural networks (CNNs) can discriminate between right and left fundus images. We gathered 98,038 fundus photographs from the Gyeongsang National University Changwon Hospital, South Korea, and augmented these with the Ocular Disease Intelligent Recognition dataset. We created eight combinations of image sets to train CNNs. Class activation mapping was used to identify the discriminative image regions used by the CNNs. CNNs identified right and left fundus images with high accuracy (more than 99.3% in the Gyeongsang National University Changwon Hospital dataset and 91.1% in the Ocular Disease Intelligent Recognition dataset) regardless of whether the images were flipped horizontally. The depth and complexity of the CNN affected the accuracy (DenseNet121: 99.91%, ResNet50: 99.86%, and VGG19: 99.37%). DenseNet121 could not discriminate within an image set composed only of left eyes (55.1%, p = 0.548). Class activation mapping identified the macula as the discriminative region used by the CNNs. Several previous studies have used horizontal flipping to augment fundus photograph datasets. However, flipped photographs remain distinguishable from non-flipped images. This asymmetry could introduce unwanted bias into machine learning. Therefore, when developing a CNN with fundus photographs, care should be taken when applying flip-based data augmentation (see the illustrative sketch after this table). | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Nature Publishing Group | - |
| dc.title | Asymmetry between right and left fundus images identified using convolutional neural networks | - |
| dc.type | Article | - |
| dc.publisher.location | United Kingdom | - |
| dc.identifier.doi | 10.1038/s41598-021-04323-3 | - |
| dc.identifier.scopusid | 2-s2.0-85123860469 | - |
| dc.identifier.wosid | 000749198000015 | - |
| dc.identifier.bibliographicCitation | Scientific Reports, v.12, no.1 | - |
| dc.citation.title | Scientific Reports | - |
| dc.citation.volume | 12 | - |
| dc.citation.number | 1 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | Y | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Science & Technology - Other Topics | - |
| dc.relation.journalWebOfScienceCategory | Multidisciplinary Sciences | - |
| dc.subject.keywordPlus | SYMMETRY | - |
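
The abstract makes two technical points that a short illustration can clarify: horizontal flipping maps a right-eye fundus onto a left-eye-like image, so flip-based augmentation can blur the laterality signal a CNN would otherwise learn, and classic class activation mapping (CAM) shows which image regions drive the prediction. The sketch below is not the authors' code; it assumes a torchvision DenseNet121 with an untrained two-class head, hypothetical class indices (0 = right eye, 1 = left eye), and a placeholder image path.

```python
# Minimal sketch: a laterality classifier plus classic CAM on DenseNet121.
# Assumptions (not from the paper): class 0 = right eye, class 1 = left eye,
# an untrained classification head, and a placeholder file "fundus_right.jpg".
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from torchvision import models
from PIL import Image

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)  # 2-class head
model.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])
flip = T.RandomHorizontalFlip(p=1.0)  # the augmentation the abstract cautions against

def class_activation_map(img, target_class):
    """Weight DenseNet121's final conv feature maps by the linear head's weights."""
    x = preprocess(img).unsqueeze(0)               # (1, 3, 224, 224)
    with torch.no_grad():
        feats = F.relu(model.features(x))          # (1, 1024, 7, 7)
    w = model.classifier.weight[target_class]      # (1024,)
    cam = F.relu(torch.einsum("bchw,c->bhw", feats, w)[0])  # (7, 7)
    return cam / (cam.max() + 1e-8)                # normalized saliency in [0, 1]

img = Image.open("fundus_right.jpg").convert("RGB")
cam_right = class_activation_map(img, target_class=0)          # original orientation
cam_flipped = class_activation_map(flip(img), target_class=1)  # flipped to "left"
print(cam_right.shape, cam_flipped.shape)  # torch.Size([7, 7]) twice
```

On a trained network, the abstract reports that such maps concentrate on the macula, which is consistent with flipped images remaining distinguishable from genuine contralateral eyes.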
