Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Park, Donghyeok | - |
| dc.contributor.author | Yeon, Sumin | - |
| dc.contributor.author | Seo, Hyeon | - |
| dc.contributor.author | Buu, Seok-Jun | - |
| dc.contributor.author | Lee, Suwon | - |
| dc.date.accessioned | 2025-03-13T05:00:18Z | - |
| dc.date.available | 2025-03-13T05:00:18Z | - |
| dc.date.issued | 2025-02 | - |
| dc.identifier.issn | 1526-1492 | - |
| dc.identifier.issn | 1526-1506 | - |
| dc.identifier.uri | https://scholarworks.gnu.ac.kr/handle/sw.gnu/77420 | - |
| dc.description.abstract | Recent research on adversarial attacks has primarily focused on white-box attack techniques, with limited exploration of black-box attack methods. Furthermore, many black-box research scenarios assume that the output label and probability distribution can be observed without imposing any constraints on the number of attack attempts. Unfortunately, this disregard for the real-world practicality of attacks, particularly their potential for human detectability, has left a gap in the research landscape. Considering these limitations, our study focuses on a similar-color attack method, assuming access only to the output label, limiting the number of attack attempts to 100, and subjecting the attacks to human perceptibility testing. Through this approach, we demonstrated the effectiveness of black-box attack techniques in deceiving models and achieved a success rate of 82.68% in deceiving humans. This study emphasizes the significance of research that addresses the challenge of deceiving both humans and models, highlighting the importance of real-world applicability. | - |
| dc.format.extent | 13 | - |
| dc.language | English | - |
| dc.language.iso | ENG | - |
| dc.publisher | Tech Science Press | - |
| dc.title | Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition | - |
| dc.type | Article | - |
| dc.publisher.location | United States | - |
| dc.identifier.doi | 10.32604/cmes.2025.061732 | - |
| dc.identifier.scopusid | 2-s2.0-105000657841 | - |
| dc.identifier.wosid | 001434673800001 | - |
| dc.identifier.bibliographicCitation | CMES - Computer Modeling in Engineering and Sciences, v.142, no.3, pp 2725 - 2737 | - |
| dc.citation.title | CMES - Computer Modeling in Engineering and Sciences | - |
| dc.citation.volume | 142 | - |
| dc.citation.number | 3 | - |
| dc.citation.startPage | 2725 | - |
| dc.citation.endPage | 2737 | - |
| dc.type.docType | Article | - |
| dc.description.isOpenAccess | N | - |
| dc.description.journalRegisteredClass | scie | - |
| dc.description.journalRegisteredClass | scopus | - |
| dc.relation.journalResearchArea | Engineering | - |
| dc.relation.journalResearchArea | Mathematics | - |
| dc.relation.journalWebOfScienceCategory | Engineering, Multidisciplinary | - |
| dc.relation.journalWebOfScienceCategory | Mathematics, Interdisciplinary Applications | - |
| dc.subject.keywordAuthor | Adversarial attacks | - |
| dc.subject.keywordAuthor | image recognition | - |
| dc.subject.keywordAuthor | information security | - |
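The abstract describes a black-box threat model in which the attacker observes only the predicted label, perturbs pixels toward visually similar colors, and is limited to 100 queries. The sketch below illustrates that setup with a toy stand-in classifier; `query_label`, the pixel-nudging step, and all parameter values are illustrative assumptions, not the paper's actual method.

```python
import random

def query_label(image):
    # Hypothetical stand-in for the black-box model: it returns only a
    # predicted label (no probabilities), matching the paper's threat
    # model. Here it is a toy rule based on mean pixel intensity.
    mean = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return "bright" if mean > 127 else "dark"

def similar_color_attack(image, true_label, max_queries=100, step=3):
    """Label-only black-box attack sketch: nudge one pixel at a time
    by a small amount (a visually 'similar color'), query the model,
    and stop once the label flips or the query budget is exhausted."""
    rng = random.Random(0)  # fixed seed for reproducibility
    adv = [row[:] for row in image]  # work on a copy of the input
    for queries_used in range(1, max_queries + 1):
        # Perturb a random pixel toward a nearby (similar) value,
        # clamped to the valid 0..255 intensity range.
        r = rng.randrange(len(adv))
        c = rng.randrange(len(adv[0]))
        adv[r][c] = max(0, min(255, adv[r][c] + rng.choice((-step, step))))
        if query_label(adv) != true_label:
            return adv, queries_used  # misclassified within budget
    return None, max_queries  # budget exhausted without success
```

The query budget is enforced by the loop bound, so the attacker never exceeds 100 model calls regardless of outcome, and the small per-pixel `step` keeps each change close to the original color.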
