Practical Adversarial Attacks Imperceptible to Humans in Visual Recognition
- Authors
- Park, Donghyeok; Yeon, Sumin; Seo, Hyeon; Buu, Seok-Jun; Lee, Suwon
- Issue Date
- Feb-2025
- Publisher
- Tech Science Press
- Keywords
- Adversarial attacks; image recognition; information security
- Citation
- CMES - Computer Modeling in Engineering and Sciences, v.142, no.3, pp. 2725-2737
- Pages
- 13
- Indexed
- SCIE; SCOPUS
- Journal Title
- CMES - Computer Modeling in Engineering and Sciences
- Volume
- 142
- Number
- 3
- Start Page
- 2725
- End Page
- 2737
- URI
- https://scholarworks.gnu.ac.kr/handle/sw.gnu/77420
- DOI
- 10.32604/cmes.2025.061732
- ISSN
- 1526-1492
1526-1506
- Abstract
- Recent research on adversarial attacks has focused primarily on white-box attack techniques, with limited exploration of black-box methods. Furthermore, many black-box studies assume that both the output label and the probability distribution can be observed, and they impose no constraint on the number of attack attempts. This disregard for the real-world practicality of attacks, particularly their detectability by humans, has left a gap in the research landscape. Addressing these limitations, our study uses a similar-color attack method, assumes access only to the output label, limits the number of attack attempts to 100, and subjects the attacks to human perceptibility testing. Through this approach, we demonstrated the effectiveness of black-box attack techniques in deceiving models and achieved a success rate of 82.68% in deceiving humans. This study emphasizes the significance of research that addresses the challenge of deceiving both humans and models, highlighting the importance of real-world applicability.
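The threat model described in the abstract (label-only feedback, at most 100 queries, and perturbations restricted to visually similar colors) can be illustrated with a minimal sketch. The model interface predict_label, the shift bound EPS, and the random-search loop below are illustrative assumptions, not the paper's published algorithm.

```python
# Minimal sketch of a label-only, query-limited black-box attack that
# perturbs an image with small, visually similar color shifts.
# `predict_label`, EPS, and the random-search strategy are assumptions
# for illustration, not the authors' method.

import numpy as np

EPS = 8 / 255        # assumed per-channel shift bound, kept small so colors look similar
QUERY_BUDGET = 100   # matches the paper's limit on attack attempts


def similar_color_attack(image, true_label, predict_label, rng=None):
    """Random search over small global color shifts.

    image:         float array in [0, 1], shape (H, W, 3)
    true_label:    label the model assigns to the clean image
    predict_label: callable image -> top-1 label (the only feedback allowed)
    Returns (adversarial_image, queries_used), or (None, QUERY_BUDGET) on failure.
    """
    rng = rng or np.random.default_rng()
    for query in range(1, QUERY_BUDGET + 1):
        # Draw one small RGB shift and apply it uniformly, so the candidate
        # remains a plausible, similarly colored version of the input.
        shift = rng.uniform(-EPS, EPS, size=3)
        candidate = np.clip(image + shift, 0.0, 1.0)
        if predict_label(candidate) != true_label:
            return candidate, query  # model deceived within the query budget
    return None, QUERY_BUDGET
```

A full evaluation along the paper's lines would additionally test whether human observers notice the shift, which is the perceptibility measurement the abstract reports.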
- Files in This Item
- There are no files associated with this item.
- Appears in Collections
- ETC > Journal Articles
