
Mitigating Adversarial Attack through Randomization Techniques and Image Smoothing

Authors
Kim, Hyeong-Gyeong; Choi, Sang-Min; Seo, Hyeon; Lee, Suwon
Issue Date
Jul-2025
Publisher
Tech Science Press
Keywords
Adversarial attacks; deep learning; artificial intelligence systems; random cropping; Gaussian filtering; image smoothing
Citation
Computers, Materials and Continua, vol. 84, no. 3, pp. 4381-4397
Pages
17
Indexed
SCIE
SCOPUS
Journal Title
Computers, Materials and Continua
Volume
84
Number
3
Start Page
4381
End Page
4397
URI
https://scholarworks.gnu.ac.kr/handle/sw.gnu/80006
DOI
10.32604/cmc.2025.067024
ISSN
1546-2218
1546-2226
Abstract
Adversarial attacks pose a significant threat to artificial intelligence systems by exploiting vulnerabilities in deep learning models. Existing defense mechanisms often suffer from drawbacks such as the need for model retraining, significant inference-time overhead, and limited effectiveness against specific attack types. Because a perfect defense against adversarial attacks remains elusive, mitigation strategies are essential. In this study, we propose a defense mechanism that applies random cropping and Gaussian filtering to input images to mitigate the impact of adversarial attacks. First, the image is randomly cropped to vary its dimensions and then placed at the center of a fixed 299 × 299 canvas, with the remaining area filled with zero padding. Next, Gaussian filtering with a 7 × 7 kernel and a standard deviation of two is applied via a convolution operation. Finally, the smoothed image is fed into the classification model. In the resulting visualizations, the proposed defense consistently lies in the upper-right region across all attack scenarios, demonstrating that it preserves classification performance on clean images while significantly mitigating adversarial attacks. This confirms that the proposed method is an effective and reliable defense against adversarial perturbations. Moreover, the method incurs minimal computational overhead, making it suitable for real-time applications. Furthermore, owing to its model-agnostic nature, it can be easily incorporated into various neural network architectures, serving as a fundamental module for adversarial defense strategies.
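The preprocessing pipeline described in the abstract (random crop, center on a zero-padded 299 × 299 canvas, then a 7 × 7 Gaussian filter with σ = 2) can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: in particular, the lower bound on the crop size (`min_ratio`) is an assumption, since the exact cropping range is not given in this record.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gaussian_kernel(size=7, sigma=2.0):
    """Normalized 2-D Gaussian kernel (7x7, sigma=2, per the abstract)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def random_crop_and_pad(img, out_size=299, min_ratio=0.8, rng=None):
    """Randomly crop, then center the crop on a zero-padded canvas.

    min_ratio (lower bound on crop size, as a fraction of each
    dimension) is an assumed parameter, not taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    ch = int(rng.integers(int(min_ratio * h), h + 1))
    cw = int(rng.integers(int(min_ratio * w), w + 1))
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    crop = img[y:y + ch, x:x + cw]
    canvas = np.zeros((out_size, out_size) + img.shape[2:], dtype=img.dtype)
    oy, ox = (out_size - ch) // 2, (out_size - cw) // 2
    canvas[oy:oy + ch, ox:ox + cw] = crop
    return canvas

def gaussian_smooth(img, kernel):
    """Per-channel 'same'-size convolution with zero padding at the borders."""
    pad = kernel.shape[0] // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)))
    windows = sliding_window_view(padded, kernel.shape, axis=(0, 1))
    return np.einsum('hwcij,ij->hwc', windows, kernel)

def defend(img, rng=None):
    """Full preprocessing: random crop + zero-pad, then Gaussian smoothing."""
    return gaussian_smooth(random_crop_and_pad(img, rng=rng), gaussian_kernel())
```

The smoothed 299 × 299 image would then be passed to the (unmodified) classifier, which is what makes the defense model-agnostic: no retraining is required, and the only cost is this input transformation.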
Appears in Collections
ETC > Journal Articles


Items in ScholarWorks are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Seo, Hyeon
College of IT Engineering (School of Computer Engineering)
