Augmented Smoothing for Robust CNN Image Classification against Adversarial Attacks
DOI:
https://doi.org/10.56979/1001/2025/1170
Keywords:
Adversarial Robustness, Convolutional Neural Networks, Smoothing, Denoising, Image Classification
Abstract
This paper presents a defence framework for improving the adversarial robustness of convolutional neural network (CNN) image classifiers through a combination of Feature Denoising Blocks (FDBs), Spatial Smoothing (SS), and Gaussian Data Augmentation (GDA). Unlike prior denoising-based defences, the proposed method explicitly integrates FDB modules into the CNN's feature stages and applies controlled smoothing at both the input and intermediate layers. The defence is evaluated under a clearly defined white-box threat model using FGSM, PGD, and DeepFool attacks on CIFAR-10 and a subset of ImageNet. Baselines include undefended CNNs, pure denoising, and pure smoothing. Results show that Augmented Smoothing improves adversarial accuracy while maintaining clean-image performance, with the combined system outperforming each individual component. Ablation studies confirm that FDB + SS yields the largest robustness gain. While the method does not provide certified robustness, it offers a practical and computationally lightweight defence for real-time CNN classification.
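The abstract does not include reference code, so the following is a minimal, hypothetical PyTorch sketch of the pipeline it describes: Gaussian-blur spatial smoothing at the input, a denoising block inserted after a CNN feature stage, and Gaussian noise augmentation applied only at training time. All module names, kernel sizes, and noise levels below are illustrative assumptions, not values taken from the paper.

# Hypothetical sketch of the Augmented Smoothing pipeline (FDB + SS + GDA).
# Module names and hyperparameters are illustrative assumptions; the paper
# does not publish reference code.
import torch
import torch.nn as nn
import torchvision.transforms as T

class FeatureDenoisingBlock(nn.Module):
    """Stand-in denoising block: a residual mean-filter + 1x1-conv stage.
    (The paper's exact FDB design is not specified in the abstract.)"""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.smooth = nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x + self.proj(self.smooth(x))  # residual denoising of features

class AugmentedSmoothingCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.input_smooth = T.GaussianBlur(kernel_size=3, sigma=1.0)  # SS at the input
        self.stage1 = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())
        self.fdb1 = FeatureDenoisingBlock(64)  # FDB after a feature stage
        self.stage2 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.input_smooth(x)          # smoothing at the input layer
        x = self.fdb1(self.stage1(x))     # denoising at an intermediate layer
        x = self.stage2(x).flatten(1)
        return self.head(x)

def gda(batch, sigma=0.05):
    """Gaussian Data Augmentation: add noise to training inputs only."""
    return batch + sigma * torch.randn_like(batch)

Under these assumptions, training would feed gda(images) through AugmentedSmoothingCNN with a standard cross-entropy loss; at test time the noise augmentation is dropped while the input smoothing and FDB remain in place.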
License
This is an Open Access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.