Zitao Chen, Pritam Dash, and Karthik Pattabiraman. To appear in the Proceedings of the 18th ACM ASIA Conference on Computer and Communications Security (ACM ASIACCS), 2023. (Acceptance Rate: 16%) [ PDF | Talk ]
Abstract: Adversarial patch attacks create adversarial examples by injecting arbitrary distortions within a bounded region of the input to fool deep neural networks (DNNs). These attacks are robust (i.e., physically realizable) and universally malicious, and hence represent a severe security threat to real-world DNN-based systems. We propose Jujutsu, a two-stage technique to detect and mitigate robust and universal adversarial patch attacks. We first observe that adversarial patches are crafted as localized features that exert a large influence on the prediction output, and continue to dominate the prediction regardless of the input. Jujutsu leverages this observation for accurate attack detection with low false positives. Because patch attacks corrupt only a localized region of the input while the majority of the input remains unperturbed, Jujutsu leverages generative adversarial networks (GANs) to perform localized attack recovery: it synthesizes the semantic contents of the input that were corrupted by the attack and reconstructs a "clean" input for correct prediction.
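The detection intuition above (a small region that dominates the model's attribution mass signals a patch) can be sketched roughly as follows. This is an illustrative simplification, not the paper's algorithm: the saliency map is assumed to be precomputed (e.g., by a feature-attribution method), the GAN inpainter is omitted, and `win` and `thresh` are hypothetical parameters.

```python
import numpy as np

def locate_dominant_region(saliency, win=2):
    """Slide a win x win window over the saliency map and return the
    (score, top-left corner) of the window with the largest attribution."""
    h, w = saliency.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(h - win + 1):
        for j in range(w - win + 1):
            s = saliency[i:i + win, j:j + win].sum()
            if s > best:
                best, best_pos = s, (i, j)
    return best, best_pos

def detect_patch(saliency, win=2, thresh=0.5):
    """Flag an input as attacked if the most influential win x win region
    accounts for more than `thresh` of the total attribution mass.
    Returns (is_attack, mask); the mask marks the suspected patch region,
    which a GAN-based inpainter would then fill with synthesized content."""
    total = saliency.sum()
    best, (i, j) = locate_dominant_region(saliency, win)
    is_attack = total > 0 and best / total > thresh
    mask = np.zeros_like(saliency, dtype=bool)
    if is_attack:
        mask[i:i + win, j:j + win] = True
    return is_attack, mask
```

On a benign input the attribution mass is spread out, so no single window crosses the threshold; a patch concentrates it in one window, which both triggers detection and yields the mask for the recovery stage.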
We evaluate Jujutsu on four diverse datasets spanning eight different DNN models, and find that it significantly outperforms four leading defenses. We further evaluate Jujutsu against physical-world attacks, as well as adaptive attacks.