Copy-paste attacks: Targeted physical world attacks on self-driving
DOI | |
Creator | Jian Qu |
Title | Copy-paste attacks: Targeted physical world attacks on self-driving |
Contributor | Chuanxiang Bi |
Publisher | Khon Kaen University, Thailand |
Publication Year | 2025 (B.E. 2568)
Journal Title | Asia-Pacific Journal of Science and Technology |
Journal Vol. | 30 |
Journal No. | 4 |
Page no. | 13 (16 pages) |
Keyword | Deep neural networks, Copy-Paste Attack, Adversarial attacks, ResNet26-CBAM, Adversarial training
URL Website | https://apst.kku.ac.th/ |
Website title | https://apst.kku.ac.th/copy-paste-attacks-targeted-physical-world-attacks-on-self-driving/ |
ISSN | 2539-6293 |
Abstract | Deep neural networks (DNNs) are susceptible to adversarial attacks, ranging from imperceptible perturbations to tiny, inconspicuous modifications that cause the networks to output errors. Although many adversarial attack methods have been proposed, most cannot be easily applied in the physical (real) world because they rely on overly detailed images that cannot be printed at a normal scale. In this paper, we propose a novel physical adversarial attack, the Copy-Paste Attack (CPA), which copies pattern elements from other images to make stickers and pastes them on the attack target. The attack can be printed out and deployed in the physical world, and it reduces the recognition accuracy of deep neural networks by making the model misclassify traffic signs as the attack pattern. We conducted our experiments with a model intelligent car in the physical world, testing three well-known DNN models on three different datasets. The experimental results demonstrate that our proposed CPAs greatly interfere with the traffic-sign recognition rate, and that our CPAs outperform the existing method PR2. Furthermore, we tested one of our previous models, ResNet26 with a convolutional block attention module (CBAM); although it exhibits higher robustness against the CPA than other well-known CNN models, our ResNet26-CBAM was also misled by CPAs, with its accuracy dropping to 60%. In addition, we trained the CNN models with adversarial training as a physical defense; however, it had little effect against our CPA attacks.
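
As described in the abstract, the attack amounts to cropping a salient pattern from an image of the attacker's target class and compositing it onto a traffic-sign photo before printing. The sketch below illustrates that pipeline in Python with Pillow; it is a minimal digital simulation under stated assumptions, not the paper's implementation, and the file names, crop box, and paste position are illustrative placeholders.

```python
# Minimal sketch of a copy-paste sticker attack (digital simulation).
# Assumptions: `stop_sign.jpg` and `speed_limit_60.jpg` are hypothetical
# input photos; the crop box and paste position are illustrative values.
from PIL import Image


def make_copy_paste_sticker(donor: Image.Image, box: tuple) -> Image.Image:
    """Crop a visually salient pattern from a donor image of the
    attacker's target class; this crop becomes the printable sticker."""
    return donor.crop(box)  # box = (left, upper, right, lower)


def apply_sticker(sign: Image.Image, sticker: Image.Image,
                  position: tuple, scale: float = 0.3) -> Image.Image:
    """Paste the sticker onto a traffic-sign photo. In the physical
    experiment this corresponds to printing the sticker and fixing it
    to the real sign; here we only composite the images digitally."""
    w, h = sign.size
    side = int(min(w, h) * scale)            # sticker size relative to the sign
    patch = sticker.resize((side, side))
    attacked = sign.copy()
    attacked.paste(patch, position)          # position = (left, upper)
    return attacked


if __name__ == "__main__":
    sign = Image.open("stop_sign.jpg")           # sign to be attacked
    donor = Image.open("speed_limit_60.jpg")     # pattern source (target class)
    sticker = make_copy_paste_sticker(donor, box=(40, 40, 140, 140))
    attacked = apply_sticker(sign, sticker, position=(60, 60))
    attacked.save("attacked_sign.png")           # print this for a physical test
```

In the paper's setting, the composited sticker would then be printed and fixed to the real sign, and a DNN classifier on the model car would be evaluated on camera images of the modified sign.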