Dangerous Art Paintings: Artistic Adversarial Attacks on Salient Object Detection

Published in 2023 4th International Conference on Computer Engineering and Intelligent Control (ICCEIC), 2023

Recently, deep Convolutional Neural Networks (CNNs) have been found to be vulnerable to inputs with imperceptible perturbations. However, existing adversarial attacks add noise that conforms to certain distributions, which can be easily detected and defended against. In this paper, we propose a novel attack approach for Salient Object Detection (SOD) networks: artistic adversarial attacks. Inspired by the style transfer task, we add learnable adversarial noise by changing the texture of the image. Specifically, we propose a hybrid loss function that embeds attack texture into the original image. Meanwhile, we present a color fusion method that makes the adversarial texture more imperceptible and authentic. We adopt a white-box training and black-box testing strategy to simulate real attacks. Experiments demonstrate that our style-transferred images fool 10 state-of-the-art detection networks while remaining sufficiently authentic and natural. The source code is available at: https://github.com/NorthForest233/Art.
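
For illustration only, the minimal PyTorch sketch below shows one way such a hybrid loss could be assembled: a Gram-matrix style term that pulls the image toward an art reference, plus an attack term that pushes a SOD model's prediction away from the ground-truth saliency mask. The feature layers, loss weights (`alpha`, `beta`), and model interfaces are hypothetical placeholders, not the paper's actual formulation (see the released code for that).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


def gram(feat):
    """Gram matrix of a feature map, normalised by its size."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)


class VGGFeatures(nn.Module):
    """Extract intermediate VGG-16 features for the style term (layer choice is arbitrary here)."""
    def __init__(self, layers=(3, 8, 15, 22)):
        super().__init__()
        self.vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
        self.layers = set(layers)
        for p in self.vgg.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layers:
                feats.append(x)
        return feats


def hybrid_loss(sod_model, feat_net, x_adv, x_style, gt_mask, alpha=1.0, beta=10.0):
    """Hypothetical hybrid objective: style transfer term + SOD attack term."""
    # Style term: match Gram matrices of the adversarial image and the art reference.
    style = sum(F.mse_loss(gram(fa), gram(fs))
                for fa, fs in zip(feat_net(x_adv), feat_net(x_style)))
    # Attack term: reward saliency predictions that diverge from the ground-truth mask.
    pred = sod_model(x_adv)                      # expected to be in [0, 1]
    attack = -F.binary_cross_entropy(pred, gt_mask)
    return alpha * style + beta * attack

# Typical use (sketch): treat x_adv as a learnable tensor initialised from the
# original image and minimise hybrid_loss with an optimiser such as Adam.
```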

Recommended citation: Chen Q, Zhao H, Wan C, et al. Dangerous Art Paintings: Artistic Adversarial Attacks on Salient Object Detection[C]//2023 4th International Conference on Computer Engineering and Intelligent Control (ICCEIC). IEEE, 2023: 198-204.