Black-box attack; Convolutional Neural Network; High-resolution adversarial image; Noise Blowing-Up method; Automatic classification; Clean images; Zone of interest; Artificial Intelligence; Information Systems; Electrical and Electronic Engineering; Control and Optimization; Instrumentation
Abstract :
[en] Trained convolutional neural networks (CNNs) are among the leading tools used for the automatic classification of images. They are nevertheless exposed to attacks: given an input clean image classified by a CNN in a category, carefully designed adversarial images may lead CNNs to erroneous classifications, although humans would still classify the constructed adversarial images 'correctly', in the same category as the input image. Currently, most attacks are performed in the image input-size domain of the considered CNN, which is usually small. However, due to privacy concerns with personal images on social media, there is a demand for generating large adversarial images that preserve the visual information of the original images with the highest possible quality, while preventing automatic tracking and personal identification. Creating large adversarial images is difficult because of speed, adversity, and visual-quality challenges, in particular when humans must be unable to notice any difference between the adversarial images and the original clean images. This paper describes the zone-of-interest generic strategy, which aims at drastically increasing the efficiency of any type of attack (white-box or black-box, untargeted or targeted) and of any specific attack (FGSM, PGD, BIM, SimBA, AdvGAN, EA-based attacks, etc.) on CNNs. Instead of exploring the full image, the strategy identifies zones on which to focus the attacks. Although it applies to images of any size, the strategy is especially valuable for large high-resolution images. It can be combined with other generic approaches, such as the noise blowing-up method, to further improve attack performance.
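The core idea of restricting an attack to a zone rather than the full image can be illustrated with a minimal NumPy sketch. This is a hypothetical illustration, not the paper's actual procedure: the zone coordinates are assumed to be given (the paper's zone-selection strategy is not reproduced here), and an FGSM-style sign perturbation stands in for whichever specific attack is being focused.

```python
import numpy as np

def zone_masked_perturbation(image, grad, zone, epsilon=0.03):
    """Apply an FGSM-style sign perturbation only inside a zone of interest.

    image : array in [0, 1], shape (H, W) or (H, W, C)
    grad  : loss gradient w.r.t. the image, same shape as `image`
    zone  : (row, col, height, width) of the zone to attack
            (hypothetical interface for illustration)
    """
    mask = np.zeros_like(image)
    r, c, h, w = zone
    mask[r:r + h, c:c + w] = 1.0          # perturb only inside the zone
    adv = image + epsilon * np.sign(grad) * mask
    return np.clip(adv, 0.0, 1.0)         # stay in the valid pixel range

# Toy usage: an 8x8 "image" with a uniform gradient; only the 3x3 zone
# starting at (2, 2) is perturbed, the rest of the image is untouched.
img = np.zeros((8, 8))
grad = np.ones((8, 8))
adv = zone_masked_perturbation(img, grad, zone=(2, 2, 3, 3), epsilon=0.1)
```

Because the perturbation budget is spent on a small region instead of the whole pixel domain, a search-based attack (e.g. SimBA or an EA-based attack) has far fewer coordinates to explore, which is the efficiency gain the strategy targets.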
Disciplines :
Computer science
Author, co-author :
LEPREVOST, Franck ✱; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
TOPAL, Ali Osman ✱; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
MANCELLARI, Enea ✱; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Bibliography
Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., et al. : TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467 (2016)
Brendel, W., Bethge, M. : Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet. arXiv preprint arXiv:1904.00760 (2019)
Carlini, N., Wagner, D. : Towards Evaluating the Robustness of Neural Networks. In: 2017 IEEE Symposium on Security and Privacy (SP). pp. 39-57. IEEE (2017)
Chollet, F., et al. : Keras. https://keras.io (2015)
Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., Fei-Fei, L. : The ImageNet Image Database (2009), http://image-net.org
Goodfellow, I. J., Shlens, J., Szegedy, C. : Explaining and harnessing adversarial examples. CoRR abs/1412.6572 (2015), http://arxiv.org/abs/1412.6572
Guo, C., Gardner, J., You, Y., Wilson, A. G., Weinberger, K. : Simple black-box adversarial attacks. In: International Conference on Machine Learning. pp. 2484-2493. PMLR (2019)
He, K., Zhang, X., Ren, S., Sun, J. : Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 770-778 (2016)
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H. : MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
Hu, W., Tan, Y. : Generating adversarial malware examples for black-box attacks based on GAN. In: Data Mining and Big Data: 7th International Conference, DMBD 2022, Beijing, China, November 21-24, 2022, Proceedings, Part II. pp. 409-423. Springer (2023)
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K. Q. : Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 4700-4708 (2017)
Jere, M., Rossi, L., Hitaj, B., Ciocarlie, G., Boracchi, G., Koushanfar, F. : Scratch that! An evolution-based adversarial attack against neural networks. arXiv preprint arXiv:1912.02316 (2019)
Krizhevsky, A., Nair, V., Hinton, G. : CIFAR-10 (Canadian Institute for Advanced Research), http://www.cs.toronto.edu/~kriz/cifar.html
Kurakin, A., Goodfellow, I. J., Bengio, S. : Adversarial examples in the physical world. CoRR abs/1607.02533 (2016), http://arxiv.org/abs/1607.02533
Leprevost, F., Topal, A. O., Mancellari, E. : Creating High-Resolution Adversarial Images Against Convolutional Neural Networks with the Noise Blowing-Up Method. In: Intelligent Information and Database Systems, 15th Asian Conference, ACIIDS 2023, Phuket, Thailand, July 24-26, 2023, Proceedings. Springer (To appear)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A. : Towards deep learning models resistant to adversarial attacks. CoRR abs/1706.06083 (2019), http://arxiv.org/abs/1706.06083
Oliphant, T. E. : A guide to NumPy. Trelgol Publishing USA (2006)
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., Swami, A. : The limitations of deep learning in adversarial settings. In: 2016 IEEE European Symposium on Security and Privacy (EuroS&P). pp. 372-387. IEEE (2016), https://ieeexplore.ieee.org/document/7467366
Simonyan, K., Zisserman, A. : Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Su, J., Vargas, D. V., Sakurai, K. : One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation 23 (5), 828-841 (2019)
Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z. : Rethinking the inception architecture for computer vision. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 2818-2826 (2016), https://ieeexplore.ieee.org/document/7780677
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R. : Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Targonski, C. : TensorFlow implementation of generating adversarial examples with adversarial networks (2019), https://github.com/ctargon/AdvGAN-tf/
Topal, A. O., Chitic, R., Leprevost, F. : One evolutionary algorithm deceives humans and ten convolutional neural networks trained on ImageNet at image recognition. Applied Soft Computing 143, 110397 (2023). https://doi.org/10.1016/j.asoc.2023.110397, https://www.sciencedirect.com/science/article/pii/S1568494623004155
Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jegou, H. : Training data-efficient image transformers & distillation through attention. In: International Conference on Machine Learning. pp. 10347-10357. PMLR (2021)
Van Rossum, G., Drake, F. L. : Python 3 Reference Manual. CreateSpace, Scotts Valley, CA (2009), https://dl.acm.org/doi/book/10.5555/1593511
Van der Walt, S., Schonberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., Gouillart, E., Yu, T., the scikit-image contributors : scikit-image: image processing in Python. PeerJ 2, e453 (2014). https://doi.org/10.7717/peerj.453
Wu, J. : Generating adversarial examples in the harsh conditions. CoRR abs/1908.11332 (2020), https://arxiv.org/abs/1908.11332
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A. : Learning deep features for discriminative localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 2921-2929 (2016)
Zoph, B., Vasudevan, V., Shlens, J., Le, Q. V. : Learning transferable architectures for scalable image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 8697-8710 (2018)