Chitic, Raluca Ioana ; University of Luxembourg > Faculty of Science, Technology and Communication (FSTC)
Bernard, Nicolas
Leprévost, Franck ; University of Luxembourg > Faculty of Science, Technology and Communication (FSTC) > Computer Science and Communications Research Unit (CSC)
External co-authors :
no
Language :
English
Title :
A proof of concept to deceive humans and machines at image classification with evolutionary algorithms
Publication date :
2020
Event name :
12th Asian Conference on Intelligent Information and Database Systems
Akhtar, N., Mian, A.: Threat of adversarial attacks on deep learning in computer vision: a survey. CoRR abs/1801.00553 (2018). http://arxiv.org/abs/1801.00553
Bernard, N., Leprévost, F.: Evolutionary algorithms for convolutional neural network visualisation. In: Meneses, E., Castro, H., Barrios Hernández, C.J., Ramos-Pollan, R. (eds.) CARLA 2018. CCIS, vol. 979, pp. 18–32. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16205-4_2
Bernard, N., Leprévost, F.: How evolutionary algorithms and information hiding deceive machines and humans for image recognition: a research program. In: Proceedings of the OLA 2019 International Conference on Optimization and Learning, Bangkok, Thailand, 29–31 January 2019, pp. 12–15 (2019)
Chitic, R., Bernard, N., Leprévost, F.: Experimental evidence of neural networks being fooled by evolved images. Work in Progress (2019–2020)
Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: The ImageNet image database (2009). http://image-net.org
Fawzi, A., Fawzi, H., Fawzi, O.: Adversarial vulnerability for any classifier. CoRR abs/1802.08686 (2018). http://arxiv.org/abs/1802.08686
Fawzi, A., Moosavi-Dezfooli, S., Frossard, P.: Robustness of classifiers: from adversarial to random noise. In: Lee, D.D., Sugiyama, M., von Luxburg, U., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain, 5–10 December 2016, pp. 1624–1632 (2016). http://papers.nips.cc/paper/6331-robustness-of-classifiers-from-adversarial-to-random-noise
Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., Brendel, W.: ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In: International Conference on Learning Representations (2019). https://openreview.net/forum?id=Bygh9j09KX
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016). http://www.deeplearningbook.org
Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582. IEEE (2016)
Nguyen, A., Yosinski, J., Clune, J.: Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. In: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 427–436 (2015)
Oliphant, T.E.: A Guide to NumPy. Trelgol Publishing, New York (2006)
Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE (2016)
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, pp. 506–519. ACM (2017). https://doi.org/10.1145/3052973.3053009
Shafahi, A., Huang, W.R., Studer, C., Feizi, S., Goldstein, T.: Are adversarial examples inevitable? CoRR abs/1809.02104 (2018). http://arxiv.org/abs/1809.02104
Shamir, A., Safran, I., Ronen, E., Dunkelman, O.: A simple explanation for the existence of adversarial examples with small Hamming distance. CoRR abs/1901.10861 (2019). http://arxiv.org/abs/1901.10861
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. CoRR abs/1409.1556 (2014). http://arxiv.org/abs/1409.1556
Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. CoRR abs/1710.08864 (2017)
Szegedy, C., et al.: Going deeper with convolutions. CoRR abs/1409.4842 (2014). http://arxiv.org/abs/1409.4842
Szegedy, C., et al.: Intriguing properties of neural networks. CoRR abs/1312.6199 (2013). http://arxiv.org/abs/1312.6199
Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1701–1708. IEEE (2014)
Tramèr, F., Zhang, F., Juels, A., Reiter, M.K., Ristenpart, T.: Stealing machine learning models via prediction APIs. In: Proceedings of the 25th USENIX Security Symposium, Austin, TX, USA, 10–12 August 2016, pp. 601–618. USENIX (2016)
Varrette, S., Bouvry, P., Cartiaux, H., Georgatos, F.: Management of an academic HPC cluster: the UL experience. In: Proceedings of the 2014 International Conference on High Performance Computing & Simulation (HPCS 2014), pp. 959–967. IEEE, Bologna, July 2014. https://hpc.uni.lu
van der Walt, S., et al.: scikit-image: image processing in Python. PeerJ 2, e453 (2014). https://doi.org/10.7717/peerj.453