Paper published in a journal (Scientific congresses, symposiums and conference proceedings)
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Gubri, Martin; Cordy, Maxime; Papadakis, Mike et al.
2022. In: The 38th Conference on Uncertainty in Artificial Intelligence
Peer reviewed
 

Files


Full Text
2011.05074.pdf
Author preprint (910.79 kB)
Preprint arXiv
Details



Keywords :
Machine Learning; Adversarial examples; Bayesian; Neural Networks; Deep Learning; Transferability
Abstract :
[en] An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on an ensemble-based surrogate to increase diversity. We argue that transferability is fundamentally related to uncertainty. Based on a state-of-the-art Bayesian Deep Learning technique, we propose a new method to efficiently build a surrogate by sampling approximately from the posterior distribution of neural network weights, which represents the belief about the value of each parameter. Our extensive experiments on ImageNet, CIFAR-10 and MNIST show that our approach improves the success rates of four state-of-the-art attacks significantly (by up to 83.2 percentage points), for both intra-architecture and inter-architecture transferability. On ImageNet, our approach can reach a success rate of 94% while reducing training computations from 11.6 to 2.4 exaflops, compared to an ensemble of independently trained DNNs. Our vanilla surrogate achieves higher transferability than three test-time techniques designed for this purpose in 87.5% of cases. Our work demonstrates that the way a surrogate is trained has been overlooked, although it is an important element of transfer-based attacks. We are, therefore, the first to review the effectiveness of several training methods in increasing transferability. We provide new directions to better understand the transferability phenomenon and offer a simple but strong baseline for future work.
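The core idea of the abstract — averaging the input gradient over weight samples drawn from an approximate posterior, instead of over independently trained ensemble members — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses a hypothetical one-layer logistic model with a hand-picked diagonal Gaussian as the approximate posterior, and a single FGSM-style sign step as the attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate: a linear logit w @ x with a diagonal Gaussian
# approximate posterior over the weights (mean and std chosen for illustration).
w_mean = rng.normal(size=5)
w_std = 0.1 * np.ones(5)

def grad_wrt_input(w, x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))   # sigmoid probability
    return (p - y) * w

x = rng.normal(size=5)   # clean input
y = 1                    # true label
eps = 0.1                # L_inf perturbation budget

# Average the input gradient over weight samples from the approximate
# posterior (the "Bayesian surrogate"), then take one sign step (FGSM).
grads = [grad_wrt_input(w_mean + w_std * rng.normal(size=5), x, y)
         for _ in range(20)]
x_adv = x + eps * np.sign(np.mean(grads, axis=0))
```

The sign step keeps `x_adv` within the L_inf ball of radius `eps` around `x`; in the ensemble baseline the paper compares against, each gradient would instead come from a separately trained network, which is where the reported training-compute savings arise.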
Disciplines :
Computer science
Author, co-author :
Gubri, Martin ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
Cordy, Maxime  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
Papadakis, Mike ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Le Traon, Yves ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
Sen, Koushik;  University of California, Berkeley > Computer Sciences Division
External co-authors :
yes
Language :
English
Title :
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Publication date :
2022
Event name :
The 38th Conference on Uncertainty in Artificial Intelligence (UAI 2022)
Event date :
from 01-08-2022 to 05-08-2022
Audience :
International
Journal title :
The 38th Conference on Uncertainty in Artificial Intelligence
Peer reviewed :
Peer reviewed
Focus Area :
Security, Reliability and Trust
FnR Project :
FNR12669767 - Testing Self-learning Systems, 2018 (01/09/2019-31/08/2022) - Yves Le Traon
Available on ORBilu :
since 04 January 2022

