References of "Gubri, Martin 50036374"
Full Text
Peer Reviewed
LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity
Gubri, Martin UL; Cordy, Maxime UL; Papadakis, Mike UL et al

in Computer Vision -- ECCV 2022 (2022)

We propose transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks. LGV starts from a pretrained surrogate model and collects multiple weight sets from a few additional training epochs with a constant and high learning rate. LGV exploits two geometric properties that we relate to transferability. First, models that belong to a wider weight optimum are better surrogates. Second, we identify a subspace able to generate an effective surrogate ensemble among this wider optimum. Through extensive experiments, we show that LGV alone outperforms all (combinations of) four established test-time transformations by 1.8 to 59.9 percentage points. Our findings shed new light on the importance of the geometry of the weight space to explain the transferability of adversarial examples.
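A minimal sketch of the weight-collection step described in the abstract, assuming a PyTorch surrogate; the function name, hyperparameters (learning rate, number of epochs, snapshot frequency) and the ResNet-50 usage line are illustrative placeholders, not the authors' exact settings:

import torch

def collect_lgv_weights(model, train_loader, n_epochs=10, lr=0.05,
                        snapshots_per_epoch=4, device="cuda"):
    """Fine-tune a pretrained surrogate with a constant, high learning rate
    and snapshot the weights along the trajectory (illustrative values)."""
    model = model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                                weight_decay=1e-4)
    criterion = torch.nn.CrossEntropyLoss()
    snap_every = max(1, len(train_loader) // snapshots_per_epoch)
    snapshots = []

    for _ in range(n_epochs):
        for step, (x, y) in enumerate(train_loader):
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            if (step + 1) % snap_every == 0:
                # Keep a CPU copy of the current weights for the surrogate ensemble.
                snapshots.append({k: v.detach().cpu().clone()
                                  for k, v in model.state_dict().items()})
    return snapshots

# Illustrative usage with a pretrained ImageNet surrogate:
# import torchvision
# surrogate = torchvision.models.resnet50(weights="IMAGENET1K_V1")
# weight_sets = collect_lgv_weights(surrogate, imagenet_train_loader)

The collected weight sets can then act as an ensemble of surrogates when crafting adversarial examples; a sketch of that attack-time use appears after the next abstract.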

Full Text
Peer Reviewed
Efficient and Transferable Adversarial Examples from Bayesian Neural Networks
Gubri, Martin UL; Cordy, Maxime UL; Papadakis, Mike UL et al

in The 38th Conference on Uncertainty in Artificial Intelligence (2022)

An established way to improve the transferability of black-box evasion attacks is to craft the adversarial examples on an ensemble-based surrogate to increase diversity. We argue that transferability is fundamentally related to uncertainty. Based on a state-of-the-art Bayesian Deep Learning technique, we propose a new method to efficiently build a surrogate by sampling approximately from the posterior distribution of neural network weights, which represents the belief about the value of each parameter. Our extensive experiments on ImageNet, CIFAR-10 and MNIST show that our approach improves the success rates of four state-of-the-art attacks significantly (up to 83.2 percentage points), in both intra-architecture and inter-architecture transferability. On ImageNet, our approach can reach a 94% success rate while reducing training computations from 11.6 to 2.4 exaflops, compared to an ensemble of independently trained DNNs. Our vanilla surrogate achieves higher transferability than three test-time techniques designed for this purpose in 87.5% of cases. Our work demonstrates that the way a surrogate is trained has been overlooked, although it is an important element of transfer-based attacks. We are, therefore, the first to review the effectiveness of several training methods in increasing transferability. We provide new directions to better understand the transferability phenomenon and offer a simple but strong baseline for future work.
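The abstract describes sampling weight sets approximately from the posterior over network weights; the sketch below does not implement any particular Bayesian Deep Learning technique and instead assumes such weight samples are already available (for example, the snapshots collected above). It only illustrates the attack-time side, averaging the loss gradient over the sampled models inside a plain iterative FGSM; the function name, epsilon, step size and iteration count are hypothetical:

import torch

def ifgsm_over_weight_samples(model, weight_samples, x, y, eps=8/255,
                              step_size=2/255, n_iter=10, device="cuda"):
    """Iterative FGSM whose gradient is averaged over sampled weight sets."""
    model = model.to(device).eval()
    criterion = torch.nn.CrossEntropyLoss()
    x, y = x.to(device), y.to(device)
    x_adv = x.clone()

    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for state_dict in weight_samples:
            # Load one sampled weight set into the shared architecture.
            model.load_state_dict({k: v.to(device) for k, v in state_dict.items()})
            loss = criterion(model(x_adv), y)
            grad += torch.autograd.grad(loss, x_adv)[0]
        grad /= len(weight_samples)
        with torch.no_grad():
            # Ascend the averaged gradient, then project back into the L-inf ball.
            x_adv = x_adv + step_size * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

# Illustrative usage (untargeted attack on one batch):
# x_adv = ifgsm_over_weight_samples(surrogate, weight_sets, images, labels)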

Full Text
Peer Reviewed
Search-based adversarial testing and improvement of constrained credit scoring systems
Ghamizi, Salah UL; Cordy, Maxime UL; Gubri, Martin UL et al

in ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE '20), November 8-13, 2020 (2020)
