Eprint already available on another site (E-prints, Working papers and Research blog)
Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability
GUBRI, Martin; CORDY, Maxime; LE TRAON, Yves
2023
 

Files


Full Text: 2304.02688.pdf, author preprint (777 kB)

Details



Abstract:
Transferability is the property of adversarial examples to be misclassified by models other than the surrogate model for which they were crafted. Previous research has shown that transferability increases substantially when the training of the surrogate model is stopped early. A common hypothesis to explain this is that the later training epochs are when models learn the non-robust features that adversarial attacks exploit; hence, an early-stopped model is more robust (and therefore a better surrogate) than a fully trained one. We demonstrate that the reasons why early stopping improves transferability lie in its side effects on the learning dynamics of the model. We first show that early stopping benefits transferability even for models that learn from data with non-robust features. We then establish links between transferability and the exploration of the loss landscape in parameter space, on which early stopping has an inherent effect. More precisely, we observe that transferability peaks when the learning rate decays, which is also the point at which the sharpness of the loss drops significantly. This leads us to propose RFN, a new approach that minimizes loss sharpness during training in order to maximize transferability. We show that, by searching for large flat neighborhoods, RFN always improves over early stopping (by up to 47 points of transferability rate) and is competitive with, if not better than, strong state-of-the-art baselines.
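
The RFN procedure itself is only summarized above, not specified in this record. As a rough illustration of the general idea of biasing surrogate training towards flat regions of the loss landscape in parameter space, the sketch below applies a SAM-style sharpness-aware update in PyTorch: the weights are perturbed towards the locally worst direction within a small ball before the descent step is taken. The function name, the radius rho, and the choice of a SAM-style update are illustrative assumptions, not the authors' implementation.

# Hypothetical sketch (not the paper's RFN implementation): one SAM-style
# sharpness-aware training step for a surrogate model, assuming PyTorch.
import torch
import torch.nn.functional as F

def flatness_aware_step(model, x, y, optimizer, rho=0.05):
    """Perturb the weights towards the locally worst direction within a ball
    of radius rho, compute the loss there, then descend from the original
    weights. This biases training towards flat neighborhoods of the loss."""
    # 1) Gradient of the loss at the current weights.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in model.parameters() if p.grad is not None]))

    # 2) Ascend to the (approximate) worst point of the neighborhood.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    optimizer.zero_grad()

    # 3) Gradient of the perturbed loss, restore the weights, then step.
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

A surrogate trained with such updates would then be used in the usual transfer setting: adversarial examples are crafted against it (for example with PGD) and evaluated on independently trained target models to measure the transferability rate.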
Research center:
- Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal - Security, Reasoning & Validation
Disciplines:
Mathematics
Computer science
Author, co-author:
GUBRI, Martin; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
CORDY, Maxime; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
LE TRAON, Yves; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
Language:
English
Title:
Going Further: Flatness at the Rescue of Early Stopping for Adversarial Example Transferability
Publication date:
April 2023
Focus Area:
Computational Sciences
FnR Project:
FNR12669767 - Testing Self-learning Systems, 2018 (01/09/2019-31/08/2022) - Yves Le Traon
Available on ORBilu:
since 28 June 2023
