[en] While the literature on security attacks and defenses of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concerns about the under-explored field of realistic adversarial attacks and their implications for the robustness of real-world systems. Our paper paves the way for a better understanding of adversarial robustness against realistic attacks and makes two major contributions. First, we conduct a study on three real-world use cases (text classification, botnet detection, malware detection) and five datasets in order to evaluate whether unrealistic adversarial examples can be used to protect models against realistic examples. Our results reveal discrepancies across the use cases, where unrealistic examples can be as effective as realistic ones or may offer only limited improvement. Second, to explain these results, we analyze the latent representations of the adversarial examples generated with realistic and unrealistic attacks. We shed light on the patterns that discriminate which unrealistic examples can be used for effective hardening. We release our code, datasets, and models to support future research in exploring how to reduce the gap between unrealistic and realistic adversarial attacks.
Research center :
ULHPC - University of Luxembourg: High Performance Computing Interdisciplinary Centre for Security, Reliability and Trust (SnT) > Other
Disciplines :
Computer science
Author, co-author :
DYRMISHI, Salijona ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
GHAMIZI, Salah ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
SIMONETTO, Thibault Jean Angel ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
LE TRAON, Yves ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
CORDY, Maxime ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
External co-authors :
no
Language :
English
Title :
On the empirical effectiveness of unrealistic adversarial hardening against realistic adversarial attacks
Publication date :
2023
Event name :
IEEE Symposium on Security and Privacy 2023
Event date :
from 22-05-2023 to 26-05-2023
Audience :
International
Main work title :
Conference Proceedings 2023 IEEE Symposium on Security and Privacy (SP)