Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
ALECCI, Marco; Conti, Mauro; Marchiori, Francesco; Martinelli, Luca; Pajola, Luca
2023. In: Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2023)
Peer reviewed
 

Files


Full Text
3607199.3607227.pdf
Author postprint (1.31 MB)
Details



Keywords :
Adversarial Attacks; Adversarial Machine Learning; Evasion Attacks; Surrogate Model; Transferability; Ground Truth; Machine Learning Models; Modeling Architecture; Side Effect; Human-Computer Interaction; Computer Networks and Communications; Computer Vision and Pattern Recognition; Software
Abstract :
[en] Evasion attacks are a threat to machine learning models: adversaries attempt to mislead classifiers by injecting malicious samples. An alarming side effect of evasion attacks is their ability to transfer among different models; this property is called transferability. An attacker can therefore craft adversarial samples on a custom model (the surrogate) and later use them to attack a victim organization's model. Although the literature widely discusses how adversaries can transfer their attacks, the experimental settings are limited and far from reality. For instance, many experiments assume that attacker and defender share the same dataset, balance level (i.e., how the ground truth is distributed), and model architecture. In this work, we propose the DUMB attacker model. This framework allows analyzing whether evasion attacks fail to transfer when the training conditions of surrogate and victim models differ. DUMB considers the following conditions: Dataset soUrces, Model architecture, and the Balance of the ground truth. We then propose a novel testbed to evaluate many state-of-the-art evasion attacks with DUMB; the testbed consists of three computer vision tasks with two distinct datasets each, four balance levels, and three model architectures. Our analysis, which generated 13K tests over 14 distinct attacks, led to numerous novel findings on transferable attacks with surrogate models. In particular, mismatches between attacker and victim in dataset source, balance level, or model architecture lead to a non-negligible loss of attack performance.
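The DUMB attacker model varies three training conditions between the surrogate and the victim: dataset source, model architecture, and ground-truth balance. As a minimal illustrative sketch (the names below are hypothetical and not taken from the paper's artifacts), the 2^3 = 8 match/mismatch scenarios such a framework covers can be enumerated as:

```python
from itertools import product

# The three DUMB conditions that may match or differ between
# the surrogate and victim models (labels are illustrative).
CONDITIONS = ("dataset_source", "model_architecture", "balance")

def dumb_scenarios():
    """Enumerate all 2^3 = 8 surrogate/victim scenarios.

    True means the condition is shared by attacker and victim;
    False means it differs (a mismatch).
    """
    return [dict(zip(CONDITIONS, flags))
            for flags in product((True, False), repeat=len(CONDITIONS))]

scenarios = dumb_scenarios()
print(len(scenarios))  # 8, from full match down to full mismatch
```

The fully matched scenario corresponds to the idealized setting common in prior work; the other seven capture the more realistic mismatched settings the testbed evaluates.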
Disciplines :
Computer science
Author, co-author :
ALECCI, Marco  ;  University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > TruX
Conti, Mauro ;  University of Padova, Padua, Italy
Marchiori, Francesco ;  University of Padova, Padua, Italy
Martinelli, Luca ;  University of Padova, Padua, Italy
Pajola, Luca ;  SnT, University of Luxembourg, Luxembourg, Luxembourg
External co-authors :
yes
Language :
English
Title :
Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability
Publication date :
16 October 2023
Event name :
26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2023)
Event organizer :
Hong Kong Polytechnic University
Event place :
Hong Kong, Hong Kong SAR China
Event date :
16 October 2023 to 18 October 2023
Audience :
International
Main work title :
Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, RAID 2023
Publisher :
Association for Computing Machinery
ISBN/EAN :
9798400707650
Peer reviewed :
Peer reviewed
Available on ORBilu :
since 22 November 2023

Statistics

Number of views : 85 (2 by Unilu)
Number of downloads : 30 (0 by Unilu)
Scopus citations® : 10 (without self-citations: 4)
OpenAlex citations : 10