Paper published in a book (Scientific congresses, symposiums and conference proceedings)
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data
SIMONETTO, Thibault Jean Angel; GHAMIZI, Salah; CORDY, Maxime
2024, in Proceedings of The Thirty-Eighth Annual Conference on Neural Information Processing Systems
Peer reviewed
 

Files


Full Text
constrained_adaptive_attack_effective_adversarial_attack_against_deep_neural_networks_for_tabular_data.pdf
Author preprint (639.82 kB)
Details



Keywords :
machine learning; security; adversarial attacks; tabular data; threat models; constrained machine learning
Abstract :
[en] State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings. However, the robustness of these models remains scarcely explored. Contrary to computer vision, there are no effective attacks to properly evaluate the adversarial robustness of deep tabular models, due to intrinsic properties of tabular data such as categorical features, immutability, and feature relationship constraints. To fill this gap, we first propose CAPGD, a gradient attack that overcomes the failures of existing gradient attacks through adaptive mechanisms. This new attack requires no parameter tuning and further degrades accuracy, by up to 81 percentage points compared to previous gradient attacks. Second, we design CAA, an efficient evasion attack that combines our CAPGD attack with MOEVA, the best search-based attack. We demonstrate the effectiveness of our attacks on five architectures and four critical use cases. Our empirical study shows that CAA outperforms all existing attacks in 17 of the 20 settings, and reduces accuracy by up to 96.1 and 21.9 percentage points compared to CAPGD and MOEVA respectively, while being up to five times faster than MOEVA. Given the effectiveness and efficiency of our new attacks, we argue that they should become the minimal test for any new defense or robust architecture in tabular machine learning.
Research center :
NCER-FT - FinTech National Centre of Excellence in Research
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal - Security, Reasoning & Validation
Disciplines :
Computer science
Author, co-author :
SIMONETTO, Thibault Jean Angel; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
GHAMIZI, Salah; LIST - Luxembourg Institute of Science and Technology [LU] > Intelligent Clean Energy Systems; RIKEN Center for Advanced Intelligence Project
CORDY, Maxime; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal
External co-authors :
yes
Language :
English
Title :
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data
Publication date :
2024
Event name :
The Thirty-Eighth Annual Conference on Neural Information Processing Systems
Event date :
2024
Main work title :
Proceedings of The Thirty-Eighth Annual Conference on Neural Information Processing Systems
Publisher :
TBD
Peer reviewed :
Peer reviewed
Focus Area :
Computational Sciences
Name of the research project :
U-AGR-7180 - BRIDGES2022-1/17437536/TIMELESS BGL Cont - CORDY Maxime
Available on ORBilu :
since 15 December 2024
