[en] State-of-the-art deep learning models for tabular data have recently achieved performance acceptable for deployment in industrial settings. In contrast to computer vision, there is to date no efficient constrained white-box attack to evaluate the adversarial robustness of deep tabular models, due to intrinsic properties of tabular data such as categorical features, immutability, and feature relationship constraints. To fill this gap, we propose CAPGD, the first efficient evasion attack for constrained tabular deep learning models. CAPGD is an iterative, parameter-free attack that generates adversarial examples under constraints. We evaluate CAPGD across four critical use cases: credit scoring, phishing, botnet attacks, and ICU survival prediction. Our empirical study covers five modern tabular deep learning architectures and demonstrates the effectiveness of our attack, which improves over the most effective constrained attack by 81 percentage points.
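The abstract describes an iterative gradient attack that must respect tabular constraints such as immutable features and valid value ranges. The details of CAPGD are in the paper itself; as a rough illustration only, a single constrained projected-gradient step might look like the following sketch, where the function name, the mutability mask, and the simple box projection are all illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def constrained_pgd_step(x, x_orig, grad, step_size, eps, mutable_mask, lower, upper):
    """One illustrative PGD-style step that respects simple tabular constraints.

    x            : current adversarial candidate (1-D float array)
    x_orig       : original, clean sample
    grad         : gradient of the attack loss w.r.t. x
    step_size    : gradient ascent step size
    eps          : L-infinity perturbation budget around x_orig
    mutable_mask : 1.0 for features the attacker may change, 0.0 for immutable ones
    lower, upper : valid feature ranges (a stand-in for richer feature constraints)
    """
    # Ascend along the sign of the gradient, but only on mutable features.
    x_adv = x + step_size * np.sign(grad) * mutable_mask
    # Project back into the L-infinity ball around the original sample.
    x_adv = np.clip(x_adv, x_orig - eps, x_orig + eps)
    # Enforce valid feature ranges so the example stays plausible.
    x_adv = np.clip(x_adv, lower, upper)
    return x_adv

# Toy usage: the third feature is immutable and must stay fixed.
x_orig = np.array([0.5, 0.2, 1.0])
grad = np.array([1.0, -1.0, 1.0])
mutable = np.array([1.0, 1.0, 0.0])
x_adv = constrained_pgd_step(x_orig, x_orig, grad,
                             step_size=0.1, eps=0.05,
                             mutable_mask=mutable, lower=0.0, upper=1.0)
```

The real attack additionally handles categorical features and feature relationship constraints, and adapts its step size, which this box-projection sketch does not capture.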
Research center :
NCER-FT - FinTech National Centre of Excellence in Research
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal - Security, Reasoning & Validation
Disciplines :
Computer science
Author, co-author :
SIMONETTO, Thibault Jean Angel; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
GHAMIZI, Salah; LIST - Luxembourg Institute of Science and Technology [LU] > Intelligent Clean Energy Systems; RIKEN Center for Advanced Intelligence Project
CORDY, Maxime; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
External co-authors :
yes
Language :
English
Title :
Towards Adaptive Attacks on Constrained Tabular Machine Learning
Publication date :
2024
Event name :
ICML 2024 Workshop on the Next Generation of AI Safety