A Unified Framework for Adversarial Attack and Defense in Constrained Feature Space
English
Simonetto, Thibault Jean Angel [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Dyrmishi, Salijona [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Ghamizi, Salah [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > Computer Science and Communications Research Unit (CSC)]
Cordy, Maxime [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
Le Traon, Yves [University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > SerVal]
2022
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI-22
International Joint Conferences on Artificial Intelligence Organization
1313-1319
Yes
International
978-1-956792-00-3
INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE
from 23-07-2022 to 29-07-2022
[en] Computer Vision: Adversarial learning, adversarial attack and defense methods ; Constraint Satisfaction and Optimization: Constraints and Machine Learning ; Constraint Satisfaction and Optimization: Constraint Satisfaction ; Constraint Satisfaction and Optimization: Constraint Optimization ; Search: Evolutionary Computation
[en] The generation of feasible adversarial examples is necessary for properly assessing models that operate in a constrained feature space. However, enforcing domain constraints in attacks originally designed for computer vision remains a challenging task. We propose a unified framework to generate feasible adversarial examples that satisfy given domain constraints. Our framework can handle both linear and non-linear constraints. We instantiate our framework into two algorithms: a gradient-based attack that introduces the constraints into the loss function it maximizes, and a multi-objective search algorithm that jointly targets misclassification, perturbation minimization, and constraint satisfaction. We show that our approach is effective in four different domains, with a success rate of up to 100%, where state-of-the-art attacks fail to generate a single feasible example. In addition to adversarial retraining, we propose introducing engineered non-convex constraints to improve model adversarial robustness. We demonstrate that this new defense is as effective as adversarial retraining. Our framework forms the starting point for research on constrained adversarial attacks and provides relevant baselines and datasets that future research can exploit.
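The gradient-based instantiation described above can be illustrated with a minimal sketch: a penalty-based PGD-style attack that ascends the classification loss while pushing the example back toward the feasible region. Everything here is illustrative, not the paper's actual algorithm: the logistic classifier `w`, the single linear constraint `a·x <= b`, the penalty weight `lam`, and all step-size choices are assumptions made for the example.

```python
import numpy as np

# Illustrative fixed linear classifier and one linear domain constraint
# g(x) = a.x - b <= 0 (w, a, b are hypothetical, not from the paper).
w = np.array([1.0, -2.0, 0.5])            # classifier weights
a = np.array([1.0, 1.0, 0.0])             # constraint normal
b = 2.0                                   # constraint bound: x0 + x1 <= 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def constrained_attack(x0, y, steps=100, step=0.05, lam=10.0):
    """Gradient ascent on the classification loss, with the domain
    constraint folded into the objective as a violation penalty."""
    x = x0.copy()
    for _ in range(steps):
        p = sigmoid(w @ x)
        grad = (p - y) * w                # ascent direction of BCE loss
        if a @ x - b > 0:                 # constraint violated:
            grad -= lam * a               #   penalty pulls x back inside
        x += step * grad
    return x

x_adv = constrained_attack(np.array([1.0, 0.0, 0.0]), y=1.0)
```

On this toy instance, the penalty term keeps the example near the feasible half-space `x0 + x1 <= 2` while the loss term drives the logit `w @ x` across the decision boundary; the multi-objective search variant would instead treat misclassification, perturbation size, and constraint satisfaction as separate objectives.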
Interdisciplinary Centre for Security, Reliability and Trust (SnT)