Paper published in a book (Scientific congresses, symposiums and conference proceedings)
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
Li, Lin; Guan, Haoyan; Qiu, Jianing et al.
2024. In Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Peer reviewed
 

Files


Full Text
2403.01849-1.pdf
Author preprint (2.26 MB) Creative Commons License - Attribution, Non-Commercial, ShareAlike
Details



Keywords :
Computer Science - Computer Vision and Pattern Recognition; Computer Science - Artificial Intelligence; Machine Learning
Abstract :
[en] Large pre-trained Vision-Language Models (VLMs) such as CLIP, despite their remarkable generalization ability, are highly vulnerable to adversarial examples. This work studies the adversarial robustness of VLMs from the novel perspective of the text prompt, rather than the extensively studied model weights (frozen in this work). We first show that the effectiveness of both adversarial attacks and defenses is sensitive to the text prompt used. Inspired by this, we propose a method to improve resilience to adversarial attacks by learning a robust text prompt for VLMs. The proposed method, named Adversarial Prompt Tuning (APT), is effective while being both computationally and data efficient. Extensive experiments across 15 datasets and 4 data-sparsity schemes (from 1-shot to full-training-data settings) show APT's superiority over hand-engineered prompts and other state-of-the-art adaptation methods. APT demonstrates strong in-distribution performance and generalization under input distribution shift and across datasets. Surprisingly, by simply adding one learned word to the prompts, APT boosts accuracy and robustness (epsilon=4/255) over hand-engineered prompts by +13% and +8.5% on average, respectively. In our most effective setting, the improvement rises to +26.4% for accuracy and +16.7% for robustness. Code is available at https://github.com/TreeLLi/APT.
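The core idea summarized in the abstract, keeping the model weights frozen and tuning only a small learned prompt vector against adversarially perturbed inputs, can be sketched in a toy form. The snippet below is an illustrative simplification, not the paper's implementation: CLIP's encoders are replaced by small random linear maps, the attack is a single-step FGSM rather than PGD, and all names (toy dimensions, tune_prompt, etc.) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB, N_CLS = 8, 16, 3  # toy input dim, embedding dim, number of classes

# Frozen "encoders": stand-ins for CLIP's image and text towers.
W_img = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)
W_txt = rng.normal(size=(D_EMB, D_EMB)) / np.sqrt(D_EMB)
class_emb = rng.normal(size=(N_CLS, D_EMB))  # fixed class-name embeddings

def loss_and_grads(x, y, prompt):
    """Cross-entropy on similarity logits; returns loss, dL/dx, dL/dprompt."""
    v = W_img @ x                                # image feature, shape (D_EMB,)
    u = (class_emb + prompt) @ W_txt.T           # per-class text embeddings
    norms = np.linalg.norm(u, axis=1, keepdims=True)
    t = u / norms                                # normalized text features
    z = t @ v                                    # logits, shape (N_CLS,)
    z = z - z.max()                              # numerical stability
    s = np.exp(z) / np.exp(z).sum()              # softmax
    loss = -np.log(s[y] + 1e-12)
    g = s.copy()
    g[y] -= 1.0                                  # dL/dz
    # dL/dx: text features do not depend on x, so only the image path matters.
    dx = W_img.T @ (t.T @ g)
    # dL/dprompt: differentiate through each class's feature normalization.
    dp = np.zeros(D_EMB)
    for c in range(N_CLS):
        J = (np.eye(D_EMB) - np.outer(t[c], t[c])) / norms[c, 0]
        dp += g[c] * (W_txt.T @ (J @ v))
    return float(loss), dx, dp

def fgsm(x, y, prompt, eps):
    """One-step sign attack on the input, with the prompt held fixed."""
    _, dx, _ = loss_and_grads(x, y, prompt)
    return x + eps * np.sign(dx)

def tune_prompt(x, y, eps=0.05, lr=0.05, steps=200):
    """APT-style loop: regenerate the attack, then descend on the prompt."""
    prompt = np.zeros(D_EMB)
    losses = []
    for _ in range(steps):
        x_adv = fgsm(x, y, prompt, eps)
        loss, _, dp = loss_and_grads(x_adv, y, prompt)
        prompt -= lr * dp
        losses.append(loss)
    return prompt, losses

x = rng.normal(size=D_IN)
prompt, losses = tune_prompt(x, y=1)
```

Note that the prompt must pass through the feature normalization to matter: with unnormalized dot-product logits, a prompt added uniformly to every class shifts all logits equally and its gradient cancels out.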
Disciplines :
Computer science
Author, co-author :
Li, Lin
Guan, Haoyan
Qiu, Jianing
SPRATLING, Michael; University of Luxembourg > Faculty of Humanities, Education and Social Sciences (FHSE) > Department of Behavioural and Cognitive Sciences (DBCS) > Cognitive Science and Assessment
External co-authors :
yes
Language :
English
Title :
One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models
Publication date :
June 2024
Event name :
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Event date :
17/06/2024
Main work title :
Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher :
IEEE/CVF, United States
Peer reviewed :
Peer reviewed
Commentary :
CVPR2024
Available on ORBilu :
since 16 April 2024
