Doctoral thesis (Dissertations and theses)
Privacy Attacks and Protection in Generative Models
HU, Hailong
2023
 

Files


Full Text
Dissertation_HailongHU.pdf
Author postprint (5.62 MB)


Details



Keywords :
Privacy; Generative Models
Abstract :
[en] Recent years have witnessed the tremendous success of generative models in data synthesis. Typically, a well-trained model and its training set constitute key assets for model owners, allowing technology companies to gain a leading position in the global market. However, privacy is a key consideration when deploying state-of-the-art generative models in practice. On the one hand, the exposure of model privacy can compromise the intellectual property rights of legitimate model owners, which in turn affects the market share of companies. On the other hand, the disclosure of training data, especially when it includes personal information, constitutes a direct infringement of data privacy and can expose companies to severe legal sanctions. The advent of emerging generative models therefore urgently calls for novel privacy analysis and protection techniques to ensure the confidentiality of cutting-edge models and their training data.

To address these challenges, this dissertation investigates several new privacy attacks and protection methods for generative models from the perspectives of model privacy and data privacy. In addition, it explores a new mode of leveraging existing pre-trained generative models to study the security vulnerabilities of discriminative models, which provides a fresh angle for applying generative models to the risk analysis of discriminative models.

This dissertation is organized into three parts. In the first part, on model privacy in generative models, I develop new model extraction attacks to steal generative adversarial networks (GANs). The evaluations show that preventing model extraction attacks against GANs is difficult, but protecting GANs through ownership verification can deter malicious adversaries. I therefore further propose an ownership protection method to safeguard GANs, which can effectively recognize stolen models obtained through physical theft or model extraction. In the second part, on data privacy in generative models, I develop two types of membership inference attacks against diffusion models, and the proposed loss-based method reveals the relationship between membership inference risks and the generative mechanism of diffusion models. I also investigate property inference risks in diffusion models and propose the first property-aware sampling method to mitigate this attack, which has the benefits of being plug-in and model-agnostic. In the third part, on applications of generative models, I propose a new type of out-of-distribution (OOD) attack that leverages off-the-shelf pre-trained GANs, demonstrating that GANs can be used to directly construct samples that fool classification models and evade OOD detection. Taken together, this dissertation primarily provides new privacy attacks and protection methods for generative models and contributes to a deeper and more comprehensive understanding of the privacy of generative artificial intelligence.
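To make the loss-based membership inference idea above concrete, the following is a minimal illustrative sketch in Python/PyTorch, not the dissertation's actual attack. It assumes a hypothetical noise-prediction network model(x_t, t) trained with the standard DDPM objective and its cumulative noise schedule alphas_cumprod; both names are placeholders introduced here for illustration. The intuition is that training-set members tend to incur a lower denoising loss than non-members, so thresholding that loss yields a membership guess.

    import torch

    @torch.no_grad()
    def denoising_loss(model, x0, alphas_cumprod, timesteps):
        """Average noise-prediction error of `model` on a batch `x0`,
        over a set of diffusion timesteps (lower loss suggests membership)."""
        losses = []
        for t in timesteps:
            a_bar = alphas_cumprod[t]
            noise = torch.randn_like(x0)
            # Forward diffusion: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
            x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
            t_batch = torch.full((x0.shape[0],), t, dtype=torch.long)
            pred = model(x_t, t_batch)  # hypothetical epsilon-prediction interface
            losses.append(((pred - noise) ** 2).flatten(1).mean(dim=1))
        return torch.stack(losses).mean(dim=0)  # one score per sample

    def infer_membership(model, x0, alphas_cumprod, timesteps, threshold):
        """Predict 'member' when the denoising loss falls below `threshold`."""
        return denoising_loss(model, x0, alphas_cumprod, timesteps) < threshold

In practice, the threshold would be calibrated on samples with known membership status, and averaging the loss over several timesteps reduces the variance of the per-sample score.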
Disciplines :
Computer science
Author, co-author :
HU, Hailong; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PI Mauw
Language :
English
Title :
Privacy Attacks and Protection in Generative Models
Defense date :
12 December 2023
Institution :
Unilu - Université du Luxembourg [Interdisciplinary Centre for Security, Reliability and Trust (SNT)], Esch-sur-Alzette, Luxembourg
Degree :
DOCTEUR DE L’UNIVERSITÉ DU LUXEMBOURG EN INFORMATIQUE (Doctor of the University of Luxembourg in Computer Science)
Promotor :
PANG, Jun; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
President :
LENZINI, Gabriele; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
Jury member :
HUMBERT, Mathias; UNIL - University of Lausanne [CH]
ZHANG, Yang;  CISPA Helmholtz Center for Information Security
Focus Area :
Security, Reliability and Trust
FnR Project :
FNR13550291 - Privacy Attacks And Protection In Machine Learning As A Service, 2019 (01/12/2019-30/11/2023) - Hailong Hu
Name of the research project :
Privacy Attacks And Protection In Machine Learning As A Service
Funders :
FNR - Fonds National de la Recherche [LU]
Funding number :
13550291
Available on ORBilu :
since 18 December 2023
