Abstract:
Generative adversarial networks (GANs) have shown remarkable success in image synthesis, making GAN models themselves commercially valuable assets for legitimate model owners. It is therefore critical to protect the intellectual property of GANs by technical means. Prior works need to tamper with the training set or the training process to verify ownership of a GAN. In this article, we show that these methods are not robust to emerging model extraction attacks. We then propose a new method, GAN-Guards, which exploits the characteristics shared by a target model and its stolen models to detect ownership infringement. Our method is directly applicable to all well-trained GANs, as it does not require retraining the target models. Extensive experimental results show that our method achieves superior detection performance compared with watermark-based and fingerprint-based methods. Finally, we demonstrate the effectiveness of our method with respect to the number of generations of model extraction attacks, the number of generated samples, and adaptive attacks.
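The abstract describes the detection idea only at a high level. As a rough illustration of the underlying intuition, and not the paper's actual GAN-Guards procedure, the sketch below assumes a hypothetical feature extractor applied to generated images and a placeholder decision rule: a suspect model is flagged if the statistics of its generated samples are much closer to the target GAN's than those of an independently trained reference model. All names, thresholds, and data in this sketch are illustrative assumptions.

# Hypothetical sketch only: compares Gaussian fits of generated-sample features.
# The feature extractor, the reference model, and the margin are assumptions,
# not the method described in the article.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    # Frechet distance between Gaussian fits of two feature sets of shape (n_samples, dim).
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    cov_mean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(cov_mean):
        cov_mean = cov_mean.real  # discard numerical imaginary residue
    return float(np.sum((mu_a - mu_b) ** 2) + np.trace(cov_a + cov_b - 2.0 * cov_mean))

def flag_infringement(target_feats, suspect_feats, reference_feats, margin=0.5):
    # Flag the suspect if it sits much closer to the target than an
    # independently trained reference model does (margin is a placeholder).
    d_suspect = frechet_distance(target_feats, suspect_feats)
    d_reference = frechet_distance(target_feats, reference_feats)
    return d_suspect < margin * d_reference

# Toy usage with random stand-ins for features of generated images.
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(500, 64))
suspect = target + rng.normal(0.0, 0.1, size=(500, 64))   # extracted model: very similar
reference = rng.normal(0.5, 1.2, size=(500, 64))          # independently trained model
print(flag_infringement(target, suspect, reference))       # True under these assumptions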
Funding text:
Received 17 January 2025; revised 24 March 2025; accepted 10 April 2025. Date of publication 16 April 2025; date of current version 31 October 2025. This work was supported by the Luxembourg National Research Fund (FNR) under Grant 13550291. This article was recommended for publication by Associate Editor Pablo Estevez upon evaluation of the reviewers' comments. (Corresponding author: Hailong Hu.) Hailong Hu was with the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, 4365 Esch-sur-Alzette, Luxembourg. He is now with the National Research Base of Intelligent Manufacturing Service, Chongqing Technology and Business University, Chongqing 400067, China (e-mail: huhailong@ctbu.edu.cn).