Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In Proceedings of USENIX Security Symposium (USENIX Security). USENIX Association, 267-284.
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Úlfar Erlingsson, et al. 2021. Extracting training data from large language models. In Proceedings of USENIX Security Symposium (USENIX Security). USENIX Association, 2633-2650.
Dingfan Chen, Ning Yu, Yang Zhang, and Mario Fritz. 2020. GAN-Leaks: A taxonomy of membership inference attacks against generative models. In Proceedings of ACM SIGSAC Conference on Computer and Communications Security (CCS). ACM, 343-362.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS). Curran Associates, Inc., 2672-2680.
Jamie Hayes, Luca Melis, George Danezis, and Emiliano De Cristofaro. 2019. LOGAN: Membership inference attacks against generative models. Proceedings on Privacy Enhancing Technologies, Vol. 2019. Sciendo, 133-152.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proceedings of Annual Conference on Neural Information Processing Systems (NeurIPS). Curran Associates, Inc., 6626-6637.
Hailong Hu and Jun Pang. 2021. Model extraction and defenses on generative adversarial networks. arXiv preprint arXiv:2101.02069 (2021).
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 2018. Progressive growing of GANs for improved quality, stability, and variation. In Proceedings of International Conference on Learning Representations (ICLR).
Tero Karras, Samuli Laine, and Timo Aila. 2019. A style-based generator architecture for generative adversarial networks. In Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 4401-4410.
Klas Leino and Matt Fredrikson. 2020. Stolen memories: Leveraging model memorization for calibrated white-box membership inference. In Proceedings of USENIX Security Symposium (USENIX Security). USENIX Association, 1605-1622.
Yunhui Long, Lei Wang, Diyue Bu, Vincent Bindschaedler, XiaoFeng Wang, Haixu Tang, Carl A. Gunter, and Kai Chen. 2020. A pragmatic approach to membership inferences on machine learning models. In Proceedings of IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 521-534.
Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. 2020. A non-parametric test to detect data-copying in generative models. In Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS). PMLR.
Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. 2019. ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models. In Proceedings of Network and Distributed Systems Security Symposium (NDSS). Internet Society.
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership inference attacks against machine learning models. In Proceedings of IEEE Symposium on Security and Privacy (S&P). IEEE, 3-18.
Liwei Song and Prateek Mittal. 2021. Systematic evaluation of privacy risks of machine learning models. In Proceedings of USENIX Security Symposium (USENIX Security). USENIX Association, 2615-2632.
Samuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. 2018. Privacy risk in machine learning: Analyzing the connection to overfitting. In Proceedings of IEEE Computer Security Foundations Symposium (CSF). IEEE, 268-282.