[en] The wide adoption of machine learning to solve a large set of real-life problems has come with the need to collect and process large volumes of data, some of which are considered personal and sensitive, raising serious concerns about data protection. Privacy-enhancing technologies (PETs) are often indicated as a solution to protect personal data and to achieve the general trustworthiness required by current EU regulations on data protection and AI. However, an off-the-shelf application of PETs is insufficient to ensure high-quality data protection, and its limits need to be understood. This work systematically discusses the risks against data protection in modern machine learning systems, taking the original perspective of the data owners, i.e., those who hold the various data sets, data models, or both, throughout the machine learning life cycle, and considering the different machine learning architectures. It argues that the origin of the threats, the risks against the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners, and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion about compliance with EU regulations and directives.
We also discuss current challenges and research questions that remain unsolved in the field. In this respect, this paper provides researchers and developers working on machine learning with a comprehensive body of knowledge to help them advance the science of data protection in machine learning, as well as in closely related fields such as Artificial Intelligence.
Research centre:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > IRiSC - Socio-Technical Cybersecurity
Disciplines:
Computer science
Author, co-author:
EL MESTARI, Soumia Zohra ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
LENZINI, Gabriele ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
DEMIRCI, Huseyin ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
External co-authors:
no
Document language:
English
Title:
Preserving data privacy in machine learning systems
Abdalla, Michel, Catalano, Dario, Fiore, Dario, Gay, Romain, Ursu, Bogdan, Multi-input functional encryption for inner products: function-hiding realizations and constructions without pairings. Shacham, Hovav, Boldyreva, Alexandra, (eds.) Advances in Cryptology – CRYPTO 2018, 2018, Springer International Publishing, 597–627.
Agrawal, Shashank, Chase, Melissa, Fame: fast attribute-based message encryption. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, 2017, Association for Computing Machinery, New York, NY, USA, 665–682.
Agrawal, Shweta, Libert, Benoît, Stehlé, Damien, Fully secure functional encryption for inner products, from standard assumptions. Robshaw, Matthew, Katz, Jonathan, (eds.) Advances in Cryptology – CRYPTO 2016, 2016, Springer Berlin Heidelberg, Berlin, Heidelberg, 333–362.
Aharoni, Ehud, Adir, Allon, Baruch, Moran, Drucker, Nir, Ezov, Gilad, Farkash, Ariel, Greenberg, Lev, Masalha, Ramy, Moshkowich, Guy, Murik, Dov, et al. Helayers: a tile tensors framework for large neural networks on encrypted data. arXiv preprint arXiv:2011.01805, 2020.
Al-Rubaie, Mohammad, Chang, J. Morris, Privacy-preserving machine learning: threats and solutions. IEEE Secur. Priv. 17:2 (2019), 49–58.
Alaa, Ahmed, Van Breugel, Boris, Saveliev, Evgeny S., van der Schaar, Mihaela, How faithful is your synthetic data? Sample-level metrics for evaluating and auditing generative models. International Conference on Machine Learning, 2022, PMLR, 290–306.
Md Ali, Nawab Yousuf, Md Rahman, Lizur, Chaki, Jyotismita, Dey, Nilanjan, Santosh, K.C., et al. Machine translation using deep learning for universal networking language based on their structure. Int. J. Mach. Learn. Cybern. 12:8 (2021), 2365–2376.
Alrashedy, Halima Hamid N., Almansour, Atheer Fahad, Ibrahim, Dina M., Hammoudeh, Mohammad Ali A., Braingan: brain mri image generation and classification framework using gan architectures and cnn models. Sensors, 22(11), 2022, 4297.
Assefa, Samuel A., Dervovic, Danial, Mahfouz, Mahmoud, Tillman, Robert E., Reddy, Prashant, Veloso, Manuela, Generating synthetic data in finance: opportunities, challenges and pitfalls. Proceedings of the First ACM International Conference on AI in Finance, ICAIF '20, 2021, Association for Computing Machinery, New York, NY, USA.
Aubry, Pascal, Carpov, Sergiu, Sirdey, Renaud, Faster homomorphic encryption is not enough: improved heuristic for multiplicative depth minimization of Boolean circuits. Topics in Cryptology–CT-RSA 2020: The Cryptographers' Track at the RSA Conference 2020, San Francisco, CA, USA, February 24–28, 2020, Proceedings, 2020, Springer, 345–363.
Aydin, Furkan, Karabulut, Emre, Potluri, Seetal, Alkim, Erdem, Aysu, Aydin, RevEAL: single-trace side-channel leakage of the seal homomorphic encryption library. 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), 2022, IEEE, 1527–1532.
Barbedo, Jayme Garcia Arnal, Impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. Comput. Electron. Agric. 153 (2018), 46–53.
Barni, Mauro, Orlandi, Claudio, Piva, Alessandro, A privacy-preserving protocol for neural-network-based computation. Proceedings of the 8th Workshop on Multimedia and Security, MM&Sec '06, 2006, Association for Computing Machinery, New York, NY, USA, 146–151.
Baruch, Moran, Drucker, Nir, Greenberg, Lev, Moshkowich, Guy, A methodology for training homomorphic encryption friendly neural networks. International Conference on Applied Cryptography and Network Security, 2022, Springer, 536–553.
Belgodere, Brian, Dognin, Pierre, Ivankay, Adam, Melnyk, Igor, Mroueh, Youssef, Mojsilovic, Aleksandra, Navartil, Jiri, Nitsure, Apoorva, Padhi, Inkit, Rigotti, Mattia, et al. Auditing and generating synthetic data with controllable trust trade-offs. arXiv preprint arXiv:2304.10819, 2023.
Benaissa, Ayoub, Retiat, Bilal, Cebere, Bogdan, Belfedhal, Alaa Eddine, Tenseal: a library for encrypted tensor operations using homomorphic encryption. arXiv preprint arXiv:2104.03152, 2021.
Benaloh, Josh, Leichter, Jerry, Generalized secret sharing and monotone functions. Conference on the Theory and Application of Cryptography, 1988, Springer, 27–35.
Bernau, Daniel, Grassal, Philip-William, Robl, Jonas, Kerschbaum, Florian, Assessing differentially private deep learning with membership inference. CoRR arXiv:1912.11328 [abs], 2019.
Bhunia, Swarup, Hsiao, Michael S., Banga, Mainak, Narasimhan, Seetharam, Hardware trojan attacks: threat analysis and countermeasures. Proc. IEEE 102:8 (2014), 1229–1247.
Boneh, Dan, Sahai, Amit, Waters, Brent, Functional encryption: definitions and challenges. Proceedings of the 8th Conference on Theory of Cryptography, TCC'11, 2011, Springer-Verlag, Berlin, Heidelberg, 253–273.
Brakerski, Zvika, Fully homomorphic encryption without modulus switching from classical gapsvp. Annual Cryptology Conference, 2012, Springer, 868–886.
Brakerski, Zvika, Gentry, Craig, Vaikuntanathan, Vinod, (Leveled) fully homomorphic encryption without bootstrapping. ACM Trans. Comput. Theory 6:3 (2014), 1–36.
Brickell, Justin, Shmatikov, Vitaly, The cost of privacy: destruction of data-mining utility in anonymized data publishing. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2008, 70–78.
Cao, Jianneng, Karras, Panagiotis, Publishing microdata with a robust privacy guarantee. arXiv preprint arXiv:1208.0220, 2012.
Carlini, Nicholas, Ippolito, Daphne, Jagielski, Matthew, Lee, Katherine, Tramer, Florian, Zhang, Chiyuan, Quantifying memorization across neural language models. Conference on Learning Representations, vol. 11, 2023.
Carlini, Nicholas, Liu, Chang, Erlingsson, Úlfar, Kos, Jernej, Song, Dawn, The secret sharer: evaluating and testing unintended memorization in neural networks. Proceedings of the 28th USENIX Conference on Security Symposium, SEC'19, 2019, USENIX Association, USA, 267–284.
Carlini, Nicholas, Tramèr, Florian, Wallace, Eric, Jagielski, Matthew, Herbert-Voss, Ariel, Lee, Katherine, Roberts, Adam, Brown, Tom, Song, Dawn, Erlingsson, Úlfar, Oprea, Alina, Raffel, Colin, Extracting training data from large language models. 30th USENIX Security Symposium (USENIX Security 21), August 2021, USENIX Association, 2633–2650.
Chai, Junyi, Zeng, Hao, Li, Anming, Ngai, Eric W.T., Deep learning in computer vision: a critical review of emerging techniques and application scenarios. Mach. Learn. Appl., 6, 2021, 100134.
Chamani, Javad Ghareh, Papadopoulos, Dimitrios, Mitigating leakage in federated learning with trusted hardware. arXiv preprint arXiv:2011.04948, 2020.
Charles, Zachary, Konečnỳ, Jakub, Convergence and accuracy trade-offs in federated learning and meta-learning. International Conference on Artificial Intelligence and Statistics, 2021, PMLR, 2575–2583.
Chen, Rui, Mohammed, Noman, Fung, Benjamin C.M., Desai, Bipin C., Xiong, Li, Publishing set-valued data via differential privacy. Proc. VLDB Endow. 4:11 (aug 2011), 1087–1098.
Chen, Yudong, Su, Lili, Xu, Jiaming, Distributed statistical machine learning in adversarial settings: Byzantine gradient descent. Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems, 2018, 96.
Cheon, Jung Hee, Han, Kyoohyung, Kim, Andrey, Kim, Miran, Song, Yongsoo, Bootstrapping for approximate homomorphic encryption. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2018, Springer, 360–384.
Cheon, Jung Hee, Kim, Andrey, Kim, Miran, Song, Yongsoo, Homomorphic encryption for arithmetic of approximate numbers. International Conference on the Theory and Application of Cryptology and Information Security, 2017, Springer, 409–437.
Chillotti, Ilaria, Gama, Nicolas, Georgieva, Mariya, Izabachène, Malika, Tfhe: fast fully homomorphic encryption over the torus. J. Cryptol. 33:1 (2020), 34–91.
Chillotti, Ilaria, Gama, Nicolas, Georgieva, Mariya, Izabachène, Malika, TFHE: fast fully homomorphic encryption library. https://tfhe.github.io/tfhe/, August 2016.
Choquette-Choo, Christopher A., Dullerud, Natalie, Dziedzic, Adam, Zhang, Yunxiang, Jha, Somesh, Papernot, Nicolas, Wang, Xiao, Capc learning: confidential and private collaborative learning. arXiv preprint arXiv:2102.05188, 2021.
Choquette-Choo, Christopher A., Tramer, Florian, Carlini, Nicholas, Papernot, Nicolas, Label-only membership inference attacks. Meila, Marina, Zhang, Tong, (eds.) Proceedings of the 38th International Conference on Machine Learning, 18–24 Jul Proceedings of Machine Learning Research, vol. 139, 2021, PMLR, 1964–1974.
Chung, Yeounoh, Haas, Peter J., Upfal, Eli, Kraska, Tim, Unknown examples & machine learning model generalization. CoRR arXiv:1808.08294 [abs], 2018.
Clements, Joseph, Lao, Yingjie, Hardware trojan design on neural networks. 2019 IEEE International Symposium on Circuits and Systems (ISCAS), 2019, 1–5.
de Cock, Martine, Dowsley, Rafael, Nascimento, Anderson C.A., Newman, Stacey C., Fast, privacy preserving linear regression over distributed datasets based on pre-distributed data. Proceedings of the 8th ACM Workshop on Artificial Intelligence and Security, 2015, 3–14.
European Commission, Content Directorate-General for Communications Networks, and Technology. Ethics Guidelines for Trustworthy AI, 2019, Publications Office.
OpenDP community, OpenDP: the opendp library is a modular collection of statistical algorithms that adhere to the definition of differential privacy. https://github.com/opendp/opendp, Jul 2021.
Dash, Saloni, Yale, Andrew, Guyon, Isabelle, Bennett, Kristin P., Medical time-series data generation using generative adversarial networks. Artificial Intelligence in Medicine: 18th International Conference on Artificial Intelligence in Medicine, AIME 2020, Minneapolis, MN, USA, August 25–28, 2020, Proceedings 18, 2020, Springer, 382–391.
De Montjoye, Yves-Alexandre, Hidalgo, César A., Verleysen, Michel, Blondel, Vincent D., Unique in the crowd: the privacy bounds of human mobility. Sci. Rep. 3:1 (2013), 1–5.
Deng, Li, The mnist database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 29:6 (2012), 141–142.
Diao, Enmao, Ding, Jie, Tarokh, Vahid, Heterofl: computation and communication efficient federated learning for heterogeneous clients. arXiv preprint arXiv:2010.01264, 2020.
van Dijk, Marten, Gentry, Craig, Halevi, Shai, Vaikuntanathan, Vinod, Fully homomorphic encryption over the integers. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2010, Springer, 24–43.
Ducas, Léo, Micciancio, Daniele, Fhew: bootstrapping homomorphic encryption in less than a second. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2015, Springer, 617–640.
Dufour-Sans, Edouard, Gay, Romain, Pointcheval, David, Reading in the dark: Classifying encrypted digits with functional encryption. Cryptology ePrint Archive, 2018.
Dutta, Sanghamitra, Wei, Dennis, Yueksel, Hazar, Chen, Pin-Yu, Liu, Sijia, Varshney, Kush, Is there a trade-off between fairness and accuracy? A perspective using mismatched hypothesis testing. International Conference on Machine Learning, 2020, PMLR, 2803–2813.
Dwork, Cynthia, Roth, Aaron, The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9:3–4 (aug 2014), 211–407.
Content European Commission, Directorate-General for Communications Networks and Technology. Ethics Guidelines for Trustworthy AI, 2019.
Erlingsson, Úlfar, Pihur, Vasyl, Korolova, Aleksandra, Rappor: randomized aggregatable privacy-preserving ordinal response. Proceedings of the 21st ACM Conference on Computer and Communications Security, Scottsdale, Arizona, 2014.
Evans, David, Kolesnikov, Vladimir, Rosulek, Mike, et al. A pragmatic introduction to secure multi-party computation. Found. Trends® Priv. Secur. 2:2–3 (2018), 70–246.
Evfimievski, Alexandre, Gehrke, Johannes, Srikant, Ramakrishnan, Limiting privacy breaches in privacy preserving data mining. Proceedings of the Twenty-Second ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, PODS '03, 2003, Association for Computing Machinery, New York, NY, USA, 211–222.
Fanti, Giulia, Pihur, Vasyl, Erlingsson, Úlfar, Building a rappor with the unknown: privacy-preserving learning of associations and data dictionaries. arXiv preprint arXiv:1503.01214, 2015.
Felzmann, Heike, Villaronga, Eduard Fosch, Lutz, Christoph, Tamò-Larrieux, Aurelia, Transparency you can trust: transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc., 6(1), 2019, 2053951719860542.
Fernandez, Virginia, Pinaya, Walter Hugo Lopez, Borges, Pedro, Tudosiu, Petru-Daniel, Graham, Mark S., Vercauteren, Tom, Cardoso, M. Jorge, Can segmentation models be trained with fully synthetically generated data?. International Workshop on Simulation and Synthesis in Medical Imaging, 2022, Springer, 79–90.
Fischer-Hübner, Simone, Angulo, Julio, Karegar, Farzaneh, Pulls, Tobias, Transparency, privacy and trust–technology for tracking and controlling my data disclosures: does this work?. IFIP International Conference on Trust Management, 2016, Springer, 3–14.
Fredrikson, Matt, Jha, Somesh, Ristenpart, Thomas, Model inversion attacks that exploit confidence information and basic countermeasures. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, 2015, Association for Computing Machinery, New York, NY, USA, 1322–1333.
Friedman, Arik, Wolff, Ran, Schuster, Assaf, Providing k-anonymity in data mining. VLDB J., 17, 2008, 07.
Fujita, Taisuke, AnonyPy: anonymization library for python. https://github.com/glassonion1/anonypy/, Oct 2021.
Gentry, Craig, Fully homomorphic encryption using ideal lattices. Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC '09, 2009, Association for Computing Machinery, New York, NY, USA, 169–178.
Geyer, Robin C., Klein, Tassilo, Nabi, Moin, Differentially private federated learning: a client level perspective. arXiv preprint. arXiv:1712.07557, 2017.
Ghanem, Sahar M., Moursy, Islam A., Secure multiparty computation via homomorphic encryption library. 2019 Ninth International Conference on Intelligent Computing and Information Systems (ICICIS), 2019, 227–232.
Ghassemi, Marzyeh, Naumann, Tristan, Schulam, Peter, Beam, Andrew L., Chen, Irene Y., Ranganath, Rajesh, A review of challenges and opportunities in machine learning for health. AMIA Summits Transl. Sci. Proc., 2020, 2020, 191.
Gilad-Bachrach, Ran, Dowlin, Nathan, Laine, Kim, Lauter, Kristin, Naehrig, Michael, Wernsing, John, Cryptonets: applying neural networks to encrypted data with high throughput and accuracy. International Conference on Machine Learning, 2016, PMLR, 201–210.
Goldsteen, Abigail, Ezov, Gilad, Shmelkin, Ron, Moffie, Micha, Farkash, Ariel, Anonymizing machine learning models. Garcia-Alfaro, Joaquin, Muñoz-Tapia, Jose Luis, Navarro-Arribas, Guillermo, Soriano, Miguel, (eds.) Data Privacy Management, Cryptocurrencies and Blockchain Technology, 2022, Springer International Publishing, Cham, 121–136.
Goldwasser, Shafi, Gordon, S. Dov, Goyal, Vipul, Jain, Abhishek, Katz, Jonathan, Liu, Feng-Hao, Sahai, Amit, Shi, Elaine, Zhou, Hong-Sheng, Multi-input functional encryption. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2014, Springer, 578–602.
Google, Google DP repository: libraries to generate differentially private statistics over datasets. https://github.com/google/differential-privacy, Sep 2021.
Goyal, Vipul, Pandey, Omkant, Sahai, Amit, Waters, Brent, Attribute-based encryption for fine-grained access control of encrypted data. Proceedings of the 13th ACM Conference on Computer and Communications Security, CCS '06, 2006, Association for Computing Machinery, New York, NY, USA, 89–98.
Gürses, Seda, Pets and their users: a critical review of the potentials and limitations of the privacy as confidentiality paradigm. Identity Inf. Soc. 3:3 (2010), 539–563.
Hall, Adam James, Jay, Madhava, Cebere, Tudor, Cebere, Bogdan, van der Veen, Koen Lennart, Muraru, George, Xu, Tongye, Cason, Patrick, Abramson, William, Benaissa, Ayoub, et al. Syft 0.5: a platform for universally deployable structured transparency. arXiv preprint arXiv:2104.12385, 2021.
Hayes, Jamie, Melis, Luca, Danezis, George, De Cristofaro, Emiliano, LOGAN: evaluating privacy leakage of generative models using generative adversarial networks. CoRR arXiv:1705.07663 [abs], 2017.
He, Zecheng, Zhang, Tianwei, Lee, Ruby B., Model inversion attacks against collaborative inference. Proceedings of the 35th Annual Computer Security Applications Conference, ACSAC '19, 2019, Association for Computing Machinery, New York, NY, USA, 148–162.
High-Level Expert Group on AI, Ethics Guidelines for Trustworthy AI. Technical report, April 2019, European Commission.
Hitaj, Briland, Ateniese, Giuseppe, Perez-Cruz, Fernando, Deep models under the gan: information leakage from collaborative deep learning. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS '17, 2017, Association for Computing Machinery, New York, NY, USA, 603–618.
Holohan, Naoise, Braghin, Stefano, Aonghusa, Pól Mac, Levacher, Killian, Diffprivlib: the IBM differential privacy library. ArXiv e-prints arXiv:1907.02444 [cs.CR], July 2019.
Huang, Po-Hsuan, Tu, Chia-Heng, Chung, Shen-Ming, Tonic: towards oblivious neural inference compiler. Proceedings of the 36th Annual ACM Symposium on Applied Computing, 2021, 491–500.
Hunt, Hamish, Crawford, Jack L., Steffinlongo, Enrico, Shoup, Victor J., HElib: open-source software library that implements homomorphic encryption. https://github.com/homenc/HElib/, 2020.
Huo, Yuankai, Xu, Zhoubing, Moon, Hyeonsoo, Bao, Shunxing, Assad, Albert, Moyo, Tamara K., Savona, Michael R., Abramson, Richard G., Landman, Bennett A., Synseg-net: synthetic segmentation without target modality ground truth. IEEE Trans. Med. Imaging 38:4 (2018), 1016–1025.
Hussain, Siam, Li, Baiyu, Koushanfar, Farinaz, Cammarota, Rosario, Tinygarble2: smart, efficient, and scalable Yao's Garble Circuit. Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 2020, 65–67.
Jagielski, Matthew, Oprea, Alina, Biggio, Battista, Liu, Chang, Nita-Rotaru, Cristina, Li, Bo, Manipulating machine learning: poisoning attacks and countermeasures for regression learning. 2018 IEEE Symposium on Security and Privacy (SP), 2018, IEEE, 19–35.
Jayaraman, Bargav, Evans, David, Are attribute inference attacks just imputation?. Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, CCS '22, 2022, Association for Computing Machinery, New York, NY, USA, 1569–1582.
Jia, Jinyuan, Salem, Ahmed, Backes, Michael, Zhang, Yang, Gong, Neil Zhenqiang, Memguard: defending against black-box membership inference attacks via adversarial examples. CoRR arXiv:1909.10594 [abs], 2019.
Jia, Jinyuan, Salem, Ahmed, Backes, Michael, Zhang, Yang, Gong, Neil Zhenqiang, Memguard: defending against black-box membership inference attacks via adversarial examples. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS '19, 2019, Association for Computing Machinery, New York, NY, USA, 259–274.
Jiang, Kaifeng, Shao, Dongxu, Bressan, Stéphane, Kister, Thomas, Tan, Kian-Lee, Publishing trajectories with differential privacy guarantees. Proceedings of the 25th International Conference on Scientific and Statistical Database Management, 2013, SSDBM, New York, NY, USA Association for Computing Machinery.
Jiang, Xue, Zhou, Xuebing, Grossklags, Jens, Comprehensive analysis of privacy leakage in vertical federated learning during prediction. Proc. Priv. Enh. Technol. 2022:2 (2022), 263–281.
Jordon, James, Yoon, Jinsung, Van Der Schaar, Mihaela, Pate-gan: generating synthetic data with differential privacy guarantees. International Conference on Learning Representations, 2018.
Kallus, Nathan, Zhou, Angela, Residual unfairness in fair machine learning from prejudiced data. Dy, Jennifer, Krause, Andreas, (eds.) Proceedings of the 35th International Conference on Machine Learning, 10–15 Jul Proceedings of Machine Learning Research, vol. 80, 2018, PMLR, 2439–2448.
Kang, Yan, Luo, Jiahuan, He, Yuanqin, Zhang, Xiaojin, Fan, Lixin, Yang, Qiang, A framework for evaluating privacy-utility trade-off in vertical federated learning. arXiv preprint arXiv:2209.03885, 2022.
Keller, Marcel, MP-SPDZ: a versatile framework for multi-party computation. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020.
Keller, Marcel, Orsini, Emmanuela, Scholl, Peter, Mascot: faster malicious arithmetic secure computation with oblivious transfer. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, 2016, 830–842.
Keller, Marcel, Pastro, Valerio, Rotaru, Dragos, Overdrive: making spdz great again. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2018, Springer, 158–189.
Kifer, Daniel, Gehrke, Johannes, Injecting utility into anonymized datasets. Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, SIGMOD '06, 2006, Association for Computing Machinery, New York, NY, USA, 217–228.
Kim, Andrey, Papadimitriou, Antonis, Polyakov, Yuriy, Approximate homomorphic encryption with reduced approximation error. Cryptographers' Track at the RSA Conference, 2022, Springer, 120–144.
Kusner, Matt, Gardner, Jacob, Garnett, Roman, Weinberger, Kilian, Differentially private Bayesian optimization. Bach, Francis, Blei, David, (eds.) Proceedings of the 32nd International Conference on Machine Learning, 07–09 Jul Proceedings of Machine Learning Research, vol. 37, 2015, PMLR, Lille, France, 918–927.
Law, Andrew, Leung, Chester, Poddar, Rishabh, Ada Popa, Raluca, Shi, Chenyu, Sima, Octavian, Yu, Chaofan, Zhang, Xingmeng, Zheng, Wenting, Secure collaborative training and inference for xgboost. Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 2020, 21–26.
Lee, Joon-Woo, Kang, Hyungchul, Lee, Yongwoo, Choi, Woosuk, Eom, Jieun, Deryabin, Maxim, Lee, Eunsang, Lee, Junghyun, Yoo, Donghoon, Kim, Young-Sik, No, Jong-Seon, Privacy-preserving machine learning with fully homomorphic encryption for deep neural network. IEEE Access 10 (2022), 30039–30054.
Lepri, Bruno, Oliver, Nuria, Letouzé, Emmanuel, Pentland, Alex, Vinck, Patrick, Fair, transparent, and accountable algorithmic decision-making processes: the premise, the proposed solutions, and the open challenges. Philos. Technol. 31:4 (2018), 611–627.
Li, Baiyu, Micciancio, Daniele, On the security of homomorphic encryption on approximate numbers. Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2021, Springer, 648–677.
Li, Zheng, Zhang, Yang, Membership leakage in label-only exposures. Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, CCS '21, 2021, Association for Computing Machinery, New York, NY, USA, 880–895.
Liu, Bo, Ding, Ming, Shaham, Sina, Rahayu, Wenny, Farokhi, Farhad, Lin, Zihuai, When machine learning meets privacy: a survey and outlook. ACM Comput. Surv. 54:2 (2021), 1–36.
Liu, Jian, Juuti, Mika, Lu, Yao, Asokan, Nadarajah, Oblivious neural network predictions via minionn transformations. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, 619–631.
Long, Yunhui, Bindschaedler, Vincent, Wang, Lei, Bu, Diyue, Wang, Xiaofeng, Tang, Haixu, Gunter, Carl A., Chen, Kai, Understanding membership inferences on well-generalized learning models. CoRR arXiv:1802.04889 [abs], 2018.
Long, Yunhui, Wang, Boxin, Yang, Zhuolin, Kailkhura, Bhavya, Zhang, Aston, Gunter, Carl, Li, Bo, G-pate: scalable differentially private data generator via private aggregation of teacher discriminators. Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J. Wortman, (eds.) Advances in Neural Information Processing Systems, vol. 34, 2021, Curran Associates, Inc., 2965–2977.
Luo, Xinjian, Wu, Yuncheng, Xiao, Xiaokui, Ooi, Beng Chin, Feature inference attack on model predictions in vertical federated learning. 2021 IEEE 37th International Conference on Data Engineering (ICDE), 2021, IEEE, 181–192.
Machanavajjhala, Ashwin, Gehrke, Johannes, Kifer, Daniel, Venkitasubramaniam, Muthuramakrishna, L-diversity: privacy beyond k-anonymity. 22nd International Conference on Data Engineering (ICDE'06), 2006, 24–36.
Madaio, Michael A., Stark, Luke, Wortman Vaughan, Jennifer, Wallach, Hanna, Co-designing checklists to understand organizational challenges and opportunities around fairness in ai. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020, 1–14.
Mamoshina, Polina, Vieira, Armando, Putin, Evgeny, Zhavoronkov, Alex, Applications of deep learning in biomedicine. Mol. Pharm. 13:5 (2016), 1445–1454.
Mannino, Miro, Abouzied, Azza, Is this real? Generating synthetic data that looks real. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, UIST '19, 2019, Association for Computing Machinery, New York, NY, USA, 549–561.
Marc, Tilen, Stopar, Miha, Hartman, Jan, Bizjak, Manca, Modic, Jolanda, Privacy-enhanced machine learning with functional encryption. European Symposium on Research in Computer Security, 2019, Springer, 3–21.
Martins, Paulo, Sousa, Leonel, Mariano, Artur, A survey on fully homomorphic encryption: an engineering perspective. ACM Comput. Surv. 50:6 (2017), 1–33.
Mazzone, Federico, van den Heuvel, Leander, Huber, Maximilian, Verdecchia, Cristian, Everts, Maarten, Hahn, Florian, Peter, Andreas, Repeated knowledge distillation with confidence masking to mitigate membership inference attacks. Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, AISec'22, 2022, Association for Computing Machinery, New York, NY, USA, 13–24.
McMahan, H. Brendan, Moore, Eider, Ramage, Daniel, Hampson, Seth, y Arcas, Blaise Aguera, Communication-efficient learning of deep networks from decentralized data. Artificial Intelligence and Statistics, 2017, PMLR, 1273–1282.
Melis, Luca, Song, Congzheng, De Cristofaro, Emiliano, Shmatikov, Vitaly, Inference attacks against collaborative learning. CoRR arXiv:1805.04049 [abs], 2018.
Menon, Aditya Krishna, Williamson, Robert C., The cost of fairness in binary classification. Conference on Fairness, Accountability and Transparency, 2018, PMLR, 107–118.
Michalevsky, Yan, Joye, Marc, Decentralized policy-hiding abe with receiver privacy. European Symposium on Research in Computer Security, 2018, Springer, 548–567.
Michels, Felix, Uelwer, Tobias, Upschulte, Eric, Harmeling, Stefan, On the vulnerability of capsule networks to adversarial attacks. CoRR arXiv:1906.03612 [abs], 2019.
Mihara, Kentaro, Yamaguchi, Ryohei, Mitsuishi, Miguel, Maruyama, Yusuke, Neural network training with homomorphic encryption. arXiv preprint arXiv:2012.13552, 2020.
Milli, Smitha, Schmidt, Ludwig, Dragan, Anca D., Hardt, Moritz, Model reconstruction from model explanations. Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* '19, 2019, Association for Computing Machinery, New York, NY, USA, 1–9.
Mishra, Pratyush, Lehmkuhl, Ryan, Srinivasan, Akshayaram, Zheng, Wenting, Popa, Raluca Ada, Delphi: a cryptographic inference service for neural networks. 29th USENIX Security Symposium (USENIX Security 20), 2020, 2505–2522.
Mo, Fan, Haddadi, Hamed, Katevas, Kleomenis, Marin, Eduard, Perino, Diego, Kourtellis, Nicolas, Ppfl: privacy-preserving federated learning with trusted execution environments. Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, 2021, 94–108.
Mo, Ran, Liu, Jianfeng, Yu, Wentao, Jiang, Fu, Gu, Xin, Zhao, Xiaoshuai, Liu, Weirong, Peng, Jun, A differential privacy-based protecting data preprocessing method for big data mining. 2019 18th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/13th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), 2019, 693–699.
Mohassel, Payman, Rindal, Peter, Aby3: a mixed protocol framework for machine learning. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, 35–52.
Mohassel, Payman, Rosulek, Mike, Zhang, Ye, Fast and secure three-party computation: the garbled circuit approach. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, 2015, 591–602.
Mulligan, Deirdre K., Kroll, Joshua A., Kohli, Nitin, Wong, Richmond Y., This thing called fairness: disciplinary confusion realizing a value in technology. Proc. ACM Hum.-Comput. Interact., 3(CSCW), nov 2019.
Muñoz-González, Luis, Biggio, Battista, Demontis, Ambra, Paudice, Andrea, Wongrassamee, Vasin, Lupu, Emil C., Roli, Fabio, Towards poisoning of deep learning algorithms with back-gradient optimization. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017, 27–38.
Nandakumar, Karthik, Ratha, Nalini, Pankanti, Sharath, Halevi, Shai, Towards deep neural network training on encrypted data. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, June 2019.
Narayanan, Arvind, Shmatikov, Vitaly, Robust de-anonymization of large sparse datasets: a decade later. May 21, 2019.
Nasr, Milad, Shokri, Reza, et al. Improving deep learning with differential privacy using gradient encoding and denoising. arXiv preprint arXiv:2007.11524, 2020.
Nasr, Milad, Shokri, Reza, Houmansadr, Amir, Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning. 2019 IEEE Symposium on Security and Privacy (SP), 2019, 739–753.
Nergiz, Mehmet Ercan, Atzori, Maurizio, Clifton, Chris, Hiding the presence of individuals from shared databases. Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data, 2007, 665–676.
Neubauer, Thomas, Heurix, Johannes, A methodology for the pseudonymization of medical data. Int. J. Med. Inform. 80:3 (2011), 190–204.
Ni, Chunchun, Cang, Li Shan, Gope, Prosanta, Min, Geyong, Data anonymization evaluation for big data and iot environment. Inf. Sci. 605 (2022), 381–392.
Nielsen, Jesper Buus, Nordholt, Peter Sebastian, Orlandi, Claudio, Burra, Sai Sheshank, A new approach to practical active-secure two-party computation. Annual Cryptology Conference, 2012, Springer, 681–700.
Nik Aznan, Nik Khadijah, Atapour-Abarghouei, Amir, Bonner, Stephen, Connolly, Jason D., Al Moubayed, Noura, Breckon, Toby P., Simulating brain signals: creating synthetic eeg data via neural-based generative models for improved ssvep classification. 2019 International Joint Conference on Neural Networks (IJCNN), 2019, 1–8.
Nikolaenko, Valeria, Weinsberg, Udi, Ioannidis, Stratis, Joye, Marc, Boneh, Dan, Taft, Nina, Privacy-preserving ridge regression on hundreds of millions of records. 2013 IEEE Symposium on Security and Privacy, 2013, IEEE, 334–348.
Nissenbaum, Helen, Privacy as contextual integrity. Wash. L. Rev., 79, 2004, 119.
Nissim, Kobbi, Wood, Alexandra, Is privacy privacy?. Philos. Trans. R. Soc. A, Math. Phys. Eng. Sci., 376(2128), 2018, 20170358.
Obla, Srinath, Gong, Xinghan, Aloufi, Asma, Hu, Peizhao, Takabi, Daniel, Effective activation functions for homomorphic evaluation of deep neural networks. IEEE Access 8 (2020), 153098–153112.
Paillier, Pascal, Public-key cryptosystems based on composite degree residuosity classes. Advances in Cryptology - EUROCRYPT '99, International Conference on the Theory and Application of Cryptographic Techniques, Lecture Notes in Computer Science, vol. 1592, 1999, Springer, 223–238.
Papernot, Nicolas, McDaniel, Patrick, Sinha, Arunesh, Wellman, Michael P., SoK: security and privacy in machine learning. Proc. of the 2018 IEEE European Symposium on Security and Privacy (EuroS&P), 2018.
Park, Saerom, Byun, Junyoung, Lee, Joohee, Privacy-preserving fair learning of support vector machine with homomorphic encryption. Proceedings of the ACM Web Conference 2022, WWW '22, 2022, Association for Computing Machinery, New York, NY, USA, 3572–3583.
Phan, Nhathai, Wu, Xintao, Hu, Han, Dou, Dejing, Adaptive Laplace mechanism: differential privacy preservation in deep learning. Proceedings - 17th IEEE International Conference on Data Mining, ICDM 2017, Dec 2017, 385–394.
Phong, Le Trieu, Aono, Yoshinori, Hayashi, Takuya, Wang, Lihua, Moriai, Shiho, Privacy-preserving deep learning via additively homomorphic encryption. IEEE Trans. Inf. Forensics Secur. 13:5 (2018), 1333–1345.
Prasser, Fabian, Eicher, Johanna, Spengler, Helmut, Bild, Raffael, Kuhn, Klaus A., Flexible data anonymization using arx—current status and challenges ahead. Softw. Pract. Exp. 50:7 (2020), 1277–1304.
Qasim, Ahmad B., Ezhov, Ivan, Shit, Suprosanna, Schoppe, Oliver, Paetzold, Johannes C., Sekuboyina, Anjany, Kofler, Florian, Lipkova, Jana, Li, Hongwei, Menze, Bjoern, Red-gan: attacking class imbalance via conditioned generation. Yet another medical imaging perspective. Arbel, Tal, Ben Ayed, Ismail, de Bruijne, Marleen, Descoteaux, Maxime, Lombaert, Herve, Pal, Christopher, (eds.) Proceedings of the Third Conference on Medical Imaging with Deep Learning, 06–08 Jul 2020, Proceedings of Machine Learning Research, vol. 121, PMLR, 655–668.
Rathee, Deevashwer, Rathee, Mayank, Goli, Rahul Kranti Kiran, Gupta, Divya, Sharma, Rahul, Chandran, Nishanth, Rastogi, Aseem, Sirnn: a math library for secure rnn inference. 2021 IEEE Symposium on Security and Privacy (SP), 2021, IEEE, 1003–1020.
Ren, Hanchi, Deng, Jingjing, Xie, Xianghua, Grnn: generative regression neural network—a data leakage attack for federated learning. ACM Trans. Intell. Syst. Technol., 13(4), May 2022.
Riazi, Mohammad Sadegh, Weinert, Christian, Tkachenko, Oleksandr, Songhori, Ebrahim M., Schneider, Thomas, Koushanfar, Farinaz, Chameleon: a hybrid secure computation framework for machine learning applications. Proceedings of the 2018 on Asia Conference on Computer and Communications Security, 2018, 707–721.
Rouhani, Bita Darvish, Riazi, M. Sadegh, Koushanfar, Farinaz, Deepsecure: scalable provably-secure deep learning. Proceedings of the 55th Annual Design Automation Conference, 2018, 1–6.
Rudin, Cynthia, Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1:5 (2019), 206–215.
Sabay, Alfeo, Harris, Laurie, Bejugama, Vivek, Jaceldo-Siegl, Karen, Overcoming small data limitations in heart disease prediction by using surrogate data. SMU Data Sci. Rev., 1(3), 2018, 12.
Salem, Ahmed, Bhattacharya, Apratim, Backes, Michael, Fritz, Mario, Zhang, Yang, Updates-Leak: data set inference and reconstruction attacks in online learning. 29th USENIX Security Symposium (USENIX Security 20), August 2020, USENIX Association, 1291–1308.
Sayyad, Suhel, Privacy preserving deep learning using secure multiparty computation. 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), 2020, 139–142.
Microsoft SEAL (release 4.0). https://github.com/Microsoft/SEAL, March 2022, Microsoft Research, Redmond, WA.
Shah, Muhammad A., Szurley, Joseph, Mueller, Markus, Mouchtaris, Athanasios, Droppo, Jasha, Evaluating the vulnerability of end-to-end automatic speech recognition models to membership inference attacks. Proc. Interspeech 2021, 2021, 891–895.
Shamir, Adi, How to share a secret. Commun. ACM 22:11 (1979), 612–613.
Shokri, Reza, Shmatikov, Vitaly, Privacy-preserving deep learning. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS '15, 2015, Association for Computing Machinery, New York, NY, USA, 1310–1321.
Shokri, Reza, Stronati, Marco, Song, Congzheng, Shmatikov, Vitaly, Membership inference attacks against machine learning models. 2017 IEEE Symposium on Security and Privacy (SP), 2017, IEEE, 3–18.
Song, Congzheng, Shmatikov, Vitaly, Auditing data provenance in text-generation models. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, 2019, Association for Computing Machinery, New York, NY, USA, 196–206.
Sun, Yuwei, Chong, Ng S.T., Ochiai, Hideya, Information stealing in federated learning systems based on generative adversarial networks. 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2021, 2749–2754.
Surden, Harry, Machine learning and law. Wash. L. Rev., 89, 2014, 87.
Sweeney, Latanya, k-anonymity: a model for protecting privacy. Int. J. Uncertain. Fuzziness Knowl.-Based Syst. 10:05 (2002), 557–570.
TensorFlow team, TensorFlow Privacy: a Python library that includes implementations of TensorFlow optimizers for training machine learning models with differential privacy. https://github.com/tensorflow/privacy, Aug 2019.
Thakkar, Om Dipakbhai, Ramaswamy, Swaroop, Mathews, Rajiv, Beaufays, Francoise, Understanding unintended memorization in language models under federated learning. Proceedings of the Third Workshop on Privacy in Natural Language Processing, Jun 2021, Association for Computational Linguistics, Online, 1–10.
Tramèr, Florian, Boneh, Dan, Slalom: fast, verifiable and private execution of neural networks in trusted hardware. arXiv preprint arXiv:1806.03287, 2018.
Tramèr, Florian, Shokri, Reza, San Joaquin, Ayrton, Le, Hoang, Jagielski, Matthew, Hong, Sanghyun, Carlini, Nicholas, Truth serum: poisoning machine learning models to reveal their secrets. arXiv preprint arXiv:2204.00032, 2022.
Tramèr, Florian, Zhang, Fan, Juels, Ari, Reiter, Michael K., Ristenpart, Thomas, Stealing machine learning models via prediction APIs. 25th USENIX Security Symposium (USENIX Security 16), 2016, 601–618.
Truex, Stacey, Liu, Ling, Gursoy, Mehmet Emre, Yu, Lei, Wei, Wenqi, Towards demystifying membership inference attacks. CoRR arXiv:1807.09173 [abs], 2018.
Vila, Laura Cross, Escolano, Carlos, Fonollosa, José A.R., Costa-Jussa, Marta R., End-to-end speech translation with the transformer. IberSPEECH, 2018, 60–63.
Wondracek, Gilbert, Holz, Thorsten, Kirda, Engin, Kruegel, Christopher, A practical attack to de-anonymize social network users. Proceedings of the 2010 IEEE Symposium on Security and Privacy, SP '10, 2010, IEEE Computer Society, USA, 223–238.
Wu, Bang, Yang, Xiangwen, Pan, Shirui, Yuan, Xingliang, Model extraction attacks on graph neural networks: taxonomy and realization. CoRR arXiv:2010.12751 [abs], 2020.
Wu, Zuxuan, Lim, Ser-Nam, Davis, Larry S., Goldstein, Tom, Making an invisibility cloak: real world adversarial attacks on object detectors. European Conference on Computer Vision, 2020, Springer, 1–17.
Xu, Runhua, Baracaldo, Nathalie, Zhou, Yi, Anwar, Ali, Ludwig, Heiko, Hybridalpha: an efficient approach for privacy-preserving federated learning. Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019, 13–23.
Xu, Runhua, Joshi, James B.D., Li, Chao, Cryptonn: training neural networks over encrypted data. 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), 2019, IEEE, 1199–1209.
Yang, Chao-Han Huck, Siniscalchi, Sabato Marco, Lee, Chin-Hui, Pate-aae: incorporating adversarial autoencoder into private aggregation of teacher ensembles for spoken command classification. Interspeech, 2021.
Yang, Haomiao, Ge, Mengyu, Xiang, Kunlan, Li, Jingwei, Using highly compressed gradients in federated learning for data reconstruction attacks. IEEE Trans. Inf. Forensics Secur. 18 (2023), 818–830.
Yang, Kang, Weng, Chenkai, Lan, Xiao, Zhang, Jiang, Wang, Xiao, Ferret: fast extension for correlated ot with small communication. Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020, 1607–1626.
Yang, Mengwei, Song, Linqi, Xu, Jie, Li, Congduan, Tan, Guozhen, The tradeoff between privacy and accuracy in anomaly detection using federated xgboost. arXiv preprint arXiv:1907.07157, 2019.
Yang, Ziqi, Shao, Bin, Xuan, Bohan, Chang, Ee-Chien, Zhang, Fan, Defending model inversion and membership inference attacks via prediction purification. CoRR arXiv:2005.03915 [abs], 2020.
Yang, Ziqi, Zhang, Jiyi, Chang, Ee-Chien, Liang, Zhenkai, Neural network inversion in adversarial setting via background knowledge alignment. Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, CCS '19, 2019, Association for Computing Machinery, New York, NY, USA, 225–240.
Yao, Andrew C., Protocols for secure computations. 23rd Annual Symposium on Foundations of Computer Science (sfcs 1982), 1982, IEEE, 160–164.
Yao, Lin, Wang, Xue, Hu, Haibo, Wu, Guowei, A utility-aware anonymization model for multiple sensitive attributes based on association concealment. IEEE Trans. Dependable Secure Comput., 2023, 1–12.
Ye, Dongdong, Yu, Rong, Pan, Miao, Han, Zhu, Federated learning in vehicular edge computing: a selective model aggregation approach. IEEE Access 8 (2020), 23920–23935.
Yeom, Samuel, Fredrikson, Matt, Jha, Somesh, The unintended consequences of overfitting: training data inference attacks. CoRR arXiv:1709.01604 [abs], 2017.
Yin, Hongxu, Mallya, Arun, Vahdat, Arash, Alvarez, Jose M., Kautz, Jan, Molchanov, Pavlo, See through gradients: image batch recovery via gradinversion. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, 16337–16346.
Zheng, Wenting, Popa, Raluca Ada, Gonzalez, Joseph E., Stoica, Ion, Helen: maliciously secure coopetitive learning for linear models. 2019 IEEE Symposium on Security and Privacy (SP), 2019, IEEE, 724–738.
Zhu, Ligeng, Liu, Zhijian, Han, Song, Deep leakage from gradients. Adv. Neural Inf. Process. Syst., 32, 2019.
Zou, Yang, Zhang, Zhikun, Backes, Michael, Zhang, Yang, Privacy analysis of deep learning in the wild: membership inference attacks against transfer learning. CoRR arXiv:2009.04872 [abs], 2020.