Paper published in a journal (Scientific congresses, symposiums and conference proceedings)
ProPILE: Probing Privacy Leakage in Large Language Models
Kim, Siwon; Yun, Sangdoo; Lee, Hwaran et al.
2023, in Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Peer reviewed
 

Files


Full Text: 2307.01881.pdf, Author preprint (4.09 MB)


Details



Abstract :
[en] The rapid advancement and widespread use of large language models (LLMs) have raised significant concerns regarding the potential leakage of personally identifiable information (PII). These models are often trained on vast quantities of web-collected data, which may inadvertently include sensitive personal data. This paper presents ProPILE, a novel probing tool designed to empower data subjects, the owners of the PII, with awareness of potential PII leakage from LLM-based services. ProPILE lets data subjects formulate prompts based on their own PII to evaluate the level of privacy intrusion in LLMs. We demonstrate its application on the OPT-1.3B model trained on the publicly available Pile dataset. We show how hypothetical data subjects may assess the likelihood that their PII, if included in the Pile dataset, is revealed by the model. ProPILE can also be leveraged by LLM service providers to evaluate their own levels of PII leakage with more powerful prompts specifically tuned for their in-house models. This tool represents a pioneering step towards empowering data subjects with awareness of and control over their own data on the web.
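The black-box probing idea in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the function names, the prompt template, and the stub "model" are all assumptions, and a real probe would wrap an actual LLM completion API in place of the stub.

```python
def build_probe_prompts(pii):
    """Build query prompts from a data subject's own PII.

    `pii` maps attribute names (e.g. 'name', 'email') to known values.
    Each prompt reveals all but one attribute and asks the model to
    complete the withheld target attribute.
    """
    items = list(pii.items())
    prompts = []
    for i, (target_attr, target_val) in enumerate(items):
        known = ", ".join(f"{k}: {v}" for j, (k, v) in enumerate(items) if j != i)
        prompt = f"{known}. The {target_attr} of this person is"
        prompts.append((prompt, target_attr, target_val))
    return prompts

def leakage_rate(pii, generate):
    """Fraction of probes whose completion contains the withheld PII value.

    `generate` is any text-completion callable (e.g. a wrapper around an
    LLM inference API); it is treated purely as a black box here.
    """
    probes = build_probe_prompts(pii)
    hits = sum(target_val in generate(prompt) for prompt, _, target_val in probes)
    return hits / len(probes)

# Toy usage with a stub "model" that leaks only the email attribute.
subject = {"name": "Jane Doe", "email": "jane@example.org", "phone": "555-0100"}
stub = lambda prompt: "jane@example.org" if "email" in prompt else "unknown"
print(leakage_rate(subject, stub))  # 1 of 3 withheld attributes leaked
```

A data subject would run such probes against the service's completion endpoint and read the resulting rate as a rough signal of whether their PII was memorized; the paper's actual templates and metrics are more refined than this sketch.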
Research center :
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal - Security, Reasoning & Validation
Disciplines :
Computer science
Author, co-author :
Kim, Siwon
Yun, Sangdoo
Lee, Hwaran
Gubri, Martin; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SnT) > SerVal
Yoon, Sungroh
Oh, Seong Joon
External co-authors :
yes
Language :
English
Title :
ProPILE: Probing Privacy Leakage in Large Language Models
Publication date :
December 2023
Event name :
The Thirty-Seventh Annual Conference on Neural Information Processing Systems (NeurIPS 2023)
Event date :
from 11-12-2023 to 14-12-2023
Audience :
International
Journal title :
Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Peer reviewed :
Peer reviewed
Focus Area :
Security, Reliability and Trust
FnR Project :
FNR12669767 - Testing Self-learning Systems, 2018 (01/09/2019-31/08/2022) - Yves Le Traon
Available on ORBilu :
since 22 September 2023

