Pascoal, Túlio
in Proceedings on Privacy Enhancing Technologies (2023)

Genome-wide Association Studies (GWASes) identify genomic variations that are statistically associated with a trait, such as a disease, in a group of individuals. Unfortunately, careless sharing of GWAS statistics might give rise to privacy attacks. Several works have attempted to reconcile secure processing with privacy-preserving releases of GWASes. However, we highlight that these approaches remain vulnerable if GWASes utilize overlapping sets of individuals and genomic variations. In such conditions, we show that even when relying on state-of-the-art techniques for protecting releases, an adversary could reconstruct the genomic variations of up to 28.6% of participants, and that the released statistics of up to 92.3% of the genomic variations would enable membership inference attacks. We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes. I-GWAS continuously releases privacy-preserving and noise-free GWAS results as new genomes become available.

; ; et al
in Proceedings on Privacy Enhancing Technologies (2022), 2022(2)

Damodaran, Aditya Shyam Shankar
in Proceedings on Privacy Enhancing Technologies (2021, July), 2021(3), 95-121

Loyalty programs allow vendors to profile buyers based on their purchase histories, which can reveal privacy-sensitive information.
Existing privacy-friendly loyalty programs force buyers to choose whether their purchases are linkable. Moreover, vendors receive more purchase data than is required for profiling. We propose a privacy-preserving loyalty program where purchases are always unlinkable, yet a vendor can profile a buyer based on her purchase history, which remains hidden from the vendor. Our protocol is based on a new building block, an unlinkable updatable hiding database (HD), which we define and construct. HD allows the vendor to initialize and update databases stored by buyers that contain their purchase histories and their accumulated loyalty points. Updates are unlinkable and, at each update, the database is hidden from the vendor. Buyers can neither modify the database nor use old versions of it. Our construction for HD is practical for large databases.

Pascoal, Túlio
in Proceedings on Privacy Enhancing Technologies (2021)

Genome-Wide Association Studies (GWAS) identify the genomic variations that are statistically associated with a particular phenotype (e.g., a disease). The confidence in GWAS results increases with the number of genomes analyzed, which encourages federated computations in which biocenters periodically share the genomes they have sequenced. However, for economic and legal reasons, this collaboration will only happen if biocenters cannot learn each other's data. In addition, GWAS releases should not jeopardize the privacy of the individuals whose genomes are used. We introduce DyPS, a novel framework to conduct dynamic privacy-preserving federated GWAS. DyPS leverages a Trusted Execution Environment to secure dynamic GWAS computations.
Moreover, DyPS uses a scaling mechanism to speed up the release of GWAS results as the number of genomes used in the study evolves, even if individuals retract their participation consent. Lastly, DyPS tolerates up to all-but-one colluding biocenters without privacy leaks. We implemented and extensively evaluated DyPS in several scenarios involving more than 6 million simulated genomes and up to 35,000 real genomes. Our evaluation shows that DyPS updates test statistics with a reasonable additional request-processing delay (11% longer) compared to an approach that would update them with minimal delay but would leave 8% of the genomes unprotected. In addition, DyPS releases the same aggregate statistics as a static release (i.e., one made at the end of the study), but can produce up to 2.6 times more statistical information during earlier dynamic releases. We also show that DyPS can support a larger number of genomes and SNP positions without any significant performance penalty.

Chen, Xihui
in Proceedings on Privacy Enhancing Technologies (2020), 2020(4), 131-152

We present a novel method for publishing differentially private synthetic attributed graphs. Our method makes it possible, for the first time, to publish synthetic graphs that simultaneously preserve the structural properties, user attributes and community structure of the original graph. Our proposal relies on CAGM, a new community-preserving generative model for attributed graphs. We equip CAGM with efficient methods for attributed graph sampling and parameter estimation.
For the latter, we introduce differentially private computation methods, which allow us to release community-preserving synthetic attributed social graphs with a strong formal privacy guarantee. Through comprehensive experiments, we show that our new model outperforms its most relevant counterparts in synthesising differentially private attributed social graphs that preserve the community structure of the original graph, as well as its degree sequences and clustering coefficients.

; ; et al
in Proceedings on Privacy Enhancing Technologies (2020), 2020(3), 62-152

; ; et al
in Proceedings on Privacy Enhancing Technologies (2020), 2020(1), 165-194

Pejo, Balazs
in Proceedings on Privacy Enhancing Technologies (2019, July)

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in practice, a small or medium-sized organization often does not have the necessary data to train a reasonably accurate model. For such organizations, a realistic solution is to train their machine learning models on a joint dataset (the union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain of their collaboration.
In this paper, focusing on a two-player setting, we model the collaborative training process as a two-player game in which each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach to measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player. Using recommendation systems as our main use case, we demonstrate how two players can make practical use of the proposed theoretical framework, including setting up the parameters and approximating the non-trivial Nash Equilibrium.

; ; Schiffner, Stefan
in Proceedings on Privacy Enhancing Technologies (2019), 2019(2), 105-125

Many anonymous communication networks (ACNs) with different privacy goals have been developed. Still, there are no accepted formal definitions of privacy goals, and ACNs often define their goals ad hoc. However, the formal definition of privacy goals benefits the understanding and comparison of different flavors of privacy and, as a result, the improvement of ACNs. In this paper, we work towards defining and comparing privacy goals by formalizing them as privacy notions and identifying their building blocks. For any pair of notions, we prove whether one is strictly stronger and, if so, which. Hence, we are able to present a complete hierarchy. Using this rigorous comparison between notions, we resolve inconsistencies between existing works and improve the understanding of privacy goals.
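The pairwise comparison of privacy notions described in the entry above can be illustrated with a toy model; this sketch represents each notion simply as the set of observables it hides from the adversary, which is an assumption for illustration, not the game-based formalism of the paper.

```python
def strictly_stronger(hides_a, hides_b):
    """Toy comparison: notion A is strictly stronger than notion B
    if A hides every observable that B hides, plus at least one more.
    (Illustrative model only; the paper uses game-based definitions.)"""
    return hides_b < hides_a  # proper-subset test on Python sets

def hierarchy(notions):
    """Return all strictly-stronger pairs among a dict mapping
    notion name -> set of hidden observables."""
    return sorted((a, b)
                  for a in notions for b in notions
                  if strictly_stronger(notions[a], notions[b]))

# Hypothetical example notions (names and observables invented here).
notions = {
    "sender-anonymity": {"sender"},
    "relationship-anonymity": {"sender-receiver-link"},
    "full-anonymity": {"sender", "receiver", "sender-receiver-link"},
}
```

In this toy model, "full-anonymity" dominates both other notions, while "sender-anonymity" and "relationship-anonymity" are incomparable, mirroring the kind of partial order the paper establishes rigorously.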
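Several of the GWAS entries above warn that released allele statistics enable membership inference. The classic attack style (due to Homer et al.) can be sketched as follows; the statistic and the example frequencies are illustrative assumptions, not the exact attacks evaluated in those papers.

```python
def membership_score(genome, study_freqs, population_freqs):
    """Homer-style test statistic: for each SNP, compare how closely a
    candidate's allele frequency (0, 0.5, or 1 for a diploid genotype)
    tracks the released study frequencies versus a public reference
    population. Markedly positive scores suggest study membership."""
    return sum(abs(g - pop) - abs(g - study)
               for g, study, pop in zip(genome, study_freqs, population_freqs))

# Hypothetical released statistics for three SNPs.
study = [0.9, 0.1, 0.8]       # allele frequencies released by the study
population = [0.5, 0.5, 0.5]  # public reference frequencies
member = [1.0, 0.0, 1.0]      # genotype resembling the study cohort
outsider = [0.0, 1.0, 0.0]    # genotype resembling neither
```

Here `membership_score(member, ...)` is clearly positive while the outsider's score is clearly negative, which is why frameworks such as I-GWAS and DyPS gate which statistics may be safely released.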
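The two-player game in the Price of Privacy entry above can likewise be illustrated with a toy payoff; the quadratic utility below is an assumed form for the sketch, not the paper's actual model. Each player picks a sharing level in [0, 1]: accuracy gain requires both players to share, while privacy loss grows with one's own sharing. Iterated best response on a grid then approximates a Nash equilibrium.

```python
def approximate_nash(benefit, cost, start=1.0, rounds=50):
    """Iterated best response for the assumed toy utility
    u_i(s_i, s_j) = benefit * s_i * s_j - cost * s_i**2,
    with sharing levels restricted to a 0.01 grid on [0, 1]."""
    grid = [i / 100 for i in range(101)]

    def payoff(own, other):
        # accuracy gain needs both to share; privacy loss is own-only
        return benefit * own * other - cost * own ** 2

    s1 = s2 = start
    for _ in range(rounds):
        s1 = max(grid, key=lambda s: payoff(s, s2))  # player 1 best-responds
        s2 = max(grid, key=lambda s: payoff(s, s1))  # then player 2
    return s1, s2
```

With a large enough benefit-to-cost ratio, full collaboration (1.0, 1.0) is an equilibrium, while starting from no sharing the players remain at (0.0, 0.0), capturing the coordination flavor of the privacy/accuracy trade-off that the paper analyzes formally.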