References of "Pejo, Balazs 50009197"
Integrity and Confidentiality Problems of Outsourcing
Pejo, Balazs UL

Doctoral thesis (2019)

Cloud services enable companies to outsource data storage and computation. Resource-limited entities can use this pay-per-use model to outsource large-scale computational tasks to a cloud service provider. Nonetheless, this on-demand network access raises issues of security and privacy, which have become primary concerns in recent decades. In this dissertation, we tackle these problems from two perspectives: data confidentiality and result integrity. Concerning data confidentiality, we systematically classify the relaxations of the most widely used privacy-preserving technique, Differential Privacy. We also establish a partial ordering of strength between these relaxations and record whether they satisfy additional desirable properties, such as composition and the privacy axioms. Further tackling confidentiality, we design a Collaborative Learning game, which helps data holders decide how to set the privacy parameter based on economic considerations. We also define the Price of Privacy to measure the overall degradation of accuracy resulting from the applied privacy protection. Moreover, we develop a procedure called Self-Division, which bridges the gap between the game and real-world scenarios. Concerning result integrity, we formulate a Stackelberg game between an outsourcer and an outsourcee where no absolute correctness is required. We provide the optimal strategies for the players and perform a sensitivity analysis. Furthermore, we extend the game by allowing the outsourcer not to verify, and show its Nash Equilibria. Regarding integrity verification, we analyze and compare two verification methods for Collaborative Filtering algorithms: the splitting approach and the auxiliary-data approach. We observe that neither method fully solves the problem. Hence, we propose a solution which, besides outperforming both, is also applicable to both stages of the algorithm.
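As background for the privacy parameter that the Collaborative Learning game helps calibrate, the sketch below shows the standard Laplace mechanism for epsilon-differential privacy. It is not taken from the dissertation; the function name, the count-query example, and the chosen epsilon values are illustrative assumptions, shown only to make the noise-versus-accuracy trade-off concrete.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy via Laplace noise.

    Smaller epsilon -> more noise -> stronger privacy but lower accuracy;
    this is the trade-off that the privacy parameter in the game controls.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: a count query with sensitivity 1, released at two privacy levels.
print(laplace_mechanism(100.0, sensitivity=1.0, epsilon=0.1))  # very noisy (strong privacy)
print(laplace_mechanism(100.0, sensitivity=1.0, epsilon=5.0))  # close to 100 (weak privacy)
```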

Peer Reviewed
Together or Alone: The Price of Privacy in Collaborative Learning
Pejo, Balazs UL; Tang, Qiang; Biczók, Gergely

in Proceedings on Privacy Enhancing Technologies (2019, July)

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have the necessary data to train a reasonably accurate model. For such organizations, a realistic solution is to train their machine learning models based on their joint dataset (which is a union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain in their collaboration. In this paper, by focusing on a two-player setting, we model the collaborative training process as a two-player game where each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach for measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player. Using recommendation systems as our main use case, we demonstrate how two players can make practical use of the proposed theoretical framework, including setting up the parameters and approximating the non-trivial Nash Equilibrium.
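As a rough illustration of the Price of Privacy metric, the sketch below reads it as the relative loss of accuracy gain once privacy protection is applied. The paper's exact normalization may differ; the function name and the RMSE-based example numbers are assumptions for illustration, not values from the paper.

```python
def price_of_privacy(gain_no_privacy: float, gain_with_privacy: float) -> float:
    """Relative degradation of the collaboration's accuracy gain due to privacy protection.

    Illustrative reading of Price of Privacy: 0 means privacy protection costs
    nothing, 1 means it wipes out the accuracy gain entirely.
    """
    if gain_no_privacy == 0:
        raise ValueError("baseline accuracy gain must be non-zero")
    return 1.0 - gain_with_privacy / gain_no_privacy

# Example: collaboration improves RMSE-based accuracy by 0.08 without noise,
# but only by 0.05 once both players add differentially private noise.
print(price_of_privacy(0.08, 0.05))  # 0.375
```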

Peer Reviewed
The Price of Privacy in Collaborative Learning
Pejo, Balazs UL; Tang, Qiang UL; Biczók, Gergely

Poster (2018, October)

Machine learning algorithms have reached mainstream status and are widely deployed in many applications. The accuracy of such algorithms depends significantly on the size of the underlying training dataset; in reality, a small or medium-sized organization often does not have enough data to train a reasonably accurate model. For such organizations, a realistic solution is to train machine learning models based on a joint dataset (which is a union of the individual ones). Unfortunately, privacy concerns prevent them from straightforwardly doing so. While a number of privacy-preserving solutions exist for collaborating organizations to securely aggregate the parameters in the process of training the models, we are not aware of any work that provides a rational framework for the participants to precisely balance the privacy loss and accuracy gain in their collaboration. In this paper, we model the collaborative training process as a two-player game where each player aims to achieve higher accuracy while preserving the privacy of its own dataset. We introduce the notion of Price of Privacy, a novel approach for measuring the impact of privacy protection on accuracy in the proposed framework. Furthermore, we develop a game-theoretical model for different player types, and then either find or prove the existence of a Nash Equilibrium with regard to the strength of privacy protection for each player.

Peer Reviewed
To Cheat or Not to Cheat - A Game-Theoretic Analysis of Outsourced Computation Verification
Pejo, Balazs UL; Tang, Qiang

in Fifth ACM International Workshop on Security in Cloud Computing, Abu Dhabi (2017, April 02)

In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework which helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal setting to the parameters of the model.
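The paper derives optimal strategies for its two Stackelberg games; as a much-simplified illustration of the leader-follower logic only, the toy sketch below computes the smallest verification probability that deters a rational outsourcee from cheating. The payoff structure (`saved_cost`, `fine`) is an assumption for illustration and does not reproduce the paper's actual model.

```python
def min_deterring_verification_prob(saved_cost: float, fine: float) -> float:
    """Smallest verification probability that makes honest computation the
    outsourcee's best response, in a toy inspection-game simplification.

    Assumed (illustrative, not the paper's model): cheating saves `saved_cost`
    per task, but a cheated task that gets verified incurs `fine`. Cheating is
    unprofitable when t * fine >= saved_cost, i.e. t >= saved_cost / fine.
    """
    if fine <= 0:
        raise ValueError("deterrence requires a positive fine")
    return min(1.0, saved_cost / fine)

# Example: honest computation costs 1 unit per task, the contractual fine is 20,
# so verifying 5% of the tasks already removes the incentive to cheat.
print(min_deterring_verification_prob(saved_cost=1.0, fine=20.0))  # 0.05
```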

Peer Reviewed
Game-Theoretic Framework for Integrity Verification in Computation Outsourcing
Pejo, Balazs UL; Tang, Qiang

Poster (2016, November 03)

In the cloud computing era, in order to avoid computational burdens, many organizations tend to outsource their computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we develop a game-theoretic framework which helps the outsourcer maximize its payoff while ensuring the desired level of integrity for the outsourced computation. We define two Stackelberg games and analyze the sensitivity of the optimal setting to the parameters of the model.

Peer Reviewed
Protect both Integrity and Confidentiality in Outsourcing Collaborative Filtering Computations
Pejo, Balazs UL; Tang, Qiang; Wang, Husen

Scientific Conference (2016, June 27)

In the cloud computing era, in order to avoid computational burdens, many recommendation service providers tend to outsource their collaborative filtering computations to third-party cloud servers. To protect service quality, the integrity of computation results needs to be guaranteed. In this paper, we analyze two integrity verification approaches by Vaidya et al. and evaluate their performance. In particular, we analyze the verification-via-auxiliary-data approach, which is only briefly mentioned in the original paper, and present experimental results for it. We then propose a new solution that outsources all computations of the weighted Slope One algorithm in a two-server setting, and provide experimental results.
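The weighted Slope One algorithm whose computations are outsourced here is a standard collaborative filtering method; the sketch below shows only its plaintext prediction step, assuming a simple dict-of-dicts rating structure. It deliberately omits the paper's contribution (the two-server outsourcing and the integrity verification) and serves only as a reference for what is being outsourced.

```python
from collections import defaultdict

def weighted_slope_one_predict(ratings, user, target_item):
    """Predict the rating of `target_item` for `user` with weighted Slope One.

    `ratings` maps user -> {item: rating}. The average deviation between the
    target item and each co-rated item is weighted by the number of co-raters,
    as in Lemire and Maclachlan's original formulation.
    """
    dev_sum = defaultdict(float)   # sum of (r_target - r_other) over co-raters
    count = defaultdict(int)       # number of co-raters per other item
    for items in ratings.values():
        if target_item in items:
            for other, r in items.items():
                if other != target_item:
                    dev_sum[other] += items[target_item] - r
                    count[other] += 1

    numerator = denominator = 0.0
    for other, r in ratings[user].items():
        if other in count and other != target_item:
            dev = dev_sum[other] / count[other]
            numerator += (dev + r) * count[other]
            denominator += count[other]
    return numerator / denominator if denominator else None

# Toy example with three users and three items.
ratings = {
    "alice": {"A": 5, "B": 3, "C": 2},
    "bob":   {"A": 3, "B": 4},
    "carol": {"B": 2, "C": 5},
}
print(weighted_slope_one_predict(ratings, "bob", "C"))  # about 3.33
```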
