Article (Scientific journals)
Preserving data privacy in machine learning systems
EL MESTARI, Soumia Zohra; LENZINI, Gabriele; DEMIRCI, Huseyin
2024. In: Computers and Security, 137, p. 103605
Peer reviewed (verified by ORBi)
 

Documents


Full text
1-s2.0-S0167404823005151-main.pdf
Author postprint (1.49 MB), Creative Commons License: Attribution, Non-Commercial
Details



Keywords:
Trustworthy machine learning; Machine learning; Differential privacy; Homomorphic encryption; Functional encryption; Secure multiparty computation; Privacy threats
Abstract:
The wide adoption of Machine Learning to solve a large set of real-life problems has come with the need to collect and process large volumes of data, some of which are considered personal and sensitive, raising serious concerns about data protection. Privacy-enhancing technologies (PETs) are often indicated as a solution to protect personal data and to achieve the general trustworthiness required by current EU regulations on data protection and AI. However, an off-the-shelf application of PETs is insufficient to ensure high-quality data protection, and the reasons for this need to be understood. This work systematically discusses the risks against data protection in modern Machine Learning systems, taking the original perspective of the data owners, i.e., those who hold the various data sets, data models, or both, throughout the machine learning life cycle, and considering the different Machine Learning architectures. It argues that the origin of the threats, the risks against the data, and the level of protection offered by PETs depend on the data processing phase, the role of the parties involved, and the architecture in which the machine learning systems are deployed. By offering a framework in which to discuss privacy and confidentiality risks for data owners, and by identifying and assessing privacy-preserving countermeasures for machine learning, this work can facilitate the discussion about compliance with EU regulations and directives. We also discuss current challenges and research questions that remain unsolved in the field. In this respect, this paper provides researchers and developers working on machine learning with a comprehensive body of knowledge that lets them advance the science of data protection in machine learning, as well as in closely related fields such as Artificial Intelligence.
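To make one of the PETs named in the keywords concrete, here is a minimal sketch of differential privacy via the Laplace mechanism applied to a counting query. This example is illustrative only and is not taken from the paper; all function names, parameters, and data are hypothetical.

```python
import math
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5            # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    # Guard against log(0) at the boundary of the uniform sample
    return -scale * sign * math.log(max(1.0 - 2.0 * abs(u), 1e-300))

def dp_count(values, predicate, epsilon):
    """Epsilon-differentially-private counting query.

    A counting query has sensitivity 1: adding or removing one
    record changes the true count by at most 1, so Laplace noise
    with scale 1/epsilon suffices for epsilon-DP.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Example: a noisy count of records above a threshold; the noise
# hides any single individual's contribution to the answer.
ages = [23, 35, 41, 52, 61, 29, 47]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; the trade-off between utility and protection is exactly the kind of deployment-dependent choice the abstract argues cannot be made "off the shelf".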
Research center:
Interdisciplinary Centre for Security, Reliability and Trust (SnT) > IRiSC - Socio-Technical Cybersecurity
Disciplines:
Computer science
Author, co-author:
EL MESTARI, Soumia Zohra; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
LENZINI, Gabriele; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
DEMIRCI, Huseyin; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > IRiSC
External co-authors:
No
Document language:
English
Title:
Preserving data privacy in machine learning systems
Publication date:
February 2024
Journal title:
Computers and Security
ISSN:
0167-4048
Publisher:
Elsevier BV
Volume:
137
Pagination:
103605
Peer reviewed:
Verified by ORBi
Focus Area:
Computational Sciences
European project:
H2020 - 956562 - LeADS - Legality Attentive Data Scientists
Funding body:
European Commission
European Union
Grant number:
956562
Funding (details):
This work has been supported by the EU 956562, MSCA-ITN-2020 - Innovative Training Networks, “Legality Attentive Data Scientists” (LeADS) project.
Available on ORBilu:
since 19 December 2023

Statistics


Number of views
366 (including 16 Unilu)
Number of downloads
948 (including 12 Unilu)

Scopus® citations
90
Scopus® citations (excluding self-citations)
89
OpenAlex citations
98
