Abstract:
The ever-growing use of artificial intelligence (AI) systems, implemented through a variety of techniques, and the pervasive presence of automation have given rise to significant ethical and legal concerns. At the heart of this discourse lies the fundamental topic of fairness. Structured in six chapters, this doctoral dissertation explores what fairness is, how it is guaranteed, and how it can be operationalised in algorithmic systems that significantly impact humans.
As algorithmic decisions increasingly support or even replace human judgment on a large societal scale, concerns about algorithmic discrimination have become a central focus of research. Scholars and policymakers worldwide recognise this as a critical challenge for modern societies.
Rather than attempting a strict definition of fairness, the dissertation focuses on its practical operationalisation in algorithmic systems used in decision-making contexts that significantly affect human lives.
The research employs a multi-disciplinary methodology rooted in legal informatics. It synthesises the existing legal, philosophical, and ethical literature on fairness, providing a comprehensive overview of current theories and debates. Building on this foundation, it introduces a novel approach that conceptualises fairness as a multi-dimensional and multi-layered concept. This methodology integrates the various definitions of fairness proposed by philosophers of law and conducts an in-depth analysis of EU anti-discrimination legal frameworks. By critically evaluating their strengths and limitations, the study seeks to develop a more robust framework for addressing algorithmic discrimination effectively.
Further, the research evaluates technical approaches to bias evaluation and mitigation, namely fairness metrics and synthetic data, as potential solutions to algorithmic discrimination.
The dissertation also provides interpretative guidance on the EU Artificial Intelligence Act (AI Act) from a legal and ethical standpoint. This guidance helps stakeholders navigate the new regulation's complexities, ensuring its obligations can be met in practice.
Ultimately, the dissertation develops a bias assessment checklist grounded in a harm-based approach, focused on the prevention, evaluation, and mitigation of harm throughout the AI lifecycle. The Fair, Transparent, Accountable, and Legal (Fair-y-TALe) checklist is a novel tool, designed according to the AI Act's provisions, that operationalises fairness in algorithmic decisions and enables the identification and mitigation of discriminatory harms in AI systems.