Keywords:
criminal responsibility, accountability, autonomy, legal personhood, criminal negligence, autonomous AI systems, robotics
Abstract:
[en] Efforts to incorporate applications of Artificial Intelligence (AI) into the decision-making structures of various social institutions, such as the automobile and health-care sectors, have reached an unprecedented intensity and are likely to continue at a similar pace in the future. Alongside these impressive technological developments, legal scholars are debating the adequacy of existing rules and doctrines to accommodate the features of modern AI systems, and one of the most important questions concerns the fair attribution of responsibility for any unintended harm caused. The present PhD thesis discusses the challenges that AI systems create for basic concepts of the criminal law, and the normative conditions that should be met to ensure that individual criminal liability in the field can continue to be established, at least in the most severe cases of harm. The analysis explores whether the challenges associated with modern AI systems could lead to a paradigm shift in the attribution of criminal responsibility in a double sense. The first question posed is whether certain sophisticated AI systems could be endowed with criminal personhood, that is, whether they could legitimately be regarded as subjects of punishment in the near future. The second question is whether the complexity and opacity of AI systems pose such severe difficulties in tracing a harm back to culpable individual conduct that AI systems’ operators (i.e., the natural and corporate persons behind their design, training, marketing, and use) cannot in many cases be blamed when unintended outcomes materialise and severe physical harm is caused.
Institution:
Unilu - Université du Luxembourg [Faculty of Law, Economics and Finance (FDEF)], Esch-sur-Alzette, Luxembourg