Doctoral thesis (Dissertations and theses)
Deception in social robotics: problematic profiles of human-robot interaction and the universality of human vulnerability
Carli, Rachele
2024
 

Files


Full Text
PhD_Thesis_Rachele_Carli_Unilu.pdf
Embargo Until 01/Jan/2029 - Author postprint (1.89 MB)


Details



Keywords :
Law & Technology; Human-Robot Interaction; Vulnerability Theory; Social Robotics; Social Robotics Regulation; Manipulation; Deception; Vulnerable Users; Manipulation in Human-Robot Interaction; Vulnerability in Human-Robot Interaction
Abstract :
[en] Social robots are designed to interact closely with humans for the purposes of education, care, rehabilitation, entertainment, and companionship. Hence, these robots should be socially accepted by users to maximise their utility. To achieve this, current design philosophies emphasise endowing them with a pleasant appearance and an affable demeanour, so as to leverage the inherent human tendency towards anthropomorphism and thereby elicit trust and an emotional bond. Such expedients have been known in computer science since the Turing Test. Therefore, deception can be said to be an intrinsic feature of the design process not only of AI technologies per se, but also of the interactions they are supposed to establish with non-specialised users. Whether deceptive mechanisms should be allowed, and to what extent, is still an open debate in the social sciences, especially in view of their possible negative effects, such as manipulative drifts, loss of meaningful human contact, addiction, and psychological harm. However, such evaluations very rarely attempt to integrate the functional reasons why deception is so studied in technology, which is, at least partly, due to the benefits it can bring to the end-user. The European Parliament recently published a resolution on addictive design, highlighting the urgent need for further investigation into the characteristics and risks of potentially addictive designs in order to promote comprehensive approaches to managing and regulating them. The same pattern is followed in the regulation of dark patterns in the Digital Services Act, of manipulative practices in the proposed AI Act, and in the European Commission’s work to ensure digital fairness. The general approach is to assume that there are individuals who are potentially immune to manipulation or its side effects. This often results in ineffective protection for real ‘average users’, who do not adhere to the canons of perfect rationality that traditional legal models assume.
Therefore, it is necessary to identify a framework that takes a holistic perspective on Human-Robot Interaction and comprehensively analyses all of the interests at stake, including all of the possible benefits and harms for the end-user. In this thesis, Vulnerability Theory is proposed as such a framework. It recognises the universality and non-reducibility of human vulnerability, which is rooted in the individual dependency of subjects on one another and their collective dependence on institutions, laws, and the State. Dependency is therefore understood as the material manifestation of every person’s inherent vulnerability, which is conceived of as universal while also retaining a situational aspect. Vulnerability is not interpreted negatively by default, as a fragility or weakness, but can become so if it is not properly counterbalanced by resilience. Starting from this theory, this thesis suggests using the heuristic power of vulnerability to identify drivers of dependency in HRI and to investigate whether and how they are counterbalanced by a corresponding level of resilience, in terms of both technical design and legal norms. The resulting analytical toolkit, the Safety and Intimacy in Human-Robot Interaction Pipeline, is a concrete framework for qualitatively assessing various interactions between robots and humans in order to evaluate the balance between dependence and resilience factors in accordance with Vulnerability Theory.
This analysis pipeline, if utilised by jurists and technicians, could facilitate the pursuit of the objectives set by the European Commission and the European Parliament: a truly human-centred development of new technologies and effective protection of fundamental rights. To conclude, a case study will be presented that analyses instances of the previously identified drivers of dependency in order to highlight possible resilience gaps and the consequent areas of exposure to manipulative dynamics. This will serve as a demonstration of how the tool could be used by various stakeholders to understand potential benefits and risks when developing robots intended to enter the human social space.
Disciplines :
Engineering, computing & technology: Multidisciplinary, general & others
Author, co-author :
Carli, Rachele
Language :
English
Title :
Deception in social robotics: problematic profiles of human-robot interaction and the universality of human vulnerability
Defense date :
03 October 2024
Institution :
Unilu - University of Luxembourg, Esch-sur-Alzette, Luxembourg
Degree :
Docteur en Informatique (DIP_DOC_0006_B)
Cotutelle degree :
University of Bologna, PhD in Law
Promotor :
VAN DER TORRE, Leon ;  University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Damiano, Luisa
President :
Weitzenboeck, Emily M.
Secretary :
Slavkovik, Marija
Available on ORBilu :
since 25 October 2024
