References of "Carli, Rachele"
Rethinking Trust in Social Robotics
Carli, Rachele UL; Najjar, Amro UL

in arXiv (2021)

In 2018 the European Commission highlighted the demand for a human-centered approach to AI. Such a claim gains even more relevance when considering technologies specifically designed to directly interact and physically collaborate with human users in the real world. This is notably the case of social robots. The domain of Human-Robot Interaction (HRI) emerged to investigate these issues. "Human-robot trust" has been highlighted as one of the most challenging and intriguing factors influencing HRI. On the one hand, user studies and technical experts underline how trust is a key element in facilitating users' acceptance, consequently increasing the chances of completing the given task. On the other hand, this phenomenon also raises ethical and philosophical concerns, leading scholars in these domains to argue that humans should not trust robots. However, trust in HRI is not an index of fragility: it is rooted in anthropomorphism and is a natural characteristic of every human being. Thus, instead of focusing solely on how to inspire user trust in social robots, this paper argues that what should be investigated is to what extent and for which purpose it is suitable to trust robots. Such an endeavour requires an interdisciplinary approach taking into account (i) technical needs and (ii) psychological implications.

Social robotics and deception: beyond the ethical approach
Carli, Rachele UL

in Proceedings of BNAIC/BeneLearn 2021 (2021)

Social robots are designed to directly interact with users, to collaborate with them, and to act in a human-centred environment, with varying degrees of automation. In order to encourage acceptability and trust, they are structured so as to leverage the human tendency to anthropomorphise whatever humans interact with. It follows that some machines are able to simulate the feeling of genuine emotions or empathy, to appear in need of help, to pretend to have a personality of their own and, more generally, to induce the user to think that they are something more than mere objects. Thus, it may be argued that such interaction could lead to forms of manipulation that fall within the remit of a deceptive dynamic. This phenomenon is still much debated in the scientific community and raises significant concerns regarding long-term ethical and psychological repercussions on users. This paper investigates which tools we have and which ones we may need in order to tackle the theme of deception in social robotics. Both ethical and legal perspectives are therefore reconstructed, in an attempt to distinguish their respective scope and to emphasise their fruitful integration in addressing these issues. Finally, the possible relevance of fundamental human rights in human-robot interaction dynamics is discussed, due to their ability to reconcile ethical demands with the binding character of legal norms.
