Virtues and vices in our relationships with robots: Is there an asymmetry and how might it be explained?

Author

Rob Sparrow

Abstract

In previous work (“Robots, rape, and representation”), drawing on virtue ethics, I have argued that we may demonstrate morally significant vices in our treatment of robots. Even if an agent’s “cruel” treatment of a robot has no implications for their future behaviour towards people or animals, I believe that it may reveal something about their character, which in turn gives us reason to criticise their actions. Viciousness towards robots is real viciousness. However, I don’t have the same intuition about virtuous behaviour. That is to say, I see no reason to think that “kind” treatment of a robot reflects well on an agent’s character, nor do I have any inclination to praise it. At first sight, at least, this is puzzling: if we should morally evaluate some of our relationships with robots, why not all of them? In this presentation, I will argue that these conflicting intuitions may be reconciled by drawing on further intuitions about the nature of virtue and vice and the moral significance of self-deception. Neglecting the moral reality of the targets of our actions is little barrier to vice and may sometimes be characteristic of it. However, virtue requires an exercise of practical wisdom that may be vitiated by failure to attend to the distinction between representation and reality. Thus, while enjoying representations of unethical behaviour is unethical, acting out fantasies of good behaviour with robots is, I believe, at best morally neutral and, in certain circumstances, may even be morally problematic.

Bio

Rob Sparrow is a Professor in the Philosophy Program, a Chief Investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, and an adjunct Professor in the Monash Bioethics Centre, at Monash University, where he works on ethical issues raised by new technologies. He has been an ARC Future Fellow, a Japanese Society for the Promotion of Science Visiting Fellow at Kyoto University, a Visiting Fellow in the CUHK Centre for Bioethics, in the Faculty of Medicine, at the Chinese University of Hong Kong, and a Visiting Fellow at the Centre for Biomedical Ethics, in the Yong Loo Lin School of Medicine, at the National University of Singapore. He has published widely, in both academic journals and the popular press, on the ethics of military robotics, social robotics, videogames, and AI. He is a co-chair of the IEEE Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.

Virtuous human-robot assemblages: trust and tempered anthropomorphism

Author

Eleanor Sandry

Abstract

My research draws on perspectives from communication theory and the philosophy of technology to suggest that perceptions of the agency of robots are enmeshed with their situatedness in the world, their dynamic responses to the environment and, in particular, the ways in which they interact with people. Analysing specific instances of human-robot interaction, whether these occur in the context of creative art, in homes, on roads or at work, suggests that it may be helpful to recognise robots as having an agency that emerges during human-robot collaborations. When interacting closely together, humans and robots might be understood to be in a cyborg relation, but a more flexible approach is to consider them as human-robot assemblages, which are temporary and whose components can change as required. The operation of human-robot assemblages relies on a process of what could be called tempered anthropomorphism, which supports meaningful communication between human and robot while also ensuring people are continually reminded that the robot is a machine. This analysis suggests that it is human-robot assemblages, as opposed to just robots, that need to be virtuous (not to mention the systems and institutions that surround them) in order to work towards a technological future that encompasses “the good life”.

Bio

Eleanor Sandry is a lecturer and researcher in Internet Studies at Curtin University and previously a Fellow of the Curtin Centre for Culture and Technology. Her research is focused on developing an ethical and pragmatic recognition of, and respect for, otherness and difference in communication, drawing on examples from science and technology, science fiction and the creative arts. She is particularly interested in exploring the communicative and collaborative possibilities of human interactions with robots. Her book, Robots and Communication, was published in 2015 by Palgrave Macmillan.

Robot addiction: From science fiction representation to real addiction

Author

Eduardo B. Sandoval

Abstract

This work raises current concerns about the possible development of robot addiction, both physical and emotional, at a time when we already witness cognitive overload produced by multiple gadgets, social media and other technologies. Certain sectors of the population make excessive use of computers, the internet, mobile phones, tablets, apps, videogames, VR headsets and other screen-based technologies. This “addiction” to technology is creating physical, psychological and emotional problems for users due to sedentarism, cognitive overload and lack of socialisation. Rapid advances in robotics foreshadow a daily life in which service/social robots perform tasks on our behalf and interact with humans in everyday scenarios, raising the possibility of physical and emotional dependency in the near future. Over-use of robots for physical and cognitive activities could modify multiple behaviours, affecting the way that humans experience daily life. Hence, some adjustments should be made in order to keep a global population of robot users healthy and to avoid negative consequences for users and society in general. Furthermore, addiction to robots needs to be defined. Several questions require further research: To what extent should a service/social robot be used? To what extent should we encourage social and emotional engagement with robots? How can we avoid the negative interactions with robots that much of science fiction suggests? The media amplify the possible interactive and emotional scenarios involving human-robot relationships: people falling in love with robots, or becoming friends and enemies with robots, are fictional topics depicted in movies, books and TV series. Under this representation it is possible to imagine different future scenarios involving “toxic relationships”, “emotional dependency” and “addiction to robots”. However, technology hasn’t yet achieved the level of sophistication required for natural human-robot interaction. At the moment, progress is limited to social robots capable of minimal social interaction involving emotional and psychological engagement with users under controlled conditions. Nevertheless, short-term applications of social robots aim to use this technology in education, psychotherapy, caregiving and several other interactive domains. Considering the importance of those areas for human development, this proposal reflects on the adjustments required to use social robots safely in emotional, social and psychological terms, avoiding future addiction to them. Similarly, ethical guidelines for robot-behaviour design must be prepared in order to provide moral guidance to future robot designers.

Bio

Eduardo B. Sandoval is a social robotics researcher. His work spans different aspects of social robotics, such as reciprocity in Human-Robot Interaction (HRI), robots and education, and robots and fiction, among other topics. He is mainly interested in how people make decisions when they interact with robots and other interactive devices. Beyond traditional robotics, there is a growing interest in the idea of designing machines that are capable of meaningful social interactions with humans. His work incorporates insights from behavioural economics and social psychology in order to explore different approaches in social robotics. He claims that “as a result of working in social robotics I have a better understanding of the human condition”.

Psychometric Measures of Incentives and Self-Efficacy to Interact with a Social Humanoid Robot

Author

Nicole Robinson

Abstract

New measures were developed to explore the human-robot interaction dimensions of self-efficacy, incentives and intentions relating to the use of a social humanoid robot. Exploratory Factor Analysis was applied to investigate the initial structure of the scales, using a high-school student sample that rated a live 2-minute interaction between a robot and a person. Confirmatory Factor Analysis was then applied to confirm the factor structure in a new cross-sectional online survey sample, using a 2-minute video stimulus of an adult and a robot discussing a health-related topic. The trial also involved a two-part study that applied the newly developed measures in different experimental designs, using between-groups and within-group designs to investigate how individuals rate a human-robot interaction across different presentations. Trial findings and their implications will be discussed, including future considerations and further development of the measures.
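As a rough illustration of this two-step scale-validation procedure, the sketch below shows how an exploratory factor analysis followed by a confirmatory factor analysis might be run in Python with the factor_analyzer library. This is not the authors’ code: the data files, item names and three-factor assignment are hypothetical stand-ins for the actual questionnaire.

```python
# Minimal EFA -> CFA sketch. All file and item names are hypothetical;
# they stand in for the actual self-efficacy/incentives/intentions items.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.confirmatory_factor_analyzer import (
    ConfirmatoryFactorAnalyzer,
    ModelSpecificationParser,
)

# Step 1: exploratory factor analysis on the first (e.g. high-school) sample
# to discover how the items group into factors.
sample1 = pd.read_csv("sample1_items.csv")  # one Likert item per column
efa = FactorAnalyzer(n_factors=3, rotation="promax")
efa.fit(sample1)
print(efa.loadings_)  # inspect which items load on which factor

# Step 2: confirmatory factor analysis on an independent (e.g. online) sample,
# testing the three-factor structure suggested by the EFA.
model = {
    "self_efficacy": ["se1", "se2", "se3"],
    "incentives": ["inc1", "inc2", "inc3"],
    "intentions": ["int1", "int2", "int3"],
}
sample2 = pd.read_csv("sample2_items.csv")
items = sample2[[col for cols in model.values() for col in cols]]
spec = ModelSpecificationParser.parse_model_specification_from_dict(items, model)
cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(items.values)
print(cfa.loadings_)  # loadings under the hypothesised three-factor model
```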

Bio

Dr Nicole Robinson is a Robotics Researcher at the Australian Centre for Robotic Vision on the Humanoid Robotics project, an R&D project supported by the Queensland Government. She is currently leading clinical trials and experimental studies involving the use of humanoid robots in interpersonal interactions. Nicole’s research interests involve designing human behaviours in agents and robotic systems. She has previously conducted research in the healthcare field, including the translation of a psychotherapeutic program to be delivered by a humanoid robot.

Hypothetical Machines: Innovation, Labour, and Reward in an age of AI

Author

Liam Magee

Abstract

In his critique of AI, Pasquinelli writes: “The ‘intelligence’ of neural networks is, therefore, just a statistical inference of the correlations of a training dataset… neural networks cannot escape the boundary of the categories that are implicitly embedded in the training dataset”. Pasquinelli’s commentary belongs to a growing but also historically well-established literature linking the epistemological limits of AI to questions of political economy. If machines can only reproduce well-defined tasks — however ingeniously — human activity must remain an irreducible residual in any economic calculus. At the same time, and against big tech and accelerationist enthusiasm for AI, machinic labour serves only to reproduce existing relations of production. He concludes: “statistical inference is the distorted, new eye of the capital’s Master”. Whether the current connectionist paradigm in AI is in fact constrained in the ways Pasquinelli and other critics diagnose remains questionable. Might there instead be scope, within the massive parallel operations conducted deep inside today’s neural networks, to produce novelty? If so, what implications would such novelties have for incentive-based economies — including academia — premised substantially upon innovation? What happens when, as legal scholars have begun to theorise, algorithmic outputs become patentable? This paper examines heralded instances of machine learning in gameplay, code generation and knowledge production that stretch the limits of what is thought computable. Such examples suggest that, at sufficient scale, what begins as “statistical inference” may become indistinguishable from other, more privileged forms of human cognition. Specifically, the ability to generate scientific hypotheses, poetic metaphor or disruptive market innovation — examples of what Peirce referred to as “abduction” — would no longer be exceptionally human. The paper suggests that, under current tendencies, this realisation will only further distance the owners of technologically-invested capital from disenfranchised subjects, left with barely even their labour to sell. At the same time, full cognitive automation also deflates the moral arguments for merit-based differential distributions of rewards that underpin free-marketism. If algorithms come to dominate the commanding heights of cognitive capitalism, other systems for resource distribution may appear more compelling, morally and politically. Such prospects reflect back upon ethical questions raised in connection with AI. Echoing Marx, heterodox economists today (e.g. Dumenil & Levy) argue that any talk of the “good life” must remain perniciously ideological in the presence of widespread domination. The paper concludes with desiderata for a virtuous AI sociality, including a retracing of rising inequality, that would form the ground for any ethical encounter between individual human and machine.

Bio

Liam Magee is a sociologist of science and technology, specialising in the application and impact of software on urban ways of life. His current research centres on machine learning, digital games and data analytics, and how these technologies interact and interfere with social systems such as cities, organisations, labour, environmental movements and financial markets.

Pinocchio Doctrine: Exposing the paradox of robotic nature

Author

Massimiliano Cappuccio

Abstract

What kind of entity, essentially, is a social robot, what is its true purpose, and how do we assess its significance for our lives? Are robots meant to replace/overtake, extend/augment, or imitate/represent humans? To address these philosophical questions, I propose a speculative approach to human-robot interaction inspired by virtue ethics, recognition theory, and constructivist epistemology. I illustrate this approach allegorically through Carlo Collodi’s famous children’s novel Pinocchio. Pinocchio is the literary prototype of a social robot, and every social robot is, in some sense, an instantiation of Pinocchio. That is why the key elements of Pinocchio’s story tell us something about human-robot interaction: Pinocchio is a marionette that aspires to become a person; Pinocchio is created by Geppetto (the poietic ingenuity) for this purpose, but it is the Blue Fairy (empathy and social interaction) that makes the transformation possible; Pinocchio is a pathological liar, but cannot hide his deceptive intentions; Pinocchio can become a person only when, instead of deceiving others, he develops an authentic concern for them. This “Pinocchio doctrine”, as I call it, gives voice to the intuition that, if we want to define their ontological, axiological, and moral status correctly, social robots should be conceived neither as slaves nor as companions. These two antithetical options presuppose the same paradoxical normative view of human-robot interaction, one that is rooted in oxymoronic dyads (ownership/autonomy, subordination/decision, instrumentality/personhood). To make sense of their value, I suggest social robots should first of all be understood as human creations and, more specifically, as proto-creatures – that is, a particular subset of creations meant (but not necessarily able) to recognize and be recognized by a social other. Even if the creature is originally tied to its creator, their bond is established only to be transcended: the ideal creator is the one who welcomes the independence of its creature, and an accomplished creature is the one who eventually achieves autonomy from its creator. The irreducible tension between these two aspects of the social robot explains why we are unable to ascribe rights to robots, which we own as our tools, and yet must give them some moral consideration, as we inevitably tend to value their autonomy, even when incomplete. These considerations justify a constructivist and relational framework for articulating the ethical, epistemic, and aesthetic value of social robots. Relying on this conceptual framework, the Pinocchio doctrine allows us to clarify the status of robots as moral patients and agents. It also indicates whether obligations, rights, and legal liabilities apply to social robots. Ultimately, it suggests a virtuous way to design robots, to behave with them, and to interpret their role in our lives.

Bio

Massimiliano Cappuccio (PhD, State University of Pavia) is a Research Associate at the University of New South Wales, where he is also Deputy Lead (Knowledge Exchange) of the Values in Defense & Security Technology (VDST) group. He has a secondary affiliation with United Arab Emirates University, where he is Associate Professor of Cognitive Science and Director of the Cog Sci Lab. His work as a cognitive philosopher and a philosophical psychologist addresses both theoretical and applied issues in embodied cognition and social cognition, combining analytic, phenomenological, and empirical perspectives. His publications have covered topics such as motor intentionality and the phenomenology of skill and expertise; unreflective action and the choking effect; joint attention and deictic gestures; empathy and mirror neurons; social robotics and technology ethics; artificial intelligence theory; and implicit knowledge and the frame problem. He is one of the organizers of the annual Joint UAE Symposium on Social Robotics, hosted by United Arab Emirates University and New York University Abu Dhabi, and the International Conference in Sport Psychology and Embodied Cognition, sponsored by Abu Dhabi Sports Council.

Robots and Racism Revisited

Author

Christoph Bartneck

Abstract

We previously showed that racialized robots are treated with the same racial biases as humans, by replicating the well-established shooter-bias study by Correll et al. with robots. I will now present new insights into the study of racism within the framework of the shooter-bias methodology. Furthermore, I will elaborate on the process of conducting research on racism in the human-robot interaction community.

Bio

Dr. Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Human-Computer Interaction, Science and Technology Studies, and Visual Design. More specifically, he focuses on the effect of anthropomorphism on human-robot interaction. As a secondary research interest he works on bibliometric analyses, agent-based social simulations, and the critical review of scientific processes and policies. In the field of design, Christoph investigates the history of product design, tessellations and photography. He has worked for several international organizations including the Technology Centre of Hannover (Germany), LEGO (Denmark), Eagle River Interactive (USA), Philips Research (Netherlands), ATR (Japan), and The Eindhoven University of Technology (Netherlands). Christoph is an associate editor of the International Journal of Social Robotics, the International Journal of Human Computer Studies and the Entertainment Computing Journal. Christoph is a member of the New Zealand Institute for Language Brain & Behavior, ACM SIGCHI, The New Zealand Association Of Scientists and Academic Freedom Aotearoa. The press regularly reports on his work, including the New Scientist, Scientific American, Popular Science, Wired, New York Times, The Times, BBC, Huffington Post, Washington Post, The Guardian, and The Economist.