With the advance of AI technology, it seems that robots may take over decision-making from humans in many aspects of daily life. For humans to accept this, however, the decisions robots make must be morally acceptable to humans. A natural thought is that robots should be taught to apply moral reasoning as humans do. For example, the Moral Machine project at MIT attempts to assimilate robots’ moral reasoning to humans’. In other words, robots should act as moral agents like us. Call this the anthropomorphic view. I oppose the anthropomorphic view. Drawing on P. F. Strawson’s account of reactive attitudes, I argue that it is morally wrong and psychologically unacceptable for robots to interfere with our autonomy.
Tsung-Hsing Ho’s main research interests include epistemic normativity, the normativity of mental attitudes, and the fitting-attitude account of value. He is an assistant professor in the Department of Philosophy at National Chung Cheng University (Taiwan).
A new generation of implantable AI brain–computer interface devices (advisory systems) has been tested for the first time in a human clinical trial, with significant success. These AI predictive implants detect specific patterns of neuronal activity, such as an epileptic seizure, and provide information that helps patients respond to upcoming neuronal events; as such, they are advisory systems. By forecasting a seizure, the AI device gives patients control over how to respond, letting them decide on a therapeutic course ahead of time. In theory, these AI advisory implants could be used for a large range of clinical and non-clinical applications, such as augmenting and empowering agential cognitive capacities (e.g. reasoning, learning, decision-making, information retrieval and analysis), but also predicting unwanted outcomes (e.g. depressive episodes, addictive habits, socially reprehensible conduct). Being advised by an implantable AI system can positively increase an individual’s quality of life; however, doing so does not come free of ethical concerns. There is currently a lack of evidence concerning the various impacts of invasive AI brain implants on patients’ decision-making processes, especially how being in the decisional loop affects patients’ sense of autonomy. This presentation addresses these gaps with data obtained from a first-in-human clinical trial involving patients implanted with advisory brain devices, and explores ethical issues related to the potential psychological harms of an AI device that ‘knows better’ than the implanted individual.
Frédéric Gilbert is a philosopher, not a scientist: his work focuses on bioethics, and he is an expert in neuroethics. By monitoring patients with brain devices, Dr Gilbert grapples with the ethical questions posed by invasive brain technologies. His research informs the debates that guide policy regulation, especially in regard to human clinical and experimental trials.
It was clear at the start of this century that we had entered a paradigm of behaviours and relationships that pushes established definitions and disciplines to be reformulated. Technology is transforming from a mechanism that used to mediate dialogue and discourse among humans into a potential interlocutor in those activities itself. This presentation proposes that this development elevates the discussion in design, engineering, the humanities and the social sciences to a new sense of reality that is complex and difficult to decipher, because intelligent technologies do not have a unique niche and materiality. No single discipline can explain them alone, since their embodiment can alternate between, and be simultaneously, physical and digital within live, fluid and changing networks and connections. Designers and engineers need to expand their field of action from making and measuring tangible and software artefacts to understanding the social and psychological implications of animate objects in society. The humanities and social sciences should likewise expand beyond interpreting the cultural and social impact of media and information technologies. That type of after-the-fact study, an interpretation of the present based on the archaeology of the past, lags behind the present engineering of society, which is based on future visions of technology and is never neutral with respect to human-driven economic and political plans and interventions. Interdisciplinary research on robotics and artificial intelligence in society should lead to a better path for that evolution among their limitations, their assistive potential and vernacular beliefs that range from a Terminator-style apocalypse to the naïve wish for a perfect and free high-tech society. Meanwhile, the fourth industrial revolution already shows a skills void between an elite of knowers and a blue-collar, manual workforce.
For now, it is mainly the latter group that is being swallowed into that growing gap in the middle by the vortex force of automation and unemployment. Innovative findings will come from reconceptualising labour and participation beyond robotic architectural concerns (e.g. programming, wiring, mechanical engineering) and the critical interpretation offered by the humanities and social sciences (e.g. history, large databases, social media). Importantly, analysis should concentrate on the relationships between humans, between humans and intelligent objects, and among intelligent objects. That will force the reframing of their activities, contextuality (defining meaning in reference to a larger expression), intercontextuality (participants’ motifs that intersect and mirror one another) and transcontextuality (the capacity to create connections between things or ideas that are not typically associated with each other). These are matters of intersubjectivity that confer similar sociotechnical and animate qualities on humans and artificial participants: in particular, what are their similar and unique capacities for emotions and feelings, identity and belonging that allow them to cross boundaries? The design of new and appropriate solutions would result from redrawing the two disconnected narratives of humans and artificial intelligences together, revaluing each side on its self-recognition (a function of applying self-awareness to distinguish between the self and the other) and self-efficacy (the degree of confidence in one’s ability to perform a behaviour, succeed and accomplish a task in a specific situation).
Mauricio Novoa is a designer and academic with more than 35 years’ experience in industry, from product and industrial design to architecture, advertising, communications and marketing (2D, 3D and 4D time-based work, events and moving image). An academic for the last 12 years, he was Director of the Academic Program in the School of Computing, Engineering and Mathematics from 2012 to January 2015, leading the School through discipline transformation, writing the new curriculum, shaping the new vision for its courses and implementing them so as to “bring creativity, innovation and entrepreneurship”. He is a PhD candidate at WSU’s Institute for Culture and Society (Globalization and Cultural Economy stream) investigating a “New Knowledge Ecology for Industrial Design Artefact and Expertise in Education and Industry”. Together with industry, he currently leads a REDI Connections project researching “New Learning and Environment”. His work as an artist has been exhibited nationally and internationally, as early as the touring exhibition “Chilean Artists of the 20th Century”, Chile, South America (1984), and later in Australia in “The Boundary Rider: 9th Biennale of Sydney” (1992/93) and subsequent touring exhibitions.
Since ancient times, people have perceived human shapes in non-human objects and depicted gods in their own resemblance. It should therefore not be surprising that advancements in technology lead to the development of products that are becoming increasingly similar to us in form and behavior. This is especially evident in the field of Social Robotics, dominated by humanoids and a growing number of androids. The development of these robots is believed to facilitate Human-Robot Interaction, since humans are used to interacting with other humans. Furthermore, it helps us better understand our own nature and what it means to be human. However, the consequences of anthropomorphism, and the process itself, have received less attention. Currently, anthropomorphism is used to describe both the humanlike appearance of robots and the attribution of mind to them. Without understanding the psychological process of attributing humanlike properties or characteristics to non-human agents, however, it may be impossible to form theories that can accommodate the often contradictory results of studies. In particular, it may be helpful to differentiate between the objective properties of robots and users’ subjective perception of them, and to consider the possibility that anthropomorphism may be the outcome of more than a single process. This in turn can lead to unique consequences in human-robot interactions.
Jakub Złotowski is a postdoctoral fellow at the Cluster of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University and a visiting fellow at the School of Electrical Engineering and Computer Science, Queensland University of Technology. He received his PhD in Human-Robot Interaction at the University of Canterbury in 2015. His research focus is on anthropomorphism and social aspects of Human-Robot Interaction. He has also conducted research in the field of Android Science. His interdisciplinary research approach spans the areas of Human-Computer Interaction, Social Psychology, Cognitive Science and Machine Learning. He has worked at several international institutions including the University of Salzburg (Austria), ATR (Japan), Osaka University (Japan) and Abu Dhabi University (UAE).
There’s a lot of discussion in many different fora about AI and ethics. In this talk, I’ll attempt to identify what new issues AI brings to the table, as well as where AI forces us to revisit old ones. I will cover topics ranging from autonomous cars via predictive analytics to killer robots.
Toby Walsh is Scientia Professor of Artificial Intelligence at the University of New South Wales and Data61. He was named by The Australian newspaper as a “rock star” of Australia’s digital revolution. Professor Walsh is a strong advocate for limits to ensure AI is used to improve our lives. He has been a leading voice in the discussion about lethal autonomous weapons (aka killer robots), speaking on the topic at the UN in New York and Geneva. He is a Fellow of the Australian Academy of Science. He appears regularly on TV and radio, and has authored two books on AI for a general audience, the most recent entitled “2062: The World that AI Made”.
As the field of Social Robotics rapidly grows, there is a need to reconsider robot aesthetics, behaviour, learning and adaptability to varying social contexts in order to improve fluency, effectiveness and human interest during long-term interaction with a robot. There is also a pressing need for a more informed multi-disciplinary approach to the design, development and evaluation of these systems. Velonaki’s presentation will focus on experiential human-robot interaction as a key function and a driver for developing social robots. In order to be effective in social contexts, robots ultimately need the ability to ‘understand’ human behaviours and social settings so as to integrate in a fluid and non-intrusive manner. An important next step is to model social environments for a wider range of interactions with a robot: interactions that trigger a greater variety of behavioural patterns, rather than mere task performance. Furthermore, Velonaki will argue that it is imperative that our encounters with social robots be continually engaging and valued in order to maintain our long-term interest and attention.
Mari Velonaki is a Professor of Social Robotics at the University of New South Wales, Sydney. She is the founder and director of the Creative Robotics Lab (Art & Design UNSW) and the founder and director of the National Facility for Human Robot Interaction Research (UNSW, USYD, UTS, St Vincent’s Hospital). Mari’s robots and interactive installations have been exhibited worldwide, including: Victoria & Albert Museum, London; National Art Museum Beijing; Gyeonggi Museum of Modern Art, Korea; Aros Aarhus Museum of Modern Art, Denmark; Wood Street Galleries, Pittsburgh; Millennium Museum – Beijing Biennale of Electronic Arts; Ars Electronica, Linz; European Media Arts Festival, Osnabruck; ZENDAI Museum of Modern Art, Shanghai; Art Gallery of NSW, Sydney; Museum of Contemporary Arts, Sydney; Conde Duque Museum, Madrid. Mari Velonaki’s practice and research are situated in the multi-disciplinary field of Social Robotics. Her approach to Social Robotics has been informed by aesthetics and design principles that stem from the theory and practice of Interactive Media Art. Velonaki has made significant contributions in the areas of Social Robotics, Media Art and Human-Machine Interface Design, and her career outputs across these fields are extensive. Velonaki began working as a media artist/researcher in the field of responsive environments and interactive interface design in 1997. She pioneered experimental interfaces that incorporate movement, speech, touch, breath, electrostatic charge, artificial vision and robotics, allowing for the development of haptic and immersive relationships between participants and interactive agents. Mari designed her first robots in 2004 as part of a major Australian Research Council (ARC) project, ‘Fish–Bird’ (2004-07), which she led at the Australian Centre for Field Robotics (ACFR, USYD). In 2006 she founded the Centre for Social Robotics.
In 2009 she was awarded an ARC Fellowship (2009–2013), leading to the creation of her humanoid robot ‘Diamandini’. In 2014, she was voted by Robohub (a large robotics community of researchers, educators and businesses) as one of the world’s 25 women in robotics you need to know about.
In previous work (“Robots, rape, and representation”), drawing on virtue ethics, I have argued that we may demonstrate morally significant vices in our treatment of robots. Even if an agent’s “cruel” treatment of a robot has no implications for their future behaviour towards people or animals, I believe that it may reveal something about their character, which in turn gives us reason to criticise their actions. Viciousness towards robots is real viciousness. However, I don’t have the same intuition about virtuous behaviour. That is to say, I see no reason to think that “kind” treatment of a robot reflects well on an agent’s character, nor do I have any inclination to praise it. At first sight at least, this is puzzling: if we should morally evaluate some of our relationships with robots, why not all of them? In this presentation, I will argue that these conflicting intuitions may be reconciled by drawing on further intuitions about the nature of virtue and vice and the moral significance of self-deception. Neglecting the moral reality of the targets of our actions is little barrier to vice and may sometimes be characteristic of it. However, virtue requires an exercise of practical wisdom that may be vitiated by failure to attend to the distinction between representation and reality. Thus, while enjoying representations of unethical behaviour is unethical, acting out fantasies of good behaviour with robots is, I believe, at best morally neutral and, in certain circumstances, may even be morally problematic.
Rob Sparrow is a Professor in the Philosophy Program, a Chief Investigator in the Australian Research Council Centre of Excellence for Electromaterials Science, and an adjunct Professor in the Monash Bioethics Centre, at Monash University, where he works on ethical issues raised by new technologies. He has been an ARC Future Fellow, a Japanese Society for the Promotion of Science Visiting Fellow at Kyoto University, a Visiting Fellow in the CUHK Centre for Bioethics, in the Faculty of Medicine, at the Chinese University of Hong Kong, and a Visiting Fellow at the Centre for Biomedical Ethics, in the Yong Loo Lin School of Medicine, at the National University of Singapore. He has published widely, in both academic journals and the popular press, on the ethics of military robotics, social robotics, videogames, and AI. He is a co-chair of the IEEE Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.
My research draws on perspectives from communication theory and the philosophy of technology to suggest that perceptions of the agency of robots are enmeshed with their situatedness in the world, their dynamic responses to the environment and, in particular, the ways in which they interact with people. Analysing specific instances of human-robot interaction, whether these occur in the context of creative art, in homes, on roads or at work, suggests that it may be helpful to recognise robots as having an agency that emerges during human-robot collaborations. When interacting closely together, humans and robots might be understood to be in a cyborg relation, but a more flexible approach is to consider them as human-robot assemblages, which are temporary and whose components can change as required. The operation of human-robot assemblages relies on a process of what could be called tempered anthropomorphism, which supports meaningful communication between human and robot while also ensuring people are continually reminded that the robot is a machine. This analysis suggests it is human-robot assemblages, as opposed to just robots, that need to be virtuous (not to mention the systems and institutions that surround them) to work towards a technological future that encompasses “the good life”.
Eleanor Sandry is a lecturer and researcher in Internet Studies at Curtin University and previously a Fellow of the Curtin Centre for Culture and Technology. Her research is focused on developing an ethical and pragmatic recognition of, and respect for, otherness and difference in communication, drawing on examples from science and technology, science fiction and the creative arts. She is particularly interested in exploring the communicative and collaborative possibilities of human interactions with robots. Her book, Robots and Communication, was published in 2015 by Palgrave Macmillan.
This work raises current concerns about the possible development of robot addiction, both physical and emotional, as we witness the cognitive overload produced by multiple gadgets, social media and other technologies. Certain sectors of the population excessively use computers, the internet, mobiles, tablets, apps, videogames, VR viewers and other screen-based technologies. This “addiction” to technology is creating physical, psychological and emotional issues in users due to sedentarism, cognitive overload and lack of socialisation. Rapid advances in robotics foreshadow a daily life of service/social robots performing tasks on our behalf and interacting with humans in everyday scenarios, raising the possibility of physical and emotional dependency in the near future. An over-use of robots for physical and cognitive activities could modify multiple behaviours, affecting the way humans experience daily life. Hence, some adjustments should be made in order to raise a healthy global population that uses robots while avoiding future negative consequences for users and society in general. Furthermore, addiction to robots should be defined. Questions requiring further research are raised: To what extent should a service/social robot be used? To what extent should we encourage social and emotional engagement with robots? How can we avoid negative interactions with robots, as suggested in much of science fiction? The media amplify the possible interactive and emotional scenarios involving human-robot relationships. People falling in love with robots, and being friends and enemies with robots, are fictional topics depicted in movies, books and TV series. Under this representation it is possible to imagine different future scenarios with “toxic relationships”, “emotional dependency” and “addiction to robots”. However, technology hasn’t yet achieved the level of sophistication required for natural human-robot interaction.
At the moment, there is limited progress in the development of social robots capable of even minimal social interaction involving emotional and psychological engagement with users under controlled conditions. Nevertheless, short-term future applications of social robots aim to use this technology in education, psychotherapy, caregiving and several other human-interactive purposes. Considering the importance of those areas for human development, this proposal aims to reflect on the adjustments required to use social robots safely in emotional, social and psychological terms, avoiding future addiction to them. Similarly, ethical guidelines for robot-behaviour design must be prepared in order to provide moral guidance to future robot designers.
Eduardo B. Sandoval is a social robotics researcher. His work spans different aspects of social robotics, such as reciprocity in Human-Robot Interaction (HRI), robots and education, and robots and fiction, among other topics. Mainly, he is interested in how people make decisions when they interact with robots and other interactive devices. Beyond traditional robotics, there is a growing interest in designing machines that are capable of meaningful social interactions with humans. His work incorporates insights from behavioural economics and social psychology in order to explore different approaches in social robotics. He claims that “as a result of working in social robotics I have a better understanding of the human condition”.
New measures were developed to explore the human-robot interaction dimensions of self-efficacy, incentives and intentions relating to the use of a social humanoid robot. Exploratory Factor Analyses were applied to investigate the initial structure of the scales, using a high-school student sample that rated a live 2-minute interaction between a robot and a person. Confirmatory Factor Analyses were then applied to confirm the factor structure in a new cross-sectional online survey sample, using a 2-minute video stimulus of an adult and a robot discussing a health-related topic. The trial also involved a two-part study that applied the newly developed measures in different experimental designs to investigate how individuals rate a human-robot interaction across different presentations, using between-groups and within-group designs. The trial findings and their implications will be discussed, including future considerations and development of the measures.
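The exploratory factor analysis step used in scale development of this kind can be illustrated with a minimal sketch on synthetic data. The item count, two-factor structure, noise level and Kaiser retention rule below are illustrative assumptions for the sketch, not details of the authors’ actual measures or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey data: 300 respondents answering 6 items driven by
# 2 latent factors (say, "self-efficacy" and "incentives" -- hypothetical
# labels, not the authors' scales).
n = 300
factors = rng.normal(size=(n, 2))
true_loadings = np.array([
    [0.8, 0.0], [0.7, 0.1], [0.9, 0.0],   # items written for factor 1
    [0.0, 0.8], [0.1, 0.7], [0.0, 0.9],   # items written for factor 2
])
items = factors @ true_loadings.T + 0.4 * rng.normal(size=(n, 6))

# Exploratory step: eigendecompose the item correlation matrix and
# retain components with eigenvalue > 1 (the Kaiser criterion, one
# common retention rule in EFA).
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # ascending order
order = np.argsort(eigvals)[::-1]                # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int(np.sum(eigvals > 1.0))

# Estimated loadings: eigenvectors scaled by sqrt(eigenvalue); large
# absolute loadings show which items cluster on which factor.
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
print(n_factors)   # the two planted factors should be recovered
```

A confirmatory analysis on a fresh sample would then fix this item-factor pattern in advance and test its fit, rather than letting the structure emerge from the data.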
Dr Nicole Robinson is a Robotics Researcher at the Australian Centre for Robotic Vision on the Humanoid Robotics project, an R&D project supported by the Queensland Government. She is currently leading clinical trials and experimental studies involving the use of humanoid robots in interpersonal interactions. Nicole’s research interests involve designing human behaviours in agents and robotic systems. She has previously conducted research in the healthcare field, including the translation of a psychotherapeutic program to be delivered by a humanoid robot.