Designing assistant classroom robots – a child-centered approach

Author

Mohammad Obaid

Abstract

Involving children in the design of the educational technologies they use daily is essential. However, it is unclear how this involvement should take place when designing a robotic assistant for a classroom environment. In this talk, I will present our approach to involving children in the development of a robotic assistant in a classroom environment. I will first describe the development of a robot design toolkit (Robo2Box) aimed at involving children in the design of classroom robots. Then, I will report on some of the studies we conducted to understand and evaluate our approach using the toolkit. Thereafter, I will outline some of the challenges and lessons learned from eliciting children’s requirements with the Robo2Box toolkit. Finally, I will conclude with possible design implications and future directions.

Bio

Dr Mohammad Obaid is a Lecturer at UNSW Art & Design, University of New South Wales, Sydney, Australia. Dr. Obaid received his BSc., MSc. (First Class Honors) and Ph.D. degrees in Computer Science from the University of Canterbury, Christchurch, New Zealand, in 2004, 2007 and 2011, respectively. In April 2018, he received his Docent degree (Associate Professor rank) from Uppsala University (Sweden). He has worked at several international research centres, including the Human Centered Multimedia Lab (HCM Lab), University of Augsburg (Germany); the Human Interface Technology Lab New Zealand (HITLab NZ), University of Canterbury (New Zealand); the t2i Lab, Chalmers University of Technology (Sweden); and the Social Robotics Lab, Department of Information Technology, Uppsala University (Sweden). Dr. Obaid is one of the founders of the Applied Robotics Group at the Interaction Design Division of Chalmers University of Technology (established in 2015). He is the (co-)author of over 60 publications in his research areas of Human-Robot Interaction and Human-Computer Interaction. In recent years, he has served on the organizing and program committees of major HCI/HRI conferences such as CHI, HRI, HAI, VRST, and NordiCHI.

Hypothetical Machines: Innovation, Labour, and Reward in an age of AI

Author

Liam Magee

Abstract

In his critique of AI, Pasquinelli writes: “The ‘intelligence’ of neural networks is, therefore, just a statistical inference of the correlations of a training dataset… neural networks cannot escape the boundary of the categories that are implicitly embedded in the training dataset”. Pasquinelli’s commentary belongs to a growing but also historically well-established literature linking the epistemological limits of AI to questions of political economy. If machines can only reproduce well-defined tasks — however ingeniously — human activity must remain an irreducible residual in any economic calculus. At the same time, and against big-tech and accelerationist enthusiasm for AI, machinic labour on this account serves only to reproduce existing relations of production. Pasquinelli concludes that “statistical inference is the distorted, new eye of the capital’s Master”. Whether the current connectionist paradigm in AI is in fact constrained in the ways Pasquinelli and other critics diagnose remains questionable. Might there instead be scope, within the massive parallel operations conducted deep inside today’s neural networks, to produce novelty? If so, what implications would such novelties have for incentive-based economies — including academia — premised substantially upon innovation? What happens when, as legal scholars have begun to theorise, algorithmic outputs become patentable? This paper examines heralded instances of machine learning in gameplay, code generation and knowledge production that stretch the limits of what is thought computable. Such examples suggest that, at sufficient scale, what begins as “statistical inference” may become indistinguishable from other, more privileged forms of human cognition. Specifically, the ability to generate scientific hypotheses, poetic metaphor or disruptive market innovation — examples of what Peirce referred to as “abduction” — would no longer be exceptionally human. The paper suggests that, under current tendencies, this realisation will only further distance the owners of technologically invested capital from disenfranchised subjects, left with barely even their labour to sell. At the same time, full cognitive automation also deflates moral arguments for the merit-based differential distribution of rewards that underpins free-marketism. If algorithms come to dominate the commanding heights of cognitive capitalism, other systems for resource distribution may appear more compelling, morally and politically. Such prospects reflect back upon ethical questions raised in connection with AI. Echoing Marx, heterodox economists today (e.g. Dumenil & Levy) argue that any talk of the “good life” must remain perniciously ideological in the presence of widespread domination. The paper concludes with desiderata for a virtuous AI sociality, including a retracing of rising inequality, that would form the ground for any ethical encounter between individual human and machine.

Bio

Liam Magee is a sociologist of science and technology, specialising in the application and impact of software on urban ways of life. His current research centres on machine learning, digital games and data analytics, and how these technologies interact and interfere with social systems such as cities, organisations, labour, environmental movements and financial markets.

Robots and recognition – what it would mean and could there be any?

Author

Heikki Ikäheimo (UNSW Sydney)

Abstract

In my presentation I want to clarify some of the basic philosophical issues concerning the question of whether robots or ‘artificial agents’ could be subjects or objects of recognition. I will discuss what recognition actually is, or what its forms are, and how they are related to sociality and mindedness. On the view that I defend, recognition in a particular ‘purely intersubjective’ sense is fundamental both to sociality as we know and expect it, and to having a mind in the sense that we know and expect it. This does not preclude robots or artificial agents being appropriate objects of ‘recognition’ in other senses of the term (some more, some less trivial), independently of their capacity for sociality, mindedness and intersubjective recognition.

Bio

Heikki Ikäheimo is a Senior Lecturer at UNSW Sydney. His research areas include the theory of recognition, personhood, social ontology, and critical social philosophy. His publications include the monograph Anerkennung (De Gruyter, 2014) and the co-edited volumes Recognition and Social Ontology (Brill, 2011), Ambivalences of Recognition (Columbia UP, forthcoming), and Handbuch Anerkennung (Springer, forthcoming).

Social Cognition meets Ex Machina: Wittgensteinian Worries about Social Robotics

Author

Daniel D. Hutto

Abstract

We are in the midst of an info-revolution. It is a time of deep learning and big data. The robots are coming! Artificial intelligences are coming! Indeed, both are already here, and developing fast. This makes the practical puzzle of social robotics urgent: how can we create these new agents so that they interact fluidly with us? In considering this question, this presentation will look at lessons we can draw from the interactive turn that dominates enactive and embodied cognitive science and the best new theorizing about the basis of human social cognition. These considerations prompt the crucial question: to what extent do we need to ensure that social robots and AIs share our ‘form of life’ if we are to welcome and trust their company, rather than merely have effective, but potentially unpredictable, interactions with them?

Bio

Daniel D. Hutto is Senior Professor of Philosophical Psychology at the University of Wollongong, Associate Dean of Research of the Faculty of Law, Humanities and the Arts, and a member of the Australian Research Council College of Experts. He is co-author of the award-winning Radicalizing Enactivism (MIT, 2013) and its sequel, Evolving Enactivism (MIT, 2017). His other recent books include Folk Psychological Narratives (MIT, 2008) and Wittgenstein and the End of Philosophy (Palgrave, 2006). He is editor of Narrative and Understanding Persons (CUP, 2007) and Narrative and Folk Psychology (Imprint Academic, 2009). A special yearbook, Radical Enactivism, focusing on his philosophy of intentionality, phenomenology and narrative, was published in 2006. He is regularly invited to speak not only at philosophy conferences but also at expert meetings of anthropologists, clinicians, educationalists, narratologists, neuroscientists and psychologists.

The realism of computational intelligence in children’s rehabilitation applications

Author

Adel Ali Al-Jumaily (UTS)

Abstract

The talk will introduce the problems associated with computational-intelligence-based biomedical applications for children’s rehabilitation, and will emphasise the realism of using computational intelligence and practically achievable machine learning. It will cover bio-signal/image processing and pattern recognition, with a particular focus on EMG-driven systems. It will present a novel working myoelectric controller for autism, children’s mobility, and hand-rehabilitation devices that can deal with such issues. The proposed systems are based on computational intelligence techniques, including accurate pattern recognition that works well in real time. The talk will also cover image pattern recognition for skin cancer and the realism of that approach.
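
For readers unfamiliar with EMG-based pattern recognition, the sketch below shows the general shape of such a pipeline: windowed time-domain features fed to a standard classifier. It is not the speaker’s system; the sampling rate, window sizes, feature set, classifier choice and the NumPy/scikit-learn dependencies are illustrative assumptions only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical parameters -- not those of the system described in the talk.
WIN = 200    # 200-sample analysis window (200 ms at an assumed 1 kHz rate)
STEP = 50    # 50-sample window increment

def time_domain_features(channel):
    """Classic time-domain EMG features for one channel of a window."""
    mav = np.mean(np.abs(channel))                 # mean absolute value
    wl = np.sum(np.abs(np.diff(channel)))          # waveform length
    zc = np.sum(channel[:-1] * channel[1:] < 0)    # zero crossings
    rms = np.sqrt(np.mean(channel ** 2))           # root mean square
    return [mav, wl, zc, rms]

def extract_features(emg, labels):
    """Slide a window over multi-channel EMG (samples x channels)."""
    X, y = [], []
    for start in range(0, emg.shape[0] - WIN, STEP):
        window = emg[start:start + WIN]
        feats = [time_domain_features(window[:, c]) for c in range(window.shape[1])]
        X.append(np.concatenate(feats))
        y.append(labels[start + WIN // 2])         # label taken at window centre
    return np.array(X), np.array(y)

# Usage with recorded data (hypothetical arrays):
#   X, y = extract_features(emg_recording, per_sample_labels)
#   clf = LinearDiscriminantAnalysis()
#   print(cross_val_score(clf, X, y, cv=5).mean())
```

Real-time myoelectric control adds further constraints the sketch ignores, such as keeping the window-plus-processing delay short enough for responsive device control.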

Bio

Dr. Adel Al-Jumaily is an Associate Professor at the University of Technology Sydney. He holds a Ph.D. in Electrical Engineering (AI). His research covers the fields of Computational Intelligence, Bio-Mechatronic Systems, Health and Biomedical Technology, vision-based cancer diagnosis, and bio-signal/image pattern recognition. Adel has developed a novel approach to electromyogram (EMG) control of prosthetic devices for rehabilitation and has contributed to signal/image processing and computer vision. He has successfully developed many nature-inspired algorithms for bio-signal/image pattern recognition problems. Adel sits on the editorial boards of a number of journals and has served as chair or technical committee member for more than 60 international conferences. He is currently Editor-in-Chief of one journal and Associate Editor-in-Chief of two journals. He is a senior member of the IEEE and of many other professional bodies.

Robotic camera movement and the any-movement-whatever

Author

Chris Chesher

Abstract

Camera movement has been part of cinematic language since the earliest days of cinema. From the Lumières’ view of Lyon from a train to Cuarón’s compositing of Sandra Bullock tumbling through space in Gravity, camera movement has performed viewers’ dynamic relationship to mise-en-scène, objects, characters and events. Since Industrial Light and Magic’s 1975 Dykstraflex, computer-controlled cameras have established a new level of precision and repeatability in camera movement. Such devices have become the most familiar mediators of a robotic aesthetic. They allow inhumanly rapid or slow movements that provide any-movement-whatever in the world of the scene. Seen particularly in science fiction and advertising, spectacular robotic camera movement promises sensorial experience untethered from the body.

Bio

Chris Chesher is Senior Lecturer in Digital Cultures in the Department of Media and Communications at the University of Sydney. His recent research has focused on robotics, smart speakers, the smart home and digital real estate.

Pinocchio Doctrine: Exposing the paradox of robotic nature

Author

Massimiliano Cappuccio

Abstract

What kind of entity, essentially, is a social robot, what is its true purpose, and how do we assess its significance for our lives? Are robots meant to replace/overtake, extend/augment, or imitate/represent humans? To address these philosophical questions, I propose a speculative approach to human-robot interaction inspired by virtue ethics, recognition theory, and constructivist epistemology. I illustrate this approach allegorically through Carlo Collodi’s famous children’s novel Pinocchio. Pinocchio is the literary prototype of a social robot, and every social robot is, in some sense, an instantiation of Pinocchio. That is why the key elements of Pinocchio’s story tell us something about human-robot interaction: Pinocchio is a marionette that aspires to become a person; Pinocchio is created by Geppetto (the poietic ingenuity) for this purpose, but it is the Blue Fairy (empathy and social interaction) that makes the transformation possible; Pinocchio is a pathological liar, but cannot hide his deceptive intentions; Pinocchio can become a person only when, instead of deceiving others, he develops an authentic concern for them. This “Pinocchio doctrine”, as I call it, gives voice to the intuition that, if we want to correctly define their ontological, axiological, and moral status, social robots should be conceived neither as slaves nor as companions. These two antithetical options presuppose the same paradoxical normative view of human-robot interaction, one that is rooted in oxymoronic dyads (ownership/autonomy, subordination/decision, instrumentality/personhood). To make sense of their value, I suggest social robots should first of all be understood as human creations and, more specifically, as proto-creatures – that is, a particular subset of creations meant (but not necessarily able) to recognize and be recognized by a social other. Even if the creature is originally tied to its creator, their bond is established only to be transcended: the ideal creator is the one who welcomes the independence of its creature, and an accomplished creature is the one who eventually achieves autonomy from its creator. The irreducible tension between these two aspects of the social robot explains why we are unable to ascribe rights to robots, which we own as our own tools, and yet we must give some moral consideration to them, as we inevitably tend to value their autonomy, even when incomplete. These considerations justify a constructivist and relational framework for articulating the ethical, epistemic, and aesthetic value of social robots. Relying on this conceptual framework, the Pinocchio doctrine allows us to clarify the status of robots as moral patients and agents. It also indicates whether obligations, rights, and legal liabilities apply to social robots. Ultimately, it suggests a virtuous way to design robots, to behave with them, and to interpret their role in our lives.

Bio

Massimiliano Cappuccio (PhD, State University of Pavia) is a Research Associate at the University of New South Wales, where he is also Deputy Lead (Knowledge Exchange) of the Values in Defense & Security Technology (VDST) group. He has a secondary affiliation with United Arab Emirates University, where he is Associate Professor of Cognitive Science and Director of the Cog Sci Lab. His work as a cognitive philosopher and philosophical psychologist addresses both theoretical and applied issues in embodied cognition and social cognition, combining analytic, phenomenological, and empirical perspectives. His publications have covered topics such as: motor intentionality and the phenomenology of skill and expertise; unreflective action and the choking effect; joint attention and deictic gestures; empathy and mirror neurons; social robotics and technology ethics; artificial intelligence theory; and implicit knowledge and the frame problem. He is one of the organizers of the annual Joint UAE Symposium on Social Robotics, hosted by United Arab Emirates University and New York University Abu Dhabi, and the International Conference in Sport Psychology and Embodied Cognition, sponsored by Abu Dhabi Sports Council.

Robots and Racism Revisited

Author

Christoph Bartneck (University of Canterbury)

Abstract

We previously showed that racialized robots are treated with the same racial biases as humans, by replicating the well-established shooter-bias study by Correll et al. with robots. I will now present new insights into the study of racism within the framework of the shooter-bias methodology. Furthermore, I will elaborate on the process of conducting research on racism in the human-robot interaction community.
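
As background, the shooter-bias paradigm records participants’ decisions to “shoot” or “hold fire” when shown armed and unarmed targets, and looks for an interaction between the target’s appearance and response speed or errors. The minimal sketch below shows how such reaction-time data might be summarised; the column names, values and the pandas dependency are hypothetical illustrations, not the study’s actual materials or analysis.

```python
import pandas as pd

# Hypothetical trial-level data from a shooter-bias-style task with robots.
trials = pd.DataFrame({
    "robot_colour": ["dark", "dark", "light", "light"] * 2,
    "object":       ["gun", "tool"] * 4,
    "response":     ["shoot", "hold", "shoot", "hold"] * 2,
    "rt_ms":        [420, 510, 455, 480, 430, 505, 450, 470],
})

# Keep correct responses only: shoot when armed, hold fire when unarmed.
correct = trials[
    ((trials["object"] == "gun") & (trials["response"] == "shoot"))
    | ((trials["object"] == "tool") & (trials["response"] == "hold"))
]

# Shooter bias appears as an interaction between the robot's colour and
# whether it is armed in the per-condition mean reaction times.
print(correct.groupby(["robot_colour", "object"])["rt_ms"].mean().unstack())
```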

Bio

Dr. Christoph Bartneck is an associate professor and director of postgraduate studies at the HIT Lab NZ of the University of Canterbury. He has a background in Industrial Design and Human-Computer Interaction, and his projects and studies have been published in leading journals, newspapers, and conferences. His interests lie in the fields of Human-Computer Interaction, Science and Technology Studies, and Visual Design. More specifically, he focuses on the effect of anthropomorphism on human-robot interaction. As a secondary research interest he works on bibliometric analyses, agent-based social simulations, and the critical review of scientific processes and policies. In the field of design, Christoph investigates the history of product design, tessellations and photography. He has worked for several international organizations including the Technology Centre of Hannover (Germany), LEGO (Denmark), Eagle River Interactive (USA), Philips Research (Netherlands), ATR (Japan), and Eindhoven University of Technology (Netherlands). Christoph is an associate editor of the International Journal of Social Robotics, the International Journal of Human-Computer Studies and the Entertainment Computing journal. Christoph is a member of the New Zealand Institute for Language, Brain & Behavior, ACM SIGCHI, the New Zealand Association of Scientists and Academic Freedom Aotearoa. The press regularly reports on his work, including New Scientist, Scientific American, Popular Science, Wired, the New York Times, The Times, the BBC, the Huffington Post, the Washington Post, The Guardian, and The Economist.