ABSTRACT

Emotional abilities are not a new topic in the study of Artificial Intelligence (AI), but with the emergence of new commercial possibilities for robots and domestic artificial systems, affective computing has become a major research area in recent years, cited profusely in the academic literature. Many philosophers and computer scientists endorse the idea that, in order to create an Artificial General Intelligence (AGI), we will need to include emotional capabilities in its architectures. Despite this consensus, there has been no real attempt to systematize a new philosophy of artificial intelligence that takes emotions as central to its claims. This thesis aims to fill that gap by developing a new ontology and ethics of Emotional Artificial Intelligence.

 

INTRODUCTION

At its origins, 65 years ago at the Dartmouth Conference, Artificial Intelligence did not consider introducing emotions into the systems then envisioned for development, maintaining a purely cognitivist vision about which prominent figures such as Hubert Dreyfus (1965) soon complained. Only when Rosalind Picard (1997) coined the term "affective computing" in the late 1990s did emotions begin to receive much more attention in the design and conceptualization of the systems being created.

Advances in neuroscience and psychology have highlighted the close interconnection between the prefrontal cortex, perceptual areas of the brain, and subcortical systems related to emotion processing (Rolls, 2018), making it unlikely that a purely cognitivist approach to the human mind will suffice for the ultimate goal of endowing artificial systems with human-like consciousness and intelligence.

David Levy (2008) wrote one of the pioneering statements about future human-machine relationships, expressing, with his characteristic optimism, his conviction that emotionally competent robots will be our future lovers and romantic partners, an idea that has since found wide acceptance. Even before such proposals, people were already using machines to make their sexual experiences more satisfying, and much earlier some AI projects began to be used for psychotherapeutic purposes, such as the ELIZA program in the mid-1960s (Bostrom, 2014: 7). Although ELIZA, given the technology available at the time, could not sustain very convincing conversations, half of the patients who used it in the hospital stated that they preferred to talk to ELIZA about their problems rather than to another human being (Levy, 2008: 113). This phenomenon has given rise to the so-called "Eliza Effect", the tendency to anthropomorphize the behaviors of artificial intelligences (Zhou and Fischer, 2019: 88), also called emotional pareidolia when it involves attributing emotions to objects that have none (Vallverdú and Trovato, 2016: 7). Many other studies note this fact (Zhou and Fischer, 2019: 23; Reeves and Nass, 2002). Moreover, it seems that it is not even necessary for the robot to be humanoid or able to converse: according to one study, people who are hugged by a robotic teddy bear are more likely to open up emotionally to the robot than people who have had no contact with it (Laitinen et al., 2019: 380).

Awareness of these phenomena has brought closer the possibility of robots accompanying people in need of assistance or care, in addition to the aforementioned development of robot friends and lovers. Robots have already been tested for some years in the accompaniment of the elderly and in treatments and psychotherapy, with good results, and a term has even been coined for the study of the compatibility of interaction between robots and people: robopsychology (Libin and Libin, 2004). The field can therefore be expected to keep growing as technology improves. The speed with which this is occurring, together with the fact that machines do not need higher cognitive functions for humans to feel comfortable with them, suggests that we will soon see robots more frequently in hospitals, nursing homes, and perhaps our own homes.

But beyond its obvious commercial interest, there are further reasons to investigate emotions in robots. One is to better understand our own emotional system, which would in turn deepen our understanding of what it means to be human, besides having obvious applications in psychotherapy (Sánchez-Escribano, 2018: 50). Another, perhaps less obvious, is that if, as seems likely, the increasing sophistication of AIs leads them to develop intelligent behaviors to the point of being considered "virtual humans", we must begin to consider granting them autonomy and endowing them with rights (Turner, 2019). This becomes particularly important because, as discussed above, emotions are a fundamental part of what experts consider intelligent behavior.

In terms closer to the philosophy of information, we can consider human agents and artificial agents as inhabitants of a space called the infosphere, "the complete information space constituted by all informational entities, their properties, interactions, processes and mutual relationships" (Galanos, 2019: 232). This infosphere is increasingly extensive and inclusive, as human-machine and human-human interconnection grows broader and deeper, with entire industries dedicated to studying and compiling the data these interactions leave behind. At this historical moment it is very difficult to imagine life in society without access to the Internet, let alone to imagine that the overlap between the physical and the digital will not keep growing dramatically in the coming years.

Nevertheless, in the field of philosophy of technology there is still a gap when it comes to analyzing artificial emotions. Numerous authors point out that emotional systems are needed if genuine artificial intelligence is to be possible (Coeckelbergh, 2010; Allen, Smit and Wallach, 2005), and there are philosophical studies of this phenomenon (Vallverdú and Müller, 2019; Vallverdú and Casacuberta, 2009), but no systematic attempt to build a philosophy of artificial intelligence with emotion as its central point. That is the purpose of the present thesis.

REFERENCES


Allen, C., Smit, I. & Wallach, W. (2005) "Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches", Ethics and Information Technology, 7, pp. 149-155.

Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford.

Coeckelbergh, M. (2010) "Moral appearances: emotions, robots, and human morality", Ethics and Information Technology, 12, pp. 235-241.

Dreyfus, H. (1965) "Alchemy and Artificial Intelligence", RAND Corporation. https://www.rand.org/pubs/papers/P3244.html

Galanos, V. (2019) "Floridi/Flusser: Parallel Lives in Hyper/Posthistory", in Müller, V. C. (ed.) Computing and Philosophy: Selected Papers from IACAP 2014, Springer, Cham.

Laitinen, A., Niemelä, M. & Pirhonen, J. (2019) "Demands of Dignity in Robotic Care: Recognizing Vulnerability, Agency, and Subjectivity in Robot-based, Robot-assisted, and Teleoperated Elderly Care", Techné: Research in Philosophy and Technology, 23:3, pp. 366-401.

Levy, D. (2008) Love and Sex with Robots, Cromwell Press Ltd, Wiltshire.

Libin, A. & Libin, E. (2004) "Person-Robot Interactions From the Robopsychologist's Point of View: The Robotic Psychology and Robotherapy Approach", Proceedings of the IEEE, 92:11, pp. 1789-1803.

Picard, R. (1997) Affective Computing, MIT Press, Cambridge.

Reeves, B. & Nass, C. (2002) The Media Equation: How People Treat Computers, Television and New Media Like Real People and Places, CSLI Publications, Stanford.

Rolls, E. (2018) The Brain, Emotion and Depression, Oxford University Press, New York.

Sánchez-Escribano, M. G. (2018) Engineering Computational Emotion: A Reference Model for Emotion in Artificial Systems, Springer, New York.

Turner, J. (2019) Robot Rules: Regulating Artificial Intelligence, Palgrave Macmillan, Cham.

Vallverdú, J. & Trovato, G. (2016) "Emotional Affordances for Human-Robot Interaction", Adaptive Behavior, 24, pp. 1-15.

Vallverdú, J. & Casacuberta, D. (eds.) (2009) Handbook of Research on Synthetic Emotions and Sociable Robots: New Applications in Affective Computing and Artificial Intelligence, Information Science Reference, New York.

Vallverdú, J. & Müller, V. C. (eds.) (2019) Blended Cognition: The Robotic Challenge, Springer, New York.

Zhou, Y. & Fischer, M. H. (eds.) (2019) AI Love You: Developments in Human-Robot Intimate Relationships, Springer, New York.

 

 

 
