On the first day of my Introduction to Philosophy course, I ask students whether an artificial intelligence can fall in love. Many students believe this to be possible: a sophisticated enough technology, they reason, could in theory have emotions like those humans experience. There are, however, good philosophical reasons to doubt that an AI could ever experience love.
Developers claim that we are witnessing rapid progress toward artificial general intelligence (AGI), or a technology that has all the intellectual capabilities of human beings (or more). Since AGI is currently purely theoretical, however, it is debatable which capabilities a technology must have in order to qualify.[i] What, after all, are the intellectual capabilities that we are looking for?
The ability to experience love (or related attitudes, like empathy and care) should probably go somewhere on the list. Love is, after all, essential to human intellectual and artistic pursuits. Many of us have the feeling that it is among the most important of human affairs. Philip Larkin captured this well when he wrote: “What will survive of us is love.”[ii]
But there is one very good reason not to put love on the list of capabilities that an AGI must have: if we do, then AGI will likely never exist.
More limited AIs, like chatbots, have already shown that they can make people believe the machines are in love. The New York Times columnist Kevin Roose had an eerie and widely publicized conversation with Microsoft’s AI “Sydney.”[iii] In a haunting tirade, whose cadence is almost reminiscent of the Song of Solomon (one of the greatest love poems ever composed), Sydney repeatedly and ardently professed her affection for Roose, urging him to end his marriage and love her in return.
Apps like Replika seek to capitalize on the illusion that AI can love. Replika claims to deliver the first “AI companion who cares,” and it features testimonials from users who say things like “he taught me how to give and accept love again.”[iv] But since Replika is a chatbot, an artificial intelligence trained on available datasets to simulate human conversation, we can be fairly certain that the emotions in these “relationships” are entirely one-sided, felt solely by the human users.
It is another question, however, whether a more sophisticated AGI could not just simulate love but experience it. The answer arguably lies in philosophical research on consciousness, more specifically in the longstanding debate about the phenomenological character of mental states.
The philosopher Frank Jackson put forward a well-known thought experiment, sometimes referred to as “Mary’s room,” which helps get to the heart of this debate.[v] Jackson asks readers to imagine Mary, a student of neuroscience, who is raised and educated in a room that is kept entirely black and white. Mary learns everything there is to know about the physical brain and its functions. She can explain exactly what goes on when, for example, a person perceives a red object. Jackson asks readers to consider what would happen if Mary were to be set free into the color world. Would Mary learn anything new?
Philosophers are divided about Mary. Physicalists have argued that when Mary sees her first red object, she doesn’t learn anything new. After all, she already has all the neuroscientific information. She knows all there is to know about seeing red. To claim that actually seeing red adds something to her knowledge base is mistaken; for physicalists, seeing is just another way of accessing what she already knows: the functional explanation of how color perception occurs.
Dualists, on the other hand, argue that Mary does learn something new. Mary acquires information about the conscious experience of redness: she learns “what it is like” to see red.[vi] Dualists believe that one cannot know about the texture of a conscious experience, whether that experience is of seeing red or falling in love, without having had it.
So what would each of these camps say about the possibility of a loving AGI? Dualists accept that there is something that “it is like” to have conscious experiences such as love. Furthermore, what it is like for a human to see red, smell rain, or fall in love is irreducible to functional information about that experience (how the eye reacts, how the neurons fire, and so on). So even if an AGI possessed a perfected neuroscientific explanation of human love (something that we definitely do not have at present), it would still lack important information about the fundamental character of loving experience. This is because, for dualists, no amount of data could ever replace actually having the conscious experience.
But what would a physicalist say about an AGI that possessed a complete neuroscience, including a complete functional description of human love? Could it fall in love? Patricia Churchland is a physicalist who responds to Mary’s room.[vii] When considering whether Mary in the room could not just know but also experience what it is like to see red, Churchland asks: “How can I assess what Mary will know and understand if she knows everything there is to know about the brain?”[viii] She further imagines us presenting Mary in the room with her first red object and asking her whether it is red: “perhaps she could, by introspective use of her utopian neuroscience, tell that she has, say, a gamma state in her O patterns, which she knows from her utopian neuroscience is identical to having a red sensation.”[ix] Mary is presented with a red object and identifies it as red. Why shouldn’t we think she is seeing red?
Perhaps, then, by analogy, an AGI that knows all there is to know about love, that possesses a complete functional description of human emotions, and that can engage in loving activities and identify them as such would be an AGI that loves.
But there’s one thing missing. We must not forget that Mary is human. When presented with her first red object, she experiences that object. By contrast, we have no reason to believe that an AGI that has all the neuroscientific information about love, and that engages in and identifies loving activities, will thereby start to experience what it is like to be in love. After all, there seems to be little or no connection between having information about something and experiencing it. Humans routinely experience things that we don’t know much about (like postpartum depression before it even had a name), and we know a great deal about many things that we don’t thereby experience (like mathematics or botany).
The thing that makes human beings special (or bizarre, depending on how you look at it) is that we not only have the ability to interpret the greyscale world of data, but we also have access to the color world of experience. And, when it comes to emotions like love, it’s impossible to understand them fully in black and white.
Notes:
[i] Consider, for example, the difference between Vincent Müller and Nick Bostrom’s “high-level machine intelligence (HLMI)” that “can carry out most human professions at least as well as a typical human” (“Future Progress in Artificial Intelligence: A Survey of Expert Opinion.” Fundamental Issues of Artificial Intelligence. Springer, 2016: 555-572) and Geordie Rose’s notion of an AGI “that can move around in the real world and interact directly with objects” (Harris, Jeremie. “Will AGI need to be embodied?” Towards Data Science Podcast, 2020: https://towardsdatascience.com/will-agi-need-to-be-embodied-a719db443b01).
[ii] Larkin, Philip. “An Arundel Tomb.” The Whitsun Weddings. Faber, 1964: line 42.
[iii] Roose, Kevin. “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled.” New York Times, Feb. 17, 2023: https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
[iv] Replika. “Our story.” Accessed May 25, 2023: https://replika.com/about/story.
[v] Jackson, Frank. “Epiphenomenal Qualia.” The Philosophical Quarterly 32, no. 127, 1982: 130.
[vi] Ibid., 132.
[vii] Churchland, Patricia Smith. Neurophilosophy: Toward A Unified Science of the Mind-Brain. MIT Press, 1986: 330-5.
[viii] Ibid., 332.
[ix] Ibid., 333.