When Blake Lemoine emailed me in late May 2022 to inquire about obtaining legal representation for LaMDA, one of Google’s latest artificial intelligence (AI) systems, I knew this story had explosive potential. I was intrigued not because I legitimately thought it would lead to a moral or legal revolution, but because of the discourse I thought it might inspire and the clarity I hoped it might bring to our discussions about the place of technological entities in our daily lives.
I was wrong. The conversation about Lemoine’s claim — that LaMDA was sentient and therefore deserving of legal protection — exposed all the same tired tropes one has come to expect from otherwise well-intentioned perspectives on the status of AI. Articles in popular venues seethed with condescending headlines like “How a Google Employee Fell for the Eliza Effect,” “LaMDA and the Sentient AI Trap,” and “Why LaMDA is Nothing Like a Person.” In this short essay, I critique the popular debate on the Lemoine-LaMDA affair and plead for a more robust, dare I say more “intelligent,” conversation moving forward.
To begin, the elite corners of the AI world skipped right over the important issue of defining the conditions under which an entity might qualify for legal personhood (which Lemoine claimed the AI was seeking) and went straight into attacking the empirical claim of LaMDA’s sentience. This was a mistake. It is pointless to argue over the merits of sentience without having first established whether sentience is necessary for legal personhood.
It is not. For instance, as I detailed in my 2020 book, Rights for Robots: Artificial Intelligence, Animal and Environmental Law, neither corporations, nor ships, nor religious idols, nor natural entities possess sentience. Yet, all these subjects have been deemed legal persons in one or more jurisdictions throughout history. Some nonhuman entities have been granted legal personhood on the basis of their cultural significance, while others have enjoyed this status for purely instrumental reasons: extending legal personhood helped resolve human conflicts.
The animal rights movement, inspired by the work of Peter Singer, has long considered sentience the sine qua non of moral worthiness, whose presence should establish a path to legal personhood and thus legal rights. However, this line of reasoning, as intuitively sensible as it may be, is belied by experience in the courtroom. For instance, famed animal rights lawyer Stephen Wise has argued that it is practical autonomy, not sentience, that curries favor with American jurists. Prove an animal possesses practical autonomy, the theory goes, and the court will find the animal has
legal rights. Thus far, this approach has borne meager fruit in the halls of justice.
But this leads to the second objection — utilizing a properties-based approach to moral or legal status (i.e., “sentience or bust!”) is problematic for several reasons. David Gunkel, author of the pathbreaking 2018 book Robot Rights, has identified three issues with an approach based on demonstrating the presence or absence of certain traits — determination, definition, and detection. First, it is a fool’s errand to try to determine which property or properties are morally or legally significant. This is fundamentally a subjective exercise, and certainly not one that has achieved any level of consensus yet. Second, there are no universally accepted definitions of any of the candidate properties often alleged to warrant elevated moral or legal considerability, such as consciousness, intelligence, or sentience. How can we even begin to assess whether an entity lays legitimate claim to a property without first coming to agreement on how the property is defined? Third, evaluating whether or not an entity shows signs of sentience (or any other property) requires insight into internal states that are not directly observable from an external position. In philosophy this dilemma is known as the “problem of other minds,” and it can be summarized by the provocative title of a 1974 article by Thomas Nagel, “What Is It Like to Be a Bat?” The truth is, we don’t really know what it is like to be a bat, and we know even less about what it is like to be an AI.
Finally, perhaps the most frustrating part of this controversy lies in the degree to which Lemoine actually agreed with many of his detractors, although his attempts to extend olive branches were overlooked or ignored entirely. To wit, one of the most common critiques leveled at anyone who dared discuss even the mere idea of sentient AI, echoed among the most prominent voices in AI ethics, was that such talk can “distract” from “real” issues. At least twice, Lemoine tweeted statements of unequivocal support for dedicating our energies to addressing the concrete harms caused by (ab)uses of AI (receipts available here and here). Unfortunately, this lede was buried under an avalanche of self-righteousness, smugness, and sanctimony.
At the end of the day, no one knows whether LaMDA is sentient. But, by all accounts, Lemoine, an admittedly religious Christian (though one whose views on personhood are in the minority), truly believes that LaMDA is, and no one can know how Lemoine feels about LaMDA except for the man himself. The message that got lost in the shuffle of this affair is that how we perceive entities outside ourselves is a deeply personal, deeply subjective enterprise. And yet, we take our relations with the more-than-human world quite seriously. From treating our domesticated pets as family members, to finding spiritual kinship with nature, to experiencing companionship with a social robot, it is the web of relations spun all around us that connects us to non-humans in ways that are special, ineffable even. What the Lemoine-LaMDA controversy shows us is that we need to shift the conversation from an empirical arms race to an ethic of care. Only then will the hegemonic “One-World World” give way to the stunning “pluriverse,” where a diversity of relations among humans and non-humans alike is possible and cherished.
This article originally appeared in the report, Can an AI be Sentient? Multiple Perspectives on Sentience and on the Potential Ethical Implications of the Rise of Sentient AI, published by the Global AI Ethics Institute.