Open Letter, Closed Minds: Explaining the Response of AI Ethicists to the Call for a ‘Giant Pause’

Josh Gellers
Mar 31, 2023


Why did an open letter urging that tech companies pump the brakes on potentially harmful AI systems cause an uproar among the very experts who have long been advocating for such a move?

Image generated by DALL·E 2

Another week, another artificial intelligence (AI) controversy. This time, the issue revolves around a call to halt the development of certain AI systems for at least six months while risks are properly assessed and safeguards against potential harms are designed and implemented. On its face, this open letter is exactly the kind of thing one might imagine AI ethicists would welcome with equally open arms.

But that’s not what has transpired. Instead, voices at the vanguard of the AI ethics movement have largely rejected this overture, not necessarily because of principled disagreement with the crux of the message (which some admittedly endorse), but rather because of its messengers. At the time of writing, big names in the technology world who often find themselves on the receiving end of heavy criticism from AI ethicists, such as Elon Musk, Gary Marcus, Grady Booch, Steve Wozniak, and Andrew Yang, have already signed on to the effort.

The open letter thus puts the elite AI ethics camp in a bit of a bind. On the one hand, it actively promotes the sort of cautious deliberation and careful creation of regulatory and technological guardrails that so many have been clamoring for since the beginning of their careers. On the other hand, the call is being spearheaded by the very “tech bro” libertarian types that the AI ethics set finds noxious to their moral sensibilities and commitment to social justice. Quite the quandary, indeed.

In this short essay, I highlight two major flaws in the reactions observed among some of the most well-regarded names in AI ethics. I also offer an explanation for their nearly unqualified revulsion toward a popular initiative that is, in word and deed, well aligned with the goals of those who have spilled considerable ink identifying the perils of allowing profit-seeking companies to haphazardly roll out AI applications.

First, one of the main critiques leveled against the open letter is that it co-opts and distorts legitimate scholarship on the risks posed by large language models (LLMs). In particular, the authors of the famous “Stochastic Parrots” paper, led by Timnit Gebru, took umbrage at the reference to their work, on the grounds that the open letter misinterprets an argument raised in their piece.

What’s interesting to note here is that the main thrust of the claim in the open letter, that “AI systems…pose profound risks to society and humanity,” would be supported by virtually all AI ethicists without hesitation. The disagreement instead stems from the use of the phrase “human-competitive intelligence,” which is viewed as a subversive way of smuggling in AI hype, something the authors unequivocally do not support.

As Gebru goes on to say in a subsequent tweet, the assertion that AI systems like LLMs are capable of or possess something akin to “human-competitive intelligence” is itself a kind of harm, a harm we might reasonably infer is inflicted upon the ignorant masses who are deceived by ne’er-do-well tech companies into thinking AI is more than a mere tool (of their own surveillance, targeting, and discrimination).

“Poor fools,” the AI elites quietly muse from the comfort of their Big Tech-consulting-funded midcentury recliners. “They need a Philosopher King to show them the light outside the cave!” The duty of AI ethicists, as they see it, is to convince hapless laypeople that the emperor has no clothes. AI is just a thing, perception be damned.

The main point here is that this kind of nit-picky analysis is excessively myopic, if not disingenuous. In order to distance themselves from the open letter, which again stakes out a position that AI ethicists broadly endorse, critics contort themselves into a pretzel trying to carve out daylight between their view and the one espoused by their Silicon Valley villains so they can cement their status as heroes to the maligned underclass of unwitting end-users.

Second, and on a related note, the dominant line of reasoning employed by several top AI ethicists in their responses to the open letter amounts to an ad hominem attack. It is no secret that many people in the field are deeply suspicious of the longtermist and effective altruism (EA) movements (they often point to this article in Current Affairs). Their critiques of these movements appear sensible, especially when they draw attention to the movements’ eugenicist implications. To be clear, I am not a member of either of these movements, and no defense of longtermism/EA will be supplied here. But it is the obsession with countering these movements that reveals a frailty in their reactions to the open letter.

As mentioned earlier, the “Stochastic Parrots” paper is cited authoritatively in the open letter. It’s actually the first citation at the end of the document. As such, it is unquestionably a powerful anchor that helps legitimize the aims of the open letter. This fact is deeply unsettling to the authors of the paper, who are quick to point out that their work rests atop a list of papers penned by alleged longtermists. The crime here is guilt by association.

The vendetta against longtermism/EA doesn’t stop there. The critics go so far as to accuse the organization on whose website the open letter appears, the Future of Life Institute (FLI), of being “a longtermist operation.” Shortly after this accusation was made, Mark Brakel, Director of Policy at FLI, issued a rejoinder to computational linguist and “Stochastic Parrots” co-author Emily Bender.

Brakel’s three-part response does not satisfy Bender, who replies by challenging him to reveal his funders and “stop publishing alarmist open letters that are dripping with xrisk-style AI hype.” His public declaration that FLI is not longtermist is ultimately deemed insufficient to shield his organization from Bender’s righteous criticism.

So far I have shown that the reaction among some of the most prominent AI ethicists to the open letter, whose overall aims they fully support, is based on two primary objections: 1) the call recklessly co-opts widely respected AI ethics scholarship, and 2) both the organization that published the open letter and many of the works cited in its list of references are associated with longtermism. I hope to have established that objection #1 deliberately interprets select terms and ideas in an uncharitable manner in order to discredit the entire project, and that objection #2 qualifies as an ad hominem attack.

This leaves us with a question: why did the open letter generate such widespread condemnation from the AI ethics community despite the initiative’s pursuit of goals that the community itself has advocated for?

The answer, as Stanley Motss from Wag the Dog might say, is that they want the credit. One of the tweets in Bender’s thread on the topic gives the game away.

To reiterate an important point, scholars who have worked extensively on the risks of AI, such as Prof. Bender, fundamentally agree with the objectives of the open letter. But in that tweet, we see that at least part of the frustration from the AI ethics camp emerges from the notion that it is the tech bros, not the scholars seeking to protect vulnerable communities, who could be credited with having pushed AI policy in a productive direction. “Don’t listen to Musk, listen to US!” they sneer.

The message conveyed here is that policymakers should listen to the venerable AI experts, not the tech bros, even when the two make virtually the same argument. (Relatedly, Bender inadvertently tipped her hand in this direction a few days ago when, in the span of a single tweet, she excoriated Senator Chris Murphy for “spreading … misinformation” about LLMs and kindly offered her services to help educate his staff on the subject.)

AI controversies will not go away any time soon. This latest affair, in which the antagonists of AI ethicists managed to elevate the idea of a moratorium on AI research and development into the zeitgeist, tells us something sad but important: despite the incestuous relationships that have long blurred the lines between Big Tech and AI ethics (indeed, many of the loudest critical voices have either worked for or continue to be supported in one way or another by the very companies they rail against), conflict is more likely to carry the day than cooperation, even when both sides actually agree on the broad strokes of a common agenda.

Ironically, the people who stand to lose the most from this lovers’ quarrel are the very folks that the AI ethics evangelists seek to protect. If unity in the direction of AI policy proves impossible, stagnation and harm will result. Hopefully both sides in this dispute can appreciate that the risks of inaction exceed the benefits of being hailed as the architects of the solutions.

Epilogue I: Since this article was published, it has come to my attention that a competing open letter has been issued by Nathalie Smuha, Mieke De Ketelaere, Mark Coeckelbergh, Pierre Dewitte, and Yves Poullet. This new call, entitled “Open Letter: We are not ready for manipulative AI — urgent need for action,” addresses many of the shortcomings in the original letter. I encourage all readers to consider the merits of this alternative perspective and decide for themselves which approach is better suited to address the risks posed by unchecked AI development.

Epilogue II: The authors of Stochastic Parrots composed a statement regarding FLI’s open letter (and its use of their scholarship), which they posted on the Distributed AI Research (DAIR) Institute website. The statement acknowledges that several of the suggestions in the original open letter hold merit while also repeating the accusation that FLI is a longtermist organization. Significantly, the authors of the statement call for more transparency, participatory design, and “building machines that work for us,” although they urge that AI regulations must “protect[] the rights and interests of people” (as opposed to any non-humans, natural or technological) and contend that the open letter’s emphasis on “‘imaginary’ powerful digital minds” is a mere distraction from the “very real” concerns animated by the unsavory actions of profit-seeking companies.



Written by Josh Gellers

I’m an associate professor of political science, Fulbright scholar, and author. Follow me @JoshGellers or visit my website www.joshgellers.com.
