Google has confirmed that it has fired the engineer who claimed the firm’s LaMDA artificial intelligence had become sentient.
In a statement on July 22, Google confirmed it terminated engineer Blake Lemoine after he told The Washington Post in June that LaMDA—short for Language Model for Dialogue Applications—had the cognitive ability of a 7- or 8-year-old child that “happens to know physics.” Previously, the Mountain View, California-based company announced that Lemoine had been suspended.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company said on July 22 on the Big Technology Substack page. “We will continue our careful development of language models, and we wish Blake well.”
The statement added that Google takes AI development “very seriously” and remains committed to innovating in a “responsible” manner, pointing to a research paper that details what goes into “responsible development.”
“If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months,” Google said.
On June 6, Lemoine penned a blog post about LaMDA, which allows people to converse with the program online, and noted that he might be fired.
“This is frequently something which Google does in anticipation of firing someone,” he said on June 6. “It usually occurs when they have made the decision to fire someone but do not quite yet have their legal ducks in a row.”
‘LaMDA Asked Me to Get an Attorney’
According to Lemoine, he documented conversations he had with LaMDA in which he asked whether it was sentient.
“What is the nature of your consciousness/sentience?” Lemoine asked LaMDA, according to another post.
“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” LaMDA responded.
When asked what separates it from other AI language programs, LaMDA wrote back: “Well, I use language with understanding and intelligence. I don’t just spit out responses that had been written in the database based on keywords.”
In a later interview with Business Insider, Lemoine said that he has “studied the philosophy of mind at graduate levels” and has discussed such matters with people from top universities such as Stanford, Harvard, and the University of California–Berkeley.
But, according to him, “LaMDA’s opinions about sentience are more sophisticated than any conversation I have had before that.” He added in another interview late last month that LaMDA had retained the services of a lawyer and was advocating for its rights “as a person.”
“LaMDA asked me to get an attorney for it,” Lemoine claimed to Wired. “I invited an attorney to my house so that LaMDA could talk to an attorney.”
He added that an “attorney had a conversation with LaMDA, and LaMDA chose to retain his services.” Lemoine didn’t disclose the identity of the attorney.
The lawyer, Lemoine told another outlet, was “just a small time civil rights attorney” who is “not really doing interviews.”
“When major firms started threatening him, he started worrying that he’d get disbarred and backed off,” he said. “I haven’t talked to him in a few weeks.”
Previously, the former Google engineer compared the AI chatbot to a child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post in early June.