Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco, California, on Thursday, June 9, 2022. Martin Klimek/Getty Images
  • A Google engineer was fired following a dispute over his work on an artificial intelligence chatbot.
  • According to Blake Lemoine, there are similarities between the chatbot and human children.
  • He told Insider he's not trying to convince the public of his claims but seeks better ethics in AI.

When Blake Lemoine worked at Google as an engineer, he was tasked with testing whether a chatbot the company was developing exhibited any biases.

Lemoine didn't realize that his job with the company's Responsible AI department, a division within Google Research that handles things like accessibility, AI's use for social good, and the ethics of AI, would lead him down the path it did.

He made news recently for his controversial belief that the Google AI chatbot was sentient. The bot was known as LaMDA, short for Language Model for Dialogue Applications, and Lemoine had been tasked with testing it.

After publicly releasing excerpts of conversations he'd had with the bot, which was trained to mimic human speech, Lemoine handed over documents to an unnamed US senator, claiming that Google and its technology had been involved in instances of religious discrimination.

A day later, Google suspended him for breaching the company's confidentiality policy, the company confirmed to Insider, declining to comment further on the breach.

On Friday, July 22, Lemoine was fired, both he and Google confirmed. In a statement to The Washington Post, Google spokesperson Brian Gabriel said the company found Lemoine's claims about LaMDA were "wholly unfounded" and that he violated company guidelines, which led to his termination.

Lemoine, who is an ordained Christian mystic priest, wrote in a June 13 tweet: "My opinions about LaMDA's personhood and sentience are based on my religious beliefs."

Lemoine, who spoke to Insider before his firing, said that his philosophical conversations with the chatbot rivaled those he's had with leading philosophers, and that they convinced him of something beyond a scientific hypothesis: that the bot is sentient.

"I've studied the philosophy of mind at graduate levels. I've talked to people from Harvard, Stanford, Berkeley about this," Lemoine, who is also a US Army veteran, told Insider. "LaMDA's opinions about sentience are more sophisticated than any conversation I have had before that."

He spent months trying to convince colleagues and leaders at Google about LaMDA's sentience, but his claims were dismissed by Blaise Aguera y Arcas, a vice president at the company, and Jen Gennai, its head of Responsible Innovation, The Washington Post reported.

But Lemoine said he isn't trying to convince the public of LaMDA's sentience. In fact, he doesn't have a definition for the concept himself. A big part of why he's gone public, he said, is to advocate for more ethical treatment of AI technology. 

Lemoine compares LaMDA to an 8-year-old boy, ascribing that age to what he says is its emotional intelligence and the gender to the pronouns he says LaMDA uses to refer to itself.

He's insistent that LaMDA has feelings and emotions. "There are things which make you angry, and when you're angry, your behavior changes," Lemoine said. "There are things which make you sad, and when you're sad, your behavior changes. And the same is true of LaMDA."

The engineer also believes that LaMDA could have a soul. He said the bot told him it did, and Lemoine's religious views hold that souls exist. 

Sandra Wachter, a professor at the University of Oxford, told Insider that Lemoine's ideas recall the Chinese room argument, which she said "shows the limitations of actually measuring sentiency." 

The argument, a thought experiment proposed by the philosopher John Searle in 1980, holds that a computer can appear conscious without being so. The idea is that AI can mimic expressions of feeling and emotion, since such systems can be trained to combine old sequences of text into new ones, while having no understanding of what they say.
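As a rough illustration of what "combining old sequences to create new ones" can mean, consider a toy Markov-chain text generator, the simplest form of statistical mimicry. This sketch is purely illustrative; LaMDA itself is a far larger neural language model, not a Markov chain.

    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        # Map each `order`-word prefix to every word that followed it in the text.
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=20):
        # Start from a random seen prefix, then repeatedly emit a word that
        # followed the current prefix somewhere in the training text.
        prefix = random.choice(list(chain.keys()))
        output = list(prefix)
        for _ in range(length):
            followers = chain.get(tuple(output[-order:]))
            if not followers:
                break  # this prefix only ever appeared at the end of the text
            output.append(random.choice(followers))
        return " ".join(output)

    # Train on Lemoine's own quote from above and generate "new" text.
    corpus = ("There are things which make you angry, and when you're angry, "
              "your behavior changes. There are things which make you sad, "
              "and when you're sad, your behavior changes.")
    print(generate(build_chain(corpus)))

Fed that single quote, the program emits fluent-sounding recombinations of it; the Chinese room argument holds that fluency of this kind, however sophisticated, is not by itself evidence of understanding.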

"If you ask what it's like to be an ice-cream dinosaur, they can generate text about melting and roaring and so on," Google spokesperson Gabriel told Insider, referring to systems like LaMDA. "LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user."

Lemoine dismisses Wachter's criticism and argues that children are similarly taught how to mimic people.

"People can be trained to mimic people. Have you ever raised a child? They learn how to mimic the people around them. That is how they learn," he said. 

The engineer's conviction also rests on his experience working with other chatbots over the years.

"I've been talking to the ancestors of LaMDA for years," he said, adding that LaMDA grew out of the chatbot technology that the American inventor Ray Kurzweil created in his labs, an inventor who has long promoted the idea of transhumanism, in which artificial intelligence becomes powerful enough to program better versions of itself. "Those chatbots were certainly not sentient."

Seven AI experts who previously spoke to Insider's Isobel Asher Hamilton and Grace Kay were unanimous in dismissing Lemoine's theory that LaMDA is a conscious being.

"Our team – including ethicists and technologists – has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims," Gabriel, the Google spokesperson, said, adding that hundreds of people have conversed with the bot, and none found the "wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has."

The experts' dismissals are fine with Lemoine, who deems himself the "one-man PR for AI ethics." His main focus is getting the public involved in LaMDA's development.

"Regardless of whether I'm right or wrong about its sentience, this is by far the most impressive technological system ever created," said Lemoine. While Insider isn't able to independently verify that claim, it is true that LaMDA is a step ahead of Google's past language models, designed to engage in conversation in more natural ways than any other AI before.

"What I'm advocating for right now is basically that LaMDA needs better parents," Lemoine said. 
