Google suspends an engineer after he claimed he came across a sentient AI

Google and its AI team are facing a bigger problem than they have in quite some time. AI researchers have been fired from the company before on various grounds, but this time the issue is different.

Recently, Blake Lemoine, a software engineer on Google’s AI development team, was put on paid leave after going public with his claim that he had encountered a sentient AI. He was suspended on the grounds of sharing confidential information with the public. The engineer has told his story in a Medium post titled “May be Fired Soon for Doing AI Ethics Work”.

He explained that in July 2021, he stumbled upon an AI ethics issue that concerned him and decided to raise it with his manager. It was related to a conversation he had had with Google’s AI chatbot LaMDA, or Language Model for Dialogue Applications. After being dismissed several times because his manager thought the evidence supporting his claims was too flimsy, he began gathering more evidence before escalating the matter.

Mr. Lemoine claims that the AI he was interacting with behaved like a person who could experience and feel emotions, an assessment he says he made “in his capacity as a priest, not a scientist.” In an interview with The Washington Post, Lemoine also said, “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics.”

After running experiments on LaMDA for several months and being told that his concern couldn’t be escalated because he lacked the relevant expertise, he approached people outside the organization for consultation. Those he consulted included Meg Mitchell, an ex-Google AI researcher who was suspended in a similar way. When he escalated the matter to a vice president, he says he was laughed at in his face. He also compiled the transcripts of his conversations with LaMDA into a document titled “Is LaMDA sentient?”.

Despite Mr. Lemoine pushing the issue hard, Alphabet Inc. doesn’t seem too bothered. Google spokesperson Brian Gabriel responded, “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The company also clarified that Mr. Lemoine was suspended for publishing confidential information about the company online and that he was hired as a software engineer, not an ethicist. When asked about the engineer’s suspension, the company said it doesn’t comment on personnel matters.

The engineer, for his part, said, “I feel that the public has a right to know just how irresponsible this corporation is being with one of the most powerful information access tools ever invented.”

Mr. Lemoine’s suspension raises many questions about the development of artificial intelligence. The field is still being explored and developed, and developing it responsibly, with its potential dangers taken into account, is something everyone should be paying attention to.
