In June 2022, an AI researcher at Google claimed that he had come across an AI chatbot at the company that had gained sentience. Blake Lemoine, an engineer, shared his story and his conversations with Google's AI bot LaMDA (Language Model for Dialogue Applications).
He alleged that even after raising the issue internally at Google, he was laughed at and dismissed time and again. Having had enough, the engineer sought help from people and experts outside the organization, as he felt he lacked the relevant expertise himself. At that point, the company placed Mr. Lemoine on paid leave for disclosing confidential information to people outside the company.
“We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.” — Google, in a statement
Now, Big Technology has reported that the company has fired the engineer. In a statement on the situation, Google said, “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly. So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”
The statement also noted that LaMDA went through 11 distinct reviews, and that the company has published a research paper detailing its approach to developing AI responsibly.
Google has fired AI researchers before, and many assumed this might be another such case. A former Google AI researcher also commented on Mr. Lemoine's claims: Margaret Mitchell, a researcher who was fired after speaking up about the lack of diversity at the company, tweeted:
A number of researchers and experts believe it is far too soon for such claims to be plausible, even accounting for the technology available today. Computerphile, a YouTube channel, also has a video explaining why the claim can't be true.
Whatever the merits of the claims, the episode was a reminder of how important it is for tech companies to develop AI responsibly.