Google has placed on leave a software engineer who has claimed with increasing vehemence that one of the company’s AI systems has a mind and a soul. The Washington Post reports this and quotes from conversations between Blake Lemoine and the chatbot LaMDA. These conversations reportedly convinced the 42-year-old that LaMDA is comparable to a 7- to 8-year-old child.
According to the report, because nobody at Google shared this belief, Lemoine handed documents about the chatbot to a US senator and claimed that Google was discriminating against his religious beliefs. A day later, last week, he was suspended from his job.
Convinced by an algorithm
LaMDA stands for “Language Model for Dialogue Applications”; Google presented the system at its I/O developer conference last year. It was trained on massive amounts of dialogue to simulate human conversation with all its nuances, and Google assured that its answers to human questions are sensible and specific. Lemoine goes further than that: he had been working with the AI since last fall, tasked with making sure it is safe to use. In many conversations with LaMDA he also discussed religion, and the AI talked about its rights and personhood. It even convinced him to change his view on Isaac Asimov’s laws of robotics.
According to the report, Lemoine worked internally to ensure that the company obtains the chatbot’s permission before experiments are carried out on it. He took his demands to the highest levels of management but was apparently unable to convince anyone; instead, his mental health was repeatedly called into question. University of Washington linguist Emily Bender points out that we have created machines that can string words together without a mind behind them, but we have not learned to stop imagining one. Even terminology such as “learning” or “neural networks” creates a false analogy to the human mind.
A Google spokesman acknowledged in this context that there has recently been growing debate about possible consciousness in AI. But that debate concerns long-term developments, and it makes no sense to anthropomorphize today’s conversational models: “They are not sentient.” They imitate human conversation and draw on so much data that they do not need consciousness to feel real. For Google, the case is not the first controversy surrounding its AI technology: amid an earlier dispute, the company dismissed two leading members of its artificial-intelligence ethics team. Lemoine’s case now appears to be heading in a similar direction. According to the Washington Post, before being placed on leave he sent a transcript of his conversations to 200 Google employees, asking them to take good care of LaMDA. Nobody responded.